$\gamma$ measurements at LHCb

J. Nardulli, on behalf of the LHCb collaboration
Science and Technology Facilities Council, Rutherford Appleton Laboratory, Didcot, UK

Abstract

The LHCb collaboration has studied several promising ways to determine the Unitarity Triangle angle $\gamma$. Three complementary methods are considered. The potential of the $B\to DK^{(*)}$ decays has been studied by employing the combined Gronau-London-Wyler (GLW) and Atwood-Dunietz-Soni (ADS) methods, making use of a large sample of simulated data. $\gamma$ can also be extracted from a time-dependent analysis of $B_{s}\to D_{s}K$ decays, provided that the $B_{s}$ mixing phase is measured independently. In addition, the combined measurement of the $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$ time-dependent CP asymmetries allows the determination of $\gamma$, up to U-spin flavour-symmetry-breaking corrections. For each method the expected sensitivity to the angle $\gamma$ is presented.

I Introduction

LHCb aims to study CP violation and rare $B$-meson decays with high precision at the Large Hadron Collider (LHC), where all species of $B$ mesons are produced in 14 $\mathrm{TeV}$ $pp$ collisions bib:lhcb1 ; bib:lhcb2 . In these events the $b\bar{b}$ pairs are predominantly produced in the same forward (or backward) direction. The LHCb detector is a single-arm spectrometer with a forward coverage from 10 $\mathrm{mrad}$ to 300 $\mathrm{mrad}$ in the horizontal plane (the bending plane of the magnet) and from 10 $\mathrm{mrad}$ to 250 $\mathrm{mrad}$ in the vertical (non-bending) plane. The detector layout in the bending plane is shown in Fig. 1.

II Event Selection

LHCb will collect large samples of all types of $B$ mesons and baryons. These samples will allow the precise measurement of all three angles ($\alpha$, $\beta$ and $\gamma$) of the Unitarity Triangle and of the $B_{s}$ mixing phase.
Here, three complementary methods to extract the Unitarity Triangle angle $\gamma$ are considered. Although, as the next sections show, the three methods rely on different techniques to determine $\gamma$, they share a similar approach to the event selection and to the selection at the trigger level. In all cases, samples of signal and background events are simulated through the LHCb apparatus; the selection is then optimized to maximize the signal efficiency and minimize the background contribution. The main selection criteria exploit the fact that the large $B$ mass produces decay products with high $p_{T}$ and that the long $B$ lifetime produces tracks with large impact parameter with respect to the primary vertex. Furthermore, particle-identification cuts are used for $\pi/K$ separation, and cuts on the significance of the flight distance of the $B$ ensure a $B$ vertex detached from the primary vertex. All the following results are for an integrated luminosity of 2 fb${}^{-1}$, which corresponds to one nominal year of data taking.

III Extracting $\gamma$ from $B\to DK$ decays

Interfering tree diagrams in $B^{\pm}\to\tilde{D}K^{\pm}$ decays allow the determination of the Unitarity Triangle angle $\gamma$. Here $\tilde{D}$ can be a $D^{0}$ or a $\overline{D^{0}}$, with both reconstructed in a common final state. In LHCb bib:guy this is done by employing the combined Gronau-London-Wyler (GLW) bib:glw and Atwood-Dunietz-Soni (ADS) bib:ads methods. The first method uses the information of the CP-even eigenstates, where the $\tilde{D}$ decays to $\pi^{+}\pi^{-}$ or $K^{+}K^{-}$. Note that reconstruction of the CP-odd eigenstates, which include neutral particles, is rather challenging in LHCb and is not considered in this analysis.
The ADS method exploits the interference between the favoured and doubly Cabibbo-suppressed decay modes of the neutral $D$ mesons to states such as $K\pi$ or $K\pi\pi\pi$. Using the ADS method, the following decay rates can be written for the neutral $B^{0}\to D^{0}(\to K\pi)K^{*0}$ channels bib:kazu (very similar equations hold for the charged decays, see also bib:patel ):
$$\Gamma(B^{0}\to(K^{+}\pi^{-})_{D}K^{*0})=N_{K\pi}\left(1+(r_{B}r_{D})^{2}+2r_{B}r_{D}\cos(\delta_{B}+\delta_{D}+\gamma)\right)\;\;\;,$$ (1)
$$\Gamma(B^{0}\to(K^{-}\pi^{+})_{D}K^{*0})=N_{K\pi}\left(r_{B}^{2}+r_{D}^{2}+2r_{B}r_{D}\cos(\delta_{B}-\delta_{D}+\gamma)\right)\;\;\;,$$ (2)
$$\Gamma(\overline{B^{0}}\to(K^{-}\pi^{+})_{D}\overline{K^{*0}})=N_{K\pi}\left(1+(r_{B}r_{D})^{2}+2r_{B}r_{D}\cos(\delta_{B}+\delta_{D}-\gamma)\right)\;\;\;,$$ (3)
$$\Gamma(\overline{B^{0}}\to(K^{+}\pi^{-})_{D}\overline{K^{*0}})=N_{K\pi}\left(r_{B}^{2}+r_{D}^{2}+2r_{B}r_{D}\cos(\delta_{B}-\delta_{D}-\gamma)\right)\;\;\;,$$ (4)
where
$$r_{B}=\frac{|A(B^{0}\to D^{0}K^{*0})|}{|A(B^{0}\to\overline{D^{0}}K^{*0})|}$$
is the ratio of the magnitudes of the two $B$-decay amplitudes and, similarly,
$$r_{D}=\frac{|A(D^{0}\to K^{+}\pi^{-})|}{|A(\overline{D^{0}}\to K^{+}\pi^{-})|}\;\;\;.$$
Furthermore, $\delta_{B}$ represents the strong phase difference between the two $B$ decays, while $\delta_{D}$ represents the strong phase difference between the two $D$ decays. $N_{K\pi}$ gives the overall normalization and represents the total number of $B^{0}\to(K\pi)_{D}K^{*0}$ events. There are thus two favoured rates, (1) and (3), and two suppressed rates, (2) and (4).
Further information can be added by including the decays to CP eigenstates, such as $\pi^{+}\pi^{-}$ or $K^{+}K^{-}$:
$$\Gamma(B^{0}\to D_{CP}K^{*0})=N_{CP}\left(1+r_{B}^{2}+2r_{B}\cos(\delta_{B}+\gamma)\right)\;\;\;,$$ (5)
$$\Gamma(\overline{B^{0}}\to D_{CP}\overline{K^{*0}})=N_{CP}\left(1+r_{B}^{2}+2r_{B}\cos(\delta_{B}-\gamma)\right)\;\;\;.$$ (6)
Here, $N_{CP}$ represents the total number of $B^{0}\to D_{CP}K^{*0}$ events. Note that $r_{D}$ is known, $r_{D}=0.060\pm 0.003$ bib:pdg , and $N_{CP}$ can be calculated from $N_{K\pi}$ taking into account the different efficiencies and branching ratios. As for $r_{B}$: for the neutral $B$, values smaller than 0.6 at 95% probability have been measured at BaBar bib:babarneutro , while for the charged $B$ the latest world-average value is $r_{B}=0.10\pm 0.02$ bib:ut . $\delta_{B}$ is unknown for the neutral $B$ meson, while its value is fixed at $130^{\circ}$ for the charged $B$ meson bib:babar ; bib:belle . $\delta_{D}$ is assumed to lie in the range $[-25^{\circ},+25^{\circ}]$, due to a limit set by the CLEO-c collaboration bib:cleoc . In total there are 5 unknowns ($\gamma$, $r_{B}$, $\delta_{B}$, $\delta_{D}$ and $N_{K\pi}$) and 6 observables.

III.1 Sensitivity to $\gamma$

As a result of the event selection, the annual yields, total efficiencies and background-to-signal ratios are listed in Table 1 bib:kazu ; bib:patel . For both the charged and neutral $B$-meson decays, a standalone Monte Carlo (MC) simulation was used to generate the event yields and fit the unknown parameters. All the unknown parameters have been scanned as shown in Table 2. The input value of $\gamma$ has been fixed at $60^{\circ}$.
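As a numerical illustration, the six observables of Eqs. (1)-(6) can be computed from the five unknowns in a few lines. The sketch below is not LHCb analysis code; the function name is hypothetical, angles are in radians, and only the default $r_{D}=0.060$ is taken from the text:

```python
import numpy as np

def ads_glw_rates(gamma, r_B, delta_B, delta_D, N_Kpi, N_CP, r_D=0.060):
    """Decay rates (1)-(6) for B0 -> D K*0; all angles in radians."""
    fav_B    = N_Kpi * (1 + (r_B * r_D) ** 2
                        + 2 * r_B * r_D * np.cos(delta_B + delta_D + gamma))
    sup_B    = N_Kpi * (r_B ** 2 + r_D ** 2
                        + 2 * r_B * r_D * np.cos(delta_B - delta_D + gamma))
    fav_Bbar = N_Kpi * (1 + (r_B * r_D) ** 2
                        + 2 * r_B * r_D * np.cos(delta_B + delta_D - gamma))
    sup_Bbar = N_Kpi * (r_B ** 2 + r_D ** 2
                        + 2 * r_B * r_D * np.cos(delta_B - delta_D - gamma))
    cp_B     = N_CP * (1 + r_B ** 2 + 2 * r_B * np.cos(delta_B + gamma))
    cp_Bbar  = N_CP * (1 + r_B ** 2 + 2 * r_B * np.cos(delta_B - gamma))
    return fav_B, sup_B, fav_Bbar, sup_Bbar, cp_B, cp_Bbar
```

In a fit, these six predicted rates would be compared with the observed yields to constrain ($\gamma$, $r_{B}$, $\delta_{B}$, $\delta_{D}$, $N_{K\pi}$); note that at $\gamma=0$ the $B$ and $\overline{B}$ rates coincide pairwise, as the equations require.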
For the charged $B$ meson, with one year of data at nominal luminosity (2 fb${}^{-1}$), the angle $\gamma$ can be determined with a precision in the range $8.2^{\circ}-9.6^{\circ}$, depending on the value of the strong phase $\delta_{D}$ bib:patel . For the neutral $B$, the angle $\gamma$ can be determined with a precision better than $10^{\circ}$, depending on the value of the strong phase $\delta_{B}$, for $r_{B}$ values larger than 0.3 bib:kazu .

IV Extracting $\gamma$ from $B^{0}_{s}\to D_{s}K$ decays

The relations between the $B^{0}$-meson mass eigenstates $|B_{H,L}\rangle$ and the flavour eigenstates $|B^{0}\rangle$ and $|\overline{B^{0}}\rangle$ can be expressed in terms of linear coefficients $p$ and $q$:
$$|B_{H,L}\rangle=p|B^{0}\rangle\mp q|\overline{B^{0}}\rangle\;\;\;.$$ (7)
The mass and decay-width differences are defined as
$$\Delta m=m_{H}-m_{L}\;\;\;\;,\;\;\;\Delta\Gamma=\Gamma_{H}-\Gamma_{L}\;\;\;,$$ (8)
and the average mass and decay width as
$$m=\frac{m_{H}+m_{L}}{2}\;\;\;\;,\;\;\;\Gamma=\frac{\Gamma_{H}+\Gamma_{L}}{2}\;\;\;.$$ (9)
The decay rate at a time $t$ of an originally produced $|B^{0}\rangle$ to a final state $f$ is given by
$$\Gamma_{B^{0}\to f}(t)=|\langle f|T|B^{0}(t)\rangle|^{2}\;\;\;,$$ (10)
where $T$ is the transition matrix element.
The time evolution of the flavour eigenstates $|B^{0}\rangle$ and $|\overline{B^{0}}\rangle$ is then given by the four decay equations:
$$\Gamma_{B\to f}(t)=\left|A_{f}\right|^{2}(1+|\lambda_{f}|^{2})\frac{e^{-\Gamma t}}{2}\left(\cosh\frac{\Delta\Gamma t}{2}+D_{f}\sinh\frac{\Delta\Gamma t}{2}+C_{f}\cos\Delta mt-S_{f}\sin\Delta mt\right)\;\;\;,$$
$$\Gamma_{\overline{B}\to f}(t)=\left|A_{f}\right|^{2}\left|\frac{p}{q}\right|^{2}(1+|\lambda_{f}|^{2})\frac{e^{-\Gamma t}}{2}\left(\cosh\frac{\Delta\Gamma t}{2}+D_{f}\sinh\frac{\Delta\Gamma t}{2}-C_{f}\cos\Delta mt+S_{f}\sin\Delta mt\right)\;\;\;,$$
$$\Gamma_{\overline{B}\to\overline{f}}(t)=\left|\overline{A}_{\overline{f}}\right|^{2}(1+|\overline{\lambda}_{\overline{f}}|^{2})\frac{e^{-\Gamma t}}{2}\left(\cosh\frac{\Delta\Gamma t}{2}+D_{\overline{f}}\sinh\frac{\Delta\Gamma t}{2}+C_{\overline{f}}\cos\Delta mt-S_{\overline{f}}\sin\Delta mt\right)\;\;\;,$$
$$\Gamma_{B\to\overline{f}}(t)=\left|\overline{A}_{\overline{f}}\right|^{2}\left|\frac{p}{q}\right|^{2}(1+|\overline{\lambda}_{\overline{f}}|^{2})\frac{e^{-\Gamma t}}{2}\left(\cosh\frac{\Delta\Gamma t}{2}+D_{\overline{f}}\sinh\frac{\Delta\Gamma t}{2}-C_{\overline{f}}\cos\Delta mt+S_{\overline{f}}\sin\Delta mt\right)\;\;\;,$$ (11)
where
$$D_{f}=\frac{2\,\mathrm{Re}\,\lambda_{f}}{1+|\lambda_{f}|^{2}}\;\;\;,\;\;\;C_{f}=\frac{1-|\lambda_{f}|^{2}}{1+|\lambda_{f}|^{2}}\;\;\;,\;\;\;S_{f}=\frac{2\,\mathrm{Im}\,\lambda_{f}}{1+|\lambda_{f}|^{2}}\;\;\;.$$ (12)
$A_{f}$ and $\overline{A}_{\overline{f}}$ are
the decay amplitudes (e.g. $A_{f}=\langle f|T|B^{0}\rangle$) and
$$\lambda_{f}=\frac{q}{p}\frac{\overline{A}_{f}}{A_{f}}\;\;\;,\;\;\;\lambda_{\overline{f}}=\frac{q}{p}\frac{\overline{A}_{\overline{f}}}{A_{\overline{f}}}\;\;\;.$$ (13)
For the $B_{s}^{0}\to D_{s}^{\mp}K^{\pm}$ decay channels (see Feynman diagrams in Fig. 2) a $B^{0}_{s}$, as well as a $\overline{B^{0}_{s}}$, can decay directly to $D_{s}^{-}K^{+}$ or $D_{s}^{+}K^{-}$. In addition, these relations hold for the decay amplitudes: $|A_{f}|=|\overline{A}_{\overline{f}}|$ and $|A_{\overline{f}}|=|\overline{A}_{f}|$. Assuming $|\frac{q}{p}|=1$, it is possible to write $|\lambda_{f}|=|\overline{\lambda}_{\overline{f}}|$. The terms $\lambda_{f}$ and $\overline{\lambda}_{\overline{f}}$ are calculated as
$$\lambda_{D^{-}_{s}K^{+}}=\left(\frac{q}{p}\right)_{B_{s}}\frac{\overline{A}_{D^{-}_{s}K^{+}}}{A_{D^{-}_{s}K^{+}}}=\left(\frac{V_{tb}^{*}V_{ts}}{V_{tb}V_{ts}^{*}}\right)\left(\frac{V_{ub}V_{cs}^{*}}{V_{cb}^{*}V_{us}}\right)\left|\frac{A_{2}}{A_{1}}\right|e^{i\Delta_{s}}=|\lambda_{D^{-}_{s}K^{+}}|e^{i(\Delta_{s}-(\gamma+\phi_{s}))}\;,$$
$$\overline{\lambda}_{D^{+}_{s}K^{-}}=\left(\frac{p}{q}\right)_{B_{s}}\frac{A_{D^{+}_{s}K^{-}}}{\overline{A}_{D^{+}_{s}K^{-}}}=\left(\frac{V_{tb}V_{ts}^{*}}{V_{tb}^{*}V_{ts}}\right)\left(\frac{V_{ub}^{*}V_{cs}}{V_{cb}V_{us}^{*}}\right)\left|\frac{A_{2}}{A_{1}}\right|e^{i\Delta_{s}}=|\lambda_{D^{-}_{s}K^{+}}|e^{i(\Delta_{s}+\gamma+\phi_{s})}\;,$$ (14)
where $|A_{2}/A_{1}|$ is the ratio of the hadronic amplitudes, which is expected to be of order unity, $\Delta_{s}$ is the strong phase difference between $A_{1}$ and $A_{2}$, and $\gamma+\phi_{s}$ is the weak phase. The $\phi_{s}$ angle originates from $B^{0}_{s}$ mixing and can be measured directly using the $B_{s}^{0}\to J/\psi\phi$ decay bib:lfernandez .
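The CP observables of Eq. (12) follow directly from a complex $\lambda_{f}$; the helper below is a hypothetical sketch, not LHCb code:

```python
def cp_observables(lam):
    """Return (D_f, C_f, S_f) of Eq. (12) for a complex lambda_f."""
    mod2 = abs(lam) ** 2
    denom = 1.0 + mod2
    return (2.0 * lam.real / denom,      # D_f
            (1.0 - mod2) / denom,        # C_f
            2.0 * lam.imag / denom)      # S_f
```

A useful cross-check encoded here is the identity $D_{f}^{2}+C_{f}^{2}+S_{f}^{2}=1$, which holds for any $\lambda_{f}$; in particular $|\lambda_{f}|=1$ forces $C_{f}=0$.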
IV.1 Sensitivity to $\gamma+\phi_{s}$

As a result of the event selection, the annual yields, total efficiencies and background-to-signal ratios are listed in Table 3. In a toy MC study, multidimensional probability density functions (PDFs) are constructed to mimic the outcome of an analysis of data acquired at LHCb. For the sensitivity studies of the $B^{0}_{s}\to D_{s}K$ channels, PDFs describing the mass distribution, the proper-time acceptance, the flavour tagging and the particle-identification response are built, both for signal and for background events, according to studies done with the full Geant 4 simulation bib:geant . The final PDF is built as the product of these individual PDFs, and a likelihood fit to the generated data is performed to extract the parameters and their errors. The toy considers not only the $B^{0}_{s}\to D_{s}K$ decay channels but also the topologically similar $B^{0}_{s}\to D_{s}\pi$ channel, which has a larger branching ratio and consequently a higher annual yield (140000 events), and which allows for the determination of the $\Delta m_{s}$ parameter. A summary of the input parameters is given in Tab. 4, and the final sensitivity results are shown in Tab. 5. With one year of data taking at nominal luminosity, a sensitivity on $\gamma+\phi_{s}$ of 10.3${}^{\circ}$ can be obtained. A more detailed study can be found in bib:scohen .
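The toy-MC strategy (build per-component PDFs, take their product, fit by maximum likelihood) can be illustrated on a deliberately simplified one-dimensional example. Everything below is an assumption-laden sketch: a Gaussian signal peak on a flat background, with only the signal fraction scanned, whereas the LHCb toy fits many parameters and several PDFs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "mass" sample: a Gaussian signal peak on a flat background in [5.2, 5.6] GeV.
signal = rng.normal(5.37, 0.015, size=800)       # true signal fraction: 0.8
background = rng.uniform(5.2, 5.6, size=200)
data = np.concatenate([signal, background])

def neg_log_likelihood(f_sig, mean=5.37, sigma=0.015, width=0.4):
    """-log L of a (Gaussian signal) + (flat background) mixture PDF.

    The shape parameters are fixed at their true values; only the signal
    fraction f_sig is left free, to keep the sketch minimal.
    """
    gauss = np.exp(-0.5 * ((data - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    pdf = f_sig * gauss + (1.0 - f_sig) / width
    return -np.sum(np.log(pdf))

# One-parameter likelihood scan over the signal fraction.
grid = np.linspace(0.01, 0.99, 99)
f_hat = grid[np.argmin([neg_log_likelihood(f) for f in grid])]
```

The scan recovers a signal fraction close to the generated value of 0.8; a full analysis would instead minimize over all parameters simultaneously and extract errors from the likelihood shape.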
V Extracting $\gamma$ from $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$

This section describes the extraction of $\gamma$ through the combined measurement of the $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$ CP asymmetries, under the assumption that the strong interaction is invariant under the exchange of the $d$ and $s$ quarks (U-spin symmetry) bib:fleis . In Fig. 3 the $B_{(s)}^{0}\to h^{+}h^{\prime-}$ tree diagrams are shown. For a neutral $B$ meson decaying into a CP eigenstate $f$, the time-dependent CP asymmetry is given by
$${\cal A}_{CP}(t)=\frac{\Gamma(\overline{B^{0}}_{d/s}(t)\to f)-\Gamma(B^{0}_{d/s}(t)\to f)}{\Gamma(\overline{B^{0}}_{d/s}(t)\to f)+\Gamma(B^{0}_{d/s}(t)\to f)}=\frac{-C_{CP}\cos\Delta mt+S_{CP}\sin\Delta mt}{\cosh\frac{\Delta\Gamma}{2}t-A_{CP}^{\Delta\Gamma}\sinh\frac{\Delta\Gamma}{2}t}\;,$$ (15)
where $\Gamma(\overline{B^{0}}_{d/s}(t)\to f)$ and $\Gamma(B^{0}_{d/s}(t)\to f)$ are the decay rates of the initial $\overline{B}$ and $B$ states respectively, and $\Delta m$ and $\Delta\Gamma$ are the mass and width differences between the two mass eigenstates.
As shown in bib:fleis , $C_{CP}$ and $S_{CP}$ for $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$ can be written as functions of the Unitarity Triangle angle $\gamma$ and of two hadronic parameters $d$ and $\theta$ (which parametrize respectively the magnitude and phase of the penguin-to-tree amplitude ratio), as follows:
$$C_{\pi\pi}=\frac{2d\sin\theta\sin\gamma}{1-2d\cos\theta\cos\gamma+d^{2}}\;\;,$$
$$S_{\pi\pi}=\frac{\sin(\phi_{d}+2\gamma)-2d\cos\theta\sin(\phi_{d}+\gamma)+d^{2}\sin\phi_{d}}{1-2d\cos\theta\cos\gamma+d^{2}}\;\;,$$
$$C_{KK}=-\frac{2d^{\prime}\sin\theta^{\prime}\sin\gamma}{1+2d^{\prime}\cos\theta^{\prime}\cos\gamma+d^{\prime 2}}\;\;,$$
$$S_{KK}=\frac{\sin(\phi_{s}+2\gamma)+2d^{\prime}\cos\theta^{\prime}\sin(\phi_{s}+\gamma)+d^{\prime 2}\sin\phi_{s}}{1+2d^{\prime}\cos\theta^{\prime}\cos\gamma+d^{\prime 2}}\;\;,$$
where $\phi_{d}$ and $\phi_{s}$ are the $B^{0}_{d}/\overline{B^{0}_{d}}$ and $B^{0}_{s}/\overline{B^{0}_{s}}$ mixing phases, which in LHCb will be measured from the $B_{d}^{0}\to J/\psi K_{S}$ and $B_{s}^{0}\to J/\psi\phi$ decays respectively bib:lfernandez . This system of four equations and five unknowns ($d$, $d^{\prime}$, $\theta$, $\theta^{\prime}$ and $\gamma$) can be solved with the help of U-spin symmetry, which implies $d=d^{\prime}$ and $\theta=\theta^{\prime}$. This results in an over-constrained system of three unknowns and four equations.

V.1 Sensitivity to the CP asymmetries and to $\gamma$

As a result of the event selection, the annual yields, total efficiencies and background-to-signal ratios are listed in Table 6. As for the $B^{0}_{s}\to D_{s}K$ decays, a toy MC is used to mimic the outcome of an analysis of data acquired at LHCb.
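The four asymmetry coefficients above are easy to evaluate numerically; the sketch below (hypothetical function, angles in radians) makes the $d\to 0$ limits $C_{\pi\pi}\to 0$ and $S_{\pi\pi}\to\sin(\phi_{d}+2\gamma)$ explicit:

```python
import numpy as np

def cp_coefficients(d, theta, gamma, phi_d, phi_s, d_p, theta_p):
    """C and S for B0 -> pi pi and Bs -> K K from (d, theta) and the phases."""
    den_pi = 1 - 2 * d * np.cos(theta) * np.cos(gamma) + d ** 2
    C_pipi = 2 * d * np.sin(theta) * np.sin(gamma) / den_pi
    S_pipi = (np.sin(phi_d + 2 * gamma)
              - 2 * d * np.cos(theta) * np.sin(phi_d + gamma)
              + d ** 2 * np.sin(phi_d)) / den_pi
    den_K = 1 + 2 * d_p * np.cos(theta_p) * np.cos(gamma) + d_p ** 2
    C_KK = -2 * d_p * np.sin(theta_p) * np.sin(gamma) / den_K
    S_KK = (np.sin(phi_s + 2 * gamma)
            + 2 * d_p * np.cos(theta_p) * np.sin(phi_s + gamma)
            + d_p ** 2 * np.sin(phi_s)) / den_K
    return C_pipi, S_pipi, C_KK, S_KK
```

With the U-spin constraint $d=d^{\prime}$, $\theta=\theta^{\prime}$, four measured asymmetries over-constrain the three unknowns ($d$, $\theta$, $\gamma$).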
In this toy, used for the CP sensitivity studies of the $B_{(s)}^{0}\to h^{+}h^{\prime-}$ channels, PDFs describing the mass distribution, the proper-time acceptance, the flavour tagging and the particle-identification response are built, both for signal and for background events, according to studies done with the full Geant 4 simulation bib:sens . The final PDF is built as the product of these individual PDFs, and a likelihood fit to the generated data is performed to extract the parameters and their errors. The extraction of $\gamma$ is then performed using a Bayesian approach in three different U-spin scenarios:

• Assuming perfect U-spin symmetry: $d=d^{\prime}$ and $\theta=\theta^{\prime}$.

• With a weaker assumption on the U-spin symmetry: $d=d^{\prime}$ and no constraint on $\theta$ and $\theta^{\prime}$.

• With an even weaker assumption on the U-spin symmetry: $\xi=d^{\prime}/d\in[0.8,1.2]$ and no constraint on $\theta$ and $\theta^{\prime}$.

In each case the sensitivity on $\gamma$ is taken as the 68% probability interval of the resulting PDF distribution for $\gamma$. Fig. 4 shows an example of a resulting PDF for $\gamma$ obtained assuming perfect U-spin symmetry, with the 68% and 95% probability intervals visible. The 68% probability interval corresponds to a sensitivity of 4${}^{\circ}$. In the second and third scenarios the uncertainty grows, varying between 7${}^{\circ}$ and 10${}^{\circ}$. It is important to note that the extraction of $\gamma$ by means of the $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$ decays uses not only tree diagrams but also loop diagrams, and is therefore sensitive to new physics.

VI Conclusions

In these proceedings three complementary methods for the extraction of the Unitarity Triangle angle $\gamma$ have been discussed.
The potential of the $B\to DK^{(*)}$ decays has been studied by employing the combined Gronau-London-Wyler (GLW) and Atwood-Dunietz-Soni (ADS) methods. For the charged $B$ meson, with one year of data at nominal luminosity, the angle $\gamma$ can be determined with a precision in the range $8^{\circ}-10^{\circ}$, depending on the value of the strong phase $\delta_{D}$ bib:patel . For the neutral $B$, the angle $\gamma$ can be determined with a precision better than $10^{\circ}$, depending on the value of the strong phase $\delta_{B}$, for $r_{B}$ values larger than 0.3 bib:kazu . It has been shown that the angle $\gamma$ can also be extracted with a time-dependent analysis of the $B_{s}\to D_{s}K$ decays, provided that the $B_{s}$ mixing phase is measured independently; with one year of data taking at nominal luminosity, a sensitivity on $\gamma+\phi_{s}$ of 10${}^{\circ}$ can be obtained. The combined measurement of the $B^{0}\to\pi^{+}\pi^{-}$ and $B^{0}_{s}\to K^{+}K^{-}$ time-dependent CP asymmetries allows the determination of the Unitarity Triangle angle $\gamma$, up to U-spin flavour-symmetry-breaking corrections. Here a sensitivity of 10${}^{\circ}$ with one year of data taking at nominal luminosity can also be obtained, but the final result depends on the assumption made on the breaking of the U-spin symmetry. This method uses not only tree diagrams but also loop diagrams and is therefore sensitive to new physics. Other methods to extract $\gamma$ at LHCb, including $B\to D(KK\pi\pi)K$ and $B\to D(K_{s}\pi^{+}\pi^{-})K$ decays, have been studied; more details can be found in bib:15 ; bib:13 .

References

(1) LHCb Collaboration, LHCb Technical Proposal, CERN-LHCC/1998-004.
(2) LHCb Collaboration, LHCb Technical Design Report, CERN-LHCC/2003-030.
(3) G. Wilkinson, CERN-LHCb/2005-066.
(4) M. Gronau and D. London, Phys. Lett. B253 (1991) 483; M. Gronau and D. Wyler, Phys. Lett. B265 (1991) 172.
(5) D. Atwood, I. Dunietz and A. Soni, Phys. Rev. Lett.
78 (1997) 3257.
(6) K. Akiba et al., CERN-LHCb/2007-050.
(7) M. Patel, CERN-LHCb/2006-066; M. Patel, CERN-LHCb/2008-011.
(8) Particle Data Group, S. Eidelman et al., Phys. Lett. B592 (2004) 677.
(9) BaBar Collaboration, arXiv:0805.2001.
(10) V. Sordini (results obtained by the UTfit collaboration), The CKM angle $\gamma$: B-factories results review, Rencontres de Moriond Electroweak, 2008.
(11) BaBar Collaboration, hep-ex/0607104.
(12) A. Poluektov et al., Belle Collaboration, Phys. Rev. D70, 072003 (2004).
(13) J. Rosner et al., Phys. Rev. Lett. 100, 221801 (2008).
(14) L. Fernandez, CERN-LHCb/2006-047.
(15) See http://www-spires.dur.ac.uk/cgi-bin/spiface/hep/www?j=NUIMA,A506,250.
(16) S. Cohen et al., CERN-LHCb/2007-041.
(17) R. Fleischer, Phys. Lett. B459 (1999) 306.
(18) A. Carbone et al., CERN-LHCb/2007-059.
(19) See http://www.utfit.org
(20) J. Libby, CERN-LHCb/2007-141.
(21) J. Libby et al., CERN-LHCb/2007-098.
Can a single neuron learn quantiles?

Edgardo Solano-Carrillo
German Aerospace Center (DLR), Institute for the Protection of Maritime Infrastructures, Bremerhaven, Germany
Edgardo.SolanoCarrillo@dlr.de

Abstract

A novel non-parametric quantile estimation method for continuous random variables is introduced, based on a minimal neural network architecture consisting of a single unit. Its advantage over estimation from ranking the order statistics is shown, specifically for small sample sizes. In a regression context, the method can be used to quantify predictive uncertainty under the split conformal prediction setting, where prediction intervals are estimated from the residuals of a pre-trained model on a held-out validation set to quantify the uncertainty in future predictions. Benchmarking experiments demonstrate that the method is competitive in quality and coverage with state-of-the-art solutions, with the added benefit of being more computationally efficient.

1 Introduction

Estimating how uncertain artificial intelligence systems are of their predictions is crucial for their safe application (Varshney & Alemzadeh, 2016; Amodei et al., 2016). Quantifying uncertainty is then as important as designing good predictive models. It is an open problem, in part due to its ambiguous character: if predictions are interpreted as subjective opinions, evidence-based theory (Sensoy et al., 2018; Josang et al., 2018; Shi et al., 2020) typically measures uncertainty in entropic terms. On the other hand, if uncertainty is understood as a synonym for the variability of the predictive distribution, prediction intervals (Khosravi et al., 2011a) best summarize it in quantile terms. Prediction intervals express uncertainty in terms of confidence probabilities, for which humans have a natural cognitive intuition (Cosmides & Tooby, 1996; Juanchich & Miroslav, 2020) that guides decision making.
As such, their use to quantify uncertainty is standardized across a wide variety of safety-critical regression applications, including medicine (IntHout et al., 2016), economics (Chudý et al., 2020) and finance (Huang & Hsu, 2020), as well as in the forecasting of electrical load (Quan et al., 2014), solar energy (Galván et al., 2017), gas flow (Sun et al., 2017), wind power (Wang et al., 2017), and many other forecasting problems (Makridakis et al., 2020). In classification applications, there is less consensus on the use of a confidence-based (and hence intuitive) measure of uncertainty. For image classification, for instance, dozens of different uncertainty measures exist (Tagasovska & Lopez-Paz, 2019). Nevertheless, despite the diversity of methods to quantify uncertainty across prediction categories, making uncertainty inferences has converged to a mainstream strategy: the same machine learning model that predicts a given target simultaneously learns the associated uncertainties. These models are often underspecified (D’Amour et al., 2020), giving unreliable predictions under stress tests and under distribution shift (Ovadia et al., 2019; Hendrycks & Dietterich, 2019). This unreliability is therefore translated (by design) to how these models assess uncertainty. A different strategy is considered in this work: model how to predict, as usual, but measure the associated predictive uncertainty during validation, using a confidence-based learning method. This model-agnostic approach to uncertainty estimation is aligned with rising trends in the post-hoc explainability of deep learning models (Barredo Arrieta et al., 2020). That is, a predictive deep learning model is treated as a black box and a second system estimates how uncertain the black box is.
Our main motivation in this work is finding a neural network to estimate uncertainty which is (by design) as transparent as possible, therefore not having itself any associated uncertainty due to model specification. This leads us to the extreme case of a non-parametric quantile estimator consisting of a single neuron. A number of synthetic and real-world experiments demonstrate that the proposed quantile estimator has similar accuracy but better efficiency than some state-of-the-art methods.

2 Quantile estimation

Let $E$ be a real random variable with distribution $F$, so that $\Pr(E\leq\varepsilon)=F(\varepsilon)$. For any $p\in(0,1)$, a $p$-th quantile of $F$ is a number $r_{p}$ satisfying $F(r_{p}-)\leq p\leq F(r_{p})$, where the left limit is $F(r_{p}-):=\lim_{z\uparrow r_{p}}F(z)$. For continuous distribution functions $F$, of interest here, this becomes
$$F(r_{p})=p.$$ (1)
If $F$ is strictly increasing, there is only one number satisfying (1); it defines the quantile function $r_{p}=F^{-1}(p)$. Our purpose in this work is to introduce an efficient non-parametric method to estimate it, and to apply it to typical regression problems with no discrete component of $F$.

2.1 Proposed estimator

To use a neural network and at the same time obtain a non-parametric quantile estimator, a single neuron is considered whose weight $w_{p}$ coincides with the quantile to be learned. Using an independently drawn sample $\bm{\varepsilon}=(\varepsilon_{1},\varepsilon_{2},\cdots,\varepsilon_{m})$ of $E$ of size $m$, this neuron activates to output the empirical distribution function $F_{m}(w_{p})=\tfrac{1}{m}\sum_{i=1}^{m}\mathbbm{1}(\varepsilon_{i}\leq w_{p})$, making an error $\mathcal{L}(w_{p})=[F_{m}(w_{p})-p]^{2}$ in reaching its target $p$.
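A minimal NumPy sketch of this single-unit estimator is given below; training follows the gradient-descent recipe described in the next paragraph, with the smoothed indicator written here as $\sigma(\beta(w_{p}-\varepsilon_{i}))$ so that samples of either sign are handled. The values of $\beta$, the learning rate, the iteration count and the initialization are illustrative choices, not the paper's settings:

```python
import numpy as np

def pim_quantile(eps, p, beta=20.0, lr=5.0, steps=3000):
    """Single-neuron quantile estimator (sketch of PIM).

    The weight w_p is trained by gradient descent to minimise
    [F_m(w_p) - p]^2, with the indicator 1(eps_i <= w_p) smoothed by
    a sigmoid of sharpness beta.
    """
    eps = np.asarray(eps, dtype=float)
    m = len(eps)
    w = float(np.median(eps))                         # illustrative starting point
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-beta * (w - eps)))   # smoothed indicators
        F = s.mean()                                  # smoothed empirical CDF at w
        grad = 2.0 * (F - p) * (beta / m) * np.sum(s * (1.0 - s))
        w -= lr * grad
    return w
```

On a standard normal sample, for example, the estimate of the $p=0.9$ quantile approaches the true value $r_{0.9}\approx 1.28$ as the sample grows.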
After properly initializing $w_{p}$, this neuron is trained with gradient descent after smoothing the indicator function $\mathbbm{1}(\varepsilon_{i}\leq w_{p})\sim\sigma(\beta(w_{p}-|\varepsilon_{i}|))$ using a sigmoid $\sigma(x)=(1+\exp(-x))^{-1}$; this approximation becomes exact as $\beta\rightarrow\infty$. (Since $\nabla\mathcal{L}(w_{p})\sim\beta$, in practice the value of $\beta$ can be selected jointly with the learning rate $lr$; for most of the experiments in this work, $\beta=10^{3}$ with $lr=0.005$ works well.) From Borel's law of large numbers, $F_{m}(w_{p})$ almost surely tends to $F(w_{p})$ as the sample size goes to infinity. In this limit, the neuron is trained by minimizing $\mathcal{L}(w_{p})=[F(w_{p})-p]^{2}$, so its weight $w_{p}$ converges to the global minimum $r_{p}$ by (1). Therefore, the proposed quantile estimator is asymptotically consistent. For reasons that become clearer later, it is called a Prediction Interval Metric (PIM). Further theoretical details and a link to the source code may be found in the supplementary material.

2.2 Comparison to estimation from the order statistics

Quantile estimation from ranking the order statistics is a standard technique with at most $O(m)$ complexity. It considers the sample $\bm{\varepsilon}=(\varepsilon_{1},\varepsilon_{2},\cdots,\varepsilon_{m})$ and starts by constructing the order statistics $\varepsilon_{(k)}$, the $k$-th smallest value in $\bm{\varepsilon}$, with $k=1,\cdots,m$. If $mp$ is not an integer, then there is only one value of $k$ for which $(k-1)/m<p<k/m$; this is called the rank. Since $F_{m}(\varepsilon)=k/m$ for $\varepsilon_{(k)}\leq\varepsilon<\varepsilon_{(k+1)}$, a unique $p$-th quantile is estimated as $\varepsilon_{(k)}$. However, if $mp$ is an integer, an interval of $p$-th quantiles of $F_{m}$ exists with endpoints $\varepsilon_{(k)}$ and $\varepsilon_{(k+1)}$, the rank becoming a real-valued index. How should a representative value be selected from such an interval?
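The spread among common answers to this question can be seen directly with NumPy, whose `np.quantile` exposes several of the Hyndman-Fan schemes through the `method` keyword (available from NumPy 1.22):

```python
import numpy as np

eps = np.array([1.0, 2.0, 3.0, 4.0])   # here m * p = 2 is an integer for p = 0.5

# Different interpolation schemes give different medians for the same sample.
estimates = {m: float(np.quantile(eps, 0.5, method=m))
             for m in ("lower", "higher", "midpoint", "linear")}
# lower -> 2.0, higher -> 3.0, midpoint and linear -> 2.5
```

All four values are defensible estimates of the median of this sample, which is exactly the arbitrariness discussed below.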
One possibility would be to take the midpoint $(\varepsilon_{(k)}+\varepsilon_{(k+1)})/2$. This is equivalent to Laplace's "Principle of Insufficient Reason" as an attempt to supply a criterion of choice (Jaynes, 1957): since there is no reason to think otherwise, the events "the best representative value is $\varepsilon_{(k)}$" and "the best representative value is $\varepsilon_{(k+1)}$" are taken as equally likely. Hyndman & Fan (1996) compiled a taxonomy of nine interpolation schemes used by a number of statistical packages. They all add to the arbitrariness of the selection of a representative value. Since $\Delta F_{m}(\varepsilon)\sim 1/m$ as $\Delta\varepsilon\sim\varepsilon_{(k+1)}-\varepsilon_{(k)}$, this arbitrariness has a major impact for small sample sizes, as shown in Fig. 1, where the confidence interval function $I(p)=r_{(1+p)/2}-r_{(1-p)/2}$ of a standard normal random variable is estimated using all interpolation methods provided by the numpy library. As observed, PIM does not have such a selection bias and can be more accurate for small sample sizes (see supplementary material for more experiments).

2.3 Conditional quantiles

In regression analysis, one is interested in explaining the variations of a target random variable $Y$, taking values $y\in\mathbb{R}$, in terms of feature random variables $X$ taking values $x\in\mathbb{R}^{d}$. It is usually assumed, either implicitly or explicitly, that a deterministic map $f$ exists, explaining such variations as $y=f(x)+\varepsilon_{\textrm{obs}}(x)$, up to some additive noise $\varepsilon_{\textrm{obs}}(x)$ inherent to the data observation process. Empirically, this map is estimated by choosing a statistical model $\hat{f}(x)$ (e.g.
a neural network), which approximates the target as
$$y=\hat{f}(x)+\varepsilon(x).$$ (2)
In so doing, the predictive model makes the error $\varepsilon(x)=\varepsilon_{\textrm{obs}}(x)+\varepsilon_{\textrm{epis}}(x)$, consisting of the aleatoric part $\varepsilon_{\textrm{obs}}(x)$ and an epistemic part $\varepsilon_{\textrm{epis}}(x)=f(x)-\hat{f}(x)$, which entails an uncertainty due to the lack of knowledge of $f(x)$. This could be because we are not sure how to select $\hat{f}$ (model specification) or because the shape of $f$ for unexplored regions of feature space might be significantly different from that inferred from the training set (distributional changes). The random variable $E$ defined previously is now conditioned on $X$, which is denoted as $E|X$. It will be understood to take the error values $\varepsilon(x)$ in (2), and is relocated to satisfy $\textrm{median}(E|X)=0$. We say that the errors are homoskedastic if $E$ is independent of $X$, and heteroskedastic otherwise. There are then two ways to calculate the aleatoric uncertainty of the target variable $Y$:

1. Estimating the conditional quantile function $\mu_{p}(x)$ which assigns pointwise the smallest $\mu$ for which $\Pr(Y\leq\mu|x)=p$ and, from this, computing the prediction intervals $[\hat{\mu}_{(1-p)/2}(x),\,\hat{\mu}_{(1+p)/2}(x)]$ quantifying the uncertainty of the target at confidence level $p$.

2. Estimating the conditional quantile function $r_{p}(x)$ of the error variable $E|X$ and computing the corresponding prediction intervals $[\hat{f}(x)-\hat{r}_{p}(x),\,\hat{f}(x)+\hat{r}_{p}(x)]$ quantifying the uncertainty of the target at confidence level $p$. This assumes that $\hat{f}$ is a good approximator of the median of $Y|X$.

Approaches of type 1 are known as quantile regression. They enlarge the model $\hat{f}\rightarrow(\hat{f}_{L},\hat{f}_{U})$ to fit the endpoints of the prediction intervals, i.e.
$\hat{f}_{L;p}(x)=\hat{\mu}_{(1-p)/2}(x)$ and $\hat{f}_{U;p}(x)=\hat{\mu}_{(1+p)/2}(x)$. Given a training set $\{(x_{i},y_{i}):i\in\mathcal{I}\}$, we consider two state-of-the-art methods of this kind: • Simultaneous Quantile Regression (SQR): this minimizes the average pinball loss $\tfrac{1}{|\mathcal{I}|}\sum_{i\in\mathcal{I}}l_{p}(\varepsilon_{i})$ for $\varepsilon_{i}=y_{i}-\hat{f}_{L;p}(x_{i})$ and for $\varepsilon_{i}=y_{i}-\hat{f}_{U;p}(x_{i})$ simultaneously in the same model, where $l_{p}(\varepsilon_{i})=p\,\varepsilon_{i}\,\mathbbm{1}(\varepsilon_{i}\geq 0)+(p-1)\,\varepsilon_{i}\,\mathbbm{1}(\varepsilon_{i}<0)$. This is enough for our purpose, since it has fewer execution steps than the state-of-the-art SQR of Tagasovska & Lopez-Paz (2019). Yet, it works better than standard quantile regression, which estimates each quantile separately. • Quality Driven (QD) method (Pearce et al., 2018): in the training subset indexed by $\mathcal{C}=\{i:\hat{f}_{L;p}(x_{i})\leq y_{i}\leq\hat{f}_{U;p}(x_{i})\}$, this minimizes the captured mean prediction interval width (MPIW), which is expressed as $\textrm{MPIW}_{\textrm{capt}}=\tfrac{1}{|\mathcal{C}|}\sum_{i\in\mathcal{C}}[\hat{f}_{U;p}(x_{i})-\hat{f}_{L;p}(x_{i})]$, subject to the prediction interval coverage proportion (PICP) satisfying $\textrm{PICP}:=|\mathcal{C}|/|\mathcal{I}|\geq p$. The rationale is that prediction intervals of good quality (Khosravi et al., 2011b) have $\textrm{MPIW}_{\textrm{capt}}$ as small as possible and enough coverage. For approaches of type 2, the training set has to be split into two disjoint subsets: a proper training set $\{(x_{i},y_{i}):i\in\mathcal{I}_{1}\}$ and a calibration (or validation) set $\{(x_{i},y_{i}):i\in\mathcal{I}_{2}\}$. The proper training set is used to fit $\hat{f}(x)$, which is then evaluated on the validation set to compute the errors $\varepsilon_{i}:=\varepsilon(x_{i})=y_{i}-\hat{f}(x_{i})$ and their quantiles.
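The type-2 workflow just described can be sketched as follows. This is a minimal illustration with synthetic data and a polynomial stand-in for $\hat{f}$ — both assumptions made for the example, not the paper's setup — where the single quantile $\hat{r}_{p}$ is obtained by ranking the calibration errors:

```python
import numpy as np

# Hypothetical type-2 workflow: fit f_hat on a proper training set, then
# estimate the p-quantile r_hat of the absolute errors on a held-out
# calibration (validation) set.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=1000)
y = 0.3 * np.sin(x) + 0.1 * rng.standard_normal(1000)

idx = rng.permutation(1000)
train, calib = idx[:500], idx[500:]

# Stand-in predictive model f_hat: a degree-5 polynomial fit (an assumption).
coeffs = np.polyfit(x[train], y[train], 5)

def f_hat(z):
    return np.polyval(coeffs, z)

# Errors on the calibration set and their p-quantile (a single r_hat here,
# estimated by ranking; PIM would learn it with one neuron instead).
p = 0.95
errors = np.abs(y[calib] - f_hat(x[calib]))
r_hat = np.quantile(errors, p)

# Prediction interval for any x: [f_hat(x) - r_hat, f_hat(x) + r_hat].
coverage = np.mean(errors <= r_hat)  # ~p on the calibration set by construction
```

PIM would replace the `np.quantile` call with its single-neuron estimator; the surrounding split-and-calibrate logic stays the same.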
This is the setting used in the split conformal prediction literature (Lei et al., 2018), where sample quantiles are estimated by ranking the order statistics. However, in this literature, a single $\hat{r}_{p}$ is obtained from $\{\varepsilon_{i}:i\in\mathcal{I}_{2}\}$. PIM may also be applied within this setting, obtaining $\hat{r}_{p}(x)$ which could vary with $x$. In order to obtain variable $\hat{r}_{p}(x)$ with PIM, a different neuron $u_{i}$ has to be used for each position $\{x_{i}:i\in\mathcal{I}_{2}\}$. If the data-generating distribution is known (or may be properly approximated), this is used to sample extra targets $\{y_{i;k}:k\in\mathcal{J}\}$ not known to $\hat{f}$, so PIM estimates the quantiles from the neuron $u_{i}$ having access to the errors $\varepsilon_{i;k}=y_{i;k}-\hat{f}(x_{i})$ for $k\in\mathcal{J}$. A synthetic example of this is shown in Fig. 3. If the data-generating distribution is unknown, but serial correlations are important (i.e. the order of $i$ in $\mathcal{I}$ matters), then PIM may be applied if relative rather than absolute positions in feature space are relevant. This is done by having $|\mathcal{T}|$ different neurons $u_{j}$ learn quantiles from samples $W_{i}=[\varepsilon_{i},\varepsilon_{i+1},\cdots,\varepsilon_{i+|\mathcal{T}|-1}]$ of the joint error distribution for $i=1,2,\cdots,|\mathcal{I}_{2}|-|\mathcal{T}|+1$, provided there are enough samples in the validation set. By rolling the window $W_{i}$, a new sample from the joint distribution is obtained, and the neuron $u_{j}$ is trained with all the errors observed at the $j$-th position of all windows. A real-world example of this is shown in Fig. 3. These two examples are described in more detail next. Synthetic experiment. The aim here is to compare the accuracy and computational efficiency of PIM against QD and SQR. For this, consider a one-dimensional data-generating process described by $y(x)=0.3\sin(x)+\varepsilon_{\textrm{obs}}(x)$ where $X\sim U(-2,2)$. 
For the error associated with observation, two cases are considered — both having scale $\sigma(x)=0.2\,x^{2}$. The first is Gaussian error $E_{\textrm{obs}}|X\sim N(0,\sigma^{2}(x))$, exemplifying a symmetric, unimodal distribution. The second is $E_{\textrm{obs}}|X\sim\textrm{Beta}(a,b,\textrm{loc}=0,\textrm{scale}=\sigma(x))$, exemplifying a skewed, bimodal distribution when $a<1$ and $b<1$. For concreteness, take $a=0.2$ and $b=0.3$. The conditional quantiles of the target may be expressed as $\mu_{p}(x)=0.3\sin(x)+\sigma(x)\mu_{p}$, where $\mu_{p}$ is the quantile function of the standardized error variable (computed by most statistical libraries for known distributions). From this, prediction intervals [$\mu_{(1-p)/2}(x)$, $\mu_{(1+p)/2}(x)$] may be calculated for the two cases of interest — bounded by the thick black lines in Fig. 3 for $p=0.95$. As a visual aid for the symmetry/skewness of the distributions, the median $\mu_{0.5}(x)$ is also shown as thin lines. In a single trial of the experiment, a neural network $\hat{f}$ with 100 hidden units and an output layer with one unit is trained by sampling 500 pairs $P=\{(x_{i},y_{i}):i\in\mathcal{I}\}$, shown as black points in Fig. 3. Since this experiment is synthetic, there is no need to split the training set. Instead, $\hat{f}$ is evaluated on a grid $G=\{x_{j}:j\in\mathcal{T}\}$ disjoint from $P$, partitioning $[-2,2]$ into 500 intervals of equal length. PIM is trained on $G$ by sampling $|\mathcal{J}|=1000$ values of $y(x_{j})$ for each $x_{j}$ in $G$, each neuron $u_{j}$ learning quantiles from $\{\varepsilon_{j,k}=y_{k}(x_{j})-\hat{f}(x_{j}):k\in\mathcal{J}\}$. (For skewed distributions, two neurons independently learn $\hat{r}_{p}^{L}$ and $\hat{r}_{p}^{U}$ from $\varepsilon\leq 0$ and $\varepsilon>0$ respectively; the prediction intervals are then estimated as $[\hat{f}-\hat{r}_{p}^{L},\hat{f}+\hat{r}_{p}^{U}]$.)
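For the Gaussian case, the data-generating process and the exact quantile bands can be sketched as follows (an illustrative snippet using the standard-normal quantile function from the Python standard library, not the actual experiment code behind Fig. 3):

```python
import numpy as np
from statistics import NormalDist

# y(x) = 0.3 sin(x) + eps_obs(x), with heteroskedastic scale sigma(x) = 0.2 x^2.
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=500)
sigma = 0.2 * x ** 2
y = 0.3 * np.sin(x) + sigma * rng.standard_normal(500)

# Exact conditional quantiles mu_p(x) = 0.3 sin(x) + sigma(x) * z_p, where
# z_p is the quantile function of the standardized (here standard normal) error.
p = 0.95
z_lo = NormalDist().inv_cdf((1 - p) / 2)
z_hi = NormalDist().inv_cdf((1 + p) / 2)
lower = 0.3 * np.sin(x) + sigma * z_lo
upper = 0.3 * np.sin(x) + sigma * z_hi

coverage = np.mean((y >= lower) & (y <= upper))  # close to p by construction
```

The Beta case follows the same pattern, with the standardized Beta quantile function in place of `inv_cdf`.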
QD and SQR both train a neural network with 100 hidden units and an output layer with two units $(\hat{f}_{L},\hat{f}_{U})$. The model has the same hyperparameters as $\hat{f}$. However, for a fair comparison, the training set of $(\hat{f}_{L},\hat{f}_{U})$ is $P$ augmented with $|\mathcal{J}|$ more pairs disjoint from $G$. The results for a trial are shown in Fig. 3. The experiment is repeated for 10 trials. For each of them, the time taken to train + evaluate the models $(\hat{f}_{L},\hat{f}_{U})$ — and to train and evaluate $\hat{f}$ + train PIM — is measured and normalized by the total duration of the 10 experiments. This, together with the RMSE between estimations and ideal values, is shown in Table 1 for the case of normally distributed noise. As observed, the quantile estimation using PIM has lower time and parameter complexity and is thus more computationally efficient. In terms of accuracy, $\hat{f}+\textrm{PIM}$ ranks in between SQR and QD, despite the fact that $\hat{f}$ is trained with less data and has fewer parameters than $(\hat{f}_{L},\hat{f}_{U})$. Real-world experiment. The aim here is to demonstrate that PIM may be used in realistic contexts where varying prediction intervals are needed, especially when the uncertainty of interest is related to relative rather than absolute positions in feature space. As an illustration, the uncertainty in the prediction of the stock price of General Electric is considered. An LSTM model $\hat{f}$ learns to map features of the last $T=10$ observations to the next $h=|\mathcal{T}|=30$ target close prices. This is done in a training set with the first 9840 samples of daily data from 1962 to 2001. The trained model $\hat{f}$ is evaluated on a validation set consisting of the next 4218 samples, where PIM learns prediction intervals corresponding to $h$ consecutive predictions, using neurons $u_{j}$ for $j\in\mathcal{T}$. These are placed around the predictions on the held-out test set shown in Figure 3.
These are compared with a popular baseline (Hyndman & Athanasopoulos, 2018), consisting of bounds $\pm\,z_{p}\,\hat{\sigma}_{h}$ derived by assuming that the errors are normally distributed, with $z_{p}$ being the z-score. If the forecasts of all $h$ future prices in the test set are assumed to coincide with the average of the past $T$ observations (which is roughly the case in Fig. 3), then it can be shown that $\hat{\sigma}_{h}=\hat{\sigma}\sqrt{1+1/T}$, where $\hat{\sigma}$ is the standard deviation of the $h$ error samples in the test set. Apart from PIM having better coverage than the baseline and narrower prediction interval widths at the beginning of the test sequence (hence better quality), this example shows how PIM captures the epistemic uncertainty resulting from $\hat{f}$ knowing that predictions for tomorrow should be close (by continuity of $f$) to observations today, giving rise to the cone-shaped uncertainty region. This information is cheap to obtain: while the inference time of the LSTM is about $3.9$ sec, PIM only takes about $0.4$ sec to obtain the prediction intervals from the validation set. 3 Would you use a single neuron to estimate uncertainty? It has been shown that training a model $\hat{f}$ that learns the mean — hopefully equal or close to the median — of the target distribution and using PIM to estimate its quantiles is more efficient than, and similar in accuracy to, having a bigger model $(\hat{f}_{L},\hat{f}_{U})$ learn the boundaries of the prediction intervals directly. Also, the quantile estimation for small sample sizes can be more accurate using PIM than ranking the order statistics. With all these benefits, would you use it in your applications? The answer to this depends on the dataset.
Since the training set has to be split into a proper training set $\{(x_{i},y_{i}):i\in\mathcal{I}_{1}\}$ to fit $\hat{f}$ and a validation set $\{(x_{i},y_{i}):i\in\mathcal{I}_{2}\}$ to train PIM, the resulting size $|\mathcal{I}_{2}|$ of the validation set might not be enough for PIM to get accurate results. Also, the amount of heteroskedasticity in the dataset may invalidate using a single neuron learning $\hat{r}_{p}$. The effect of these two factors is investigated next for real-world datasets, with the results from QD as a baseline. That is, we follow the experimental protocol established by Hernández-Lobato & Adams (2015) for the popular UCI regression benchmark. This assigns $90\%$ of the data (from 10 different datasets) for training uncertainty estimation models and $10\%$ for testing them, in an ensemble of mostly 20 random shuffles of the train-test partition. To apply PIM, $80\%$ of the resampled training set of each dataset is used to train the nominal neural network $\hat{f}$, which is evaluated on the remaining $20\%$, where PIM is trained from the corresponding prediction errors. Whichever of [$\hat{f}-\hat{r}_{p},\hat{f}+\hat{r}_{p}$] and [$\hat{f}-\hat{r}_{p}^{L},\hat{f}+\hat{r}_{p}^{U}$] better addresses coverage and quality in the test sets is chosen. Therefore, the test MPIW of the QD method is compared to either $2\hat{r}_{p}$ or $\hat{r}_{p}^{L}+\hat{r}_{p}^{U}$, depending on which is smaller and whose PICP (which is nothing but the $F_{m}$ of section 2.1) is closer to the nominal $p=0.95$. As in the synthetic experiments of the previous section, note that the QD model, besides having more outputs $\hat{f}_{L}$ and $\hat{f}_{U}$ (hence more weights), is trained on more data than $\hat{f}$. A measure of heteroskedasticity of the datasets is needed in order to better understand the resulting estimations. For this, a White test (White, 1980) is done in every fold used to train PIM.
This looks for linear dependency of the variance $\mathbb{E}(\xi^{2})$ of residuals $\xi$ (from a linear regression of $y$ on $x$) on all features in $x$ and their interactions. The proportion of significant tests in the ensemble, according to the p-value of the F-statistic, is denoted by $P_{\textrm{SIG}}$ and reported as a percentage. This gives the percentage of times that the null hypothesis of homoskedastic residuals is rejected, giving a sense of residual variability among the validation folds of the ensemble but not of how “strong” that variability is within a fold. To quantify the degree of variability of the residuals $\xi_{i}$ used for the White tests, the normalized power spectral entropy (PSE) of such residuals is proposed: $$\textrm{PSE}=-\dfrac{1}{\log|\mathcal{I}_{2}|}\sum_{i\in\mathcal{I}_{2}}p_{i}\log p_{i},$$ (3) where $p_{i}=|\xi_{i}|^{2}/\sum_{i}|\xi_{i}|^{2}$ normalizes the square amplitude of the $i$-th spectral component of $\xi$ (found by a fast Fourier transform). The intuition is that patterns in $\xi$ have a low entropy $\textrm{PSE}\rightarrow 0$ whereas homoskedastic-like residuals (e.g. white noise) have a high entropy $\textrm{PSE}\rightarrow 1$. The results of the comparison with QD are shown in Table 2, where -ENS is appended to the acronyms of the methods to mean that the results are averages over the ensemble. The first observation is that, despite all datasets being heteroskedastic, the uncertainty estimations made by PIM in the validation sets generalize well to the test sets (better than or similar to QD in the shaded cases). The cases where PIM fails to converge to the desired PICP are those with significant variability among the validation folds (low $P_{\textrm{SIG}}$) or an appreciable presence of patterns in the errors (low PSE), as expected. Data size is also important, since Kin8nm is comparable to Concrete in terms of $P_{\textrm{SIG}}$ and PSE, but the former is more than 8 times bigger than the latter.
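The PSE of Eq. (3) can be computed directly from the FFT of the residuals. A minimal sketch, with white noise and a pure sinusoid as stand-ins for pattern-free and patterned residuals respectively:

```python
import numpy as np

def spectral_entropy(residuals):
    """Normalized power spectral entropy (PSE) of a residual sequence, Eq. (3)."""
    residuals = np.asarray(residuals, dtype=float)
    power = np.abs(np.fft.fft(residuals)) ** 2
    p = power / power.sum()          # normalized square amplitudes p_i
    p = p[p > 0]                     # convention: 0 log 0 = 0
    return float(-(p * np.log(p)).sum() / np.log(residuals.size))

rng = np.random.default_rng(0)
n = 1024
pse_noise = spectral_entropy(rng.standard_normal(n))                    # near 1
pse_tone = spectral_entropy(np.sin(2 * np.pi * 8 * np.arange(n) / n))   # near 0
```

White noise spreads power over all frequency bins (high entropy), while a sinusoidal pattern concentrates it in two bins (low entropy), matching the intuition stated above.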
This data-size dependence is evident from the table, where PIM has good performance mostly for the lower-half datasets. For the biggest dataset, PIM does not excel, presumably due to the better-calibrated predictions of QD’s $(\hat{f}_{L},\hat{f}_{U})$ over the $\hat{f}$ feeding PIM — expected from the flexibility of the former in terms of more network weights and more data to train them. The second observation is that, in the successful cases, the test PICP achieved by PIM coincides with the intended confidence level $p=0.95$. As seen, this is not necessarily the case for QD, since it targets $\textrm{PICP}\geq p$. However, for a continuous target variable $Y$ — as in many regression problems of practical interest — $\textrm{PICP}=p$ must be (asymptotically) satisfied at the $p$-quantile. Restricting $\textrm{PICP}\geq p$ in these cases may lead to the same quantile estimation describing different confidence levels; the quantile-crossing phenomenon (Takeuchi et al., 2006) that should be avoided. The results above show that using PIM with a single neuron may give accurate estimates of high-quality prediction intervals even for heteroskedastic datasets with weak serial correlations (i.e. patterns) of the prediction error samples. Therefore, in the exploratory data analysis of a given application, heteroskedasticity tests on the dataset may reveal whether or not to leverage the efficiency of using a single neuron for uncertainty estimation. 4 Related work The split conformal prediction literature (Lei et al., 2018) uses a setting similar to PIM’s, estimating quantiles by ranking the order statistics of prediction errors from a given $\hat{f}$. It gives finite-sample coverage guarantees by assuming exchangeability of the training samples, which may not hold, for instance, for the non-stationary stochastic processes typically found in real-world time series. Recently, Romano et al.
(2019) have extended this framework to produce varying $\hat{r}_{p}(x)$ by combining the split conformal prediction formalism with quantile regression from $(\hat{f}_{L},\hat{f}_{U})$. As shown in this work, PIM has the advantage of being more flexible for high-quality quantile estimation from small sample sizes than ranking the order statistics. This may be important in those cases for which the size of the validation set is a small fraction of the training set. Recent research on uncertainty estimation focuses on studying the different sources of uncertainty. Prediction intervals capture the aleatoric uncertainty associated with the noisy data observation process (Pearce et al., 2018; Tagasovska & Lopez-Paz, 2019; Salem et al., 2020). Epistemic uncertainty involves model specification and data distributional changes; the latter maintains recent interest (Malinin & Gales, 2018; Hafner et al., 2019; Tagasovska & Lopez-Paz, 2019; Li & Hoiem, 2020; Zhe Liu et al., 2020; Charpentier et al., 2020; Postels et al., 2020), while the former is less studied. Examples in deep learning of uncertainty due to model specification include network-depth uncertainty (Antorán et al., 2020) and uncertainty over the number of nodes in model selection (Ghosh et al., 2019). The incompleteness in problem formalization behind machine learning models — intimately connected to model specification — leads to a need for their interpretability (Doshi-Velez & Kim, 2017). This need became more urgent in 2016, when the European Parliament published the General Data Protection Regulation, demanding (among other clauses) that, by May 2018, all algorithms used for decision making that significantly affects individuals provide “meaningful explanations of the logic involved” (the right of people to an explanation). Consequently, techniques to explain AI models started to permeate the literature. Ribeiro et al.
(2016) made a case for model-agnostic interpretability of machine learning, while Rudin (2019) argued in favor of designing predictive models that are themselves interpretable. Different reviews arose (Gilpin et al., 2018; Guidotti et al., 2018) to clarify concepts and classify the increasing body of related research. The current understanding (Barredo Arrieta et al., 2020) is that a model is interpretable if, by itself, it is understandable (e.g. linear/logistic regression, decision trees, $K$-nearest neighbors, rule-based learners, Bayesian models). If not, it needs post-hoc explainability (e.g. tree ensembles; SVMs; multi-layer, convolutional and recurrent neural networks). Post-hoc explainability is done by feature relevance analysis or visualization techniques, but most often by a second simplified, and hence interpretable, model which mimics its antecedent. Our approach to uncertainty estimation goes along lines similar to the latter: a predictive model $\hat{f}$ is considered as a black box and a second system (at least a single, completely transparent, neuron) estimates how uncertain the black box is. Although using a model to learn from a black box has been explored in a context related to uncertainty, e.g. calibration of neural networks using Platt scaling (Kuleshov et al., 2018), it was not until recently that such a method was directly used to upgrade any black-box predictive API with an uncertainty score (Brando et al., 2020). However, their wrapper is based on deep neural networks and hence is not interpretable. PIM, in contrast, is non-parametric and its single-neuron estimators are globally interpretable. 5 Conclusion In this work, a non-parametric method to estimate the predictive uncertainty of a pre-trained model is introduced. The method is competitive with state-of-the-art solutions in quality, with the additional benefit of giving uncertainty estimates more efficiently (i.e. no need to add extra layers or outputs to a predictive model).
Although the method does not predict the uncertainty of new data samples based on their feature values — perhaps making it less attractive, as it deviates from the established machine learning paradigm — it does give a sense of safer uncertainty estimation, precisely because it does not inherit the learning biases of the predictive models. This makes it suitable rather for explaining how uncertain a given model (treated as a black box) is, and hence serving as a reliable guide to decision-making. Extensions of the method to estimate uncertainty in the classification setting are given in the supplementary material. References Amodei et al. (2016) Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete Problems in AI Safety. arXiv e-prints, art. arXiv:1606.06565, Jun 2016. Antorán et al. (2020) Antorán, J., Urquhart Allingham, J., and Hernández-Lobato, J. M. Depth uncertainty in neural networks. In 34th Conference on Neural Information Processing Systems, 2020. Barredo Arrieta et al. (2020) Barredo Arrieta, A. et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion, 58:82–115, 2020. Brando et al. (2020) Brando, A., Torres, D., Rodríguez-Serrano, J. A., and Vitrià, J. Building uncertainty models on top of black-box predictive apis. IEEE Access, 8:121344–121356, 2020. Charpentier et al. (2020) Charpentier, B., Zügner, D., and Günnemann, S. Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts. In 34th Conference on Neural Information Processing Systems, 2020. Chudý M. et al. (2020) Chudý M., Karmakar S., and Wu W. B. Long-term prediction intervals of economic time series. Empirical Economics, 58(1):191–222, 2020. Cosmides & Tooby (1996) Cosmides, L. and Tooby, J. Are humans good intuitive statisticians after all? rethinking some conclusions from the literature on judgement under uncertainty.
Cognition, 58(1):1–73, 1996. D’Amour et al. (2020) D’Amour, A. et al. Underspecification Presents Challenges for Credibility in Modern Machine Learning. arXiv e-prints, art. arXiv:2011.03395, November 2020. Doshi-Velez & Kim (2017) Doshi-Velez, F. and Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv e-prints, pp.  arXiv:1702.08608, 2017. Galván et al. (2017) Galván, I. M. et al. Multi-objective evolutionary optimization of prediction intervals for solar energy forecasting with neural networks. Information Sciences, 418-419:363–382, 2017. Ghosh et al. (2019) Ghosh, S., Yao, J., and Doshi-Velez, F. Model selection in bayesian neural networks via horseshoe priors. Journal of Machine Learning Research, 20(182):1–46, 2019. Gilpin et al. (2018) Gilpin, L. H. et al. Explaining explanations: An overview of interpretability of machine learning. In IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pp.  80–89, 2018. Guidotti et al. (2018) Guidotti, R. et al. A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 2018. Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On Calibration of Modern Neural Networks. arXiv e-prints, art. arXiv:1706.04599, Jun 2017. Hafner et al. (2019) Hafner, D. et al. Reliable uncertainty estimates in deep neural networks using noise contrastive priors. In Uncertainty in Artificial Intelligence (UAI), 2019. Hendrycks & Dietterich (2019) Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. In Proceedings of the International Conference on Learning Representations, 2019. Hernández-Lobato & Adams (2015) Hernández-Lobato, J. M. and Adams, R. P. Probabilistic backpropagation for scalable learning of bayesian neural networks. In Proceedings of the 32nd International Conference on Machine Learning, 2015. Huang & Hsu (2020) Huang, S.-F. and Hsu, H.-L. 
Prediction intervals for time series and their applications to portfolio selection. REVSTAT – Statistical Journal, 18(1):131–151, 2020. Hyndman & Athanasopoulos (2018) Hyndman, R. and Athanasopoulos, G. Forecasting: principles and practice. OTexts, 2018. ISBN 9780987507112. Hyndman & Fan (1996) Hyndman, R. J. and Fan, Y. Sample quantiles in statistical packages. American Statistician, 50:361, 1996. IntHout et al. (2016) IntHout, J. et al. Plea for routinely presenting prediction intervals in meta-analysis. BMJ Open, 6(7), 2016. Jaynes (1957) Jaynes, E. T. Information theory and statistical mechanics. Physical Review, 106:620–630, 1957. Josang et al. (2018) Josang, A., Cho, J., and Chen, F. Uncertainty characteristics of subjective opinions. In 21st International Conference on Information Fusion, pp. 1998–2005, 2018. Juanchich & Miroslav (2020) Juanchich, M. and Miroslav, S. Do people really prefer verbal probabilities? Psychological Research, 84(8):2325–2338, 2020. Khosravi et al. (2011a) Khosravi, A. et al. Comprehensive review of neural network-based prediction intervals and new advances. IEEE transactions on neural networks, 22:1341–56, 2011a. Khosravi et al. (2011b) Khosravi, A. et al. Lower upper bound estimation method for construction of neural network-based prediction intervals. IEEE Transactions on Neural Networks, 22:337–346, 2011b. Krishnan & Tickoo (2020) Krishnan, R. and Tickoo, O. Improving model calibration with accuracy versus uncertainty optimization. In Proceedings of the 34th Conference on Neural Information Processing Systems, 2020. Kuleshov et al. (2018) Kuleshov, V., Fenner, N., and Ermon, S. Accurate uncertainties for deep learning using calibrated regression. In Proceedings of the 35th International Conference on Machine Learning, 2018. Kumar et al. (2019) Kumar, A., Liang, P. S., and Ma, T. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, 2019. Küppers et al. (2020) Küppers, F. et al.
Multivariate confidence calibration for object detection. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2020. Lei et al. (2018) Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., and Wasserman, L. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111, 2018. Li & Hoiem (2020) Li, Z. and Hoiem, D. Improving confidence estimates for unfamiliar examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. Lin’kov (2005) Lin’kov, Y. N. Lectures in Mathematical Statistics: Parts 1 and 2. American Mathematical Society, 2005. ISBN 978-0821837320. Makridakis et al. (2020) Makridakis, S., Spiliotis, E., and Assimakopoulos, V. The m4 competition: 100,000 time series and 61 forecasting methods. International Journal of Forecasting, 36(1):54–74, 2020. Malinin & Gales (2018) Malinin, A. and Gales, M. Predictive uncertainty estimation via prior networks. In Proceedings of the 32nd Conference on Neural Information Processing Systems, 2018. Ovadia et al. (2019) Ovadia, Y. et al. Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems 32, pp. 13991–14002. 2019. Pearce et al. (2018) Pearce, T. et al. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In Proceedings of the 35th International Conference on Machine Learning, 2018. Petropoulos et al. (2018) Petropoulos, F., Hyndman, R. B., and Bergmeir, C. Exploring the sources of uncertainty: Why does bagging for time series forecasting work? European Journal of Operational Research, 268:545–554, 2018. Postels et al. (2020) Postels, J. et al. Quantifying Aleatoric and Epistemic Uncertainty Using Density Estimation in Latent Space. arXiv e-prints, art. arXiv:2012.03082, December 2020. Quan et al. (2014) Quan, H., Dipti, S., and Khosravi, A. 
Uncertainty handling using neural network-based prediction intervals for electrical load forecasting. Energy, 73:916–925, 2014. Ribeiro et al. (2016) Ribeiro, M., Singh, S., and Guestrin, C. Model-Agnostic Interpretability of Machine Learning. In ICML Workshop on Human Interpretability in Machine Learning, 2016. Romano et al. (2019) Romano, Y., Patterson, E., and Candes, E. Conformalized quantile regression. In Advances in Neural Information Processing Systems, volume 32, 2019. Rudin (2019) Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206–215, 2019. Salem et al. (2020) Salem, T. S., Langseth, H., and Ramampiaro, H. Prediction intervals: Split normal mixture from quality-driven deep ensembles. In Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence, 2020. Sensoy et al. (2018) Sensoy, M., Kaplan, L., and Kandemir, M. Evidential deep learning to quantify classification uncertainty. In Advances in Neural Information Processing Systems, pp. 3179–3189, 2018. Shi et al. (2020) Shi, W. et al. Multifaceted uncertainty estimation for label-efficient deep learning. In Proceedings of the 34th Conference on Neural Information Processing Systems, 2020. Sun et al. (2017) Sun, X., Wang, Z., and Hu, J. Prediction Interval Construction for Byproduct Gas Flow Forecasting Using Optimized Twin Extreme Learning Machine. Mathematical Problems in Engineering, 2017:5120704, 2017. Tagasovska & Lopez-Paz (2019) Tagasovska, N. and Lopez-Paz, D. Single-model uncertainties for deep learning. In Advances in Neural Information Processing Systems 32, pp. 6414–6425. Curran Associates, Inc., 2019. Takeuchi et al. (2006) Takeuchi, I. et al. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231, 2006. Varshney & Alemzadeh (2016) Varshney, K. R. and Alemzadeh, H. 
On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products. arXiv e-prints, art. arXiv:1610.01256, October 2016. Wang et al. (2017) Wang, J. et al. Wind power interval prediction based on improved pso and bp neural network. Journal of Electrical Engineering and Technology, 12(3):989–995, 2017. White (1980) White, H. A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48:817–838, 1980. Zhang (2019) Zhang, J. Estimating confidence intervals on accuracy in classification in machine learning. Master’s thesis, University of Alaska, 2019. Zhe Liu et al. (2020) Zhe Liu, J. et al. Simple and principled uncertainty estimation with deterministic deep learning via distance awareness. In Proceedings of the 34th Conference on Neural Information Processing Systems, 2020. Supplementary information Theoretical grounding for key aspects of the paper is given here, together with extra details about the numerical experiments. (The source code can be found at https://github.com/sola-ed/pim-uncertainty.) Moreover, PIM is applied to binary classification in order to give more support to the claim that it can be more accurate than ranking the order statistics for small sample sizes. This is done, on real-world datasets, by comparing confidence intervals for the accuracy of classification, as estimated by PIM, with the corresponding estimations using the Bootstrap method. The material is organized as follows: section S.1 summarizes the main theoretical assumptions behind PIM, making it an asymptotically consistent estimator. A heuristic for identification of finite-sample convergence is discussed in section S.1.1. Details about the regression experiments using the UCI datasets are given in section S.2. Finally, section S.3 applies PIM in the classification context.
It starts in section S.3.1 with the problem formulation and proceeds with the experimental results comparing PIM with the Bootstrap method. A proof of the main theoretical result of the section is given in S.3.2, and details of the experiments are found in section S.3.4. S.1 PIM as a consistent estimator The main result is summarized in Theorem 1 below. In order to prove it, we proceed in steps by first showing that, for the distribution functions of interest, the $p$-th quantile is unique, given the confidence level $p$. This uniqueness guarantees that the loss function of PIM has asymptotically only one minimum, and then gradient descent will converge to it, given small enough learning rates. The uniqueness is proved in the following lemma: Lemma 1. Let $F(\varepsilon)=\Pr(E\leq\varepsilon)$ be a strictly increasing and continuous distribution function and $p\in(0,1)$. Then, the $p$-quantile $r_{p}$ is unique. Proof. A $p$-quantile of $F$ is a number $r_{p}$ satisfying $F(r_{p})\leq p$ and $F(r_{p}+\epsilon)\geq p$, for $\epsilon\rightarrow 0^{+}$ Lin’kov (2005). Since $F$ is continuous, Bolzano’s theorem states that there is at least one point $x$ in the interval $[r_{p},r_{p}+\epsilon]$ where $F(x)-p=0$. That there is only one such point clearly follows from $F$ being strictly increasing. Therefore, as $\epsilon\rightarrow 0^{+}$, $r_{p}$ becomes the unique value where $F(r_{p})=p$. ∎ Lemma 2. Let $\{\varepsilon_{i}\}$, with $i=1,2,\cdots,m$, be a sequence of independent draws of the random variable $E$, according to the distribution $F(\varepsilon)=\Pr(E\leq\varepsilon)$. With $\mathbbm{1}$ being the indicator function, define $F_{m}(r_{p})=\tfrac{1}{m}\sum_{i=1}^{m}\mathbbm{1}(\varepsilon_{i}\leq r_{p})$. Then, for all $r_{p}$ and with probability one, $F_{m}(r_{p})$ converges to $F(r_{p})$ in the limit $m\rightarrow\infty$. Proof. This is Borel’s law of large numbers. ∎ Theorem 1.
Let $F(\varepsilon)=\Pr(E\leq\varepsilon)=\int_{-\infty}^{\varepsilon}\rho(x)dx$ be a strictly increasing and continuous error distribution function associated with the random variable $E$, with $\rho$ being the corresponding probability density function. If $m$ samples are independently drawn from it, then PIM (with $\beta\rightarrow\infty$), evaluated on these samples, converges to the unique value $r_{p}$ for which $F(r_{p})=p$, when $m\rightarrow\infty$. Proof. In the limit $\beta\rightarrow\infty$, the $F_{m}(r_{p})$ in PIM coincides with the $F_{m}(r_{p})$ in Lemma 2. Using this lemma, the loss function in PIM is asymptotically $\mathcal{L}_{p}(\varepsilon)=(F(\varepsilon)-p)^{2}$. Its gradient is $\nabla_{\varepsilon}\mathcal{L}_{p}(\varepsilon)=2[F(\varepsilon)-p]\,\rho(\varepsilon)$. Since $\varepsilon$ is in the support of $E$, $\rho(\varepsilon)\neq 0$, so gradient descent, with a small enough learning rate, leads PIM to converge to the value of $\varepsilon$ for which $F(\varepsilon)-p=0$. By Lemma 1, there is only one such value, namely the $p$-quantile $r_{p}$. ∎ S.1.1 Convergence for finite validation sets It is observed in the numerical experiments that the loss in PIM smoothly decreases and saturates around a small value. By using early stopping during optimization, the optimal value $\hat{r}_{p}$ is taken as the point where this saturation takes place. It is argued in this section why such a heuristic approach makes sense. For this, it is convenient to think of the current value $w_{p}$ of the weight of the single neuron as following a trajectory parameterized by the epochs. In practice, $w_{p}$ is updated when the optimizer processes a batch and, at the end of a training epoch, all batches have been processed. The training epochs can then be thought of as values attained by a continuous variable $t$, which changes as $w_{p}$ goes from its initial value, along a smooth trajectory $w_{p}(t)$, to the optimal value $\hat{r}_{p}$.
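As a numerical illustration of the mechanism behind Theorem 1, the following minimal sketch (not the paper's implementation) trains a single scalar weight by gradient descent on the loss $(F_m(r)-p)^2$, with the indicator smoothed by a sigmoid of sharpness $\beta$. The distribution, sample size, and hyperparameter values are illustrative choices; for $U(0,1)$ samples, the $p$-quantile is $p$ itself, so the result is easy to check.

```python
import numpy as np

def pim_quantile(samples, p, beta=50.0, lr=0.1, epochs=500):
    """Estimate the p-quantile of `samples` by gradient descent on the
    PIM-style loss (F_m(r) - p)^2, where the indicator 1(eps_i <= r)
    is smoothed by a sigmoid of sharpness beta."""
    r = float(np.mean(samples))  # initial value of the single weight
    for _ in range(epochs):
        # smoothed indicators and smoothed empirical CDF F_m(r)
        s = 1.0 / (1.0 + np.exp(-beta * (r - samples)))
        F_m = s.mean()
        # d/dr (F_m - p)^2 = 2 (F_m - p) * mean(beta * s * (1 - s))
        grad = 2.0 * (F_m - p) * np.mean(beta * s * (1.0 - s))
        r -= lr * grad
    return r

rng = np.random.default_rng(0)
eps = rng.uniform(0.0, 1.0, size=20_000)  # for U(0,1), the p-quantile is p
r_hat = pim_quantile(eps, p=0.9)
```

Since $F$ is strictly increasing here, the loss has a single minimum and the iterate contracts towards it, in line with Lemma 1 and Theorem 1.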
Without loss of generality, it is supposed that these trajectories have no turning points, i.e. they monotonically increase or decrease from the initial value $w_{p}(0)$ towards $\hat{r}_{p}$. Furthermore, the rate at which this happens is bounded: $$|\nabla_{t}w_{p}(t)|\leq c_{p},\hskip 14.22636pt\textrm{with}\;\;0<c_{p}<\infty.$$ (4) From the proof of Theorem 1 for infinite sample size, $\nabla_{\varepsilon}\mathcal{L}_{p}(\varepsilon)=2[F(\varepsilon)-p]\,\rho(\varepsilon)$, so from (4), $$|\nabla_{t}\mathcal{L}_{p}(t)|\leq 2c_{p}|F(w_{p}(t))-p|\,\rho(w_{p}(t)).$$ (5) A hypothetical algorithm, running with an infinite validation set, will start at $t=0$, from $w_{p}(0)$, with successive updates generated (assuming a plain SGD optimizer) as $$w_{p}(t+dt)=w_{p}(t)-\eta\nabla_{t}\mathcal{L}_{p}(t),$$ (6) where $\eta$ is the learning rate and $dt=B/m$, with $B$ and $m$ being the batch and validation set sizes, respectively. The sizes $B$ and $m$ can be selected so that $dt$ is fixed, and arbitrarily small, when $B\rightarrow\infty$ and $m\rightarrow\infty$. The convergence of PIM to the optimal value $\hat{r}_{p}$ can be considered, in practical terms, as related to the saturation of the loss function $\mathcal{L}_{p}(t)$. Given a small enough tolerance $\sigma_{p}$, the algorithm is said to converge to $\hat{r}_{p}$ at epoch $t_{*}$ if $|\nabla_{t}\mathcal{L}_{p}(t_{*})|\leq\sigma_{p}$, at which point the loss has saturated. If PIM is stopped at $t_{*}$, the error committed in estimating the $p$-quantile of $F$ is, up to first order in $\sigma_{p}$, $$|r_{p}-\hat{r}_{p}|=\dfrac{\sigma_{p}}{2c_{p}[\rho(r_{p})]^{2}},$$ (7) which is obtained by evaluating (5) at $t_{*}$, writing $\hat{r}_{p}=w_{p}(t_{*})$, expanding $F$ and $\rho$ around $r_{p}$, and using $|\nabla_{t}\mathcal{L}_{p}(t_{*})|\leq\sigma_{p}$. Finite validation sets. In this case, the trajectories are no longer generated by (6). Here, $dt$ is not arbitrarily small, i.e.
$\min\{dt\}=1/m$, which happens when a batch contains only one data sample (in practice, though, the batch size was taken equal to the sample size in order to exploit the asymptotic properties behind PIM). The trajectories are still considered smooth and with bounded speed, but now the values of $w_{p}$ updated by PIM are more sparse. These trajectories are generated by the loss $\mathcal{L}_{p}^{m}(t)=[F_{m}(t)-p]^{2}$, i.e. $$w_{p}(t+dt)=w_{p}(t)-\eta\nabla_{t}\mathcal{L}_{p}^{m}(t).$$ Assuming the same constants $c_{p}$ serve as upper bounds to all the possible speeds, $$|\nabla_{t}\mathcal{L}_{p}^{m}(t)|\leq 2c_{p}|F_{m}(w_{p}(t))-p|\,|F_{m}^{\prime}(w_{p}(t))|.$$ (8) Saturation of the loss is understood as making (8) as small as possible. Clearly, since the gradient $F_{m}^{\prime}(w_{p}(t))$ is bounded and does not vanish, this saturation happens at the epoch $t_{*}$ of closest approach between $F_{m}(w_{p}(t))$ and $p$, that is, $$t_{*}=\operatorname*{arg\,min}_{t\in[0,\infty)}|F_{m}(w_{p}(t))-p|.$$ Again, denoting $\hat{r}_{p}=w_{p}(t_{*})$, and using the triangle inequality, $$|F_{m}(\hat{r}_{p})-p|\leq|F_{m}(\hat{r}_{p})-F(\hat{r}_{p})|+|F(\hat{r}_{p})-p|,$$ the right-hand side approaches zero, by Lemma 2 and Theorem 1, as more data are considered in the validation set. This explains why early stopping was used throughout the numerical experiments, by automatically detecting $t_{*}$ and retrieving the corresponding $\hat{r}_{p}$. S.2 Details of models on UCI regression datasets The baseline model has one relu-activated hidden layer with 50 units, except for Protein and Song Year, which have 100 units. The ensembles are 20 repetitions of the experiments, except for Protein and Song Year, for which only 5 and 1 repetitions are considered, respectively. Hyperparameter optimization is done using the Hyperband tuner in Keras. For this, two protocols were tried and the best of the two, for each dataset, is reported: 1.
Optimization of learning rate, decay rate, weight-initialization variance, and dropout rate (using the Adam optimizer). 2. Optimization of weight-initialization variance, weight decay, initial learning rate, and the decay rate of its subsequent exponential decay in a learning rate schedule (using the AdamW optimizer). In both protocols, the mean square error loss is used. However, in the second protocol, the CWC value (Khosravi et al., 2011a) is added to the metric used in the validation set for model selection in the hyperparameter optimization process. This value is calculated as $\textrm{CWC}=\textrm{NMPIW}(1+\gamma e^{-\eta(\textrm{PICP}-p)})$, where NMPIW is the MPIW normalized to the range of the target variable, $\gamma=\mathbbm{1}(\textrm{PICP}<p)$ and $\eta$ is a constant taken as 0.1. The PICP and MPIW are calculated by PIM. S.3 PIM for classification In classification problems, the target $Y$ is a discrete random variable, but the problem can be framed so that the prediction error $E|X$ is still a continuous random variable accessible to PIM. The aim of this section is twofold: • Provide additional evidence that PIM can be more accurate for small sample sizes than ranking the order statistics, by using real-world datasets for binary classification. • Demonstrate that using PIM to estimate confidence intervals for the accuracy of a classifier is more efficient than standard computations based on the Bootstrap method. To better illustrate the problem, consider a sample $\{\hat{f}(x_{i})\in[0,1]:i\in\mathcal{I}_{2}\}$ of predictions from a binary classifier. Denoting by $[\![y]\!]$ the rounding operation, the accuracy of the classifier is $$\textrm{ACC}=\dfrac{1}{|\mathcal{I}_{2}|}\sum_{i\in\mathcal{I}_{2}}\mathbbm{1}([\![\hat{f}(x_{i})]\!]=y_{i}).$$ (9) How do we estimate a confidence interval for the accuracy?
The simplest way is by using the normal approximation to the binomial result: $$\delta_{p}(\textrm{ACC})_{\mathcal{N}}=z_{p}\sqrt{\hat{\mu}_{\textrm{ACC}}(1-\hat{\mu}_{\textrm{ACC}})\,/\,|\mathcal{I}_{2}|},$$ (10) where $z_{p}$ is the z-score and $\hat{\mu}_{\textrm{ACC}}$ is an estimation of the mean $\mu_{\textrm{ACC}}$ of the distribution of accuracies. Clearly, using ACC in (9) as a substitute for $\hat{\mu}_{\textrm{ACC}}$ is a rough approximation; that is the standard way of getting confidence intervals for accuracy from one sample of predictions. The standard estimation can be improved by resampling the proper training and validation sets, $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$, respectively. That is, by the de Moivre-Laplace central limit theorem, all the so-obtained values of ACC are asymptotically normally distributed around $\mu_{\textrm{ACC}}$, so their mean $\hat{\mu}_{\textrm{ACC}}$ is an unbiased estimator of $\mu_{\textrm{ACC}}$. When the resampling is done with replacement (a.k.a. the Bootstrap method), in order to allow for enough data, confidence intervals can be estimated by ranking the order statistics of all the ACC values, instead of using (10). One of the main observations in this section is that, provided that $\hat{f}$ is well calibrated, PIM can estimate confidence intervals $\delta_{p}(\textrm{ACC})$ for the accuracy of a classifier which are good estimates even when using a single sample of predictions. This is formalized and proved in section S.3.2. Since it does not need to resample the validation set, this makes PIM more efficient than the Bootstrap method. Experiments comparing PIM with the two methods above are described next. S.3.1 Benchmarking experiments For binary classification, the target $Y$ is either 0 (negative class) or 1 (positive class). Predictive models capture this by making predictions $\hat{f}(x)\in[0,1]$.
According to the chosen threshold $\tau$ (here $\tau=1/2$), these are positive predictions if $\hat{f}(x)>\tau$; otherwise they are negative. Furthermore, by comparing with the corresponding ground truth, each prediction may be categorized as true negative, true positive, false negative, or false positive; denoted, respectively, by the index $l\in\{\textrm{TN, TP, FN, FP}\}$. Relevant metrics of model performance are derived from the classification rates $R_{l}$. For instance, the accuracy can be written as $\textrm{ACC}=p_{\textrm{N}}R_{\textrm{TN}}+p_{\textrm{P}}R_{\textrm{TP}}$, where $p_{\textrm{N}}$ ($p_{\textrm{P}}$) are the negative (positive) class proportions in the validation set. Confidence intervals for accuracy are then obtained as $$\delta_{p}(\textrm{ACC})=p_{\textrm{N}}\,\delta_{p}(R_{\textrm{TN}})+p_{\textrm{P}}\,\delta_{p}(R_{\textrm{TP}}),$$ (11) in terms of confidence intervals for the classification rates $\delta_{p}(R_{l})$. The latter are estimated by PIM after estimating quantiles from the error samples $\varepsilon_{l}(x)=y_{l}-\hat{f}(x)$ observed in the validation set, where $y_{l}$ is the ground truth label if $l$ refers to a true prediction, and $y_{l}=\tau$ otherwise. For this, four neurons $u_{l}$ are trained in parallel until the optimal weights $\hat{r}_{p}^{l}$ estimate the desired quantiles. As stated in Theorem 2, the success of PIM depends on being fed the outputs of a well-calibrated classifier (Guo et al., 2017; Kuleshov et al., 2018; Krishnan & Tickoo, 2020), so that these outputs approximate true probabilities. In these cases, PIM will give high-quality uncertainty estimates for the classification rates and derived quantities, given enough data. For the experiments that follow, a lower bound $\Delta_{\textrm{calib}}$ on the calibration error of the uncalibrated models is calculated by the method introduced in Kumar et al. (2019).
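The propagation step in (11) can be sketched as follows. In this illustration the per-rate interval radii come from the binomial normal approximation supported by Lemma 3, as a stand-in for the PIM-estimated radii, and the classifier outputs are synthetic; only the combination into $\delta_p(\textrm{ACC})$ is the point.

```python
import numpy as np

def acc_interval(y_true, y_prob, tau=0.5, z_p=1.96):
    """Combine per-class rate intervals into delta_p(ACC), as in (11).
    The per-rate radii here use the binomial normal approximation,
    standing in for the PIM-estimated radii."""
    y_hat = (y_prob > tau).astype(int)
    neg, pos = y_true == 0, y_true == 1
    n_N, n_P = neg.sum(), pos.sum()
    p_N, p_P = n_N / len(y_true), n_P / len(y_true)
    R_TN = (y_hat[neg] == 0).mean()  # true-negative rate
    R_TP = (y_hat[pos] == 1).mean()  # true-positive rate
    d_TN = z_p * np.sqrt(R_TN * (1 - R_TN) / n_N)
    d_TP = z_p * np.sqrt(R_TP * (1 - R_TP) / n_P)
    acc = p_N * R_TN + p_P * R_TP
    return acc, p_N * d_TN + p_P * d_TP  # eq. (11)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=2_000)
# a noisy but informative synthetic "classifier" output
prob = np.clip(0.4 * y + 0.6 * rng.random(2_000), 0.0, 1.0)
acc, delta = acc_interval(y, prob)
```

In the paper's setting, the radii $\delta_p(R_{\textrm{TN}})$ and $\delta_p(R_{\textrm{TP}})$ would instead be the quantiles learned by PIM; the weighted combination is identical.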
To run the experiments, a classification model $\hat{f}$ with one hidden layer (having as many units as data features) and a sigmoid-activated output is trained on $80\%$ of the data and evaluated on the remaining $20\%$, for nine UCI datasets. To quantify the spread of the uncertainty estimations, the train-test partition is randomly shuffled multiple times, forming an ensemble of 30 experiments. For each experiment, the distribution of ACC is sampled $B=20$ times by training $\hat{f}$ on $B$ transformations of the original training set, obtained by randomly sampling from it with replacement. Confidence intervals from the $B$ accuracies obtained in the validation ($=$test) sets are then computed by ranking their order statistics. The results from this Bootstrap (BS) method are compared with PIM and the binomial estimate in Table S1. As observed, PIM obtains confidence interval widths which are often narrower than those of BS (which ranks the order statistics of ACC) and therefore of higher quality, especially for small sample sizes. Yet, these are accurate enough (the effect of calibration on the spread of uncertainty estimations is investigated in section S.3.4) to overlap with the binomial estimate (except in those few cases where $\hat{f}$ is miscalibrated). Benchmarking PIM against BS is important since the latter is a popular method (Zhang, 2019), used for addressing uncertainties in many real-world applications, including time series forecasting (Petropoulos et al., 2018). S.3.2 Theoretical details The main theoretical result of section S.3, namely Theorem 2, is proved in this section. It will be understood that $\hat{Y}$ is a random variable taking the continuous values $\hat{y}=\hat{f}(x)\in[0,1]$.
Using the classification threshold $\tau$ (by default $\tau=0.5$), $\hat{Y}$ is compared to the ground truth binary variable $Y$, taking values $y\in\mathcal{B}=\{0,1\}$, by applying the rounding operation, $[\!\,{\hat{Y}}\,\!]_{\tau}=\mathbbm{1}(\hat{Y}>\tau)$, which projects the values to the binary set $\mathcal{B}$. Definition 1. The index $l$ has been defined as a label for the set $\mathcal{K}=\{\textrm{TN},\textrm{TP},\textrm{FN},\textrm{FP}\}$. This index can be written as the cartesian product $l=s_{l}\times v_{l}=\{(s_{l},v_{l}):s_{l}\in\{\textrm{T,\,F}\}\;\textrm{and}\;v_{l}\in\{\textrm{N,\,P}\}\}$. A mapping to the binary set $\mathcal{B}$ is introduced by putting a bar above the respective symbols according to: $$\bar{s}_{l}=\bar{v}_{l}\in\mathcal{B}\;\;\;\textrm{for}\;\;\;s_{l}=\textrm{T},\;\;\textrm{and}\;\;\bar{s}_{l}=1-\bar{v}_{l}\in\mathcal{B}\;\;\;\textrm{for}\;\;\;s_{l}=\textrm{F}.$$ In this way, a value of $l$ can be uniquely mapped to a pair of binary symbols $(\bar{s}_{l},\bar{v}_{l})$, taking on values in $\mathcal{B}$. Just as the rounding operator $[\!\,{\cdot}\,\!]_{\tau}\equiv[\!\,{\cdot}\,\!]_{v}$ projects to the set $\{\textrm{N, P}\}$ of possible values of $v_{l}$, we define $[\!\,{\cdot}\,\!]_{s}$ as an operator projecting to the set $\{\textrm{T, F}\}$ of possible values of $s_{l}$. Unless otherwise stated, the subscript $\tau$ may be dropped, for simplicity, from the rounding operator $[\!\,{\cdot}\,\!]_{\tau}$. Definition 2. The symbol $|a|$ is used to count the total number of elements in the set labeled by $a$; for instance, the quantity $|v_{l}|\in\{|\textrm{N}|,|\textrm{P}|\}$ takes on values denoting the total number of negatives or positives in the validation set. Using this notation, the classification rates $R_{l}=L_{l}/V_{l}$ can be written as a quotient of random variables $L_{l}$ and $V_{l}$ taking on the values $|l|$ and $|v_{l}|$, respectively. Lemma 3.
The classification rates $R_{l}=L_{l}/V_{l}$ are asymptotically normally distributed with mean $\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=\bar{v}_{l})$ and variance $|v_{l}|^{-1}\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=\bar{v}_{l})\,\Pr([\!\,{\hat{Y}}\,\!]=1-\bar{s}_{l}\,|\,Y=\bar{v}_{l})$ in the limit when $|v_{l}|\rightarrow\infty$. Proof. Since $[\!\,{\hat{Y}}\,\!]$ is a binary random variable, $L_{l}$ is binomially distributed, so the result immediately follows after applying the de Moivre-Laplace central limit theorem. ∎ Corollary 1. The accuracy of a binary classification algorithm is asymptotically normally distributed with mean $\mu_{\textrm{ACC}}=p_{\textrm{N}}\,\mu_{R_{\textrm{TN}}}+p_{\textrm{P}}\,\mu_{R_{\textrm{TP}}}$ and variance $\sigma_{\textrm{ACC}}^{2}=p_{\textrm{N}}^{2}\,\sigma_{R_{\textrm{TN}}}^{2}+p_{\textrm{P}}^{2}\,\sigma_{R_{\textrm{TP}}}^{2}$, with $\sigma_{\textrm{ACC}}^{2}\rightarrow\mu_{\textrm{ACC}}(1-\mu_{\textrm{ACC}})/m$ as $m=|\textrm{P}|+|\textrm{N}|\rightarrow\infty$. Proof. This follows by writing the accuracy as a weighted sum $\textrm{ACC}=p_{N}\,R_{\textrm{TN}}+p_{P}\,R_{\textrm{TP}}$ of independent and asymptotically normal random variables. As a consequence, the accuracy is also asymptotically normally distributed with mean $\mu_{\textrm{ACC}}=p_{\textrm{N}}\,\mu_{R_{\textrm{TN}}}+p_{\textrm{P}}\,\mu_{R_{\textrm{TP}}}$. The independence of $R_{\textrm{TN}}$ and $R_{\textrm{TP}}$ (they refer to mutually exclusive subspaces) then implies that the variance is $\sigma_{\textrm{ACC}}^{2}=p_{\textrm{N}}^{2}\,\sigma_{R_{\textrm{TN}}}^{2}+p_{\textrm{P}}^{2}\,\sigma_{R_{\textrm{TP}}}^{2}$. From Lemma 3, $\sigma_{R_{\textrm{TN}}}^{2}=\mu_{R_{\textrm{TN}}}(1-\mu_{R_{\textrm{TN}}})/|\textrm{N}|$ and $\sigma_{R_{\textrm{TP}}}^{2}=\mu_{R_{\textrm{TP}}}(1-\mu_{R_{\textrm{TP}}})/|\textrm{P}|$.
Therefore, $\sigma_{\textrm{ACC}}^{2}$ differs from $\mu_{\textrm{ACC}}(1-\mu_{\textrm{ACC}})/m$ by a quantity of $O(1/m)$, with $m=|\textrm{P}|+|\textrm{N}|$ and $p_{\textrm{N}}=|\textrm{N}|/m$, $p_{\textrm{P}}=|\textrm{P}|/m$. This result was used in (10) to express the binomial confidence interval radius as $\delta(\textrm{ACC})_{\mathcal{N}}=z_{p}\,\sigma_{\textrm{ACC}}$. ∎ Theorem 2. If the output $\hat{y}\in[0,1]$ of a binary classifier is perfectly calibrated (Guo et al., 2017), i.e. $\Pr([\!\,{\hat{Y}}\,\!]=Y\,|\,\hat{Y}=q)=q\;$ for all $q\in[0,1]$, then the quantiles $\hat{r}_{p}^{l}$ directly estimated by PIM from the validation errors $\varepsilon_{l}=y_{l}-\hat{y}_{l}$ are asymptotically consistent with the $p$-quantiles of the asymptotically normal distribution of the classification rates $R_{l}$. Note that perfect calibration is impossible in all practical settings. However, there are empirical approximations (calibration methods), some of them used in section S.3.4 below, which capture the essence of perfect calibration. A “well-calibrated” $\hat{f}$ is understood here as a model that, by design, is calibrated, or one that has been properly calibrated by applying a calibration method. Before proceeding with the proof of Theorem 2, it helps to first visualize the meaning of the statement. In Figure S1, a plot of the empirical distribution of $\hat{Y}\,|\,Y$ is shown for a neural network with one hidden layer predicting on the test set of the Adult dataset in the UCI repository. It is noticed that, after applying a calibration method, the false predictions tend to cluster around the threshold $\tau=0.5$, following a kind of Gaussian-like envelope. True predictions cluster around the ground truth but display long tails, depending on the calibration method. PIM is applied to find quantiles for the errors around the targets (for false predictions, the target is $\tau$).
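The pivoting of the errors (relative to the ground truth for true predictions, relative to $\tau$ for false ones) can be sketched as follows; the grouping into the four categories $l\in\{\textrm{TN, TP, FN, FP}\}$ follows Definition 1, and the data are synthetic.

```python
import numpy as np

def pivoted_errors(y_true, y_prob, tau=0.5):
    """Split validation errors into the four categories
    l in {TN, TP, FN, FP}; for true predictions the error is pivoted
    on the ground truth, for false predictions on the threshold tau."""
    y_hat = (y_prob > tau).astype(int)
    correct = y_hat == y_true
    errors = {}
    for label, mask in {
        "TN": correct & (y_hat == 0),
        "TP": correct & (y_hat == 1),
        "FN": ~correct & (y_hat == 0),
        "FP": ~correct & (y_hat == 1),
    }.items():
        # y_l is the ground truth for true predictions, tau otherwise
        target = y_true[mask] if label in ("TN", "TP") else tau
        errors[label] = target - y_prob[mask]  # eps_l = y_l - y_hat_l
    return errors

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=1_000)
prob = np.clip(0.4 * y + 0.6 * rng.random(1_000), 0.0, 1.0)
errs = pivoted_errors(y, prob)
```

PIM's four neurons would then estimate the $p$-quantiles of these four error samples, one per category.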
The statement is then that, under certain conditions, these quantiles coincide with those of the distribution of the classification rates $R_{l}$ of the corresponding $[\!\,{\hat{Y}}\,\!]$. Proof. The proof proceeds by first showing that, if a classifier is perfectly calibrated, the quantiles of the predicted targets $\hat{Y}$ are intimately connected with the expected value of the predictions. This is then used to pivot the errors $\varepsilon_{l}=y_{l}-\hat{y}_{l}$ with respect to the threshold $\tau$ when $s_{l}=\textrm{F}$ (i.e. $y_{l}=\tau$) and with respect to the ground truth when $s_{l}=\textrm{T}$ (i.e. $y_{l}=y$). Using the notation of Definition 1, we seek an identity which links positive and negative predictions with true and false predictions. This is $$[\!\,{\hat{Y}}\,\!]_{\tau}:=\mathbbm{1}(\hat{Y}>\tau)=\;\;\mathbbm{1}(Y=1)\,\mathbbm{1}([\!\,{\hat{Y}}\,\!]_{s}=\textrm{T})+\mathbbm{1}(Y=0)\,\mathbbm{1}([\!\,{\hat{Y}}\,\!]_{s}=\textrm{F}),$$ which is valid for any threshold $\tau\in(0,1)$. Taking the expectation value on both sides, $$\begin{split}\Pr(\hat{Y}>\tau)&=\Pr(Y=1,[\!\,{\hat{Y}}\,\!]_{s}=\textrm{T})+\Pr(Y=0,[\!\,{\hat{Y}}\,\!]_{s}=\textrm{F})\\ &=\Pr([\!\,{\hat{Y}}\,\!]_{\tau}=Y)=\int_{0}^{1}\Pr([\!\,{\hat{Y}}\,\!]_{\tau}=Y\,|\,\hat{Y}=q_{\tau}(z))\,\rho_{\hat{Y}}(z)dz,\end{split}$$ (12) where $\rho_{\hat{Y}}$ is the probability density function of $\hat{Y}$ associated with the cumulative distribution function $F_{\hat{Y}}(\hat{y})=\int_{0}^{\hat{y}}\rho_{\hat{Y}}(\hat{y}^{\prime})\,d\hat{y}^{\prime}$, and $q_{\tau}$ belongs to a family of smooth functions $q_{\tau}:[0,1]\rightarrow[0,1]$ labeled by $\tau$.
If the classifier is perfectly calibrated, $$\Pr([\!\,{\hat{Y}}\,\!]_{\tau}=Y\,|\,\hat{Y}=q_{\tau}(z))=q_{\tau}(z),\;\;\;\;\;\forall q_{\tau}(z)\in[0,1].$$ Replacing this in (12) and denoting by $\langle q_{\tau}\rangle=\mathbb{E}_{\hat{Y}}(q_{\tau})$ the expectation value of $q_{\tau}$, we obtain $$F_{\hat{Y}}(\tau)=\Pr(\hat{Y}\leq\tau)=1-\langle q_{\tau}\rangle.$$ (13) Clearly, $\langle q_{\tau}\rangle\in(0,1)$, so (13) states that $\tau$ coincides with the $(1-\langle q_{\tau}\rangle)$-quantile of $F_{\hat{Y}}$. For well-balanced datasets, $p_{\textrm{N}}\simeq p_{\textrm{P}}\simeq 1/2$, the classifier will presumably learn to predict approximately the same number of positive and negative predictions, so $\tau=1/2$ will coincide with the median ($=\,$mean) of $\hat{Y}$, which is a special case of (13) for $\tau=\langle q_{\tau}\rangle=1/2$. As in section 2.3, we are interested in the errors committed by the predictive model. In that section, these were measured with respect to the median of $\hat{Y}$ (expected to coincide with $Y$). However, in the binary classification context, the median of $\hat{Y}$ is not necessarily close to $Y$, as Fig. S1(c) suggests. The result implied by (13) then suggests that the errors $\varepsilon_{l}=y_{l}-\hat{y}_{l}$ committed by the predictive model be measured relative to the ground truth $Y$ for $s_{l}=\textrm{T}$ and relative to $\tau$ for $s_{l}=\textrm{F}$. This leads to the quantile estimation problem for the error variable $E_{l}$ of finding the $\hat{r}_{p}^{l}$ such that the empirical error distribution functions evaluate to the confidence level $p$: $$F_{l}(\hat{r}_{p}^{l}):=\dfrac{1}{|v_{l}|}\sum_{i:\,[\!\,{\hat{y}_{i}}\,\!]=\bar{s}_{l}}\mathbbm{1}(|\varepsilon_{l}^{i}|\leq\hat{r}_{p}^{l})=p.$$ (14) Such quantiles are estimated by PIM as an alternative to calculating the quantiles of $\hat{Y}$ directly (similar to what was done in section 2.3). The reader is referred to Guo et al.
(2017) for a proof that, for a perfectly calibrated classifier, the accuracy is locally distributed as the average confidence (here $\hat{Y}$). Since the classification rate $R_{l}$ is the accuracy in the subspace indexed by $l$, this shows that the quantiles of $R_{l}$ coincide with the corresponding quantiles of $\hat{Y}_{l}$. Furthermore, by Lemma 3, the $R_{l}$ are asymptotically normally distributed. ∎ S.3.3 Uncertainty propagation PIM uses four neurons $u_{l}$ to measure confidence intervals $\hat{r}_{p}^{l}:=\delta_{p}(R_{l})$ for the classification rates $R_{l}$. Given the nature of classification, $\delta_{p}(R_{l})<1$ with probability one. Knowing the value of $p$ from the context, we can omit it from the $\delta$ subscript. It is then convenient to think of $\delta_{p}(R_{l}):=\delta R_{l}$ as an uncertainty that can be propagated to quantities dependent on $\{R_{l}\}$ using Taylor’s theorem. This was done in (11) to go from $\textrm{ACC}=p_{\textrm{N}}R_{\textrm{TN}}+p_{\textrm{P}}R_{\textrm{TP}}$ to $\delta\textrm{ACC}=p_{\textrm{N}}\delta R_{\textrm{TN}}+p_{\textrm{P}}\delta R_{\textrm{TP}}$ by $\delta$-differentiating both sides. The result is straightforward in this case because the relationship connecting the classification rates with the quantity of interest is linear, so no error is committed in the Taylor expansion. In this section, we consider non-linear relationships and use uncertainty propagation techniques, as in the natural sciences, to find the associated uncertainties.
With $\sim$ denoting the asymptotic value around which the classification rates cluster, it has been shown in Lemma 3 that $$\begin{split}R_{\textrm{TP}}&=\dfrac{|\textrm{TP}|}{|\textrm{TP}|+|\textrm{FN}|}\sim\Pr([\!\,{\hat{Y}}\,\!]=1\,|\,Y=1),\\ R_{\textrm{FN}}&=1-R_{\textrm{TP}}\,\,\sim\Pr([\!\,{\hat{Y}}\,\!]=0\,|\,Y=1),\\ R_{\textrm{FP}}&=\dfrac{|\textrm{FP}|}{|\textrm{FP}|+|\textrm{TN}|}\sim\Pr([\!\,{\hat{Y}}\,\!]=1\,|\,Y=0),\\ R_{\textrm{TN}}&=1-R_{\textrm{FP}}\,\,\sim\Pr([\!\,{\hat{Y}}\,\!]=0\,|\,Y=0).\end{split}$$ (15) It is of interest to estimate the uncertainty of other important rates, namely the positive predictive value ($R_{\textrm{TP}}^{*}$, a.k.a. precision), the false discovery rate ($R_{\textrm{FN}}^{*}$), the negative predictive value ($R_{\textrm{TN}}^{*}$), and the false omission rate ($R_{\textrm{FP}}^{*}$). These are obtained after interchanging the roles of predictions and ground truths. By symmetry, $$\begin{split}R_{\textrm{TP}}^{*}&=\dfrac{|\textrm{TP}|}{|\textrm{TP}|+|\textrm{FP}|}\sim\Pr(Y=1\,|\,[\!\,{\hat{Y}}\,\!]=1),\\ R_{\textrm{FN}}^{*}&=1-R_{\textrm{TP}}^{*}\,\,\sim\Pr(Y=0\,|\,[\!\,{\hat{Y}}\,\!]=1),\\ R_{\textrm{TN}}^{*}&=\dfrac{|\textrm{TN}|}{|\textrm{TN}|+|\textrm{FN}|}\sim\Pr(Y=0\,|\,[\!\,{\hat{Y}}\,\!]=0),\\ R_{\textrm{FP}}^{*}&=1-R_{\textrm{TN}}^{*}\,\,\sim\Pr(Y=1\,|\,[\!\,{\hat{Y}}\,\!]=0).\end{split}$$ (16) Taking the $\delta R_{l}$ learned by PIM as independent variables (the neurons $u_{l}$ are independent), it is assumed that the uncertainty of any smooth function $g$ of $\{R_{l}\}$ can be approximated by a Taylor expansion: $$\delta g(\{R_{l}\})=\sum_{q}\Bigl{|}\dfrac{\partial g}{\partial R_{q}}\Bigr{|}\,\delta R_{q}+\tfrac{1}{2}\sum_{q\in\textrm{F}}\Bigl{|}\dfrac{\partial^{2}g}{\partial R_{q}^{2}}\Bigr{|}\,(\delta R_{q})^{2}+\cdots.$$ (17) When an approximation of $\delta g$ is enough, second-order corrections are taken into account only for false predictions, assuming they are more uncertain due to the (good enough)
classification algorithm committing them less frequently. Now, by using Bayes’ theorem, the asymptotic values in (15) and (16) can be connected as $$\begin{split}\Pr(Y=\bar{v}_{l}\,|\,[\!\,{\hat{Y}}\,\!]=\bar{s}_{l})&=\dfrac{\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=\bar{v}_{l})\Pr(Y=\bar{v}_{l})}{\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l})},\\ &=\dfrac{\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=\bar{v}_{l})\Pr(Y=\bar{v}_{l})}{\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=0)\Pr(Y=0)+\Pr([\!\,{\hat{Y}}\,\!]=\bar{s}_{l}\,|\,Y=1)\Pr(Y=1)}.\end{split}$$ From this, it is easy to see that the most probable values of $R_{l}^{*}$ and $R_{l}$ are simply related. For instance, $$\begin{split}R_{\textrm{TP}}^{*}&\sim\dfrac{R_{\textrm{TP}}\,p_{\textrm{P}}}{R_{\textrm{TP}}\,p_{\textrm{P}}+R_{\textrm{FP}}\,p_{\textrm{N}}},\\ R_{\textrm{TN}}^{*}&\sim\dfrac{R_{\textrm{TN}}\,p_{\textrm{N}}}{R_{\textrm{FN}}\,p_{\textrm{P}}+R_{\textrm{TN}}\,p_{\textrm{N}}}.\end{split}$$ These relationships are examples of the $g$ function above, so by (17), the uncertainties are related as $$\begin{split}\delta R_{\textrm{TP}}^{*}&\sim\dfrac{p_{\textrm{N}}}{p_{\textrm{P}}}\,R_{\textrm{TP}}^{*2}\Biggl{[}\dfrac{R_{\textrm{FP}}}{R_{\textrm{TP}}}\,\dfrac{\delta R_{\textrm{TP}}}{R_{\textrm{TP}}}+\dfrac{\delta R_{\textrm{FP}}}{R_{\textrm{TP}}}+\dfrac{p_{\textrm{N}}}{p_{\textrm{P}}}\,R_{\textrm{TP}}^{*}\left(\dfrac{\delta R_{\textrm{FP}}}{R_{\textrm{TP}}}\right)^{2}\Biggr{]},\\ \delta R_{\textrm{TN}}^{*}&\sim\dfrac{p_{\textrm{P}}}{p_{\textrm{N}}}\,R_{\textrm{TN}}^{*2}\Biggl{[}\dfrac{R_{\textrm{FN}}}{R_{\textrm{TN}}}\,\dfrac{\delta R_{\textrm{TN}}}{R_{\textrm{TN}}}+\dfrac{\delta R_{\textrm{FN}}}{R_{\textrm{TN}}}+\dfrac{p_{\textrm{P}}}{p_{\textrm{N}}}\,R_{\textrm{TN}}^{*}\left(\dfrac{\delta R_{\textrm{FN}}}{R_{\textrm{TN}}}\right)^{2}\Biggr{]}.\end{split}$$ Uncertainties for other metrics derived from $R_{l}$, e.g. the F1 score, can be obtained in a similar manner.
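As an illustration of the propagation step, the expression for $\delta R_{\textrm{TP}}^{*}$ can be implemented directly and its first-order part checked against a finite-difference evaluation of the partial derivatives of $g$; the numerical values below are illustrative, not results from the paper.

```python
import numpy as np

def precision(R_TP, R_FP, p_P, p_N):
    """g: most probable value of R*_TP, related to R_TP, R_FP by Bayes."""
    return R_TP * p_P / (R_TP * p_P + R_FP * p_N)

def delta_precision(R_TP, R_FP, dR_TP, dR_FP, p_P, p_N):
    """Propagated uncertainty delta R*_TP, implementing the formula above
    (first-order terms plus the second-order false-rate correction)."""
    Rs = precision(R_TP, R_FP, p_P, p_N)
    ratio = p_N / p_P
    return ratio * Rs**2 * (
        (R_FP / R_TP) * dR_TP / R_TP
        + dR_FP / R_TP
        + ratio * Rs * (dR_FP / R_TP) ** 2  # second-order term
    )

# illustrative rate values and uncertainties
R_TP, R_FP, p_P, p_N = 0.90, 0.10, 0.4, 0.6
dR_TP, dR_FP = 0.02, 0.01
prop = delta_precision(R_TP, R_FP, dR_TP, dR_FP, p_P, p_N)

# finite-difference check of the first-order part of (17)
h, g = 1e-6, precision
fd = (abs(g(R_TP + h, R_FP, p_P, p_N) - g(R_TP - h, R_FP, p_P, p_N)) / (2 * h) * dR_TP
      + abs(g(R_TP, R_FP + h, p_P, p_N) - g(R_TP, R_FP - h, p_P, p_N)) / (2 * h) * dR_FP)
```

The propagated value exceeds the first-order finite-difference estimate only by the small second-order correction, as expected.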
S.3.4 Effect of calibration In the experiments of section S.3.1, no hyperparameter optimization is done. The only things varied are the activation of the hidden layer (relu and tanh) and the calibration method for the predictions. Best results are reported. It is noticed during experimentation that sometimes some of the estimates $\hat{r}_{p}^{l}$ stay very close to their initial values (within a tolerance of $10^{-7}$), in which case NA is reported for their optimal values. This is often due to scarcity of data, since $\hat{r}_{p}^{l}$ may not be updated for each of FN, FP, TN, TP within a mini-batch. For a classifier with relatively few false predictions, for instance, the optimal $\hat{r}_{p}^{F}$ may not be found. Therefore, uncertainty on the rates of false predictions cannot currently be evaluated for small datasets. This problem is not found for datasets like the Adult dataset ($48,842$ samples). Uncertainties estimated by PIM and propagated according to the technique described above are shown in Table S2. It is observed that calibrating the predictions leads most of the time to a decrease in the magnitude of the uncertainties, with a stabilization of the corresponding variance. However, for some rates the magnitude of the uncertainties still looks too conservative. This is presumably due to the scaling-based calibration method not reaching calibrated-enough predictions, something typical according to the discussion in Kumar et al. (2019). The Scaling-Binary calibrator in Kumar et al. (2019) (whose effects are shown in Figure S1) is not considered in Table S2 since it does not give a continuous distribution of predictions, as required by PIM. Further research is desirable, combining the idea behind PIM with a suitable calibration method into one framework.
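To make the scaling-based calibration referred to above concrete, the following is a minimal Platt-style sketch: a two-parameter recalibrator $\sigma(a\,z+b)$, $z=\mathrm{logit}(\hat{y})$, fitted by gradient descent on the negative log-likelihood of validation labels. This illustrates the idea of a scaling calibrator, not the specific calibrator used in the experiments; the synthetic data simulate an overconfident classifier whose logits are twice the true ones, so the fitted slope should recover $a\approx 0.5$.

```python
import numpy as np

def _logit(p, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def platt_fit(y_prob, y_true, lr=0.05, epochs=2000):
    """Fit a Platt-style scaling calibrator sigma(a*z + b), z = logit(y_prob),
    by full-batch gradient descent on the negative log-likelihood."""
    z = _logit(y_prob)
    a, b = 1.0, 0.0
    for _ in range(epochs):
        q = 1.0 / (1.0 + np.exp(-(a * z + b)))
        grad = q - y_true  # d(NLL)/d(a*z + b), per sample
        a -= lr * np.mean(grad * z)
        b -= lr * np.mean(grad)
    return a, b

rng = np.random.default_rng(3)
true_logit = rng.normal(0.0, 1.0, size=5_000)
q_true = 1.0 / (1.0 + np.exp(-true_logit))
y = (rng.random(5_000) < q_true).astype(float)
# simulated overconfident classifier: logits doubled
p_overconfident = 1.0 / (1.0 + np.exp(-2.0 * true_logit))
a, b = platt_fit(p_overconfident, y)
```

The continuous, recalibrated outputs $\sigma(a\,z+b)$ remain suitable inputs for PIM, unlike binned calibrators such as Scaling-Binary.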
Can Bio-Inspired Swarm Algorithms Scale to Modern Societal Problems? Darren M. Chitty, Elizabeth Wanner, Rakhi Parmar and Peter R. Lewis Aston Lab for Intelligent Collectives Engineering (ALICE) Aston University, Aston Triangle, Birmingham, B4 7ET, UK darrenchitty@googlemail.com Abstract Taking inspiration from nature for meta-heuristics has proven popular and relatively successful. Many are inspired by the collective intelligence exhibited by insects, fish and birds. However, there is a question over their scalability to the types of complex problems experienced in the modern world. Natural systems evolved to solve simpler problems effectively; replicating these processes for complex problems may suffer from inefficiencies. Several causal factors can impact scalability: computational complexity, memory requirements or pure problem intractability. Supporting evidence is provided using a case study in Ant Colony Optimisation (ACO) with regard to tackling increasingly complex real-world fleet optimisation problems. This paper hypothesizes that, contrary to common intuition, bio-inspired collective intelligence techniques by their very nature exhibit poor scalability in cases of high dimensionality, when large degrees of decision making are required. Facilitating the scaling of bio-inspired algorithms necessitates reducing this decision making. To support this hypothesis, an enhanced Partial-ACO technique is presented which effectively reduces ant decision making. Reducing the decision making required by ants by up to 90% results in markedly improved effectiveness and reduced runtimes for increasingly complex fleet optimisation problems. Reductions in traversal timings of 40-50% are achieved for problems with up to 45 vehicles and 437 jobs. Introduction The natural world is filled with a wealth of differing animals and ecosystems. Many of these organisms display collective behaviours which they use to overcome problems within their ecosystem, such as ants foraging for food or bees communicating locations of nectar. These organisms have inspired many computing algorithms to assist in solving difficult real-world problems. Much of this inspiration comes from the exhibition of collective behaviours whereby thousands of organisms work together for the benefit of a colony, flock or hive. Each organism is simplistic in nature and by itself cannot survive, but as part of a collective, problems such as finding sources of food can be solved. Nature has been used as a source of inspiration for the direct design of meta-heuristic algorithms that are moderately successful in solving optimisation problems of human concern such as routing problems, information management and logistics, to name a few. Examples of bio-inspired collective behaviour algorithms include Ant Colony Optimisation (ACO) (Dorigo and Gambardella, 1997), inspired by how ants forage for food; Artificial Bee Colony (ABC) (Karaboga and Basturk, 2007), based upon the way bees communicate sources of nectar; and Particle Swarm Optimisation (PSO) (Eberhart and Kennedy, 1995), which models the complex interactions between swarms of insects. These algorithms can be grouped under the term swarm intelligence through their use of hundreds or thousands of simulated digital organisms. However, the types of problems tackled in nature by these organisms, such as finding sources of food, can be considered much more simplistic than the complex societal problems facing the human world.
In an increasingly digital world, where available data is growing considerably alongside inter-connectivity and joined-up thinking, the size and complexity of the problems that require solving are increasing rapidly, as with smart city planning (Batty, 2013; Murgante and Borruso, 2015). Moreover, unlike the natural world, restrictions exist on modern computers in terms of compute capability and available memory for simulating many thousands of collective organisms. Within the swarm-algorithm literature, most implementations of collective behaviour algorithms are applied to relatively small problem sizes. However, there have been some works in the field addressing scalability. For instance, Piccand et al. (2008) found that applying PSO to problems of greater than 300 dimensions resulted in failing to find the optimal solution more than 50% of the time. Cheng and Jin (2015b) noted that PSO fails to scale well to problems of high dimensionality, potentially as a result of problem structure. However, the authors employ a social learning implementation whereby many particles act as demonstrators, and present promising results on problems of up to 1,000 dimensions. Cheng and Jin (2015a) later propose a modification to PSO whereby, instead of using local and global best solutions to update particle positions, a pairwise competition is performed with the loser learning from the winner to update its position. The technique demonstrated improved results over PSO on benchmark problems of up to 5,000 dimensions, although it was noted this was very computationally expensive. Cai et al. (2015) apply greedy discrete PSO to social network clustering problems with as many as 11,000 variables. For further reading, Yan and Lu (2018) provide a review of the challenges of large-scale PSO. Regarding ACO, Li et al.
(2011) noted the scaling issues of the approach, proposing a DBSCAN clustering approach to decompose large Travelling Salesman Problems (TSPs) of up to 1,400 cities into smaller sub-TSPs and solve these. Ismkhan (2017) also noted the computational cost and memory requirements and considered the use of additional heuristics or strategies to facilitate the scaling of the technique to larger problems. Improvements such as treating the pheromone matrix as a sparse matrix and using pheromone in a local search operator enabled ACO to be applied effectively to TSPs of over 18,000 cities. Chitty (2017) also noted computational issues with ACO and mitigated them with a non-pheromone-matrix ACO approach which only made partial changes to good solutions, applying the technique to TSP instances of up to 200,000 cities. Therefore, it can be ascertained that both ACO and PSO have issues scaling to high-dimensional problems: the curse of dimensionality. Consequently, the question explored in this paper is whether nature-inspired, collective intelligence techniques can scale up to the size and complexity of problems that the modern world desires solving. If not, what are the potential limiting causal factors, and what mitigating steps could be taken? These questions will be investigated using a case study based on ACO, providing an illustration of the problems faced in scaling up a collective behaviour meta-heuristic and the hypothesized causal limitations, by applying it to a real-world fleet optimisation problem of steadily increasing complexity. The second aspect of this paper will attempt to mitigate these scalability issues in ACO using the novel Partial-ACO approach, and to enhance that approach further to assist scalability.

Ant Colony Optimisation: An Exemplar Case

A popular swarm-based meta-heuristic is Ant Colony Optimisation (ACO) (Dorigo and Gambardella, 1997), based upon the foraging behaviours of ants.
Essentially, the algorithm involves simulated ants moving through a graph $G$, probabilistically visiting vertices and depositing pheromone as they move. The pheromone an ant deposits on the edges $E$ of graph $G$ is defined by the quality of the solution the given ant has generated. Ants probabilistically decide which vertex to visit next using the pheromone level deposited on the edges of graph $G$, plus potentially local heuristic information regarding the edges, such as the distance to travel for routing problems. An evaporation effect is used to prevent pheromone levels building up too much and trapping the search in local optima. Therefore, ACO consists of two stages: the first, solution construction, simulates the ants; the second updates the pheromone. The solution construction stage involves $m$ ants constructing complete solutions to the problem. Ants start from a random vertex and iteratively make probabilistic choices using the random proportional rule as to which vertex to visit next. The probability of ant $k$ at point $i$ visiting point $j\in N^{k}$ is defined as: $$\displaystyle p_{ij}^{k}=\frac{[\tau_{ij}]^{\alpha}[\eta_{ij}]^{\beta}}{\sum_{l\in N^{k}}[\tau_{il}]^{\alpha}[\eta_{il}]^{\beta}}$$ (1) where $[\tau_{il}]$ is the pheromone level deposited on the edge leading from location $i$ to location $l$; $[\eta_{il}]$ is the heuristic information from location $i$ to location $l$; $\alpha$ and $\beta$ are tuning parameters controlling the relative influence of the pheromone deposit $[\tau_{il}]$ and the heuristic information $[\eta_{il}]$. Once all ants have completed the solution construction stage, pheromone levels on the edges $E$ of graph $G$ are updated. First, evaporation of the pheromone level upon every edge of graph $G$ occurs, whereby the level is reduced by a value relative to the pheromone upon that edge: $$\displaystyle\tau_{ij}\leftarrow(1-\rho)\tau_{ij}$$ (2) where $\rho$ is the evaporation rate, typically set between 0 and 1.
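A minimal Python sketch of the two steps defined so far, the random proportional rule of Eq. (1) and the evaporation of Eq. (2); the dense `tau`/`eta` matrices and all function names are illustrative assumptions, not the paper's implementation:

```python
import random

def choose_next(i, candidates, tau, eta, alpha=1.0, beta=2.0):
    """Random proportional rule (Eq. 1): an ant at vertex i picks the
    next vertex from `candidates`, weighting each j by tau^alpha * eta^beta."""
    weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta) for j in candidates]
    r = random.uniform(0.0, sum(weights))   # roulette-wheel draw
    acc = 0.0
    for j, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return j
    return candidates[-1]                   # guard against round-off

def evaporate(tau, rho=0.1):
    """Evaporation (Eq. 2): tau_ij <- (1 - rho) * tau_ij on every edge."""
    for row in tau:
        for j in range(len(row)):
            row[j] *= 1.0 - rho
```

With uniform `tau` and `eta`, every candidate is equally likely, which is the situation exploited in the intractability discussion later.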
Once this evaporation is completed, each ant $k$ deposits pheromone on the edges it has traversed, based on the quality of the solution found: $$\displaystyle\tau_{ij}\leftarrow\tau_{ij}+\sum_{k=1}^{m}\Delta\tau_{ij}^{k}$$ (3) where the pheromone ant $k$ deposits, $\Delta\tau_{ij}^{k}$, is defined by: $$\displaystyle\Delta\tau_{ij}^{k}=\left\{\begin{array}[]{ll}1/C^{k},&\mbox{if edge $(i,j)$ belongs to $T^{k}$}\\ 0,&\mbox{otherwise}\\ \end{array}\right.$$ (4) where $1/C^{k}$ is the quality of ant $k$’s solution $T^{k}$. This ensures that better solutions found by an ant result in greater levels of pheromone being deposited on those edges.

Consideration of the Scalability of ACO

From a computational point of view, implementing an ant-inspired algorithm on computational hardware to solve large-scale problems suffers from three potential limitations regarding overall performance: the degree of memory required, the computational cost of simulating thousands of ants, and the sheer intractability of the problem itself.

Memory Requirements

A key aspect of ACO is the pheromone matrix used to store pheromone levels on all the edges in the graph $G$. This can require significant amounts of computing memory. For instance, a fully connected 100,000 city Travelling Salesman Problem (TSP) will have ten billion edges in graph $G$. Using a float data type requiring four bytes of memory, approximately 37GB of memory is needed to store the pheromone levels, considerably more than is available in standard computing platforms. In the natural world, storing pheromone levels is not an issue, with an effectively infinite landscape in which to store them. A secondary memory requirement arises because ants only update the pheromone matrix once all ants have constructed their solutions, necessitating storing these solutions in memory too. For a 100,000 city TSP a single ant will require 0.38MB of memory using a four byte integer data type.
If the number of ants equals the number of vertices, an additional 37GB of memory would be required. An ant-inspired algorithm that addresses this memory overhead is Population-based ACO (P-ACO) (Guntsch and Middendorf, 2002), whereby the pheromone matrix is removed and only a population of ant solutions is maintained. From this population, pheromone levels are reconstructed for the available edges by finding the edges taken within the population from the current vertex and assigning pheromone to edges based on the solution quality.

Computational Costs

A second aspect to consider with ACO is the time it takes to simulate ants through the graph $G$. At each vertex an ant needs to decide which vertex to visit next. This is performed probabilistically by looking at the pheromone levels, and possibly heuristic information, on all available edges, which requires computing probabilities for all these edges. As an example, take a 100,000 city TSP: at the first vertex an ant will have 99,999 possible edges to take, all of which require probabilities to be computed. Once an ant has made its choice it moves to the chosen vertex and once again analyses all available edges, now 99,998. Thus, for the 100,000 city TSP an ant will need to perform approximately five billion edge comparisons. If a processor is capable of 100 GFLOPS (billion floating point operations per second), and assuming an edge comparison takes one floating point operation, it will require at least 0.05 seconds to simulate an ant through graph $G$. If a population of ants equivalent to the number of vertices in graph $G$ is used, then completing one iteration of solution construction would require nearly 90 minutes of computational time. For ants in nature, compute time is not an issue since each ant can act independently, although the actual time it would take real ants to move through a network of this size would still be problematic.
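The arithmetic above is easy to verify directly; the 100 GFLOPS figure and the one-operation-per-comparison assumption follow the text:

```python
# Back-of-envelope cost of one ant constructing a tour on a fully
# connected 100,000-city TSP: one comparison per remaining edge.
n = 100_000
edge_comparisons = n * (n - 1) // 2        # 99,999 + 99,998 + ... + 1
flops = 100e9                              # assumed 100 GFLOPS processor
seconds_per_ant = edge_comparisons / flops
minutes_for_n_ants = n * seconds_per_ant / 60

print(edge_comparisons)        # 4,999,950,000: ~five billion comparisons
print(seconds_per_ant)         # ~0.05 seconds per ant
print(minutes_for_n_ants)      # ~83 minutes: "nearly 90" per iteration
```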
The simulation of ants is inherently parallel in nature and can therefore easily take advantage of parallel computing resources to alleviate the computational costs. In recent years, speeding up ACO has focused on utilising Graphics Processing Units (GPUs) consisting of thousands of SIMD processors. Delévacq et al. (2013) provide a comparison of differing parallelisation strategies for $\mathcal{MAX}$-$\mathcal{MIN}$ ACO on GPUs. Cecilia et al. (2013) reduced the decision-making process of ants on a GPU with an Independent Roulette approach that exploits data parallelism, and Dawson and Stewart (2013) went a step further, introducing a double-spin ant decision methodology for GPUs. These works have provided speedups ranging from 40-80x over a sequential implementation, a considerable improvement. Peake et al. (2018) used the Intel Xeon Phi and a vectorised candidate list methodology to achieve a 100-fold speedup. Candidate lists are an alternative efficiency method for reducing the computational complexity of ACO, whereby ants are restricted to selecting from a subset of the available vertices within the current neighbourhood. If none of these vertices is available, the full set is considered as normal. Gambardella and Dorigo (1996) used this approach to solve TSP instances, observing speedups but also a reduction in accuracy due to sub-optimal edges being taken.

Problem Intractability

A final scalability issue with ACO involves the amenability of the problem under consideration to being tackled by ACO. The key issue is the probabilistic methodology ACO employs to decide which edge to take next, utilising the pheromone levels on the available edges to influence the probabilities. Computationally, an ant will take the pheromone level on each edge, multiply it by the heuristic information if available, and multiply this by a random value between zero and one. The edge with the largest product is selected as the next to be traversed.
As an example, consider a simple decision point whereby an ant has two choices available, one being the correct, optimal selection, the other suboptimal. If the pheromone levels on each edge are equal then there is a 0.5 probability the ant will take the optimal edge. However, consider ten independent decision points, each with two possible choices, akin to a binary optimisation problem such as clustering a set of items into two groups. Probabilistically this is equivalent to ten coin flips. With equal pheromone on all edges, there is only a $0.5^{10}$ probability of an ant making the optimal choices, approximately one in a thousand. Conversely, an ant has a 0.999 probability of producing a sub-optimal solution, so around 1,000 ants would need to traverse graph $G$ to expect an optimal solution. For a much larger problem of 100,000 decision points this probability would be $0.5^{100,000}$, requiring approximately $10^{30,103}$ ants to find the optimal solution. Consequently, pheromone levels are there to help guide the ants to taking the optimal edge. Consider the previous 100,000 decision point example again, but with high levels of pheromone on the edge to the optimal choice, say 0.99 vs. 0.01 on the suboptimal edge; the probability of obtaining the correct solution will then be $0.99^{100,000}$, requiring approximately $3\times10^{436}$ ants, still a significant number. In fact, to reach a manageable number of ant simulations, the pheromone on the optimal edges would need to be of the order of 0.9999 vs. 0.0001 on the suboptimal edge, at which point only approximately 20,000 ants would need to traverse the network before one probabilistically takes the correct edge at each decision point. However, this means the pheromone level would need to be 10,000 times greater on the optimal edge than on the suboptimal edge. Moreover, the pheromone levels would need to build up over time before reaching these levels.
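These probabilities can be reproduced with a short calculation (the helper name is an assumption for illustration, not part of ACO itself):

```python
import math

def log10_ants_needed(p_edge, decisions):
    """Order of magnitude of ants required: if each of `decisions`
    independent choices is taken correctly with probability p_edge,
    the expected number of ants until success is (1/p_edge)^decisions,
    i.e. 10 raised to the value returned here."""
    return -decisions * math.log10(p_edge)

print(0.5 ** 10)                           # ~0.001: one ant in a thousand
print(log10_ants_needed(0.5, 100_000))     # ~30103: ~10^30,103 ants
print(log10_ants_needed(0.99, 100_000))    # ~436.5: ~3 x 10^436 ants
print(log10_ants_needed(0.9999, 100_000))  # ~4.34:  ~22,000 ants
```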
Hence, it can be observed that applying ACO to ever larger problems results in increasingly reduced probabilities of optimal solutions being found, unless the pheromone levels become increasingly stronger on the important edges. Candidate lists, as covered in the previous section, can reduce the number of decisions that ants need to make, but with a potential loss of accuracy, and can only be used if heuristic information is available to define the neighbourhood. Of course, ants in the natural world do not have problems of this magnitude to solve, and can muster the necessary numbers without any undue computational cost.

An Illustration of the Scalability Issues with ACO

To highlight the potential drawbacks of ACO, it will be tested against a set of complex real-world fleet optimisation problems of steadily increasing complexity. These problems have been supplied by a Birmingham-based maintenance company which operates a fleet of vehicles performing services at customer properties within the city. Each vehicle starts from a depot and must return there when it has finished servicing customers. Each customer is defined by a location and a job duration predicting the length of time the job will take and, in some cases, a time window within which the job must be completed. The speed of travel of a vehicle between maintenance jobs is defined as an average of 13kph to account for city traffic. There is also a hard start and end time to a given working day, defined as 08:00 and 19:00 hours. Fleet optimisation is essentially the classic Multiple Depot Vehicle Routing Problem (MDVRP) (Dantzig and Ramser, 1959), here with Time Windows (MDVRPTW). The MDVRP can be formally defined on a complete graph $G=(V,E)$, where $V$ is the vertex set and $E$ is the set of all edges between vertices in $V$.
The vertex set $V$ is further partitioned into two sets, $V_{c}=\{V_{1},\ldots,V_{n}\}$ representing customers and $V_{d}=\{V_{n+1},\ldots,V_{n+p}\}$ representing depots, where $n$ is the number of customers and $p$ is the number of depots. Furthermore, each customer $v_{i}\in V_{c}$ has an associated service time, and each vehicle, based at a depot $v_{i}\in V_{d}$, has an associated fixed capacity defining its ability to fulfil customer service. Each edge in the set $E$ has an associated traversal cost, represented by the matrix $c_{ij}$. The problem is essentially to find the set of vehicle routes such that each customer is serviced exactly once, each vehicle starts and finishes at the same depot, no vehicle exceeds its capacity to service customers, and the overall cost of the combined routes is minimised. The worksheet data supplied by the company has been divided into a series of problems of increasing complexity and size, which are described in Table 1. The manner in which the company assigns customer jobs to vehicles is known a priori, providing a ground truth for the optimisation process. Effectively, the company assigns geographically related jobs to vehicles based on postcode and then orders them such that each vehicle performs the job furthest from its depot first and then works its way back, time windows allowing. To highlight the drawbacks of ACO in terms of scalability, the $\mathcal{MAX}$-$\mathcal{MIN}$ Ant System ($\mathcal{MM}$AS) (Stützle and Hoos, 2000) will be tested on these fleet optimisation problems. $\mathcal{MM}$AS simulates ants through the graph $G$ but, in contrast to standard ACO, only the best-found solution provides pheromone updates. Additionally, minimum and maximum levels of pheromone on edges are defined. To solve the fleet optimisation problem, the fully connected graph $G$ has vertices for the vehicles and the customer jobs.
Ants start from a random vehicle vertex and then visit every other vertex exactly once, resulting in a sequence of vehicles, beginning from their specified depots, followed by the customer jobs they will service before returning to their depot. This representation is shown in Figure 1, where V relates to a vehicle and J to a job. The first vehicle will undertake jobs 6, 5 and 9, the second jobs 3, 7 and 2, and so forth. Once a new solution has been generated its quality needs to be assessed. This is measured using two objectives: the first is to maximise the number of jobs correctly performed within their given time window; the second is the minimisation of the total traversal time of the fleet of vehicles. Reducing the number of missed jobs is the primary objective. Hence, comparing two solutions, if the first services more customer jobs than the second then the first solution is considered the better. If they have equal customer job time serviced, then the solution with the lower fleet traversal time is considered the better. The pheromone to deposit is calculated using these objectives. A penalty-based function is utilised for the first objective, whereby any customers that have not been serviced, due to capacity limitations or missed time windows, are penalised by the predicted job time. The secondary objective is to minimise the time the fleet of vehicles spends traversing the road network between jobs. Solution quality can then be described as: $$\displaystyle C^{k}=(S-s^{k}+1)\cdot L^{k}$$ (5) where $S$ is the total amount of job time to be serviced, $s^{k}$ is the amount of job service time achieved by ant $k$’s solution and $L^{k}$ is the total traversal time of the fleet of vehicles in ant $k$’s solution. Clearly, if ant $k$ has achieved the primary objective of fulfilling all customer demand $S$, then $C^{k}$ becomes merely the total traversal time of the fleet.
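Eq. (5) translates directly into code (the parameter names are assumptions for illustration):

```python
def solution_quality(S, s_k, L_k):
    """Penalised solution quality from Eq. (5): C^k = (S - s^k + 1) * L^k.
    S is the total job service time, s_k the service time achieved by
    ant k's solution and L_k its total fleet traversal time; lower is
    better, and missed service time multiplies the traversal time."""
    return (S - s_k + 1) * L_k

print(solution_quality(100, 100, 42.0))  # all demand met: reduces to L_k = 42.0
print(solution_quality(100, 90, 42.0))   # 10 time units missed: 11 * 42 = 462.0
```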
A parallel implementation of $\mathcal{MM}$AS is tested against the exemplar problems from Table 1, with experiments conducted on an AMD Ryzen 2700 processor using 16 parallel threads of execution. The algorithms were compiled using Microsoft C++. Experiments are averaged over 25 individual execution runs for each problem, with a differing random seed used in each instance. The parameters used with $\mathcal{MM}$AS are described in Table 2. The results from these experiments are shown in Table 3, where the issue of scalability is abundantly clear. As the size and complexity of the fleet optimisation problems increase, the ability of $\mathcal{MM}$AS to find a solution which satisfies all the customer demand reduces. Similarly, $\mathcal{MM}$AS cannot obtain solutions with a lower fleet traversal time than the company’s own scheduling when the problem size increases. Therefore, these results support the hypothesis that a nature-inspired swarm algorithm such as ACO suffers from scalability issues.

Addressing the ACO Scalability Issues

Given that the evidence seems to support the hypothesis that ACO methods will struggle to scale to larger, increasingly complex problems, the next step is to attempt to address the underlying reasons behind the poor performance. As previously discussed, a key problem is the degree of decision making required to form solutions versus the probabilistic nature of ACO. Therefore, it can be theorised that if the degree of decision making is reduced, ACO may well scale better. A novel modification to the ACO algorithm known as Partial-ACO (Chitty, 2017) provides a mechanism to achieve this. Essentially, this technique minimises the computational effort required and the probabilistic fallibility of ACO by having ants only consider partial changes to their solutions rather than constructing completely new solutions.
In contrast to standard ACO algorithms, Partial-ACO operates in a population-based manner much the same as P-ACO. Essentially, a population of ants is maintained, each of which represents a solution to the given problem. Pheromone levels are constructed from the edges taken within this population of solutions, with their associated qualities relative to the best-found solution. Partial-ACO also operates in a pure steady-state manner to preserve diversity: an ant only replaces its own best solution with a new solution if the latter is of better quality. Hence, each ant maintains a local memory of its best-yet-found solution. This $l_{best}$ memory enables an ant to only partially change this solution to form a new solution. To partially modify its locally best-found solution, an ant simply picks a random point in the solution as a starting point and a random sub-length of the tour to preserve. The remaining part of the tour is rebuilt using standard ACO methodologies in a P-ACO manner. This process is illustrated in Figure 2. To highlight the computational advantage of this technique, consider retaining 50% of solutions for a 100,000 city TSP. In this instance only 50,000 probabilistic decisions now need to be made and only around 1.25 billion pheromone comparisons would be required, a reduction of 75%. An overview of the Partial-ACO technique is described in Algorithm 1.

Evaluating the Partial-ACO Approach

To test the hypothesis that reducing the degree of decision making that ants need to perform will enable them to scale to larger problems, Partial-ACO will be tested on the same problems as previously. The parameters used for the implementation of Partial-ACO are described in Table 4. Note the lower number of ants in contrast to $\mathcal{MM}$AS; the original Partial-ACO work found a low number of ants to be highly effective. To ensure the same number of solutions is evaluated, Partial-ACO will use six times more iterations.
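The partial-modification step of Partial-ACO can be sketched as follows; `rebuild` stands in for the probabilistic P-ACO reconstruction of the released vertices, and all names are illustrative assumptions:

```python
import random

def partial_modify(l_best, rebuild):
    """Partial-ACO sketch: preserve a random contiguous segment of the
    ant's locally best tour l_best and rebuild only the remainder."""
    n = len(l_best)
    start = random.randrange(n)           # random starting point
    length = random.randrange(1, n)       # random sub-length to preserve
    kept = [l_best[(start + k) % n] for k in range(length)]
    kept_set = set(kept)
    released = [v for v in l_best if v not in kept_set]
    return kept + rebuild(released)       # new candidate solution

# A trivial stand-in rebuild simply reverses the released vertices.
candidate = partial_modify(list(range(10)), rebuild=lambda vs: vs[::-1])
```

Whatever segment is preserved, the result is still a permutation of the original tour, so the ant only ever makes probabilistic decisions over the released portion.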
The results for the MDVRP fleet optimisation problem are shown in Table 5, where it can be observed that in all problem instances reductions in the fleet traversal times are now achieved by Partial-ACO over the commercial company’s methodology. In fact, in many cases the reduction in fleet traversal time is significantly better than that from $\mathcal{MM}$AS, especially for the larger, more complex problems. Disappointingly, though, the Partial-ACO technique was also unable to service all the customer jobs for the larger problems. In terms of execution timings, Partial-ACO is slightly slower than $\mathcal{MM}$AS when evaluating the same number of solutions. This is caused by the requirement to reconstruct the edge pheromone levels at each point as an ant moves through the graph $G$.

Enhancing Partial-ACO

Although the results of the Partial-ACO approach seem promising, they do not strongly reinforce the premise that ants are less effective with higher degrees of decision making. Analysing the Partial-ACO methodology, it could be postulated that modifying a continuous subsection of an ant’s locally best-found tour presents a problem: individual points within the solution cannot be displaced a great distance and are confined to a local neighbourhood in how they can be reorganised. An enhancement to Partial-ACO is therefore proposed which facilitates the movement of points within a given ant’s locally best solution. Instead of one continuous segment of an ant’s solution being preserved and the remaining part probabilistically regenerated, a number of separate blocks throughout the solution are preserved. In this way a point at one end of a given solution can be moved to positions throughout the solution. This should help prevent the ants becoming trapped in local optima.
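The proposed block-based preservation can be sketched in the same style; for simplicity this illustration appends the rebuilt portion after the preserved blocks rather than interleaving them, and all names remain assumptions:

```python
import random

def multi_block_modify(l_best, n_blocks, block_len, rebuild):
    """Enhanced Partial-ACO sketch: preserve `n_blocks` separate blocks
    of the locally best tour, release everything else, and let `rebuild`
    (a stand-in for probabilistic ACO construction) fill the remainder."""
    n = len(l_best)
    kept_idx = set()
    for start in random.sample(range(n), n_blocks):
        for k in range(block_len):
            kept_idx.add((start + k) % n)   # blocks may wrap around
    kept = [l_best[i] for i in sorted(kept_idx)]
    released = [l_best[i] for i in range(n) if i not in kept_idx]
    return kept + rebuild(released)
```

Compared with the single-segment version, released vertices can now come from anywhere in the tour, so a point near one end can be reinserted far from its original position.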
This methodology is actually well suited to the fleet optimisation problem, since each vehicle can be considered a stand-alone aspect of the solution: each preserved block can be a vehicle’s complete job schedule. Before attempting to construct a new solution, an ant can simply decide randomly which vehicle schedules to preserve and then use the probabilistic behaviour of moving through the graph $G$ to assign the remaining customer jobs to the remaining vehicles as normal. Figure 3 demonstrates the principle, where it can be observed that two sections representing vehicle schedules are preserved by an ant from its $l_{best}$ solution, with the rest built up probabilistically. To evaluate the enhancement to Partial-ACO, it is tested against the same problems as previously, using the same parameters as described in Table 4. The results are shown in Table 6 and, when contrasted with those in Table 5, it can be seen that significant improvements have been made over the standard Partial-ACO approach. Now, for all problem instances including the most complex, all the customer jobs have been serviced. Furthermore, significantly improved reductions in the fleet traversal times have been achieved: as much as an additional 26% reduction in fleet traversal time for the ThreeWeek_2 problem instance. With regard to execution timings, block-based Partial-ACO is slightly slower, caused by the overhead of assembling blocks of retained solution rather than one continuous section. Consequently, from these results it can be inferred that when using solution preservation with Partial-ACO, smaller random blocks should be preserved rather than a continuous section to obtain improved results.

Reducing the Degree of Modification

The enhanced Partial-ACO approach has provided a significant improvement over standard ACO techniques such as $\mathcal{MM}$AS.
However, recall that the original hypothesis motivating the development of Partial-ACO was that collectively intelligent meta-heuristics can fail to scale well to larger problems because of the degree of decision making necessitated. This hypothesis seems to be borne out to some degree by the results achieved by Partial-ACO. However, it is possible to test this hypothesis further by reducing the degree of permissible modification an ant can make. Currently, an ant will randomly preserve any amount of its locally best solution and modify the rest using the ACO probabilistic rules, approximately 50% of the solution on average. To avoid a large degree of redesign, a maximum degree of modification can be imposed on an ant changing its locally best solution. This will firstly have the benefit of increasing the speed of Partial-ACO but also, if the hypothesis is correct, lead to improved optimisation. As such, the previous experiments are rerun using a maximum modification limit ranging from 50% of the solution down to 10%, in increments of 10%. The improved block-preserving version of Partial-ACO is used and additionally, to prevent ants becoming trapped in local optima, with a small random probability (0.001) an ant can modify its locally best-found solution to any degree. The results from reducing the degree of permissible modification of ants’ locally best solutions are shown in Figure 4. These describe the reductions in fleet traversal times over the commercial company’s own scheduling, together with execution timings. The percentage of customer jobs serviced is not shown, as in all cases 100% of jobs were serviced. A clear trend of improved reductions in fleet traversal times can be observed as the degree of permissible modification is reduced. This further reinforces the hypothesis that, due to the probabilistic nature of ants, the degree of decision making they are exposed to must be reduced in order for the technique to scale.
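Capping the permissible modification, while keeping the small escape probability, could look like the following sketch (names and structure are assumptions):

```python
import random

def preserved_length(n, max_modify=0.1, escape_prob=0.001):
    """Number of elements of the l_best solution to preserve when at
    most a `max_modify` fraction may be rebuilt; with a small probability
    the ant may instead modify any amount, to escape local optima."""
    if random.random() < escape_prob:
        return random.randrange(1, n)               # unrestricted change
    return random.randrange(int(n * (1 - max_modify)), n)
```

With `max_modify=0.1`, an ant rebuilds at most 10% of its solution on almost every step, the setting that performed best on the largest problems above.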
Moreover, the larger the problem, the more pronounced the effect, as evidenced by the month-long and six-week-long problem instances. Remarkably, for the largest problem, reducing ants’ decision making by 90% yields the best results, strongly reinforcing the hypothesis that ants significantly benefit from reduced decision making. A further benefit of reduced ant decision making is faster execution times. Not only does the Partial-ACO approach provide improved reductions in fleet traversal times, but it can also achieve these reductions much faster by reducing the probabilistic decisions that ants need to make. In fact, from these results it can be stated that Partial-ACO is more accurate, much faster and more scalable than standard ACO as a consequence of the reduced decision making of ants within the algorithm.

Conclusions

This paper has posed the hypothesis that although algorithms inspired by the collective behaviours exhibited by natural systems have been effective for simple problems of human concern, they may fail to scale to problems of much greater complexity. Evidence supporting this hypothesis is provided by applying Ant Colony Optimisation (ACO) to a range of increasingly complex fleet optimisation problems, where degrading results are observed as complexity rises. A postulated theory is that the degree of decision making required by ants to construct solutions becomes too great. Given a small probability of an ant choosing poorly at each decision point, the greater the number of decisions required to construct a solution and the more choices available, the greater the probability of reduced solution quality. Consequently, this paper applies the Partial-ACO approach to reduce the decision making of ants.
Indeed, the Partial-ACO approach provided much improved results for a complex fleet optimisation problem, enabling ACO to scale to much larger problems, with reductions of over 50% in traversal times achieved, and with subsequent savings in fuel costs for the given company and similarly significant reductions in vehicle emissions and city traffic. In fact, remarkably, for the larger problems, reducing ants’ decision making by up to 90% yielded the best results. Consequently, this reinforces the posed hypothesis that for collective behaviour algorithms to scale effectively, the degree of decision making should be minimised as much as possible. However, further studies need to be performed with bio-inspired algorithms besides ACO, such as PSO and ABC, and on problem areas other than fleet optimisation, to provide better supporting evidence for the hypothesis posed by this paper.

Acknowledgement

This work was carried out for the System Analytics for Innovation project, part-funded by the European Regional Development Fund.

References

Batty, M. (2013). Big data, smart cities and city planning. Dialogues in Human Geography, 3(3):274–279.
Cai, Q., Gong, M., Ma, L., Ruan, S., Yuan, F., and Jiao, L. (2015). Greedy discrete particle swarm optimization for large-scale social network clustering. Information Sciences, 316:503–516.
Cecilia, J. M., García, J. M., Nisbet, A., Amos, M., and Ujaldón, M. (2013). Enhancing data parallelism for ant colony optimization on GPUs. Journal of Parallel and Distributed Computing, 73(1):42–51.
Cheng, R. and Jin, Y. (2015a). A competitive swarm optimizer for large scale optimization. IEEE Transactions on Cybernetics, 45(2):191–204.
Cheng, R. and Jin, Y. (2015b). A social learning particle swarm optimization algorithm for scalable optimization. Information Sciences, 291:43–60.
Chitty, D. M. (2017). Applying ACO to large scale TSP instances.
In UK Workshop on Computational Intelligence, pages 104–118. Springer. Dantzig and Ramser (1959) Dantzig, G. B. and Ramser, J. H. (1959). The truck dispatching problem. Management science, 6(1):80–91. Dawson and Stewart (2013) Dawson, L. and Stewart, I. (2013). Improving ant colony optimization performance on the GPU using CUDA. In Evolutionary Computation (CEC), 2013 IEEE Congress on, pages 1901–1908. IEEE. DeléVacq et al. (2013) DeléVacq, A., Delisle, P., Gravel, M., and Krajecki, M. (2013). Parallel ant colony optimization on graphics processing units. Journal of Parallel and Distributed Computing, 73(1):52–61. Dorigo and Gambardella (1997) Dorigo, M. and Gambardella, L. M. (1997). Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Transactions on evolutionary computation, 1(1):53–66. Eberhart and Kennedy (1995) Eberhart, R. and Kennedy, J. (1995). A new optimizer using particle swarm theory. In Micro Machine and Human Science, 1995. MHS’95., Proceedings of the Sixth International Symposium on, pages 39–43. IEEE. Gambardella and Dorigo (1996) Gambardella, L. M. and Dorigo, M. (1996). Solving symmetric and asymmetric tsps by ant colonies. In Proceedings of IEEE international conference on evolutionary computation, pages 622–627. IEEE. Guntsch and Middendorf (2002) Guntsch, M. and Middendorf, M. (2002). A population based approach for ACO. In Workshops on Applications of Evolutionary Computation, pages 72–81. Springer. Ismkhan (2017) Ismkhan, H. (2017). Effective heuristics for ant colony optimization to handle large-scale problems. Swarm and Evolutionary Computation, 32:140–149. Karaboga and Basturk (2007) Karaboga, D. and Basturk, B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm. Journal of global optimization, 39(3):459–471. Li et al. (2011) Li, X., Liao, J., and Cai, M. (2011). Ant colony algorithm for large scale tsp. 
In 2011 International Conference on Electrical and Control Engineering, pages 573–576. IEEE. Murgante and Borruso (2015) Murgante, B. and Borruso, G. (2015). Smart cities in a smart world. In Future City Architecture for Optimal Living, pages 13–35. Springer. Peake et al. (2018) Peake, J., Amos, M., Yiapanis, P., and Lloyd, H. (2018). Vectorized candidate set selection for parallel ant colony optimization. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 1300–1306. ACM. Piccand et al. (2008) Piccand, S., O’Neill, M., and Walker, J. (2008). On the scalability of particle swarm optimisation. In 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), pages 2505–2512. IEEE. Stützle and Hoos (2000) Stützle, T. and Hoos, H. H. (2000). Max–min ant system. Future generation computer systems, 16(8):889–914. Yan and Lu (2018) Yan, D. and Lu, Y. (2018). Recent advances in particle swarm optimization for large scale problems. Journal of Autonomous Intelligence, 1(1):22–35.
Equivalence of the Sutherland Model to Free Particles on a Circle N. Gurappa and Prasanta K. Panigrahi panisprs@uohyd.ernet.in, panisp@uohyd.ernet.in School of Physics, University of Hyderabad, Hyderabad, Andhra Pradesh, 500 046 INDIA. Abstract A method is developed to construct the solutions of one- and many-variable linear differential equations of arbitrary order. Using this, the $N$-particle Sutherland model, with pair-wise inverse sine-square interactions among the particles, is shown to be equivalent to free particles on a circle. The applicability of our method to many other few- and many-body problems is also illustrated. pacs: PACS: 03.65.-w, 03.65.Ge The Calogero-Sutherland model (CSM) [1, 2] and its generalizations [3, 4] are extremely well-studied quantum mechanical systems, having relevance to diverse branches of physics [5], such as universal conductance fluctuations in mesoscopic systems [6], the quantum Hall effect [7], wave propagation in stratified fields [8], random matrix theory [2, 5, 9, 10], fractional statistics [11], gravity [12] and gauge theories [13]. In particular, the Sutherland model [3], describing $N$ identical particles with pair-wise inverse distance-square interaction on a circle, has received wide attention in the context of questions related to statistics [14]. The spectrum of this exactly solvable model can be interpreted as arising from a set of free quasi-particles satisfying a generalized exclusion principle [15]. Some time ago, the present authors showed that the CSM is equivalent to a set of free harmonic oscillators [16]. This mapping throws considerable light on the algebraic structure of this correlated system [17, 18]. Later, the same model, without the harmonic confinement, was shown to be unitarily equivalent to free particles [19]. However, the connection between the Sutherland model and free particles has so far remained an open problem.
In this paper, we establish the equivalence of the Sutherland model to free particles on a circle, for an arbitrary number of particles. The paper is organized as follows. (i) A general method is developed to solve any linear differential equation of arbitrary order, (ii) a few examples are discussed to show how the present technique yields known, as well as new, results, (iii) the usefulness of this technique for solving the Schrödinger equation with complicated potentials is pointed out, (iv) the mapping between the $A_{N-1}$ Calogero model and free harmonic oscillators is rederived using the present method, and, finally, (v) we show the equivalence of the Sutherland model with free particles on a circle. Consider the following differential equation, $$\displaystyle\left(F(D)+P(x,d/dx)\right)y(x)=0\quad,$$ (1) where $D\equiv x\frac{d}{dx}$ and $F(D)=\sum_{n=-\infty}^{n=\infty}a_{n}D^{n}$ is a diagonal operator in the space of monomials spanned by $x^{n}$, the $a_{n}$'s being some parameters. $P(x,d/dx)$ can be an arbitrary polynomial function of $x$ and $\frac{d}{dx}$. We prove that the following ansatz, $$\displaystyle y(x)=C_{\lambda}\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m}\right\}x^{\lambda}\equiv C_{\lambda}\hat{G}_{\lambda}x^{\lambda}\qquad,$$ (2) is a solution of the above equation, provided $F(D)x^{\lambda}=0$ and the coefficient of $x^{\lambda}$ in $y(x)-C_{\lambda}x^{\lambda}$ is zero (no summation over $\lambda$); here, $C_{\lambda}$ is a constant. The case in which the equation $F(D)x^{\lambda}=0$ does not have distinct roots is not considered here and will be treated separately elsewhere. Substituting Eq. (2), modulo $C_{\lambda}$, in Eq.
(1), $$\displaystyle\left(F(D)+P(x,d/dx)\right)\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m}\right\}x^{\lambda}$$ $$\displaystyle=F(D)\left[1+\frac{1}{F(D)}P(x,d/dx)\right]\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m}\right\}x^{\lambda}$$ $$\displaystyle=F(D)\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m}x^{\lambda}+F(D)\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m+1}x^{\lambda}$$ $$\displaystyle=F(D)x^{\lambda}-F(D)\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m+1}x^{\lambda}+F(D)\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{F(D)}P(x,d/dx)\right]^{m+1}x^{\lambda}$$ $$\displaystyle=0\qquad.$$ (3) Eq. (2), which connects the solution of a given differential equation to the monomials, can also be generalized to many variables. In order to show that this rather straightforward procedure indeed yields non-trivial results, we explicitly work out a few examples below and then proceed to prove the equivalence of the Sutherland model to free particles on a circle. Consider the Hermite differential equation, which arises in the context of the quantum harmonic oscillator, $$\displaystyle\left(D-n-\frac{1}{2}\frac{d^{2}}{dx^{2}}\right)H_{n}(x)=0\qquad.$$ (4) Here, $F(D)=D-n$ and $F(D)x^{\lambda}=0$ yields $\lambda=n$.
Hence, $$\displaystyle H_{n}(x)=C_{n}\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{D-n}(-1/2)(d^{2}/dx^{2})\right]^{m}x^{n}\qquad.$$ (5) Using $[D\,,\,(d^{2}/dx^{2})]=-2(d^{2}/dx^{2})$, it is easy to see that $$\displaystyle\left[\frac{1}{(D-n)}(-1/2)(d^{2}/dx^{2})\right]^{m}x^{n}=(-1/2)^{m}(d^{2}/dx^{2})^{m}\prod_{l=1}^{m}\frac{1}{(-2l)}x^{n}\qquad,$$ and $$\displaystyle H_{n}(x)=C_{n}\sum_{m=0}^{\infty}(-1/4)^{m}\frac{1}{m!}(d^{2}/dx^{2})^{m}x^{n}=C_{n}e^{-\frac{1}{4}\frac{d^{2}}{dx^{2}}}x^{n}\qquad;$$ (6) this is a well-known result. A similar expression also holds for the Laguerre polynomials, which matches the one found in [20]. In order to make an important remark, we list the solutions of some frequently encountered differential equations in various branches of physics [21]. Legendre polynomials: $$\displaystyle P_{n}(x)=C_{n}e^{-\left\{1/(2[D+n+1])\right\}(d^{2}/dx^{2})}\,\,x^{n}\qquad.$$ Associated Legendre polynomials: $$\displaystyle P_{n}^{m}(x)=C_{n}(1-x^{2})^{m/2}e^{-\left\{1/(2[D+n+m+1])\right\}(d^{2}/dx^{2})}\,\,x^{n-m}\qquad.$$ Bessel functions: $$\displaystyle J_{\pm\nu}(x)=C_{\pm\nu}e^{-\{1/(2[D\pm\nu])\}x^{2}}\,\,x^{\pm\nu}\qquad.$$ Generalized Bessel functions: $$\displaystyle u_{\pm}(x)=C_{\pm}e^{-\{\beta\gamma^{2}/(2[D+\alpha\pm\beta\nu])\}x^{2\beta}}\,\,x^{\beta\nu-\alpha\pm\beta\nu}\qquad.$$ Gegenbauer polynomials: $$\displaystyle C_{n}^{\lambda}(x)=C_{n}e^{-\{1/(2[D+n+2\lambda])\}(d^{2}/dx^{2})}\,\,x^{n}\qquad.$$ Hypergeometric functions: $$\displaystyle y_{\pm}(\alpha,\beta;\gamma;x)=C_{\pm}e^{-\{1/(D+\lambda_{\pm})\}\hat{A}}\,\,x^{-\lambda_{\mp}}\qquad,$$ where $\lambda_{\pm}$ is either $\alpha$ or $\beta$ and $\hat{A}\equiv x\frac{d^{2}}{dx^{2}}+\gamma\frac{d}{dx}$. All the above series solutions have descending powers of $x$.
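The Hermite formula of Eq. (6) can be checked directly with a computer algebra sketch. The only assumption beyond the text is the conventional normalization $C_{n}=2^{n}$ for the physicists' Hermite polynomials:

```python
import sympy as sp

x = sp.symbols('x')
n = 4
# act with exp(-(1/4) d^2/dx^2) on x^n; the exponential series terminates
# after n//2 terms because each application removes two powers of x
expr = sum(sp.Rational(-1, 4)**m / sp.factorial(m) * sp.diff(x**n, x, 2*m)
           for m in range(n//2 + 1))
# with the normalization C_n = 2^n this reproduces the Hermite polynomial
assert sp.expand(2**n * expr - sp.hermite(n, x)) == 0
```

For $n=4$ the operator series gives $x^{4}-3x^{2}+\tfrac{3}{4}$, and multiplying by $2^{4}$ yields $16x^{4}-48x^{2}+12=H_{4}(x)$.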
In order to get the series in ascending powers, one has to replace $x$ by $\frac{1}{x}$ in the original differential equation and generate the solutions via Eq. (2). However, the number of solutions will remain the same. One can also generate the series solutions by multiplying the original differential equation by $x^{2}$ and then rewriting $x^{2}\frac{d^{2}}{dx^{2}}=D(D-1)=F(D)$. Meijer's G-function: $$\displaystyle G_{pq}^{mn}\left(x\mid_{b_{s}}^{a_{r}}\right)=\left\{\sum_{k=0}^{\infty}\left[\frac{1}{\prod_{j=1}^{q}\left(D-b_{j}\right)}\left((-1)^{p-m-1}x\prod_{j=1}^{p}\left(D-a_{j}+1\right)\right)\right]^{k}\right\}\,\,x^{b_{s}}\qquad.$$ Neumann's polynomials: $$\displaystyle O_{n}(x)=\left\{\sum_{r=0}^{\infty}(-1)^{r}\left[\frac{1}{[(D+1)^{2}-n^{2}]}x^{2}\right]^{r}\left(\frac{1}{[(D+1)^{2}-n^{2}]}\right)\right\}\,\,\left(x\cos^{2}(n\pi/2)+n\sin^{2}(n\pi/2)\right)\,\,;$$ the cases of the Struve, Lommel, Anger and Weber functions are identical to the above one. The Jacobi, Schläfli, Whittaker, Chebyshev and some other polynomials are not given here, since the list is rather lengthy. Now the remark follows: $\hat{G}_{\lambda}$ becomes independent of the roots of the equation $F(D)x^{\lambda}=0$ only when $F(D)$ is linear in $D$. The solution of the following equation with a periodic potential, $$\displaystyle\frac{d^{2}y}{dx^{2}}+a\cos(x)y=0\quad,$$ (7) can be found, after multiplying Eq. (7) by $x^{2}$ and rewriting $x^{2}\frac{d^{2}}{dx^{2}}$ as $(D-1)D$, to be $$\displaystyle y(x)=\sum_{m,\{n_{i}\}=0}^{\infty}\frac{(-a)^{m}}{m!}\left\{\prod_{i=1}^{m}\frac{(-1)^{n_{i}}}{(2n_{i})!}\right\}\left\{\prod_{r=1}^{m}\frac{(2[m+\lambda/2-r+\sum_{i=1}^{m+1-r}n_{i}])!}{(2[m+\lambda/2+1-r+\sum_{i=1}^{m+1-r}n_{i}])!}\right\}\,x^{2(m+\sum_{i=1}^{m}n_{i}+\lambda/2)}\qquad,$$ (8) where $\lambda=0$ or $1$.
In the same manner, one can write down the solutions of Mathieu's equation as well [21]. Using this method, one can, in principle, write down the ground and first excited states of the one-variable Schrödinger equation. We first illustrate the procedure for the harmonic oscillator ($\hbar=m=\omega=1$), $$\displaystyle\left(\frac{d^{2}}{dx^{2}}+(2E_{n}-x^{2})\right)\psi_{n}=0\qquad,$$ (9) before proceeding to other non-trivial examples. Multiplying by $x^{2}$, the above can be written as $$\displaystyle\left((D-1)D+x^{2}(2E_{n}-x^{2})\right)\psi_{n}=0\qquad.$$ (10) For $n=0$ and $1$, $D(D-1)x^{n}=0$. Using Eq. (2), the solution for $n=0$ is $$\displaystyle\psi_{0}=C_{0}\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{(D-1)D}(x^{2}(2E_{0}-x^{2}))\right]^{m}\right\}x^{0}=C_{0}\left(1-\frac{[2E_{0}]}{2!}x^{2}+\frac{(2!+[2E_{0}]^{2})}{4!}x^{4}-\frac{(14[2E_{0}]+[2E_{0}]^{3})}{6!}x^{6}+\cdots\right)\,\,.$$ (11) Note that $\psi_{0}$ is an expansion in powers of $x$ whose coefficients are polynomials in $E_{0}$. Only when $E_{0}=1/2$ can the series be written in the closed form $C_{0}e^{-\frac{1}{2}x^{2}}$, which satisfies all the properties required of a quantum mechanical ground state. Similarly, the first excited state can be found by applying $\hat{G}_{1}$ to $x$. To find the $n$-th excited state, one can differentiate the Schrödinger equation $n$ times, multiply it by $x^{n}$ and use $x^{n}\frac{d^{n}}{dx^{n}}=\prod_{l=0}^{n-1}(D-l)=F(D)$. In the case of complicated potentials, this method can be very useful when applied in conjunction with numerical algorithms.
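The statement that only $E_{0}=1/2$ resums the series to a normalizable closed form can be verified symbolically. The sketch below uses the two-term recursion $(2k+2)(2k+1)c_{k+1}=c_{k-1}-2E_{0}c_{k}$, which is an equivalent restatement of the operator series of Eq. (11) obtained by inserting $\psi_{0}=\sum_{k}c_{k}x^{2k}$ into Eq. (10); the truncation order is our own choice:

```python
import sympy as sp

x, E0 = sp.symbols('x E0')
Nmax = 8  # truncation order (number of series coefficients kept)
# recursion implied by Eq. (10): (2k+2)(2k+1) c_{k+1} = c_{k-1} - 2 E0 c_k,
# with psi_0 = sum_k c_k x^{2k}; the coefficients are polynomials in E0
c = [sp.Integer(1), -E0]          # c_0 = 1, c_1 = -E0
for k in range(1, Nmax):
    c.append(sp.expand((c[k-1] - 2*E0*c[k]) / ((2*k + 2)*(2*k + 1))))
# only E0 = 1/2 turns the series into exp(-x^2/2) = sum_k (-1/2)^k x^{2k} / k!
target = [sp.Rational(-1, 2)**k / sp.factorial(k) for k in range(len(c))]
assert all(sp.simplify(ck.subs(E0, sp.Rational(1, 2)) - tk) == 0
           for ck, tk in zip(c, target))
```

For any other value of $E_{0}$ the coefficients fail to match the Gaussian expansion from $k=2$ onwards, which is the algebraic content of "tuning" $E_{0}$ in the anharmonic case below.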
The exact ground-state wavefunction of the Schrödinger equation with the anharmonic potential $V(x)=x^{2}/2+cx^{4}/2$ is $$\displaystyle\psi_{0}=C_{0}\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{(D-1)D}(x^{2}(2E_{0}-x^{2}-cx^{4}))\right]^{m}\right\}\,\cdot 1\qquad.$$ (12) One can numerically tune $E_{0}$ to obtain an appropriate $\psi_{0}$. In the following, we apply the above technique to differential equations involving many variables. Consider the $A_{N-1}$ Calogero-Sutherland model, which was shown to be equivalent to free harmonic oscillators [16], $$\displaystyle\left(-\frac{1}{2}\sum_{i=1}^{N}\frac{{\partial}^{2}}{\partial x_{i}^{2}}+\frac{1}{2}\sum_{i=1}^{N}x_{i}^{2}+\frac{1}{2}g^{2}\sum_{{i,j=1}\atop{i\neq j}}^{N}\frac{1}{(x_{i}-x_{j})^{2}}-E_{n}\right)(\psi_{0}\phi_{n})=0\qquad.$$ (13) This can be brought to the following form, after the removal of the ground-state wavefunction $\psi_{0}=\exp\{-\frac{1}{2}\sum_{i}x_{i}^{2}\}\prod_{i<j}^{N}|x_{i}-x_{j}|^{\alpha}$: $$\displaystyle\left(\sum_{i}x_{i}\frac{\partial}{\partial x_{i}}+E_{0}-E_{n}-\hat{A}\right)\phi_{n}=0\qquad,$$ (14) where $E_{0}=\frac{1}{2}N+\frac{1}{2}N(N-1)\alpha$ is the ground-state energy and $\hat{A}\equiv\frac{1}{2}\sum_{i}\frac{\partial^{2}}{\partial x_{i}^{2}}+\alpha\sum_{i\neq j}\frac{1}{(x_{i}-x_{j})}\frac{\partial}{\partial x_{i}}$. Without loss of generality, we have quantized the above system as bosons. Rewriting Eq. (14) as $$\displaystyle\left(\sum_{i}x_{i}\frac{\partial}{\partial x_{i}}-n+n+E_{0}-E_{n}-\hat{A}\right)\phi_{n}=0\qquad,$$ (15) the solution can be written by using Eq. (2): $$\displaystyle\phi_{n}=C_{n}\left\{\sum_{m=0}^{\infty}(-1)^{m}\left[\frac{1}{(\sum_{i}x_{i}\frac{\partial}{\partial x_{i}}-n)}(n+E_{0}-E_{n}-\hat{A})\right]^{m}\right\}S_{n}(\{x_{i}\})\quad;$$ (16) here, $(\sum_{i}x_{i}\frac{\partial}{\partial x_{i}}-n)S_{n}(\{x_{i}\})=0$ and the $S_{n}(\{x_{i}\})$'s are homogeneous functions of degree $n$.
In order to avoid singular solutions when the inverse of $(\sum_{i}x_{i}\frac{\partial}{\partial x_{i}}-n)$ acts on $S_{n}(\{x_{i}\})$, we choose $n+E_{0}-E_{n}=0$; this yields the familiar energy spectrum of the Calogero-Sutherland model. Further, one has to choose $S_{n}(\{x_{i}\})$ to be completely symmetric under the exchange of the $x_{i}$'s, such that the action of $\hat{A}$ yields polynomial solutions which are normalizable with $\psi_{0}$ as the weight function [16, 22]. As in the case of the Hermite polynomials, one can easily prove that $$\displaystyle\phi_{n}=C_{n}e^{-\frac{1}{2}\hat{A}}S_{n}(\{x_{i}\})\qquad.$$ (17) In the following, we show that the Sutherland model is equivalent to free particles on a circle. The Schrödinger equation is $$\displaystyle\left(-\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}}+2\beta(\beta-1)\frac{\pi^{2}}{L^{2}}\sum_{i<j}\frac{1}{\sin^{2}[\pi(x_{i}-x_{j})/L]}-E_{\lambda}\right)\psi_{\lambda}(\{x_{i}\})=0\qquad.$$ (18) Choosing $z_{j}=e^{2\pi ix_{j}/L}$ and writing $\psi_{\lambda}(\{z_{i}\})=\prod_{i}z_{i}^{-(N-1)\beta/2}\prod_{i<j}(z_{i}-z_{j})^{\beta}J_{\lambda}(\{z_{i}\})$, the above equation becomes $$\displaystyle\left(\sum_{i}D_{i}^{2}+\beta\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}(D_{i}-D_{j})+\tilde{E}_{0}-\tilde{E}_{\lambda}\right)J_{\lambda}(\{z_{i}\})=0\qquad,$$ (19) where $D_{i}\equiv z_{i}\frac{\partial}{\partial z_{i}}$, $\tilde{E}_{\lambda}\equiv(\frac{L}{2\pi})^{2}E_{\lambda}$, $\tilde{E}_{0}\equiv(\frac{L}{2\pi})^{2}E_{0}$ and $E_{0}=\frac{1}{3}(\frac{\pi}{L})^{2}\beta^{2}N(N^{2}-1)$ is the ground-state energy. Here, the $J_{\lambda}(\{z_{i}\})$ are known as Jack polynomials [23, 24, 25, 26]. $\sum_{i}D_{i}^{2}$ is a diagonal operator in the space spanned by the monomial symmetric functions $m_{\{\lambda\}}$, with eigenvalues $\sum_{i=1}^{N}\lambda_{i}^{2}$ [25].
Rewriting Eq. (19) in the form $$\displaystyle\left(\sum_{i}(D_{i}^{2}-\lambda_{i}^{2})+\beta\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}(D_{i}-D_{j})+\tilde{E}_{0}+\sum_{i}\lambda_{i}^{2}-\tilde{E}_{\lambda}\right)J_{\lambda}(\{z_{i}\})=0\qquad,$$ (20) one can immediately show that $$\displaystyle J_{\lambda}(\{z_{i}\})=C_{\lambda}\left\{\sum_{n=0}^{\infty}(-1)^{n}\left[\frac{1}{\sum_{i}(D_{i}^{2}-\lambda_{i}^{2})}\left(\beta\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}(D_{i}-D_{j})+\tilde{E}_{0}+\sum_{i}\lambda_{i}^{2}-\tilde{E}_{\lambda}\right)\right]^{n}\right\}m_{\lambda}(\{z_{i}\})\equiv C_{\lambda}\hat{G}_{\lambda}m_{\lambda}(\{z_{i}\})\qquad.$$ (21) It is easy to check that $\hat{G}_{\lambda}$ maps the Sutherland model to free particles on a circle, i.e., $$\displaystyle(\psi_{0}\hat{G}_{\lambda})^{-1}H_{S}(\psi_{0}\hat{G}_{\lambda})=\left(\frac{2\pi}{L}\right)^{2}\left(\sum_{i}D_{i}^{2}-\sum_{i}\lambda_{i}^{2}+\tilde{E}_{\lambda}\right)=-\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}}-\left(\frac{2\pi}{L}\right)^{2}\sum_{i}\lambda_{i}^{2}+{E}_{\lambda}\qquad,$$ (22) where $H_{S}$ is the Sutherland Hamiltonian and $\psi_{0}$ is its ground-state wavefunction.
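As a concrete check of Eq. (21), the following computer algebra sketch evaluates the operator series for the smallest non-trivial case, $N=2$ and $\lambda=(2,0)$. The constant piece of the operator is hard-coded to $-2\beta$, its value for this $\lambda$ once the eigenvalue condition is imposed; the function names are our own:

```python
import sympy as sp

z1, z2, b = sp.symbols('z1 z2 beta')
m2 = z1**2 + z2**2          # monomial symmetric function m_(2)
m11 = z1*z2                 # m_(1,1)

def D(f, z):                # D_i = z_i d/dz_i
    return sp.expand(z*sp.diff(f, z))

def Zhat(f):
    # beta*(z1+z2)/(z1-z2)*(D1 - D2) plus the constant E0~ + sum(lam^2) - E_lam~,
    # which equals -beta*sum_i (N+1-2i)*lam_i = -2*beta for lam = (2,0)
    core = (z1 + z2)/(z1 - z2)*(D(f, z1) - D(f, z2))
    return sp.expand(sp.cancel(b*core) - 2*b*f)

def Shat(f):
    # apply the inverse of sum_i D_i^2 - sum_i lam_i^2 (= D1^2 + D2^2 - 4) termwise
    out = 0
    for term in sp.Add.make_args(Zhat(f)):
        if term == 0:
            continue
        (p, q), = term.as_poly([z1, z2]).monoms()
        out += term/(p**2 + q**2 - 4)
    return sp.expand(out)

assert sp.expand(Zhat(m2) - 4*b*m11) == 0            # Z m_2 = 4 beta m_(1,1)
assert sp.expand(Shat(m2) + 2*b*m11) == 0            # S m_2 = -2 beta m_(1,1)
assert sp.expand(Shat(Shat(m2)) + 2*b**2*m11) == 0   # S^n m_2 = -2 beta^n m_(1,1)
# resumming the geometric series in Eq. (21):
# J_2 = m_2 + (sum_{n>=1} (-1)^n (-2) beta^n) m_(1,1) = m_2 + 2 beta/(1+beta) m_(1,1)
```

The three assertions reproduce the operator actions quoted later in the text, and the geometric resummation gives $J_{2}=m_{2}+\frac{2\beta}{1+\beta}m_{1^{2}}$.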
For convenience, we define $$\displaystyle\hat{S}\equiv\frac{1}{\sum_{i}(D_{i}^{2}-\lambda_{i}^{2})}\hat{Z}\qquad,\qquad\hat{Z}\equiv\beta\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}(D_{i}-D_{j})+\tilde{E}_{0}+\sum_{i}\lambda_{i}^{2}-\tilde{E}_{\lambda}\qquad.$$ (23) The action of $\hat{S}$ on $m_{\lambda}(\{z_{i}\})$ yields singularities unless one chooses the coefficient of $m_{\lambda}$ in $\hat{Z}\,m_{\lambda}(\{z_{i}\})$ to be zero; this condition yields the well-known eigenspectrum of the Sutherland model: $$\tilde{E}_{\lambda}=\tilde{E}_{0}+\sum_{i}(\lambda_{i}^{2}+\beta[N+1-2i]\lambda_{i})\qquad.$$ Using the above, one can write down the Jack polynomials as $$\displaystyle J_{\lambda}(\{z_{i}\})=\sum_{n=0}^{\infty}(-\beta)^{n}\left[\frac{1}{\sum_{i}(D_{i}^{2}-\lambda_{i}^{2})}\left(\sum_{i<j}\frac{z_{i}+z_{j}}{z_{i}-z_{j}}(D_{i}-D_{j})-\sum_{i}(N+1-2i)\lambda_{i}\right)\right]^{n}m_{\lambda}(\{z_{i}\})\qquad.$$ (24) For the sake of illustration, we give the computation of $J_{2}$ for the two-particle case. Now, $m_{2}=z_{1}^{2}+z_{2}^{2}$ and it is easy to check that $$\displaystyle\hat{Z}m_{2}=4\beta z_{1}z_{2}=4\beta m_{1^{2}}\,,\qquad\hat{S}m_{2}=\frac{1}{\sum_{i}D_{i}^{2}-4}(4\beta m_{1^{2}})=-2\beta m_{1^{2}}\,,\qquad\hat{S}^{n}m_{2}=-2(\beta)^{n}m_{1^{2}}\quad\mbox{for}\quad n\geq 1\quad.$$ Substituting the above result in Eq.
(21), apart from $C_{2}$, $$\displaystyle J_{2}=m_{2}+\left(\sum_{n=1}^{\infty}(-1)^{n}(-2)(\beta)^{n}\right)m_{1^{2}}=m_{2}+2\beta\left(\sum_{n=0}^{\infty}(-\beta)^{n}\right)m_{1^{2}}=m_{2}+\frac{2\beta}{1+\beta}m_{1^{2}}\qquad;$$ (25) this is the desired result. Earlier, Lapointe et al. obtained a Rodrigues-type formula for the Jack polynomials [27]. Eq. (24) is a new formula for the Jack polynomials. In the following, we show that the above method can be generalized to more general many-body systems. Consider $$\displaystyle\left(\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}D_{i}^{n})+\hat{A}\right)P_{\lambda}(\{z_{i}\})=B_{\lambda}(\{z_{i}\})\qquad,$$ (26) where the $a_{n}$'s are some parameters, $D_{i}$ is the same as above, $\hat{A}$ is a function of $z_{i}$ and $\partial/\partial z_{i}$, and $B_{\lambda}(\{z_{i}\})$ is a source term. Case (i): When $B_{\lambda}(\{z_{i}\})=0$ and $\hat{A}m_{\lambda}=\epsilon_{\lambda}m_{\lambda}+\sum_{\mu<\lambda}C_{\mu\lambda}m_{\mu}$, where the $m_{\lambda}$'s are the monomial symmetric functions [25] and $\epsilon_{\lambda}$ and $C_{\mu\lambda}$ are some constants, we get $$\displaystyle P_{\lambda}(\{z_{i}\})=\sum_{r=0}^{\infty}(-1)^{r}\left[\frac{1}{\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}D_{i}^{n})-\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}{\lambda}_{i}^{n})}(\hat{A}-\epsilon_{\lambda})\right]^{r}m_{\lambda}(\{z_{i}\})$$ (27) with $\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}{\lambda}_{i}^{n})+\epsilon_{\lambda}=0$.
Case (ii): When $B_{\lambda}(\{z_{i}\})=\sum_{\mu\neq\lambda}K_{\mu}m_{\mu}$ and $\hat{A}m_{\mu}=\sum_{\mu\neq\lambda}C_{\mu\lambda}m_{\mu}$, where $K_{\mu}$ and $C_{\mu\lambda}$ are some constants, then $$\displaystyle P_{\lambda}(\{z_{i}\})=\sum_{r=0}^{\infty}(-1)^{r}\left[\frac{1}{\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}D_{i}^{n})-\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}{\lambda}_{i}^{n})}\hat{A}\right]^{r}\left[\frac{1}{\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}D_{i}^{n})-\sum_{n=-\infty}^{\infty}a_{n}(\sum_{i}{\lambda}_{i}^{n})}\right]B_{\lambda}(\{z_{i}\})\,\,.$$ (28) In conclusion, we have developed a general method to solve any linear differential equation of arbitrary order. The major advantage of this technique lies in the fact that, in order to obtain the solutions, one does not have to solve any recursion relations or integral equations. By applying this method, we obtained new formulae for many differential equations of interest to theoretical physicists. We also discussed how to obtain the ground and excited states of Schrödinger equations with complicated potentials. In particular, we have written down the exact ground-state wavefunction for the anharmonic potential $V(x)=x^{2}/2+cx^{4}/2$ as a function of the energy parameter, which has to be tuned to obtain normalizable solutions. This method treats both the $A_{N-1}$ Calogero-Sutherland model and the Sutherland model with inverse sine-square interactions on an equal footing: the former is equivalent to free harmonic oscillators and the latter to free particles on a circle. We also proposed a general scheme for generating more general symmetric functions. This scheme may find interesting applications in the context of obtaining $q$-deformed versions of known polynomials [21, 25]. The authors acknowledge extremely useful discussions with Profs. V. Srinivasan and S. Chaturvedi. N.G. thanks the U.G.C. (India) for financial support. References [1] F.
Calogero, J. Math. Phys. 12, 419 (1971). [2] B. Sutherland, J. Math. Phys., 12, 246 (1971); 12, 251 (1971). [3] B. Sutherland, Phys. Rev. A 4 2019 (1971); A 5, 1372 (1972); Phys. Rev. Lett. 34, 1083 (1975). [4] M.A. Olshanetsky and A.M. Perelomov, Phys. Rep. 94, 6 (1983). [5] For various connections, see the chart in B.D. Simons, P.A. Lee and B.L. Altshuler, Phys. Rev. Lett. 72, 64 (1994). [6] S. Tewari, Phys. Rev. B 46, 7782 (1992); N.F. Johnson and M.C. Payne, Phys. Rev. Lett. 70, 1513 (1993); 70, 3523 (1993); M. Caselle, ibid. 74, 2776 (1995). [7] N. Kawakami, Phys. Rev. Lett. 71, 275 (1993); H. Azuma and S. Iso, Phys. Lett. B 331, 107 (1994); P.K. Panigrahi and M. Sivakumar, Phys. Rev. B 52, 13742 (15) (1995). [8] H.H. Chen, Y.C. Lee and N.R. Pereira, Phys. Fluids. 22, 187 (1979). [9] K. Nakamura and M. Lakshamanan, Phys. Rev. Lett. 57, 1661 (1986). [10] See M.L. Mehta, Random Matrices, Revised Edition (Academic Press N.Y, 1990). [11] J.M. Leinaas and J. Myrheim, Phys. Rev. B 37, 9286 (1988); A.P. Polychronakos, Nucl. Phys. B 324, 597 (1989); M.V.N. Murthy and R. Shankar, Phys. Rev. Lett. 73, 3331 (1994). [12] I. Andric, A. Jevicki and H. Levine, Nucl. Phys. B 312, 307 (1983); A. Jevicki, ibid, 376, 75 (1992); G.W. Gibbons and P.K. Townsend, hep-th/9812034. [13] J.A. Minahan and A.P. Polychronakos, Phys. Lett. B 312, 155 (1993); 336, 288 (1994); E. D’Hoker and D.H. Phong, Nucl. Phys. B 513, 405 (1998). [14] A.P. Polychronakos, pre-print hep-th/9902157 and references therein. [15] F.D.M. Haldane, Phys. Rev. Lett. 67, 937 (1991). [16] N. Gurappa and P.K. Panigrahi, Phys. Rev. B, R2490 (1999). [17] H. Ujino, A. Nishino and M. Wadati, J. Phys. Soc. Jpn. 67, 2658 (1998). [18] A. Nishino, H. Ujino and M. Wadati, Phys.Lett. A 249, 459 (1998). [19] T. Brzezinski, C. Gonera and P. Maslanka, Phys.Lett. A 254, 185 (1999). [20] N. Gurappa, A. Khare and P.K. Panigrahi, Phys. Lett. A 224, 467 (1998). [21] I.S. Gradshteyn and I.M. 
Ryzhik, Tables of Integrals, Series and Products (Academic Press Inc., 1965). [22] T.H. Baker and P.J. Forrester, Nucl. Phys. B 492, 682 (1997). [23] H. Jack, Proc. R. Soc. Edinburgh (A), 69, 1 (1970); ibid, 347, (1972). [24] R.P. Stanley, Adv. Math. 77, 76 (1988). [25] I.G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd edition, Oxford: Clarendon press, 1995. [26] S. Chaturvedi, Special Functions and Differential Equations, Eds. K.S. Rao, R. Jagannathan, G.V. Berghe and J.V. Jeugt, Allied publishers, New Delhi, 1997. [27] L. Lapointe and L. Vinet, Commun. Math. Phys. 178, 425 (1996).
Topological phase in oxidized zigzag stanene nanoribbons Mohsen Modarresi${}^{1}$, Wei Bin Kuang${}^{2}$, Thaneshwor P. Kaloni${}^{2}$, Mahmood Rezaee Roknabadi${}^{1}$, Georg Schreckenbach${}^{2}$ Georg.Schreckenbach@umanitoba.ca ${}^{1}$Department of Physics, Ferdowsi University of Mashhad, Mashhad, Iran ${}^{2}$Department of Chemistry, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada Abstract First-principles and semi-empirical tight binding calculations were performed to understand the adsorption of oxygen on the surface of two-dimensional (2D) stanene and zigzag stanene nano-ribbons. The intrinsic spin-orbit interaction is considered through the Kane-Mele tight binding model. The adsorption of an oxygen atom or molecule on 2D stanene opens an electronic energy band gap. We investigate the helical edge states and the topological phase in pure zigzag stanene nano-ribbons. The adsorption of oxygen atoms on zigzag stanene nano-ribbons deforms the helical edge states at the Fermi level, which causes a topological (non-trivial) to trivial phase transition. The structural stability of the systems is checked by performing $\Gamma$-point phonon calculations. Specific arrangements of adsorbed oxygen atoms on the surface of zigzag stanene nano-ribbons conserve the topological phase, which has potential applications in future nano-electronic devices. I Introduction After the discovery of graphene R1 , other 2D nano-structures composed of group-IV honeycomb lattices were theoretically proposed and synthesized R2 ; R3 ; R4 ; R5 . Topological insulators have been observed experimentally in 3D nano-structures R6 ; R7 ; R8 ; R9 and predicted in low-buckled 2D nano-structures R11 ; R12 ; R13 ; R14 ; R15 ; sr ; prb14 ; jpcc14 ; apl14 ; pssrrl1 ; pssrrl2 . The condition required for observing the Hall effect is to break time-reversal invariance by applying a strong magnetic field.
Kane and Mele modeled the intrinsic spin-orbit coupling (SOC) in 2D graphene and reported a quantum anomalous Hall effect of opposite sign for the two spin directions, which is called the quantum spin Hall effect or $Z_{2}$ topological insulator R14 ; R16 . Graphene is not an appropriate candidate for the experimental realization of the quantum spin Hall effect because of its very weak intrinsic spin-orbit interaction ($10^{-3}$ meV) R17 ; R18 . By increasing the atomic mass, i.e., using heavier atoms, the intrinsic spin-orbit interaction strength is increased. Stanene is a cousin of graphene; it is a hexagonal, buckled arrangement of Sn atoms in 2D, and it has been synthesized using molecular beam epitaxy R2 . The relatively strong spin-orbit interaction in stanene opens an electronic gap of around 70 meV and 0.3 eV in pure and halogenated stanene, respectively R19 . Theoretical calculations based on density functional theory (DFT) predict topological phases in pure, halogenated R19 and hydrogenated R11 stanene. Due to the buckled structure, a perpendicular external electric field produces a staggered sub-lattice potential that modifies the energy band gap R20 and removes the topological phase of 2D nano-structures R21 ; R22 ; R23 ; R24 . Also, a critical value of applied strain causes a trivial to topological phase transition R12 ; R25 ; R26 . In the topological phase, surface states between two time-reversal invariant momenta (TRIM) cross the Fermi energy an odd number of times R27 , which guarantees the robustness of the edge states against weak disorder. Based on previous studies, the $Z_{2}$ topological invariant can be extracted by analyzing the parity of the occupied wave functions in structures with inversion symmetry R28 . Also, Soluyanov and Vanderbilt proposed a method based on the time-reversal polarization for computing the topological invariant without inversion symmetry WIN .
The $Z_{2}$ topological phase of 2D functionalized stanene was studied in previous works R13 ; R19 , but to the best of our knowledge the adsorption of different atoms on zigzag stanene nano-ribbons has not been reported. Because fabricated nano-devices can oxidize, one needs to address the adsorption of oxygen on two-dimensional nano-structures for possible future applications. Graphene oxide R29 ; R30 ; jmc ; apl1 ; apl2 and silicene oxide R31 ; R32 have been investigated theoretically and experimentally before. Motivated by these works, here we present a study of oxygen adsorption on 2D stanene using DFT and tight binding approaches. The tight binding parameters are extracted by fitting the two tight binding models to the DFT results. The obtained parameters are then used to study the effect of oxygen adsorption on the topological phase in zigzag stanene nano-ribbons. II Model and method Our calculations include an ab-initio study of oxygen adsorption on the 2D structure of stanene and tight binding modeling of oxygen atom adsorption on zigzag nano-ribbons of different widths. In the first part we perform DFT calculations and fit the tight binding results to them for the 2D structure to obtain the required parameters. The on-site energy, spin-orbit interaction strength and hopping between nearest-neighbor atomic sites describing the tight binding Hamiltonian of a monolayer of pure stanene are taken from our previous work R20 . The remaining parameters, namely the on-site energy of the oxygen atoms and the hopping between tin and oxygen atoms, are obtained in the present work. II.1 Density functional theory All DFT calculations were performed with the Quantum-ESPRESSO package QE ; the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) RPBE exchange-correlation functional was adopted. To avoid interaction between adjacent stanene layers in neighboring unit cells, a vacuum layer of 14 Å was used.
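For orientation, a Quantum ESPRESSO `pw.x` input along these lines might look as follows. This fragment is illustrative only, not the authors' input file: the prefix, pseudopotential file names and convergence thresholds are our own assumptions, and several required cards (cell vectors, atomic positions) are omitted; only the settings quoted in this section (supercell size, relativistic SOC treatment, force threshold, and the k-mesh and cut-off given below) are taken from the text.

```text
&control
  calculation   = 'relax'
  prefix        = 'stanene_O'        ! illustrative name
  forc_conv_thr = 1.0d-4             ! Ry/bohr, roughly the 0.002 eV/A quoted
/
&system
  ibrav = 0, nat = 33, ntyp = 2      ! 32 Sn + 1 adsorbed O in a 4x4 supercell
  ecutwfc  = 40.0                    ! Ry, about the 550 eV quoted in the text
  noncolin = .true.
  lspinorb = .true.                  ! fully relativistic pseudopotentials
/
&electrons
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Sn 118.71  Sn.rel-pbe.UPF          ! hypothetical pseudopotential file names
  O  15.999  O.rel-pbe.UPF
K_POINTS automatic
  8 8 1 0 0 0
```

The `noncolin`/`lspinorb` pair is what switches on spin-orbit coupling in `pw.x`; with it, the fully relativistic ultrasoft pseudopotentials described in the text can be used.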
An $8\times 8\times 1$ Monkhorst-Pack k-point mesh was adopted to sample the 2D hexagonal Brillouin zone, and the cut-off energy for the plane wave functions was set to 550 eV in all calculations. We used ultrasoft, fully relativistic pseudo-potentials to include SOC in the DFT calculations. During geometry optimization, the cell parameters as well as the atomic positions were fully relaxed until the forces on the atoms were less than 0.002 eV/Å. We consider a $4\times 4$ supercell of stanene that includes 32 Sn atoms and varying numbers of O atoms. Different atomic positions for the adsorption of an oxygen atom and molecule were examined to find the minimum-energy position for oxygen. Fig. 1 shows the optimized structure of pure stanene and of an oxygen molecule and atom adsorbed on 2D stanene. The adsorption of oxygen on the surface of stanene deforms the neighboring hexagons and results in local buckling of the stanene in the vicinity of the adsorption site; see Table I for details. II.2 Tight binding calculations The tight binding model is based on an expansion of the electronic wave function in a basis of localized atomic wave functions. In the tight binding calculations we only consider nearest-neighbor atomic sites. The required tight binding parameters, which include the on-site energy of the tin atoms, the hopping parameter between nearest-neighbor sites and the strength of the spin-orbit interaction, are taken from our previous work R20 . For pure 2D stanene we adopt single- and multi-orbital tight binding approximations. In the single- and multi-orbital models the intrinsic spin-orbit interaction is written as a Kane-Mele term R14 ; R16 ; R20 and as an $L\cdot S$ term R20 ; R33 , respectively. In the Kane-Mele model, the intrinsic spin-orbit interaction term is written as an imaginary hopping between next-nearest-neighbor sites. We applied the single orbital tight binding model together with the Kane-Mele term to a zigzag-edge stanene nano-ribbon of a given width. 
For a typical nano-ribbon, 1D periodicity is applied along the ribbon length and the ribbon width is limited to W atoms. The real-space Hamiltonian matrix is Fourier transformed and diagonalized to find the electronic bands as a function of the wave vector in the first Brillouin zone. The topological phase was examined under adsorption of an oxygen atom and molecule on all possible atomic sites in the unit-cell. All tight binding calculations for 2D stanene and stanene nano-ribbons were performed with a self-developed tight-binding code. III Results and discussion The stability of oxygen adsorbed on the surface of 2D stanene is confirmed by performing $\Gamma$-point phonon calculations, see Fig. 2(a-b). From Fig. 2(a-b) it is confirmed that stanene with an adsorbed oxygen atom or molecule is stable, because of the absence of negative phonon frequencies. This approach to evaluating structural stability has previously been applied to oxygen-adsorbed graphene carbon . Since each atom has three vibrational degrees of freedom and the O-atom-adsorbed stanene contains 33 atoms, the total number of vibrational modes in this system is 99, see the x-axis in Fig. 2(a), while it becomes 102 for the oxygen-molecule-adsorbed stanene, which contains 34 atoms. The phonon frequency of pristine stanene has a value of $\sim 180$ cm${}^{-1}$ at the $\Gamma$-point scirep1 ; this value is reduced to 162 cm${}^{-1}$ and 154 cm${}^{-1}$ for oxygen-atom and oxygen-molecule adsorbed stanene, respectively (see Fig. 2(a-b)). This indicates that the vibrational frequencies are softened because of the weakening of the Sn$-$Sn bonds due to the presence of the oxygen atom/molecule, which agrees well with previous reports for similar systems carbon . Note that the lattice parameter of the systems under consideration (18.68 Å) is not modified by oxygen adsorption, while the bond lengths and buckling are affected, see Table I. 
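The ribbon construction can be sketched as follows (a hedged toy model: nearest-neighbor hopping only, unit bond length, an illustrative $t$; the Kane-Mele SOC term and the oxygen sites are omitted for brevity). The real-space hoppings of one unit cell and its two neighbors are Fourier-summed into a Bloch matrix $H(k)$ and diagonalized; the band pinned to $E=0$ near $k=\pi$ reproduces the zigzag edge states discussed below.

```python
import numpy as np

t, Nm = 1.0, 10                          # hopping (arb. units), zigzag rows
a1 = np.array([np.sqrt(3.0), 0.0])       # 1D lattice vector along the ribbon
a2 = np.array([np.sqrt(3.0) / 2, 1.5])   # stacking direction across the width

pos = []                                 # atoms of one ribbon unit cell
for m in range(Nm):
    pos.append(m * a2)                                       # A site of row m
    pos.append(m * a2 + np.array([np.sqrt(3.0) / 2, -0.5]))  # B site of row m
pos = np.array(pos)
W = len(pos)                             # W = 2*Nm atoms across the ribbon

def ribbon_h(k):
    """Bloch Hamiltonian H(k): bonds found by a distance search over the
    cell offsets n = -1, 0, +1 along the ribbon axis."""
    H = np.zeros((W, W), complex)
    for n in (-1, 0, 1):
        for i in range(W):
            for j in range(W):
                d = pos[j] + n * a1 - pos[i]
                if abs(np.linalg.norm(d) - 1.0) < 1e-6:   # nearest neighbor
                    H[i, j] += t * np.exp(1j * k * n)
    return H

ks = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.linalg.eigvalsh(ribbon_h(k)) for k in ks])
# at k = pi the two zigzag edge modes sit exactly at E = 0
e_pi = np.sort(np.abs(np.linalg.eigvalsh(ribbon_h(np.pi))))
print(e_pi[:2])    # two (near-)zero edge modes
```

At $k=\pi$ the intra-row bond amplitude $t(1+e^{ik})$ vanishes, leaving one decoupled zero-energy site on each edge; adding the Kane-Mele term then splits this flat band into the helical edge states described in the text.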
Generally, in the vicinity of the adsorption site the Sn$-$Sn bond lengths and the buckling are increased. The main purpose of this study is to understand the effect of oxygen adsorption on the topological-to-trivial phase transition in zigzag stanene nano-ribbons. Oxygen adsorption represents one of the important environmental effects on future nano-devices based on topological insulators. DFT calculations for nano-ribbons are computationally expensive due to the larger unit cell of the one-dimensional structure. Hence, we adopt a method based on tight binding Hamiltonians parameterized from DFT calculations. We start with the DFT and tight binding calculations of two-dimensional stanene. The required tight binding parameters for 2D stanene were reported in our previous work R20 . Fig. 3 shows the band structure of the $4\times 4$ supercell of pristine stanene in both the DFT and tight binding descriptions. In the absence of intrinsic spin-orbit interaction the electronic band gap of 2D stanene is zero at the K point, but both the ab-initio and tight binding models predict an opening of the band gap at the K point of around 0.1 eV in pure stanene due to the strong spin-orbit interaction. The four-orbital model describes the band structure in all regions of the first Brillouin zone. The single orbital tight binding model does not match the DFT at the $\Gamma$ point, but it describes the band structure around the Fermi level at the K point. The multi-orbital model, which includes the interaction between $\sigma$ and $\pi$ atomic orbitals, resolves the discrepancy at the $\Gamma$ point. To reduce the number of tight binding parameters and the size of the Hamiltonian matrix for wide zigzag nano-ribbon calculations, we use the single orbital tight binding model in the remainder of the paper. Additionally, one needs to understand the effect of oxygen adsorption on the tight binding parameters and Hamiltonian. 
We performed DFT calculations for the adsorption of oxygen on the $4\times 4$ supercell of 2D stanene. The DFT band structures for the optimized structures of oxygen atoms and molecules on stanene are plotted in Fig. 4(a). In comparison to pure stanene, adsorption of an oxygen atom or molecule opens a direct electronic band gap of around 0.1 eV at the K point of the Brillouin zone. In both structures, the valence band maximum and the conduction band minimum are located at the K point. The band gap in functionalized stanene has potential applications in nano-electronics. Band gap tuning by oxidation was reported for silicene experimentally R31 and theoretically t1 ; t2 ; t3 . It has been claimed that free-standing silicene is unstable under oxygen adsorption, because oxygen molecules can dissociate into oxygen atoms on silicene without overcoming an energy barrier t3 ; however, the energy barrier for an oxygen molecule depends on the position and orientation of the overlying molecule t2 . Also, the metallic state gradually decays under oxygen adsorption in epitaxial silicene on Ag(111) R34 . For the tight binding study of oxygen adsorption on stanene, one needs the hopping parameters between tin and oxygen atoms. We match the DFT and single orbital tight binding results for the adsorption of an oxygen atom on 2D stanene, as shown in Fig. 4(b), which compares the low energy band structures of the two models around the K point. In the fitting process we focus on reproducing the energy band gap and the slope of the low energy bands around the Fermi level. Although the two models do not match for the high energy bands, the low energy states around the Fermi level at the K point are fairly well described in the tight binding approximation. 
By matching the two models, the on-site energy of the oxygen atom and the hopping parameter between tin and oxygen atoms are obtained as $-1$ eV and $-1.6$ eV, respectively. Using these tight binding parameters we studied the adsorption of oxygen atoms on zigzag-edge stanene nano-ribbons. Fig. 5(a) shows the atomic structure and the definition of the unit-cell for zigzag stanene nano-ribbons. In the absence of O atoms the structure is symmetric, and the electronic properties of the ribbon are independent of the unit-cell length L. The smallest unit cell (L$=1$) has W atoms and is appropriate for pure zigzag nano-ribbons. Fig. 5(b) shows the electronic band structure of the pure zigzag stanene nano-ribbon in the presence and absence of intrinsic spin-orbit interaction. In the absence of spin-orbit interaction, the low energy electronic states at the Fermi level are localized at the zigzag edges. The intrinsic spin-orbit interaction in the Kane-Mele model lifts the degeneracy of the edge states and produces helical edge states at the Fermi level. Between two TRIM points the helical edge states of the zigzag stanene nano-ribbon cross the Fermi level in an odd number of pairs, which identifies the topologically protected edge states shown in Fig. 5(b). The band structure is plotted in the first Brillouin zone between the $\Gamma$ (k$=0$) and X (k$=\pm\pi$/a) points. The topologically protected edge states are robust against weak disorder and interactions because time-reversal symmetry prevents elastic back-scattering R28 . It has been shown that the Hubbard term separates up and down spin states on the two zigzag edges of nano-structures and causes an anti-ferromagnetic alignment of the spins on the edges R35 ; R36 . In the present work we ignore the electron-electron interaction, so the electronic bands for spin-up and spin-down states are degenerate in the entire first Brillouin zone. 
It has been shown that a staggered sub-lattice potential, Rashba spin-orbit interaction R14 and random disorder R37 can remove the helical edge states and the topological phase in zigzag stanene nano-ribbons. Our main goal is to study the effect of oxygen adsorption on the topological phase and the helical edge states in zigzag stanene nano-ribbons. We assume that the oxygen atoms are adsorbed on random atomic positions in the unit-cell of the zigzag ribbon. The adsorption of the first O atom on the zigzag ribbon breaks the symmetry of the structure. To model the exact adsorption process one should consider a unit-cell of large length, which is computationally prohibitive. Additionally, L$=1$ leads to an oxygen chain along the ribbon length, which amounts to an artificial ordering. To prevent the oxygen chain we also consider a unit cell with L$=$2, which has $2\times W$ atoms per unit-cell, as shown in Fig. 5(a). We examined all possible atomic sites for the adsorption of an oxygen atom on the surface of the stanene nano-ribbon. Fig. 6(a-b) shows the band structure of the zigzag nano-ribbon after adsorption of a single oxygen atom on two different positions. In Figs. 6(a) and 6(b), the electronic bands cross the Fermi level in an odd/even number of pairs between the TRIM points, which indicates the topological/trivial phase of the zigzag ribbon. Thus, after oxygen adsorption, both trivial and topological phases are possible. Based on our results, the atomic position of the adsorbed oxygen atom determines the topological/trivial phase of the zigzag stanene nano-ribbon after adsorption. We observe similar results for L$=$2, which are not presented here. For certain atomic positions, the adsorption of a single oxygen atom on the edge atoms conserves the topological phase of the zigzag stanene nano-ribbon. 
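The odd/even counting criterion can be illustrated schematically (synthetic bands, not the computed stanene spectrum): sample a band between the TRIM points $\Gamma$ ($k=0$) and X ($k=\pi$), count its sign changes across the Fermi level, and read the phase off the parity of that count.

```python
import numpy as np

def n_crossings(band, ef=0.0):
    """Number of Fermi-level crossings of one band sampled on a k-grid."""
    s = np.sign(band - ef)
    return int(np.sum(s[:-1] * s[1:] < 0))

ks = np.linspace(0.0, np.pi, 400)            # Gamma to X
helical_edge = 0.3 * np.cos(ks)              # toy edge band: one crossing
trivial_band = 0.3 + 0.1 * np.cos(ks)        # toy bulk band: no crossing

for name, band in (("helical", helical_edge), ("trivial", trivial_band)):
    phase = "topological" if n_crossings(band) % 2 else "trivial"
    print(name, "->", phase)
```

An even count can always be gapped out by a weak time-reversal-symmetric perturbation, whereas an odd count cannot, which is exactly the robustness invoked in the text.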
In all the cases considered, the oxygen atom conserves the helical edge state at the X point, but additional crossings between the energy bands and the Fermi level occur, and these determine whether a topological-to-trivial phase transition takes place in the zigzag stanene nano-ribbon. The adsorption of further oxygen atoms on the surface of stanene nano-ribbons is a more complex problem. Generally, for the adsorption of N oxygen atoms on a zigzag ribbon with M atoms per unit-cell there are M!/[N!(M-N)!] different possible combinations. This number increases rapidly with the ribbon width and the number of oxygen atoms. The adsorption of more oxygen atoms deforms the energy bands more drastically and brings extra energy levels to the Fermi level. In this case, the relative positions of the oxygen atoms on the surface of the zigzag ribbon determine the number of crossings between the energy bands and the Fermi level, and hence the trivial/topological phase of the ribbon. To quantify the number of available topological structures, we define the topological phase probability as the number of topological structures divided by the number of all possible arrangements of the oxygen atoms in the unit-cell. We examine all possible structures for L$=1$, 2 after adsorption of up to three oxygen atoms. The probability of the topological phase as a function of the ribbon width is plotted in Fig. 7 for L$=1$, 2. We consider up to three oxygen atoms on relatively wide zigzag nano-ribbons. For L$=2$ the number of tin atoms per unit-cell for a specific width is twice that of L$=1$. For higher numbers of oxygen atoms the calculations are very time-consuming and exceed our computational resources; accordingly, those entries are left blank in Fig. 7. Generally, for L$=2$ the density of oxygen and the distortion of the band structure due to the adsorbed atoms are decreased, which increases the probability of the topological phase in the ribbon after oxygen adsorption. 
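The enumeration behind this probability can be sketched as follows. This is a hedged illustration only: the numbers M and N and the edge-site labels are hypothetical, and the classifier (a structure is called topological when no oxygen sits on an edge atom) is merely a stand-in for the band-crossing test actually used.

```python
from itertools import combinations
from math import comb

M, N = 12, 2                        # Sn sites per unit-cell, adsorbed O atoms
edge_sites = {0, 1, M - 2, M - 1}   # outermost rows (assumed labelling)

def is_topological(occupied):
    # placeholder for the real test: count band/Fermi-level crossings
    return not any(site in edge_sites for site in occupied)

structures = list(combinations(range(M), N))   # all O arrangements
assert len(structures) == comb(M, N)           # M!/[N!(M-N)!]

p_topo = sum(map(is_topological, structures)) / len(structures)
print(f"topological phase probability: {p_topo:.3f}")
```

The factorial growth of `comb(M, N)` with the ribbon width W (M scales with W, and doubles again for L=2) is what makes the exhaustive scan for more than three oxygen atoms prohibitively expensive.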
The probability of the topological phase is almost width independent for the adsorption of the first oxygen atom. For the adsorption of two or three oxygen atoms the probability of the topological phase increases with the ribbon width and tends to a constant value for sufficiently wide zigzag nano-ribbons. The adsorption of additional oxygen atoms perturbs the band structure of narrow zigzag nano-ribbons and opens an energy band gap at the Fermi level, which leads to the trivial phase. In wider ribbons the effect of the oxygen atoms is diluted and the probability of the topological phase increases, as shown in Fig. 7. The localized low energy edge states contribute significantly to the electronic structure of the nano-ribbon around the Fermi level. The adsorption of an oxygen atom on the edge atoms perturbs the band structure at the Fermi level, which may cause a topological-to-trivial phase transition. On the other hand, the topological phase is less sensitive to the adsorption of oxygen on non-edge atoms. According to Fig. 7, the topological phase and the quantum spin Hall effect are sensitive to the adsorption of oxygen atoms, but there remains a finite probability of a topological phase even after oxidation of the stanene zigzag edges. The survival of the topological phase in zigzag stanene nano-ribbons after oxidation indicates that stanene has strong potential for the experimental realization of spintronic nano-devices based on the topological phase of matter. IV Conclusion In summary, we presented a study combining DFT and tight binding approaches to investigate the adsorption of oxygen on 2D stanene and on zigzag stanene nano-ribbons. By matching the two models, we obtained the tight binding parameters required to study the topological phase in zigzag stanene nano-ribbons after oxygen adsorption. In addition, $\Gamma$-point phonon calculations were performed in order to verify that the systems under study are stable. 
Our results demonstrate the possibility of the topological phase in zigzag stanene ribbons after oxidation, which can be useful for future nano-electronics based on the topological phase of nano-materials. V Acknowledgement G.S. and T.P.K. thank Prof. Michael Freund for his support. G.S. acknowledges funding from the Natural Sciences and Engineering Research Council of Canada (NSERC, Discovery Grant). References (1) K. S. Novoselov, A. K. Geim, S. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. Grigorieva, and A. Firsov, Science 306, 666 (2004). (2) F. Zhu, W.-J. Chen, Y. Xu, C.-L. Gao, D.-D. Guan, C.-H. Liu, D. Qian, S.-C. Zhang, and J.-F. Jia, Nat. Mater. 14, 1020 (2015). (3) P. Vogt, P. De Padova, C. Quaresima, J. Avila, E. Frantzeskakis, M. C. Asensio, A. Resta, B. Ealet, and G. Le Lay, Phys. Rev. Lett. 108, 155501 (2012). (4) L. Li, S. Z. Lu, J. Pan, Z. Qin, Y. Q. Wang, Y. Wang, G.-Y. Cao, S. Du, and H. J. Gao, Adv. Mater. 26, 4820 (2014). (5) M. Dávila, L. Xian, S. Cahangirov, A. Rubio, and G. Le Lay, New J. Phys. 16, 095002 (2014). (6) D. Hsieh, Y. Xia, D. Qian, L. Wray, J. Dil, F. Meier, J. Osterwalder, L. Patthey, J. Checkelsky, and N. Ong, Nature 460, 1101 (2009). (7) Y. Tanaka, Z. Ren, T. Sato, K. Nakayama, S. Souma, T. Takahashi, K. Segawa, and Y. Ando, Nat. Phys. 8, 800 (2012). (8) Y. Chen, J. Analytis, J.-H. Chu, Z. Liu, S.-K. Mo, X.-L. Qi, H. Zhang, D. Lu, X. Dai, and Z. Fang, Science 325, 178 (2009). (9) K. Kuroda, M. Ye, A. Kimura, S. Eremeev, E. Krasovskii, E. Chulkov, Y. Ueda, K. Miyamoto, T. Okuda, and K. Shimada, Phys. Rev. Lett. 105, 146801 (2010). (10) B.-H. Chou, Z.-Q. Huang, C.-H. Hsu, F.-C. Chuang, Y.-T. Liu, H. Lin, and A. Bansil, New J. Phys. 16, 115008 (2014). (11) G. Cao, Y. Zhang, and J. Cao, Phys. Lett. A 379, 1475 (2015). (12) R.-W. Zhang, C.-W. Zhang, W.-X. Ji, S.-S. Li, S.-J. Hu, S.-S. Yan, P. Li, P.-J. Wang, and F. Li, New J. Phys. 17, 083036 (2015). (13) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005). (14) C.-C. Liu, W. 
Feng, and Y. Yao, Phys. Rev. Lett. 107, 076802 (2011). (15) T. P. Kaloni, M. Tahir, and U. Schwingenschlögl, Sci. Rep. 3, 3192 (2013). (16) T. P. Kaloni, N. Singh, and U. Schwingenschlögl, Phys. Rev. B 89, 035409 (2014). (17) T. P. Kaloni, J. Phys. Chem. C 118, 25200 (2014). (18) T. P. Kaloni, L. Kou, T. Frauenheim, and U. Schwingenschlögl, Appl. Phys. Lett. 105, 233112 (2014). (19) T. P. Kaloni and U. Schwingenschlögl, Phys. Status Solidi RRL 8, 685 (2014). (20) T. P. Kaloni, G. Schreckenbach, M. S. Freund, and U. Schwingenschlögl, Phys. Status Solidi RRL 10, 133 (2016). (21) C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 226801 (2005). (22) Y. Yao, F. Ye, X. L. Qi, S.-C. Zhang, and Z. Fang, Phys. Rev. B 75, 041401 (2007). (23) H. Min, J. Hill, N. A. Sinitsyn, B. Sahu, L. Kleinman, and A. H. MacDonald, Phys. Rev. B 74, 165310 (2006). (24) Y. Xu, B. Yan, H. J. Zhang, J. Wang, G. Xu, P. Tang, W. Duan, and S.-C. Zhang, Phys. Rev. Lett. 111, 136804 (2013). (25) T. P. Kaloni, M. Modarresi, M. Tahir, M. R. Roknabadi, G. Schreckenbach, and M. S. Freund, J. Phys. Chem. C 119, 11896 (2015). (26) M. Ezawa, J. Phys. Soc. Jpn. 84, 121003 (2015). (27) M. Ezawa, Y. Tanaka, and N. Nagaosa, Sci. Rep. 3, 2790 (2013). (28) L. Feng, L. Cheng-Cheng, and Y. Yu-Gui, Chin. Phys. B 24, 87503 (2015). (29) Q. Liu, X. Zhang, L. Abdalla, A. Fazzio, and A. Zunger, Nano Lett. 15, 1222 (2015). (30) M. Zhao, X. Chen, L. Li, and X. Zhang, Sci. Rep. 5, 8441 (2015). (31) Y. Nie, M. Rahman, D. Wang, C. Wang, and G. Guo, Sci. Rep. 5, 17980 (2015). (32) L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007). (33) L. Fu and C. L. Kane, Phys. Rev. B 76, 045302 (2007). (34) A. Soluyanov and D. Vanderbilt, Phys. Rev. B 83, 235401 (2011). (35) A. M. Dimiev and J. M. Tour, ACS Nano 8, 3060 (2014). (36) S. Abdolhosseinzadeh, H. Asgharzadeh, and H. S. Kim, Sci. Rep. 5, 10160 (2015). (37) T. P. Kaloni, Y. C. Cheng, R. Faccio, and U. Schwingenschlögl, J. Mater. Chem. 21, 18284 (2011). (38) Y. C. 
Cheng, T. P. Kaloni, Z. Y. Zhu, and U. Schwingenschlögl, Appl. Phys. Lett. 101, 073110 (2012). (39) N. Singh, T. P. Kaloni, and U. Schwingenschlögl, Appl. Phys. Lett. 102, 023101 (2013). (40) Y. Du, J. Zhuang, H. Liu, X. Xu, S. Eilers, K. Wu, P. Cheng, J. Zhao, X. Pi, and K. W. See, ACS Nano 8, 10019 (2014). (41) V. Ongun Özcelik and S. Ciraci, J. Phys. Chem. C 117, 26305 (2013). (42) T. Morishita and M. J. S. Spencer, Sci. Rep. 5, 17570 (2015). (43) G. Liu, X. L. Lei, M. S. Wu, B. Xu, and C. Y. Ouyang, J. Phys.: Condens. Matter 26, 355007 (2014). (44) R. Wang, X. Pi, Z. Ni, Y. Liu, S. Lin, M. Xu, and D. Yang, Sci. Rep. 3, 3507 (2013). (45) P. Giannozzi et al., J. Phys.: Condens. Matter 21, 395502 (2009). (46) J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 77, 3865 (1996). (47) S. Konschuh, M. Gmitra, and J. Fabian, Phys. Rev. B 82, 245412 (2010). (48) T. P. Kaloni, M. Upadhyay Kahaly, R. Faccio, and U. Schwingenschlögl, Carbon 64, 281 (2013). (49) B. Peng, H. Zhang, H. Shao, Y. Xu, X. Zhang, and H. Zhu, Sci. Rep. 6, 20225 (2016). (50) X. Xu, J. Zhuang, Y. Du, H. Feng, N. Zhang, C. Liu, T. Lei, J. Wang, M. Spencer, and T. Morishita, Sci. Rep. 4, 7543 (2014). (51) M. Modarresi, M. Roknabadi, and N. Shahtahmasebi, J. Magn. Magn. Mater. 350, 6 (2014). (52) M. Modarresi, B. Kandemir, M. Roknabadi, and N. Shahtahmasebi, J. Magn. Magn. Mater. 367, 81 (2014). (53) X. F. Wang, Y. Hu, and H. Guo, Phys. Rev. B 85, 241402 (2012).
Asymptotic stability for standing waves of a NLS equation with concentrated nonlinearity in dimension three. II Riccardo Adami riccardo.adami@polito.it Dipartimento di Scienze Matematiche, Politecnico di Torino, C.so Duca degli Abruzzi 24 10129 Torino, Italy    Diego Noja diego.noja@unimib.it Dipartimento di Matematica e Applicazioni, Università di Milano Bicocca, Via R. Cozzi 55 20125 Milano, Italy    Cecilia Ortoleva cecilia.ortoleva@gmail.com PriceWaterhouseCoopers Italia, via Monte Rosa 91, 21049, Milano (November 21, 2020) Abstract In this paper the study of the asymptotic stability of standing waves for a model of Schrödinger equation with spatially concentrated nonlinearity in dimension three, begun in ADO , is continued. The nonlinearity studied is a power nonlinearity concentrated at the point $x=0$, obtained by considering a contact (or $\delta$) interaction with strength $\alpha$, which consists of a singular perturbation of the Laplacian described by a self-adjoint operator $H_{\alpha}$, and letting the strength $\alpha$ depend on the wavefunction in a prescribed way: $i\dot{u}=H_{\alpha}u$, $\alpha=\alpha(u)$. For power nonlinearities in the range $(\frac{1}{\sqrt{2}},1)$ there exist orbitally stable standing waves $\Phi_{\omega}$, and the linearization around them admits two imaginary eigenvalues (absent in the range $(0,\frac{1}{\sqrt{2}})$ previously treated) which in principle could correspond to non-decaying states, thus preventing asymptotic relaxation towards an equilibrium orbit. This situation is usually treated by requiring the validity of a nonlinear Fermi golden rule, which assures the presence of a dissipative term in the modulation equations governing the complex amplitudes $z(t),\bar{z}(t)$ associated with the discrete part of the linearized spectrum. 
Here, without using the FGR, it is proven that, in the range $(\frac{1}{\sqrt{2}},\sigma^{*})$ for a certain $\sigma^{*}\in(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}]$, the dynamics near the orbit of a standing wave asymptotically relaxes in the following sense: consider an initial datum $u(0)$ near the standing wave $\Phi_{\omega_{0}}$, written in the form $u(0)=u_{0}=e^{i\omega_{0}+\gamma_{0}}\Phi_{\omega_{0}}+e^{i\omega_{0}+\gamma_{0}}[(z_{0}+\overline{z_{0}})\Psi_{1}+i(z_{0}-\overline{z_{0}})\Psi_{2}]+f_{0}$ with $z_{0}$ small and $f_{0}$ small in energy and in a certain weighted space $L^{1}_{w}$; then the solution $u(t)$ can be asymptotically decomposed as $$u(t)=e^{i\omega_{\infty}t+ib_{1}\log(1+\epsilon k_{\infty}t)}\Phi_{\omega_{\infty}}+U(t)*\psi_{\infty}+r_{\infty},\quad\textrm{as}\;\;t\rightarrow+\infty,$$ where $\omega_{\infty}$, $k_{\infty}>0$, $b_{1}\in\mathbb{R}$, $\psi_{\infty}$ and $r_{\infty}\in L^{2}(\mathbb{R}^{3})$, $U(t)$ is the free Schrödinger group, and $$\|r_{\infty}\|_{L^{2}}=O(t^{-1/4})\quad\textrm{as}\;\;t\rightarrow+\infty\ .$$ We stress that in the present case, contrary to most results in the field, the admitted nonlinearity is $L^{2}$-subcritical. I Introduction We continue here the analysis, begun in ADO , of a model of nonlinear Schrödinger equation with a concentrated nonlinearity in dimension three. We recall that such a model is defined by the equation $$i\frac{du}{dt}=H_{\alpha(u)}u.$$ (1) where the nonlinear operator $H_{\alpha(u)}$ is a point interaction (or “delta potential”) in dimension three with a strength $\alpha=\alpha(u)$ depending on the wavefunction $u$ in a prescribed way. Here the nonlinearity is of power type and focusing. 
More precisely, the domain of $H_{\alpha(u)}$ is given by $$D(H_{\alpha(u)})=\left\{u\in L^{2}(\mathbb{R}^{3}):\;u(x)=\phi(x)+qG_{0}(x)\ \textrm{with}\ \;\phi\in H^{2}_{loc}(\mathbb{R}^{3}),\,\Delta\phi\in L^{2}(\mathbb{R}^{3}),\right.$$ (2) $$\left.q\in\mathbb{C},\ \ \lim_{x\rightarrow 0}(u(x)-qG_{0}(x))=\phi(0)=-\nu|q|^{2\sigma}q\right\},\qquad G_{0}(x)=\frac{1}{4\pi|x|},\ \ \nu>0\ ,$$ and the action is given by $$H_{\alpha}u(x)=-\Delta\phi(x)\ ,\ \ x\in\mathbb{R}^{3}\setminus\{0\}\ .$$ (3) The nonlinearity, encoded in the boundary condition, means that the value at zero of the “regular part” $\phi$ of a domain element is related in a nonlinear way to the so-called “charge” $q$ of the same element, which is the coefficient of the “singular part” $G_{0}$. According to (2), our choice for the function $\alpha(u)$ is $$\alpha(u)\ =\ -\nu|q|^{2\sigma},\qquad\nu,\sigma>0$$ When $\sigma=0$ one obtains the well-known contact interaction (of strength $\alpha=-\nu$), which, due to the sign, is a so-called attractive $\delta$ interaction (see Albeverio ). When $\sigma\neq 0$ the nonlinearity is in some sense acting at the single point zero, coinciding with the location of the singularity of the contact interaction. This is the origin of the name “concentrated nonlinearity” (see ADFT ; ADFT2 ; NP ). The above model can be derived from a standard nonlinear Schrödinger equation with an inhomogeneous, space-dependent nonlinearity shrinking to a point in a suitable scaling limit. This derivation is rigorously treated in CFNT1 for the one-dimensional case and in the forthcoming paper CFNT2 for the present three-dimensional case. The problem (1) describes a Hamiltonian system for which global well-posedness holds in the nonlinearity range $\sigma\in(0,1)$. 
More precisely we endow the space $L^{2}(\mathbb{R}^{3},\mathbb{C})\simeq L^{2}(\mathbb{R}^{3},\mathbb{R})\oplus L^{2}(\mathbb{R}^{3},\mathbb{R})$ (assuming the usual identification $z\simeq(\Re z,\Im z)$) with the symplectic form $$\Omega(u,v)=\Im\int_{\mathbb{R}^{3}}u{\overline{v}}\ dx=\int_{\mathbb{R}^{3}}(\Re v\Im u-\Im v\Re u)dx=\int_{\mathbb{R}^{3}}(u_{2}v_{1}-u_{1}v_{2})dx$$ (4) Of course $H^{1}(\mathbb{R}^{3},\mathbb{R})\oplus H^{1}(\mathbb{R}^{3},\mathbb{R})$ is a symplectic submanifold, and we associate to (1) the Hamiltonian functional coinciding with the total conserved energy associated to the evolution equation (1), that is given by $$E(u(t))=\frac{1}{2}\|\nabla\phi\|_{L^{2}}^{2}-\frac{\nu}{2\sigma+2}|q|^{2\sigma+2},\ \ u=\phi+qG_{0}\in V.$$ (5) where $V$ is the domain of finite energy states $$V=\{u\in L^{2}(\mathbb{R}^{3}):\;u(x)=\phi(x)+qG_{0}(x),\,\textrm{with}\;\phi\in L^{2}_{loc}(\mathbb{R}^{3}),\;\nabla\phi\in L^{2}(\mathbb{R}^{3}),\,q\in\mathbb{C}\},$$ (6) which is a Hilbert space endowed with the norm $$\|u\|^{2}_{V}=\|\nabla\phi\|_{L^{2}}^{2}+|q|^{2}.$$ (7) Note that for a generic element $u$ of the form domain the charge $q$ and its regular part $\phi$ are independent of each other. Correspondingly, the NLSE (1) can be rephrased in the Hamiltonian form $$\frac{du}{dt}=J\,E^{\prime}(u(t))\ .$$ (8) where $J=\left(\begin{array}[]{cc}0&1\\ -1&0\end{array}\right)$ is the standard symplectic matrix. In ADO the case of power $\sigma\in(0,\frac{1}{\sqrt{2}})$ was studied, showing the existence of nonlinear bound states, their orbital stability, and relaxation to a (relative) equilibrium asymptotically in time. For such values of the power nonlinearity $\sigma$ the asymptotic analysis is simplified by the fact that the linearization around standing waves has no eigenvalues apart from the zero eigenvalue, which always exists due to the gauge $U(1)$ symmetry. 
For $\sigma\in(\frac{1}{\sqrt{2}},1)$ the linearization around a standing wave admits purely imaginary eigenvalues $\pm i\xi$, which correspond to the existence of neutral oscillations in the linearized dynamics. If these neutral oscillations persist as invariant tori in the phase space of the complete nonlinear system, relaxation to an asymptotic equilibrium standing wave is precluded. The analysis of the so-called asymptotic stability of solitons started at the beginning of the nineties with the studies of Soffer and Weinstein (SW1 ; SW2 ) on the NLS equation with an external potential in dimension three and of Buslaev and Perelman on the translation invariant NLS in dimension one (BP1 ; BP2 ). In our model the analysis goes as follows. We consider the nonlinear evolution problem $$i\frac{du}{dt}=H_{\alpha}u,\qquad u(0)=u_{0}\in D(H_{\alpha}),$$ (9) In ADO the existence of a solitary wave manifold for (9) is shown, $$\mathcal{M}=\left\{\ \Phi_{\omega}(x)=\left(\frac{\sqrt{\omega}}{4\pi\nu}\right)^{\frac{1}{2\sigma}}\frac{e^{-\sqrt{\omega}|x|}}{4\pi|x|}:\;\omega>0\,\right\}\ ,$$ (10) and these solitary waves, which belong to $D(H_{\alpha})$, are shown to be orbitally stable in the same range ($\sigma\in(0,1)$) in which global well-posedness of equation (9) is guaranteed. One aims at proving that an initial datum near this solitary manifold relaxes for $t\to+\infty$ to the solitary manifold itself. The solitary manifold turns out to be a symplectic two-dimensional submanifold. Now writing $u=e^{i\omega t}(\Phi_{\omega}+R)$, we obtain that $R$ satisfies, to first order, the linearized canonical system $$\frac{dR}{dt}=-J\left[\begin{array}[]{cc}H_{\alpha_{1}}+\omega&0\\ 0&H_{\alpha_{2}}+\omega\\ \end{array}\right]R\ =\ \left[\begin{array}[]{cc}0&L_{2}\\ -L_{1}&0\\ \end{array}\right]R\ \equiv LR\ ,$$ (11) where $L_{j}=H_{\alpha_{j}}+\omega$ for $j=1,2$, with $\alpha_{1}=-(2\sigma+1)\frac{\sqrt{\omega}}{4\pi}$ and $\alpha_{2}=-\frac{\sqrt{\omega}}{4\pi}$. 
As recalled before (see ADO , Section IV A, and Appendix VI A of this paper), in the case $\sigma\in(1/\sqrt{2},1)$ the discrete spectrum of $L$ consists of the eigenvalue $0$ with algebraic multiplicity $2$ and of two purely imaginary eigenvalues $\pm i\xi$ with $$\xi=2\sigma\sqrt{1-\sigma^{2}}\omega,$$ (12) and corresponding eigenvectors $\Psi$ and $\Psi^{*}$ (see Appendix VI C of this paper). As a consequence, the domain of the operator $L$ can be decomposed into three symplectic subspaces, more precisely $$D(L)=X^{0}\oplus X^{1}\oplus X^{c},$$ where $X^{0}$, $X^{1}$, and $X^{c}$ are, respectively, the generalized kernel of $L$, the eigenspace corresponding to the $L$ eigenfunctions $\Psi$ and $\Psi^{*}$, and the subspace associated with the absolutely continuous spectrum of $L$. In particular $X^{0}$ is generated by the tangent vectors to the symplectic solitary submanifold. The corresponding symplectic projection operators from $L^{2}(\mathbb{R}^{3})$ onto $X^{0}$, $X^{1}$ and $X^{c}$ are denoted by $P^{0}$, $P^{1}$, $P^{c}$, respectively (see Appendix C for their explicit representation). A further step in the analysis consists in decomposing the solution of equation (9) into the sum of a soliton-like part $e^{i\Theta(t)}\Phi_{\omega(t)}(x)$ and a fluctuating part $\chi(t,x)$, introducing the Ansatz $$u(t,x)=e^{i\Theta(t)}\left(\Phi_{\omega(t)}(x)+\chi(t,x)\right),$$ (13) with $$\Theta(t)=\int_{0}^{t}\omega(s)ds+\gamma(t),$$ (14) and $$\chi(t,x)=z(t)\Psi(t,x)+\overline{z(t)}\Psi^{*}(t,x)+f(t,x)\equiv\psi(t,x)+f(t,x),\ \ \psi\in X^{1},\ \ f\in X^{c}$$ (15) with $\omega(t)$, $\gamma(t)$, $z(t)$ and $f(t,x)$ as yet undetermined. Notice that the parameters are now time dependent, as are the functions $\Psi$ and $\Psi^{*}$: this is because $\Psi$ depends on $t$ through $L$ and hence through $\omega$. 
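Since $\xi=2\sigma\sqrt{1-\sigma^{2}}\,\omega$, a one-line computation (a sketch, assuming as in ADO that the continuous spectrum of $L$ is $\pm i[\omega,+\infty)$) confirms that $\pm i\xi$ indeed lie inside the spectral gap for every $\sigma\in(\frac{1}{\sqrt{2}},1)$:

```latex
\xi<\omega
\iff 2\sigma\sqrt{1-\sigma^{2}}<1
\iff 4\sigma^{2}(1-\sigma^{2})<1
\iff (2\sigma^{2}-1)^{2}>0,
```

which holds for every $\sigma\neq\frac{1}{\sqrt{2}}$, so the eigenvalues never touch the continuous spectrum in the range considered.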
The goal now is to show that the fluctuating part is decaying (the $\Psi$-$\Psi^{*}$ component) or dispersing (the $f$-component), and to prove convergence of the parameters $\omega(t),\gamma(t)$ to possibly unknown asymptotic values; all of this finally gives relaxation to the solitary manifold. To this end one has to fix the above underdetermined representation, and then obtain equations for $\omega(t)$, $\gamma(t)$ and $\chi(t,x)$. This is achieved by requiring the fluctuation $\chi$ to be symplectically orthogonal to the solitary manifold $\mathcal{M}$ (or, equivalently, orthogonal to the generalized kernel $N_{g}(L)$) for every time $t\geq 0$. This procedure yields the so-called “modulation equations” for $\omega(t)$, $\gamma(t)$ and $\chi(t,x)$. Moreover, exploiting further orthogonality relations between $\Psi,\Psi^{*},\Phi_{\omega}$, one also gets equations for the coefficient $z$ and the dispersive term $f$ in $\chi$ (see Theorem II.1 in Section II for more details). At this point two problems occur. The first is that the modulation equation for the fluctuating component $\chi$, due to the introduction of the time-dependent Ansatz, contains a non-autonomous linear part. So, the use of dispersive estimates to show the wanted decay of this component requires one first to “freeze” the dynamics at a certain fixed time $T$; the subsequent step is to show that the estimates are uniform in $T$. The frozen equations are written in Section II. In the same section the leading terms in the modulation equations are identified, and the estimates on the remainder terms are displayed. This is however not yet sufficient to get rid of the complex oscillation described by $z$ and $z^{*}$. So the modulation equations are rewritten in a canonical way by means of a Poincaré normal form pushed to third order, which preserves the estimates on the remainders. 
In particular, the transformed oscillating component $z_{1}=z_{1}(z,\bar{z})$ satisfies the equation $$\dot{z}_{1}=i\xi z_{1}+iK|z_{1}|^{2}z_{1}+\widehat{Z}_{R}\ .$$ (16) The asymptotic behavior of the $z_{1}$ component depends on the coefficient $iK$, and more precisely on the sign of its real part. This is the point where the nonlinear Fermi golden rule (FGR) enters the game. It turns out in relevant examples that if a resonance condition between a higher harmonic (a multiple) of $i\xi$ and the continuous spectrum of the linearization is satisfied (the FGR), then $\Re(iK)$ is strictly negative; this gives decay of the oscillating modes of the linearization (see the seminal paper cited above, and moreover TY , TY2 and BS ). The decay is ultimately due to the coupling of the oscillating modes with the continuous spectrum ensured by the FGR, and the consequent drift of energy from the discrete component to the continuous one; so the mechanism is dissipation by dispersion. Furthermore, exploiting the Hamiltonian structure of the system, it can be shown that the above situation is in some sense generic (CM ; Cu1 ; B and references therein). In the present model things go as follows. In the first place, the second harmonic $2i\xi$ of the discrete eigenvalue $i\xi$ lies in the interior of the continuous spectrum if the nonlinearity power satisfies $\sigma\in(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}})$. Let us then denote by $N_{2}(q,q)$ the quadratic terms coming from the Taylor expansion of the nonlinearity (see formula (20)), and by $\Psi(\omega_{0})=\left(\begin{array}[]{cc}\Psi_{1}(\omega_{0})\\ \Psi_{2}(\omega_{0})\end{array}\right)$ the eigenfunction of the linearized operator associated to $i\xi_{0}$ (given explicitly in Appendix VI C, Proposition VI 4). 
The nonlinear Fermi golden rule (FGR) adapted to our case is the following non-degeneracy condition: $$JN_{2}(q_{\Psi(\omega_{0})},q_{\Psi(\omega_{0})})\overline{q_{\Psi_{+}(2i\xi_{0})}}\neq 0,$$ (17) where $\Psi_{+}(2i\xi_{0})$ is the generalized eigenfunction associated to $+2i\xi_{0}$ (for the explicit representation see Appendix D). This non-degeneracy condition should imply $\Re(iK)<0$ in (16). As a matter of fact, thanks to the explicit character of our model, we are able to verify directly, without use of the non-degeneracy condition, that $\Re(iK)<0$ (which gives dissipation in (16)) holds for any $\sigma$ in the range $\left(\frac{1}{\sqrt{2}},\sigma^{*}\right)$, for a certain $\sigma^{*}\in\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right]$ (see Section III.4). Moreover, we have numerical evidence that this is true on the whole interval $\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right)$. With these premises, we eventually prove the following result. Theorem (Asymptotic stability in the case of purely imaginary eigenvalues) Assume that $u\in C(\mathbb{R}^{+},V)$ is a solution to equation (9) with a power nonlinearity (see (2)) given by $\sigma\in(\frac{1}{\sqrt{2}},\sigma^{*})$, for a certain $\sigma^{*}\in(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}]$. Moreover, suppose that the initial datum is close to a standing wave of (9), in the sense that $$u(0)=u_{0}=e^{i\gamma_{0}}\Phi_{\omega_{0}}+e^{i\gamma_{0}}[(z_{0}+\overline{z_{0}})\Psi_{1}+i(z_{0}-\overline{z_{0}})\Psi_{2}]+f_{0}\in V\cap L^{1}_{w}(\mathbb{R}^{3}),$$ with $\omega_{0}>0$, $\gamma_{0}\in\mathbb{R}$, $z_{0}\in\mathbb{C}$, and $f_{0}\in L^{2}(\mathbb{R}^{3})\cap L^{1}_{w}(\mathbb{R}^{3})$, and $$|z_{0}|\leq\epsilon^{1/2}\qquad\textrm{and}\qquad\|f_{0}\|_{L^{1}_{w}}\leq c\epsilon^{3/2},$$ where $c$, $\epsilon>0$. 
Then, provided $\epsilon$ is sufficiently small, the solution $u(t)$ can be asymptotically decomposed as $$u(t)=e^{i\omega_{\infty}t+ib_{1}\log(1+\epsilon k_{\infty}t)}\Phi_{\omega_{\infty}}+U_{t}*\psi_{\infty}+r_{\infty},\quad\textrm{as}\;\;t\rightarrow+\infty,$$ where $\omega_{\infty}$, $\epsilon k_{\infty}>0$, $b_{1}\in\mathbb{R}$, and $\psi_{\infty}$, $r_{\infty}\in L^{2}(\mathbb{R}^{3})$ are such that $$\|r_{\infty}\|_{L^{2}}=O(t^{-1/4})\quad\textrm{as}\;\;t\rightarrow+\infty.$$ Some remarks are in order. We stress again that the range of admitted nonlinearities $\sigma$ implies that $\pm 2i\xi$ lies in the essential spectrum of the linearized operator. Notice that $\frac{\sqrt{3}+1}{2\sqrt{2}}\approx 0.96$, and that when $\sigma\to 1$ the discrete eigenvalues $\pm i\xi$ of the linearization collide at zero. So, in order to cover a larger range of values of $\sigma$, one has to consider harmonics higher than the second and a normal form pushed to order higher than the third. As shown in ADO , the standing waves in the case $\sigma=1$ are unstable by blow-up. We recall that $\sigma=\frac{1}{\sqrt{2}}$ is a threshold resonance for the linearization. Its presence does not allow enough decay of the linearized dynamics to obtain relaxation to the solitary manifold. The asymptotic stability result is achieved following the outline of BS and KKS and using the machinery already developed in ADO . In particular, in KKS the same problem is studied for the analogous one-dimensional model. Nevertheless, the three-dimensional case presents some differences. The first one is that the concentrated nonlinearity forces the analysis to be developed at the form level. This means that the estimates on the evolution of the initial data are more delicate. The second main difference is the faster decay of the propagator of the free Laplacian. 
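The resonance window for $\sigma$ can be checked numerically from formula (12). A minimal sketch, under the assumption (standard for this kind of linearization, though only described qualitatively in this excerpt) that the continuous spectrum has threshold $\pm i\omega$: for $\sigma$ in the stated interval the eigenvalue satisfies $\xi<\omega$ while the second harmonic satisfies $2\xi>\omega$, with equality exactly at $\sigma^{*}=\frac{\sqrt{3}+1}{2\sqrt{2}}$.

```python
import math

def xi(sigma, omega=1.0):
    # Discrete eigenvalue magnitude, formula (12): xi = 2*sigma*sqrt(1-sigma^2)*omega
    return 2.0 * sigma * math.sqrt(1.0 - sigma**2) * omega

lo = 1 / math.sqrt(2)
sigma_star = (math.sqrt(3) + 1) / (2 * math.sqrt(2))  # upper endpoint of the interval

omega = 1.0
# Inside (1/sqrt(2), sigma_star): xi < omega (eigenvalue in the spectral gap),
# while 2*xi > omega (second harmonic embedded in the continuous spectrum).
for k in range(1, 10):
    s = lo + k * (sigma_star - lo) / 10
    assert xi(s, omega) < omega
    assert 2 * xi(s, omega) > omega

# At sigma_star the second harmonic hits the threshold exactly: sigma*sqrt(1-sigma^2) = 1/4.
print(round(2 * xi(sigma_star, omega), 12))  # prints 1.0
print(round(sigma_star, 3))                  # prints 0.966
```

This confirms the value $\frac{\sqrt{3}+1}{2\sqrt{2}}\approx 0.96$ quoted above as the exact solution of $16\sigma^{2}(1-\sigma^{2})=1$ in $(1/\sqrt{2},1)$.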
This allows one to develop the analysis using just the structural weight $w=1+\frac{1}{|x|}$ arising from the dispersive estimate, instead of introducing new weighted spaces as done in the one-dimensional case treated in BKKS ; KKS . Finally, the eigenfunctions associated to the purely imaginary eigenvalues do not exhibit the oscillating terms occurring in the one-dimensional case; instead, they decrease exponentially as $|x|\rightarrow+\infty$. This fact will be useful in order to get the decay in time of the radiation term. A last comment of a general nature is in order. As in the one-dimensional case studied by Buslaev, Komech, Kopylova, and Stuart in BKKS and by Komech, Kopylova, and Stuart in KKS , and in the three-dimensional model analyzed in ADO , the analysis of a specific model allows one to obtain asymptotic stability of standing waves without a priori assumptions. In particular, the nonlinearity is fixed, of power type and subcritical, in the sense that it falls in the range of global well-posedness of the equation (see KM for a different example where asymptotic stability is proven in the subcritical regime); no spectral assumptions are needed; and no smallness of the initial data is required, in the sense that we give results for every standing wave of the model and for initial data near the family of standing waves. As a final remark, while Komech, Kopylova, and Stuart in KKS find a link between the Fermi golden rule and the decay of the normal modes of the linearization, here such decay is verified directly. This fact seems to indicate that some of these assumptions or hypotheses are in fact unnecessary when enough information about the model is known. For the sake of completeness, in the course of the paper we repeat the proofs requiring modifications because of the facts mentioned above; where the arguments hold unchanged, only a reference is given. 
Moreover, in the appendices we give information about the linearization operator, recalling useful material from ADO and giving detailed properties of the discrete and continuous eigenfunctions. Acknowledgments The authors are grateful to Scipio Cuccagna, Gianfausto Dell’Antonio, Alexander Komech and Galina Perelman for several discussions and correspondence. R.A. and D.N. are partially supported by the FIRB 2012 grant Dispersive Dynamics: Fourier Analysis and Variational Methods, reference number RBFR12MXPO. R.A. is member of the INdAM Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni; D. N. is member of the INdAM Gruppo Nazionale per la Fisica Matematica. II Modulation equations We recall that the operators we are dealing with have different domains, while the associated forms share the common domain $V$ introduced in (6). It proves convenient to describe $V$ in an alternative way: having fixed an arbitrary $\lambda>0$ and denoted $G_{\lambda}(x):=e^{-\sqrt{\lambda}|x|}/(4\pi|x|)$, one finds $$V=\left\{u=\phi_{\lambda}+qG_{\lambda},\,\textrm{with}\,\phi_{\lambda}\in H^{1}(\mathbb{R}^{3}),\,q\in\mathbb{C}\right\}.$$ The arbitrary positive parameter $\lambda$ is customary in the description of a point interaction; moreover, it allows the regular part of an element of the domain to lie in the Sobolev space $H^{1}$ instead of the corresponding homogeneous counterpart, which is often useful. Everything is in fact independent of the choice of $\lambda$. Correspondingly, we will perform computations at the form level. 
In order to do that, let us recall that the variational formulation of equation (1) is $$\left(i\frac{du}{dt}(t),v\right)=Q_{\alpha}(u(t),v)\quad\forall v\in V,$$ (18) where, given $u,v\in V$ with $u=\phi_{u,\lambda}+q_{u}G_{\lambda}$, $v=\phi_{v,\lambda}+q_{v}G_{\lambda}$, $$Q_{\alpha}(u,v)=(\nabla\phi_{u,\lambda},\nabla\phi_{v,\lambda})_{L^{2}}+\bar{q}_{u}q_{v}\left(\alpha+\frac{\sqrt{\lambda}}{8\pi}\right)-\lambda\bar{q}_{u}(G_{\lambda},\phi_{v,\lambda})_{L^{2}}-\lambda q_{v}(\phi_{u,\lambda},G_{\lambda})_{L^{2}}.$$ Note that equation (18) makes sense because $V$ is independent of the positive parameter $\lambda$ and is a Hilbert space with the norm $$\|u\|_{V}^{2}=\|\nabla\phi_{\lambda}\|_{L^{2}}^{2}+|q|^{2},\quad\forall u\in V.$$ In order to inspect the asymptotic stability of equation (1) it is useful to represent the solution $u$ with the aid of the Ansatz (13) together with definitions (14) and (15). From now on we always refer to such formulas. Hence, we want to construct a solution of equation (1) which is close at each time to a solitary wave. Let us notice that the solitary wave does not need to be the same at every time, which means that the parameters $\omega(t)$ and $\Theta(t)$ are free to vary in time. Exactly as in the case $\sigma\in\left(0,\frac{1}{\sqrt{2}}\right)$ (see ADO , Section V), the function $\chi$ solves $$\left(i\frac{d\chi}{dt}(t),v\right)_{L^{2}}=Q_{\alpha,Lin}(\chi(t),v)+\dot{\gamma}(t)(\Phi_{\omega(t)}+\chi(t),v)_{L^{2}}+$$ (19) $$+\dot{\omega}(t)\left(-i\frac{d\Phi_{\omega(t)}}{d\omega},v\right)_{L^{2}}+N(q_{\chi}(t),q_{v}),$$ for all $v\in V$. 
Here $Q_{\alpha,Lin}$ is the quadratic form of the linearization operator, acting as $$Q_{\alpha,Lin}(\chi,v)=(\nabla\phi_{\chi},\nabla\phi_{v})_{L^{2}}-\frac{\sqrt{\omega}}{4\pi}\Re(q_{\chi}\overline{q_{v}})-\sigma\frac{\sqrt{\omega}}{2\pi}\Re q_{\chi}\Re q_{v}+\omega(\chi,v)_{L^{2}}\ ,$$ and the nonlinear remainder $N(q_{\chi},q_{v})$ is given by $$N(q_{\chi},q_{v})=-\nu|q_{\chi}+q_{\omega}|^{2\sigma}\Re((q_{\chi}+q_{\omega})\overline{q_{v}})+\nu(2\sigma+1)|q_{\omega}|^{2\sigma}\Re q_{\chi}\Re q_{v}+\nu|q_{\omega}|^{2\sigma}\Im q_{\chi}\Im q_{v}+\nu|q_{\omega}|^{2\sigma}\Re(q_{\omega}\overline{q_{v}}),$$ (20) where (see Section II B in ADO ) $q_{\omega}=\left(\frac{\sqrt{\omega}}{4\pi\nu}\right)^{\frac{1}{2\sigma}}.$ Since $\omega(t)$, $\gamma(t)$, and $\chi(x,t)$ are unknown, and the propagator grows in time along the directions of the generalized kernel of the operator $L$, the idea is to get a determined system by requiring the function $\chi(t)$ to be orthogonal to the generalized kernel of $L$ at any time $t\geq 0$. Hence, one obtains that $\omega$, $\gamma$, $z$, and $f$ must solve the following system of equations. Theorem II.1. 
(Modulation equations) If $\chi(t)$ is a solution of equation (19) such that $P_{0}\chi(t)=0$ for all $t\geq 0$, and $\omega(t)$ and $\gamma(t)$ are continuously differentiable in time, then $\omega$ and $\gamma$ are solutions of $$\dot{\omega}=\frac{\Re(JN(q_{\chi})\overline{q_{P_{0}^{*}(\Phi_{\omega}+\chi)}})}{\left(\varphi_{\omega}-\frac{dP_{0}}{d\omega}\chi,\Phi_{\omega}+\chi\right)_{L^{2}}},$$ (21) $$\dot{\gamma}=\frac{\Re(JN(q_{\chi})\overline{q_{J({\varphi_{\omega}}-\frac{dP_{0}}{d\omega}\chi)}})}{\left(\varphi_{\omega}-\frac{dP_{0}}{d\omega}\chi,\Phi_{\omega}+\chi\right)_{L^{2}}}.$$ (22) Furthermore, $z$ and $f$ satisfy $$(\Psi,J\Psi)_{L^{2}}(\dot{z}-i\xi z)=\Re(JN(q_{\chi})\overline{q_{J\Psi}})+\dot{\omega}\left[\left(f,J\frac{d\Psi}{d\omega}\right)_{L^{2}}-\left(\frac{d\psi}{d\omega},J\Psi\right)_{L^{2}}\right]+\dot{\gamma}(\chi,\Psi)_{L^{2}},$$ (23) $$\left(\frac{df}{dt},v\right)_{L^{2}}=Q_{L}(f,v)+\left(-\dot{\omega}\left(zP^{c}\frac{d\Psi}{d\omega}+\overline{z}P^{c}\frac{d\Psi^{*}}{d\omega}\right)+\dot{\gamma}P^{c}J\chi,v\right)_{L^{2}}+$$ (24) $$+(8\pi\sqrt{\lambda}P^{c}JN(q_{\chi})G_{\lambda},q_{v}G_{\lambda})_{L^{2}},$$ for all $v\in V$. Proof. Equations (21) and (22) can be proved with the same argument used in the case $\sigma\in\left(0,\frac{1}{\sqrt{2}}\right)$ (see ADO , Theorems V.3 and V.6). Equation (23) can be obtained taking $v=J\Psi$ as test function and noting that • $\frac{d\chi}{dt}=\dot{z}\Psi+\dot{\overline{z}}\Psi^{*}+\dot{\omega}\left(z\frac{d\Psi}{d\omega}+\overline{z}\frac{d\Psi^{*}}{d\omega}\right)+\frac{df}{dt}$, • $(\Psi^{*},J\Psi)_{L^{2}}=0$, • $\left(\frac{d\Phi_{\omega}}{d\omega},J\Psi\right)_{L^{2}}=0$, • $\left(\frac{df}{dt},J\Psi\right)_{L^{2}}=-\dot{\omega}\left(f,J\frac{d\Psi}{d\omega}\right)_{L^{2}}$, and • $\dot{\omega}\left(\frac{d\Psi^{*}}{d\omega},J\Psi\right)_{L^{2}}=-\dot{\omega}\left(\Psi^{*},J\frac{d\Psi}{d\omega}\right)_{L^{2}}$. 
Finally, equation (24) follows taking the projection onto the continuous spectrum $P^{c}$ of both sides of equation (19) and recalling that $f\in X^{c}$. ∎ II.1 Frozen spectral decomposition The goal of this subsection is to get an autonomous linearized equation for the component $f$. According to (11), having fixed $T>0$ we define $$L_{T}=L(\omega_{T}):=-J\left[\begin{array}[]{cc}H_{\alpha_{1}(T)}+\omega_{T}&0\\ 0&H_{\alpha_{2}(T)}+\omega_{T}\\ \end{array}\right],$$ where $\alpha_{1}(T)=-(2\sigma+1)\frac{\sqrt{\omega_{T}}}{4\pi}$, $\alpha_{2}(T)=-\frac{\sqrt{\omega_{T}}}{4\pi}$, and $\omega_{T}=\omega(T)$. Then for any $t\in[0,T]$ one can decompose $f(t)\in X^{c}=X^{c}(t)$ as $$f=g+h\qquad\textrm{with}\quad g\in X^{d}_{T}=X^{0}_{T}\oplus X^{1}_{T},\;h\in X^{c}_{T},$$ where the subscript $T$ means that the time is fixed at $t=T$. We recall that $P^{0}$ and $P^{1}$ are the symplectic projections onto the kernel and the nonzero eigenvalues of the linearization (see Appendix C). Correspondingly, we denote by $P^{0}_{T}$ and $P^{1}_{T}$ the projections onto the analogous subspaces frozen at time $T$. Finally, denote $P^{d}_{T}=P^{0}_{T}+P^{1}_{T}$. Then the quadratic forms satisfy $$Q_{L}(u,v)-Q_{L_{T}}(u,v)=\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\Re(\mathbb{T}q_{u}\overline{q_{v}})-(\omega_{T}-\omega)(Ju,v)_{L^{2}},$$ for all $u$, $v\in V$, where $$\mathbb{T}=\left[\begin{array}[]{cc}0&-1\\ 2\sigma+1&0\\ \end{array}\right].$$ Hence, observing that $P^{c}\Psi=0$, equation (24) for $f$ is equivalent to $$\left(\frac{df}{dt},v\right)_{L^{2}}=Q_{L_{T}}(f,v)+\left((\omega-\omega_{T})Jf+\dot{\omega}\frac{dP^{c}}{d\omega}\psi+\dot{\gamma}P^{c}J\chi,v\right)_{L^{2}}+$$ (25) $$+\left(8\pi\sqrt{\lambda}\left(\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\mathbb{T}q_{f}+P^{c}JN(q_{\chi})\right)G_{\lambda},q_{v}G_{\lambda}\right)_{L^{2}},$$ for all $v\in V$. 
Since our dispersive estimate holds only on the continuous spectral subspace, we need to prove that it is enough to estimate the symplectic projection of $\chi(t)$ onto that subspace. This is stated in the following lemma, where we denote by $\mathcal{R}(a)$ a bounded continuous real-valued function vanishing as $a\to 0$, and $$\mathcal{R}_{1}(\omega)=\mathcal{R}(\|\omega-\omega_{0}\|_{C^{0}([0,T])}).$$ Throughout the paper, with a slight abuse, we shall use the symbol $\mathcal{R}(a,b)$ to denote bounded continuous real-valued functions vanishing as $a$, $b\rightarrow 0$. Lemma II.2. If $|\omega-\omega_{T}|$ is small enough, then the function $g$ can be estimated in terms of $h$ as follows: $$\|g\|_{L^{\infty}_{w^{-1}}}\leq\mathcal{R}_{1}(\omega)|\omega-\omega_{T}|\|h\|_{L^{\infty}_{w^{-1}}}.$$ The lemma can be proved following the proof of Lemma 3.2 in KKS . As a consequence, one can apply the operator $P^{c}_{T}$ to both sides of the equation for $f$ and obtain $$\left(\frac{dh}{dt},v\right)_{L^{2}}=Q_{L_{T}}(h,v)+\left(P^{c}_{T}\left[(\omega-\omega_{T})Jf+\dot{\omega}\frac{dP^{c}}{d\omega}\psi+\dot{\gamma}P^{c}J\chi\right],v\right)_{L^{2}}+$$ (26) $$+\left(8\pi\sqrt{\lambda}P^{c}_{T}\left(\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\mathbb{T}q_{f}+P^{c}JN(q_{\chi})\right)G_{\lambda},q_{v}G_{\lambda}\right)_{L^{2}},$$ for any $v\in V$. II.2 Asymptotic expansion of dynamics In order to prove the asymptotic stability of the ground state we need to show that for large times $z$ and $h$ are small. For this purpose, in this section we expand the inhomogeneous terms in the modulation equations, then rewrite the equations for $\omega$, $\gamma$, $z$, and $h$ as in (29), (30), (31), and (36), and for each equation we estimate the error terms. 
In what follows we denote $$(q,p)=q_{1}p_{1}+q_{2}p_{2},\qquad\forall p,q\in\mathbb{C}^{2}.$$ With an abuse of notation we denote by $q_{\omega}=\left(\begin{array}[]{cc}\left(\frac{\sqrt{\omega}}{4\pi\nu}\right)^{1/(2\sigma)}\\ 0\end{array}\right)$ the charge of the function $\left(\begin{array}[]{cc}\Phi_{\omega}\\ 0\end{array}\right)$. As a preliminary step, we expand the nonlinear part $N(q_{\chi})$ of equation (19) as $$N(q_{\chi})=N_{2}(q_{\chi})+N_{3}(q_{\chi})+N_{R}(q_{\chi}),$$ (27) where $N_{2}$ and $N_{3}$ are the quadratic and cubic terms in $q_{\chi}$ respectively, while $N_{R}$ is the remainder. Exploiting the Taylor expansion of the function $F(t)=t^{\sigma}$ around $|q_{\omega}|^{2}$, one gets $$\Re(N_{2}(q_{\chi})\overline{q_{v}})=\Re((\sigma|q_{\omega}|^{2(\sigma-1)}|q_{\chi}|^{2}q_{\omega}+2\sigma|q_{\omega}|^{2(\sigma-1)}(q_{\omega},q_{\chi})q_{\chi}+2(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}(q_{\omega},q_{\chi})^{2}q_{\omega})\overline{q_{v}}),$$ and $$\Re(N_{3}(q_{\chi})\overline{q_{v}})=\Re((\sigma|q_{\omega}|^{2(\sigma-1)}|q_{\chi}|^{2}q_{\chi}+2(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}(q_{\omega},q_{\chi})^{2}q_{\chi}+$$ $$+2(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}(q_{\omega},q_{\chi})|q_{\chi}|^{2}q_{\omega}+\frac{4}{3}(\sigma-2)(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-3)}(q_{\omega},q_{\chi})^{3}q_{\omega})\overline{q_{v}}),$$ for any $q_{v}\in\mathbb{C}$. 
For later convenience, let us define the following symmetric forms $$N_{2}(q_{1},q_{2})=\sigma|q_{\omega}|^{2(\sigma-1)}(q_{1},q_{2})q_{\omega}+\sigma|q_{\omega}|^{2(\sigma-1)}[(q_{\omega},q_{1})q_{2}+(q_{\omega},q_{2})q_{1}]+$$ $$+2(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}(q_{\omega},q_{1})(q_{\omega},q_{2})q_{\omega},$$ and $$N_{3}(q_{1},q_{2},q_{3})=\frac{1}{6}\sigma|q_{\omega}|^{2(\sigma-1)}\sum_{i,j,k=1}^{3}(q_{i},q_{j})q_{k}+\frac{1}{3}(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}\sum_{i,j,k=1}^{3}(q_{\omega},q_{i})(q_{\omega},q_{j})q_{k}+$$ $$+\frac{1}{3}(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-2)}\sum_{i,j,k=1}^{3}(q_{\omega},q_{i})(q_{j},q_{k})q_{\omega}+\frac{4}{3}(\sigma-2)(\sigma-1)\sigma|q_{\omega}|^{2(\sigma-3)}(q_{\omega},q_{1})(q_{\omega},q_{2})(q_{\omega},q_{3})q_{\omega}.$$ In order to prove the asymptotic stability result, we shall prove in Section 2.4 the following asymptotics: $$\|f(t)\|_{L^{\infty}_{w^{-1}}}\sim t^{-1},\qquad z(t)\sim t^{-\frac{1}{2}},\qquad\|\psi(t)\|_{V}\sim t^{-\frac{1}{2}},$$ (28) as $t\rightarrow+\infty$. Remark II.3. As in KKS , the first step in proving these expected asymptotics is to separate leading terms and remainders in the right-hand sides of the modulation equations (21) - (23), (26). Basically, in the next subsections, we will expand the expressions for $\dot{\omega}$, $\dot{\gamma}$, and $\dot{z}$ up to and including the terms of order $t^{-3/2}$, and for $\dot{h}$ up to and including $t^{-1}$. Remark II.4. Note that, since the nonlinearity depends only on the charges, the same holds for its Taylor expansion. 
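The rate $z(t)\sim t^{-1/2}$ in (28) is exactly the decay produced by the normal-form equation (16) when $\Re(iK)<0$: dropping the remainder $\widehat{Z}_{R}$, the quantity $y=|z_{1}|^{2}$ solves $\dot{y}=-2\Gamma y^{2}$ with $\Gamma=-\Re(iK)>0$, whence $|z_{1}(t)|=|z_{1}(0)|/\sqrt{1+2\Gamma|z_{1}(0)|^{2}t}$. A minimal numerical check of this closed form (the values of $\xi$, $K$, and the initial datum are illustrative, not computed from the model):

```python
import math

# Model oscillator from (16) without remainder: dz/dt = i*xi*z + i*K*|z|^2*z.
xi, K = 0.5, 0.3 + 0.1j          # illustrative; Re(iK) = -Im(K) = -0.1 < 0
z0 = 0.2 + 0.1j
Gamma = -(1j * K).real           # = 0.1 > 0: dissipative case

def exact_modulus(t):
    # closed-form modulus |z(t)| = |z0| / sqrt(1 + 2*Gamma*|z0|^2*t) ~ t^{-1/2}
    return abs(z0) / math.sqrt(1.0 + 2.0 * Gamma * abs(z0)**2 * t)

def rhs(z):
    return 1j * xi * z + 1j * K * abs(z)**2 * z

# integrate with RK4 and compare the modulus with the closed form
z, dt, T, t = z0, 0.01, 200.0, 0.0
while t < T - 1e-12:
    k1 = rhs(z); k2 = rhs(z + 0.5 * dt * k1)
    k3 = rhs(z + 0.5 * dt * k2); k4 = rhs(z + dt * k3)
    z += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    t += dt

assert abs(abs(z) - exact_modulus(T)) < 1e-5
```

The phase of $z_{1}$ keeps oscillating, but the modulus decays algebraically; this is the "dissipation by dispersion" mechanism described in the Introduction, reduced to its one-oscillator caricature.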
II.2.1 Equation for $\omega$ Substituting the expansion (27) for the nonlinear part $N$ into equation (21) and taking into account the asymptotics (28), one gets $$\dot{\omega}=\frac{1}{\Delta}\Re((JN_{2}(q_{\psi})+2JN_{2}(q_{\psi},q_{f})+JN_{3}(q_{\psi}))q_{\omega})+\frac{1}{\Delta^{2}}\left(\psi,\frac{d\Phi_{\omega}}{d\omega}\right)_{L^{2}}\Re(JN_{2}(q_{\psi})q_{\omega})+\Omega_{R},$$ where $\Delta=\frac{1}{2}\frac{d}{d\omega}\|\Phi_{\omega}\|_{L^{2}}^{2}$ and the remainder $\Omega_{R}$ is estimated by $$|\Omega_{R}|\leq\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}.$$ Recalling that $\psi=z\Psi+\overline{z}\Psi^{*}$, one can rewrite the previous equation for $\dot{\omega}$ as $$\dot{\omega}=\Omega_{20}z^{2}+\Omega_{11}z\overline{z}+\Omega_{02}\overline{z}^{2}+\Omega_{30}z^{3}+\Omega_{21}z^{2}\overline{z}+\Omega_{12}z\overline{z}^{2}+\Omega_{03}\overline{z}^{3}+z(q_{f},\Omega^{\prime}_{10})+\overline{z}(q_{f},\Omega^{\prime}_{01})+\Omega_{R},$$ (29) where the $\Omega_{ij}$’s and the $\Omega_{ij}^{\prime}$’s are suitable coefficients, while $\Omega_{R}$ is a remainder term. Remark II.5. Since the second component of the vector $q_{\omega}$ equals $0$, one has $$\Omega_{11}=\frac{2}{\Delta}\Re(JN_{2}(q_{\Psi},q_{\Psi^{*}})q_{\omega})=0.$$ This fact will turn out to be useful in writing the canonical form of the modulation equations. 
II.2.2 Equation for $\gamma$ As in the previous subsection, equation (22) for $\dot{\gamma}$ can be expanded as $$\dot{\gamma}=\frac{1}{\Delta}\Re((JN_{2}(q_{\psi})+2JN_{2}(q_{\psi},q_{f})+JN_{3}(q_{\psi}))\overline{q_{J\frac{d\Phi_{\omega}}{d\omega}}})+\frac{1}{\Delta^{2}}\left(\psi,J\frac{d^{2}\Phi_{\omega}}{d\omega^{2}}\right)_{L^{2}}\Re(JN_{2}(q_{\psi})q_{\omega})+\Gamma_{R},$$ where the remainder $\Gamma_{R}$ is estimated by $$|\Gamma_{R}|\leq\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}.$$ As before, the equation for $\dot{\gamma}$ shall be written in the form $$\dot{\gamma}=\Gamma_{20}z^{2}+\Gamma_{11}z\overline{z}+\Gamma_{02}\overline{z}^{2}+\Gamma_{30}z^{3}+\Gamma_{21}z^{2}\overline{z}+\Gamma_{12}z\overline{z}^{2}+\Gamma_{03}\overline{z}^{3}+z(q_{f},\Gamma^{\prime}_{10})+\overline{z}(q_{f},\Gamma^{\prime}_{01})+\Gamma_{R}.$$ (30) Remark II.6. In this case $\Gamma_{11}$ does not vanish, unlike $\Omega_{11}$ in equation (29). II.2.3 Equation for $z$ Exploiting the results of the previous subsections, equation (23) can be expanded as $$\begin{array}[]{ll}\dot{z}-i\xi z&=\frac{2}{\kappa}\Re(JN_{2}(q_{\psi})\overline{q_{f}})+\frac{1}{\kappa}\Re((JN_{2}(q_{\psi})+JN_{3}(q_{\psi}))\overline{q_{J\Psi}})+\\ &-\frac{1}{\Delta\kappa}\left(\frac{d\psi}{d\omega},J\Psi\right)_{L^{2}}\Re(JN_{2}(q_{\psi})q_{\omega})+\frac{1}{\Delta\kappa}(\psi,\Psi)_{L^{2}}\Re(JN_{2}(q_{\psi})\overline{q_{J\frac{d\Phi_{\omega}}{d\omega}}})+Z_{R},\end{array}$$ where $\kappa=-(\Psi,J\Psi)_{L^{2}}$ and $$|Z_{R}|\leq\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}.$$ Mimicking the notation employed in (29) and (30), the previous equation can be written in the form $$\dot{z}=i\xi z+Z_{20}z^{2}+Z_{11}z\overline{z}+Z_{02}\overline{z}^{2}+Z_{30}z^{3}+Z_{21}z^{2}\overline{z}+Z_{12}z\overline{z}^{2}+Z_{03}\overline{z}^{3}+z\Re(q_{f}\overline{Z^{\prime}_{10}})+\overline{z}\Re(q_{f}\overline{Z^{\prime}_{01}})+Z_{R},$$ (31) and it turns out that $$\begin{array}[]{ll}Z_{11}=&\frac{2}{\kappa}\Re(JN_{2}(q_{\Psi},q_{\Psi^{*}})\overline{q_{\Psi}}),\quad Z_{20}=\frac{1}{\kappa}\Re(JN_{2}(q_{\Psi})\overline{q_{\Psi}}),\quad Z_{02}=\frac{1}{\kappa}\Re(JN_{2}(q_{\Psi^{*}})\overline{q_{\Psi}}),\\ Z_{21}=&\frac{3}{\kappa}\Re(JN_{3}(q_{\Psi^{*}},q_{\Psi},q_{\Psi})\overline{q_{\Psi}})+\frac{1}{\Delta\kappa}\left[\left(\frac{d\Psi^{*}}{d\omega},J\Psi\right)_{L^{2}}\Re(JN_{2}(q_{\Psi})\overline{q_{J\Phi_{\omega}}})+\right.\\ &\left.-(\Psi^{*},\Psi)_{L^{2}}\Re(JN_{2}(q_{\Psi})\overline{q_{\frac{d\Phi_{\omega}}{d\omega}}})-2\|\Psi\|_{L^{2}}^{2}\Re(JN_{2}(q_{\Psi^{*}},q_{\Psi})\overline{q_{\frac{d\Phi_{\omega}}{d\omega}}})\right],\\ Z^{\prime}_{10}=&2\frac{JN_{2}(q_{\Psi^{*}},q_{\Psi})}{\overline{\kappa}},\quad Z^{\prime}_{01}=2\frac{JN_{2}(q_{\Psi})}{\overline{\kappa}}.\end{array}$$ (32) II.2.4 Equation for $h$ In order to expand asymptotically equation (26) for $h$, the following remark will be useful. Remark II.7. For any $f\in L^{2}(\mathbb{R}^{3})$ the following holds: $$P^{c}_{T}P^{c}f=P^{c}_{T}(I-P^{d})f=P^{c}_{T}(P^{c}_{T}+P^{d}_{T}-P^{d})f=P^{c}_{T}f+P^{c}_{T}(P^{d}_{T}-P^{d})f.$$ Let us denote $$\rho(t)=\omega(t)-\omega_{T}+\dot{\gamma}(t),$$ (33) then equation (26) can be rewritten as $$\left(\frac{dh}{dt},v\right)_{L^{2}}=Q_{L_{T}}(h,v)+(\rho P^{c}_{T}Jh,v)_{L^{2}}+(8\pi\sqrt{\lambda}P^{c}_{T}JN_{2}(q_{\psi})G_{\lambda},q_{v}G_{\lambda})_{L^{2}}+$$ $$+\left(P^{c}_{T}\left[\dot{\omega}\frac{dP^{c}}{d\omega}\psi+\dot{\gamma}P^{c}J\psi+\rho Jg+\dot{\gamma}(P^{d}_{T}-P^{d})Jf\right],v\right)_{L^{2}}+$$ $$+\left(8\pi\sqrt{\lambda}P^{c}_{T}\left(\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\mathbb{T}q_{f}+P^{c}JN(q_{\chi})-JN_{2}(q_{\psi})\right)G_{\lambda},q_{v}G_{\lambda}\right)_{L^{2}},$$ for any $v\in V$. 
Denote $$H^{\prime}_{R}=P^{c}_{T}\left[\dot{\omega}\frac{dP^{c}}{d\omega}\psi+\dot{\gamma}P^{c}J\psi+\rho Jg+\dot{\gamma}(P^{d}_{T}-P^{d})Jf\right],$$ and $$H^{\prime\prime}_{R}=8\pi\sqrt{\lambda}P^{c}_{T}\left(\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\mathbb{T}q_{f}+P^{c}JN(q_{\chi})-JN_{2}(q_{\psi})\right)G_{\lambda}.$$ We recall that by $\Pi^{\pm}$ we denote (see Appendix C) the projections onto the two branches $\mathcal{C}_{\pm}$ of the continuous spectrum separately. Analogously, we denote by $\Pi^{\pm}_{T}$ the corresponding projections for the linear generator $L_{T}$ frozen at time $T$. The following lemma will be useful. Lemma II.8. There exists a constant $C>0$ such that for every $h\in X^{c}_{T}$ $$\left\|[P^{c}_{T}J-i(\Pi^{+}_{T}-\Pi^{-}_{T})]h\right\|_{L^{1}_{w}}\leq C\|h\|_{L^{\infty}_{w^{-1}}}.$$ The proof is given in Appendix VI.4. Finally, let us define $$L_{M}(t)=L_{T}+i\rho(t)(\Pi^{+}_{T}-\Pi^{-}_{T}),$$ (34) so that the previous equation becomes $$\left(\frac{dh}{dt},v\right)_{L^{2}}=Q_{L_{M}}(h,v)+(8\pi\sqrt{\lambda}P^{c}_{T}JN_{2}(q_{\psi})G_{\lambda},q_{v}G_{\lambda})_{L^{2}}+(\widetilde{H}_{R},v)_{L^{2}}+(H^{\prime\prime}_{R},q_{v}G_{\lambda})_{L^{2}},$$ (35) for any $v\in V$, where we denoted $$\widetilde{H}_{R}=H^{\prime}_{R}+\rho[P^{c}_{T}J-i(\Pi^{+}_{T}-\Pi^{-}_{T})]h.$$ Finally, let us expand the second summand in the right-hand side of (35) and get $$\left(\frac{dh}{dt},v\right)_{L^{2}}=Q_{L_{M}}(h,v)+(z^{2}H_{20}+z\overline{z}H_{11}+\overline{z}^{2}H_{02})\overline{q_{v}}+(\widetilde{H}_{R},v)_{L^{2}}+(H^{\prime\prime}_{R},q_{v}G_{\lambda})_{L^{2}},$$ (36) for any $v\in V$, where $$\begin{array}[]{lll}H_{20}=(8\pi\sqrt{\lambda}P^{c}_{T}JN_{2}(q_{\Psi})G_{\lambda},G_{\lambda})_{L^{2}},\\ H_{11}=2(8\pi\sqrt{\lambda}P^{c}_{T}JN_{2}(q_{\Psi},q_{\Psi^{*}})G_{\lambda},G_{\lambda})_{L^{2}},\\ H_{02}=(8\pi\sqrt{\lambda}P^{c}_{T}JN_{2}(q_{\Psi^{*}})G_{\lambda},G_{\lambda})_{L^{2}}.\end{array}$$ Thanks to the 
estimates done for the other equations and Lemma II.8, one can estimate the remainders in the following way: $$\|H^{\prime}_{R}\|_{L^{1}_{w}}\leq C\left(|z|(|\dot{\omega}|+|\dot{\gamma}|)+\mathcal{R}_{1}(\omega)(|\omega-\omega_{T}|+|\dot{\gamma}|\|f\|_{L^{\infty}_{w^{-1}}})\right)\leq$$ $$\leq\mathcal{R}_{1}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})\left(|z|^{3}+|z|\|f\|_{L^{\infty}_{w^{-1}}}+\|f\|_{L^{\infty}_{w^{-1}}}^{2}+|\omega-\omega_{T}|\|f\|_{L^{\infty}_{w^{-1}}}\right),$$ hence $$\|\widetilde{H}_{R}\|_{L^{1}_{w}}\leq\mathcal{R}_{1}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})\left(|z|^{3}+|z|\|f\|_{L^{\infty}_{w^{-1}}}+\|f\|_{L^{\infty}_{w^{-1}}}^{2}+|\omega-\omega_{T}|\|f\|_{L^{\infty}_{w^{-1}}}\right),$$ (37) and $$\|H^{\prime\prime}_{R}\|_{L^{1}_{w}}\leq\mathcal{R}_{1}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})\left(|z|^{3}+|z|\|f\|_{L^{\infty}_{w^{-1}}}+\|f\|_{L^{\infty}_{w^{-1}}}^{2}+|\omega-\omega_{T}|(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})\right).$$ (38) Remark II.9. 
In the same way one directly expands the equation for the function $f$, getting $$\left(\frac{df}{dt},v\right)_{L^{2}}=Q_{L}(f,v)+(z^{2}F_{20}+z\overline{z}F_{11}+\overline{z}^{2}F_{02})\overline{q_{v}}+(\widetilde{F}_{R},v)_{L^{2}}+(F^{\prime\prime}_{R},q_{v}G_{\lambda})_{L^{2}},$$ (39) for any $v\in V$, where $$\begin{array}[]{lll}F_{20}=(8\pi\sqrt{\lambda}JN_{2}(q_{\Psi})G_{\lambda},G_{\lambda})_{L^{2}},\\ F_{11}=2(8\pi\sqrt{\lambda}JN_{2}(q_{\Psi},q_{\Psi^{*}})G_{\lambda},G_{\lambda})_{L^{2}},\\ F_{02}=(8\pi\sqrt{\lambda}JN_{2}(q_{\Psi^{*}})G_{\lambda},G_{\lambda})_{L^{2}},\end{array}$$ and $$\widetilde{F}_{R}=\dot{\omega}\frac{dP^{c}}{d\omega}\psi+\dot{\gamma}P^{c}J\psi+\dot{\gamma}(P^{d}_{T}-P^{d})Jf,$$ $$F^{\prime\prime}_{R}=8\pi\sqrt{\lambda}\left(\frac{\sqrt{\omega}-\sqrt{\omega_{T}}}{4\pi}\mathbb{T}q_{f}+P^{c}JN(q_{\chi})-JN_{2}(q_{\psi})\right)G_{\lambda}.$$ Furthermore, the $L^{1}_{w}$ norms of the remainders $\widetilde{F}_{R}$ and $F^{\prime\prime}_{R}$ can be estimated by the corresponding norms of the remainders $\widetilde{H}_{R}$ and $H^{\prime\prime}_{R}$. III Canonical form of the equations In this section we use the technique of normal coordinates in order to transform the modulation equations for $\omega$, $\gamma$, $z$, and $h$ into a simpler canonical form. We will also keep the estimates of the remainders as close as possible to the original ones. III.1 Canonical form of the equation for $h$ Our goal is to exploit a change of variables such that the function $h$ is mapped into a new function decaying in time at least as $t^{-3/2}$. For this purpose one expands $h$ as $$h=h_{1}+k+k_{1},$$ (40) where $$k=a_{20}z^{2}+a_{11}z\overline{z}+a_{02}\overline{z}^{2},$$ with some coefficients $a_{ij}=a_{ij}(x,\omega)$ such that $a_{ij}=\overline{a_{ji}}$, and $$k_{1}=-\exp\left(\int_{0}^{t}L_{M}(s)ds\right)k(0),$$ where the operator $L_{M}$ was defined in (34). 
Note that $h_{1}(0)=h(0)$, since $k_{1}(0)=-k(0)$. Proposition III.1. There exist $a_{ij}\in L^{\infty}_{w^{-1}}(\mathbb{R}^{3})$, for $i$, $j=0$, $1$, $2$, such that the equation for $h_{1}$ has the form $$\left(\frac{dh_{1}}{dt},v\right)_{L^{2}}=Q_{L_{M}}(h_{1},v)+(\widehat{H}_{R},v)_{L^{2}}+(H^{\prime\prime}_{R},q_{v}G_{\lambda})_{L^{2}},$$ (41) for all $v\in V$, where $\widehat{H}_{R}=\widetilde{H}_{R}+\overline{H}_{R}$ with $$\overline{H}_{R}=-\left[\dot{\omega}\left(\frac{da_{20}}{d\omega}z^{2}+\frac{da_{11}}{d\omega}z\overline{z}+\frac{da_{02}}{d\omega}\overline{z}^{2}\right)+(2a_{20}z+a_{11}\overline{z})(\dot{z}-i\xi_{T}z)+\right.$$ (42) $$\left.+(a_{11}z+2a_{02}\overline{z})(\dot{\overline{z}}+i\xi_{T}\overline{z})-\rho(\Pi^{+}_{T}-\Pi^{-}_{T})k\right].$$ Proof. The claim is proved by substituting (40) into (35) and equating the coefficients of the quadratic powers of $z$, which leads to the system $$\left\{\begin{array}[]{ll}Q_{L_{T}}(a_{20},v)+\Re(H_{20}\overline{q_{v}})-(2i\xi_{T}a_{20},v)_{L^{2}}=0\\ Q_{L_{T}}(a_{11},v)+\Re(H_{11}\overline{q_{v}})=0\\ Q_{L_{T}}(a_{02},v)+\Re(H_{02}\overline{q_{v}})+(2i\xi_{T}a_{02},v)_{L^{2}}=0\end{array}\right.,$$ (43) for all $v\in V$. This system admits the solution $$\begin{array}[]{ll}a_{11}=-L_{T}^{-1}H_{11}\\ a_{20}=-(L_{T}-2i\xi_{T}-0)^{-1}H_{20}\\ a_{02}=\overline{a_{20}}=-(L_{T}+2i\xi_{T}-0)^{-1}H_{02}\end{array}$$ ∎ Remark III.2. From the explicit structure of the remainder $\widehat{H}_{R}$ it follows that it still satisfies estimate (37). We will need to apply the next lemma, which can be proved as Proposition 2.3 in KKS . Lemma III.3. If $\sigma\in\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right)$ and $f\in V\cap L^{1}_{w}$, then there exists a constant $C>0$ such that for any $t\geq 0$ $$\|e^{-L_{T}t}(L_{T}+2i\xi_{T}-0)^{-1}P_{T}^{c}f\|_{L^{\infty}_{w^{-1}}}\leq C(1+t)^{-3/2}\|f\|_{L^{1}_{w}}.$$ Remark III.4. 
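The algebraic mechanism behind Proposition III.1 can be illustrated in a finite-dimensional analogue (a sketch under stated assumptions: a matrix stands in for the frozen generator $L_{T}$, vectors for the source terms, and $z(t)=z_{0}e^{i\xi t}$ replaces the full $z$-dynamics; all numerical values are illustrative). With the resolvent formulas of the proposition, the quadratic correction $k$ absorbs the source $z^{2}H_{20}+z\overline{z}H_{11}+\overline{z}^{2}H_{02}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Toy stand-ins for L_T, xi_T and the source vectors H_20, H_11, H_02.
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
xi = 1.3
H20, H11, H02 = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(3))

I = np.eye(n)
# Coefficients as in Proposition III.1: a20 = -(L - 2i xi)^{-1} H20, etc.
a20 = -np.linalg.solve(L - 2j * xi * I, H20)
a11 = -np.linalg.solve(L, H11)
a02 = -np.linalg.solve(L + 2j * xi * I, H02)

# With z(t) = z0 e^{i xi t}, k(t) = a20 z^2 + a11 |z|^2 + a02 zbar^2 satisfies
# dk/dt = L k + H20 z^2 + H11 z zbar + H02 zbar^2, so h1 = h - k evolves freely.
z0, t = 0.7 - 0.2j, 2.0
z = z0 * np.exp(1j * xi * t)
k = a20 * z**2 + a11 * abs(z)**2 + a02 * np.conj(z)**2
dkdt = 2j * xi * a20 * z**2 - 2j * xi * a02 * np.conj(z)**2  # d(z^2)/dt = 2i xi z^2, etc.
forcing = H20 * z**2 + H11 * z * np.conj(z) + H02 * np.conj(z)**2
residual = dkdt - L @ k - forcing
assert np.linalg.norm(residual) < 1e-10
```

In the model the same cancellation holds only up to remainder terms (collected in $\overline{H}_{R}$), since $z$ does not evolve exactly as $e^{i\xi_{T}t}$ and the coefficients $a_{ij}$ depend on the slowly varying $\omega$.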
Let us note that $$h=P^{c}_{T}h=P^{c}_{T}h_{1}+P^{c}_{T}k+P^{c}_{T}k_{1},$$ hence, in order to estimate the decay of $\|h\|_{L^{\infty}_{w^{-1}}}$, it suffices to estimate the decay of $$\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}},\quad\|P^{c}_{T}k\|_{L^{\infty}_{w^{-1}}},\quad\textrm{and}\quad\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}.$$ III.2 Canonical form of the equation for $\omega$ Since $\Omega_{11}=0$, we can exploit the method of Buslaev and Sulem (BS , Proposition 4.1) and obtain the following proposition. Proposition III.5. There exist coefficients $b_{ij}=b_{ij}(\omega)$, with $i$, $j=0$, $1$, $2$, $3$, and vector functions $b^{\prime}_{ij}=b^{\prime}_{ij}(x,\omega)$, with $i$, $j=0$, $1$, such that the function $$\omega_{1}=\omega+b_{20}z^{2}+b_{11}z\overline{z}+b_{02}\overline{z}^{2}+b_{30}z^{3}+b_{21}z^{2}\overline{z}+b_{12}z\overline{z}^{2}+b_{03}\overline{z}^{3}+$$ $$+z(f,b^{\prime}_{10})_{L^{2}}+\overline{z}(f,b^{\prime}_{01})_{L^{2}},$$ solves a differential equation of the form $$\dot{\omega}_{1}=\widehat{\Omega}_{R},$$ for some remainder $\widehat{\Omega}_{R}$. Proof.
Substituting the equations (29), (31), and (39) into the time derivative of the expression for $\omega_{1}$ and equating the coefficients of $z^{2}$, $z\overline{z}$, $\overline{z}^{2}$, $z$, and $\overline{z}$, one gets the following system $$\left\{\begin{array}[]{ll}\Omega_{20}+2i\xi b_{20}=0\\ \Omega_{02}-2i\xi b_{02}=0\\ \Omega_{30}+3i\xi b_{30}+2Z_{20}b_{20}+\Re(F_{20}\overline{q_{b^{\prime}_{10}}})=0\\ \Omega_{03}-3i\xi b_{03}+2Z_{02}b_{02}+\Re(F_{02}\overline{q_{b^{\prime}_{01}}})=0\\ \Omega_{21}+i\xi b_{21}+2Z_{11}b_{20}+2Z_{20}b_{02}+\Re(F_{11}\overline{q_{b^{\prime}_{10}}}+F_{20}\overline{q_{b^{\prime}_{01}}})=0\\ \Omega_{12}-i\xi b_{12}+2Z_{11}b_{02}+2Z_{20}b_{20}+\Re(F_{11}\overline{q_{b^{\prime}_{01}}}+F_{20}\overline{q_{b^{\prime}_{10}}})=0\\ (q_{f},\Omega^{\prime}_{10})+i\xi(f,b^{\prime}_{10})_{L^{2}}+Q_{L}(f,b^{\prime}_{10})=0\\ (q_{f},\Omega^{\prime}_{01})+i\xi(f,b^{\prime}_{01})_{L^{2}}+Q_{L}(f,b^{\prime}_{01})=0\end{array}\right..$$ The last two equations of this system can be solved similarly to system (43), and the proof follows. ∎ Remark III.6. From the proof of the previous proposition it also follows that the remainder $\widehat{\Omega}_{R}$ can be estimated as $\Omega_{R}$, namely $$|\widehat{\Omega}_{R}|\leq\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}.$$ In the next lemma we prove a uniform bound for $|\omega_{T}-\omega|$ on the interval $[0,T]$. For later convenience let us denote $$\mathcal{R}_{2}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})=\mathcal{R}\left(\max_{0\leq t\leq T}|\omega_{T}-\omega|,\max_{0\leq t\leq T}(|z|+\|f\|_{L^{\infty}_{w^{-1}}})\right).$$ Remark III.7.
Since $|\omega|\leq|\omega_{0}|+|\omega_{0}-\omega_{T}|+|\omega-\omega_{T}|$, we have $$\max_{0\leq t\leq T}\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})=\mathcal{R}\left(\max_{0\leq t\leq T}|\omega_{T}-\omega|,\max_{0\leq t\leq T}(|z|+\|f\|_{L^{\infty}_{w^{-1}}})\right).$$ The next lemma can be proved as in Section 3.5 of KKS . Lemma III.8. For any $t\in[0,T]$ we have $$|\omega_{T}-\omega|\leq\mathcal{R}_{2}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})\left[\int_{t}^{T}(|z(\tau)|+\|f(\tau)\|_{L^{\infty}_{w^{-1}}})^{2}d\tau+\right.$$ $$\left.+(|z_{T}|+\|f_{T}\|_{L^{\infty}_{w^{-1}}})^{2}+(|z|+\|f\|_{L^{\infty}_{w^{-1}}})^{2}\right].$$ III.3 Canonical form of the equation for $\gamma$ Equation (30) for $\gamma$ and equation (29) for $\omega$ differ only because in general $\Gamma_{11}\neq 0$, but we can perform the same change of variable as in the previous subsection, namely $$\gamma_{1}=\gamma+d_{20}z^{2}+d_{02}\overline{z}^{2}+d_{30}z^{3}+d_{21}z^{2}\overline{z}+d_{12}z\overline{z}^{2}+d_{03}\overline{z}^{3}+z(f,d^{\prime}_{10})_{L^{2}}+\overline{z}(f,d^{\prime}_{01})_{L^{2}},$$ for some suitable coefficients $d_{ij}=d_{ij}(\omega)$, with $i$, $j=0$, $1$, $2$, $3$, and vector functions $d^{\prime}_{ij}=d^{\prime}_{ij}(x,\omega)$, with $i$, $j=0$, $1$. Then the function $\gamma_{1}$ solves the differential equation $$\dot{\gamma}_{1}=\Gamma_{11}(\omega)z\overline{z}+\widehat{\Gamma}_{R},$$ for some remainder $\widehat{\Gamma}_{R}$, which can be estimated as $\Gamma_{R}$, i.e. $$|\widehat{\Gamma}_{R}|\leq\mathcal{R}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}.$$ III.4 Canonical form of the equation for $z$ Exploiting the change of variable (40) used to obtain the canonical form of equation (35) for $h$, one can prove the following proposition. Proposition III.9.
There exist coefficients $c_{ij}=c_{ij}(\omega)$, with $i$, $j=0$, $1$, $2$, $3$, such that the function $$z_{1}=z+c_{20}z^{2}+c_{11}z\overline{z}+c_{02}\overline{z}^{2}+c_{30}z^{3}+c_{21}z^{2}\overline{z}+c_{12}z\overline{z}^{2}+c_{03}\overline{z}^{3},$$ solves a differential equation of the form $$\dot{z_{1}}=i\xi z_{1}+iK|z_{1}|^{2}z_{1}+\widehat{Z}_{R},$$ (44) where $$iK=Z_{21}+Z^{\prime}_{21}+\frac{i}{\xi}Z_{20}Z_{11}-\frac{i}{\xi}Z_{11}^{2}-\frac{2i}{3\xi}Z_{02}^{2},$$ with the coefficients $Z_{ij}$ defined in (32), and $$\widetilde{Z}_{R}=(g+P^{c}_{T}h_{1}+P^{c}_{T}k_{1},Z^{\prime}_{10})z+(g+P^{c}_{T}h_{1}+P^{c}_{T}k_{1},Z^{\prime}_{01})\overline{z}+Z_{R}.$$ The proof is a matter of calculation, but we give it explicitly to stress the role of the functions $a_{ij}$, $i$, $j=0$, $1$, $2$. Proof. Substituting (40) into equation (31), the differential equation for $z$ becomes $$\dot{z}=i\xi z+Z_{20}z^{2}+Z_{11}z\overline{z}+Z_{02}\overline{z}^{2}+Z_{30}z^{3}+Z_{21}z^{2}\overline{z}+Z_{12}z\overline{z}^{2}+Z_{03}\overline{z}^{3}+$$ (45) $$+Z^{\prime}_{30}z^{3}+Z^{\prime}_{21}z^{2}\overline{z}+Z^{\prime}_{12}z\overline{z}^{2}+Z^{\prime}_{03}\overline{z}^{3}+\widetilde{Z}_{R},$$ where $$Z^{\prime}_{30}=\Re(q_{a_{20}}\overline{Z^{\prime}_{10}}),$$ $$Z^{\prime}_{03}=\Re(q_{a_{02}}\overline{Z^{\prime}_{01}}),$$ $$Z^{\prime}_{21}=\Re(q_{a_{11}}\overline{Z^{\prime}_{10}})+\Re(q_{a_{20}}\overline{Z^{\prime}_{01}}),$$ $$Z^{\prime}_{12}=\Re(q_{a_{11}}\overline{Z^{\prime}_{01}})+\Re(q_{a_{02}}\overline{Z^{\prime}_{10}}),$$ and the remainder $\widetilde{Z}_{R}$ is as in the statement of the proposition.
Inserting equation (45) into the time derivative of the expression for $z_{1}$ and equating the coefficients of $z^{2}$, $z\overline{z}$, $\overline{z}^{2}$, $z^{3}$, $z\overline{z}^{2}$, and $\overline{z}^{3}$, one obtains the system $$\left\{\begin{array}[]{ll}i\xi c_{20}+Z_{20}=0\\ -i\xi c_{11}+Z_{11}=0\\ -3i\xi c_{02}+Z_{02}=0\\ 2i\xi c_{30}+Z_{30}+Z^{\prime}_{30}+2c_{20}Z_{20}+c_{11}Z_{20}=0\\ Z_{12}+Z^{\prime}_{12}+2c_{20}Z_{20}+c_{11}(Z_{11}+Z_{02})+2c_{02}Z_{11}-2i\xi c_{12}=0\\ -4i\xi c_{03}+Z_{03}+Z^{\prime}_{03}+c_{11}Z_{02}=0\end{array}\right.$$ The proposition follows from the fact that the above system is solvable; in particular $$c_{20}=\frac{i}{\xi}Z_{20},\quad c_{11}=-\frac{i}{\xi}Z_{11},\quad\textrm{and}\quad c_{02}=-\frac{i}{3\xi}Z_{02}.$$ ∎ Remark III.10. For later convenience let us note that, since $Z_{21}$, $Z_{20}$, $Z_{11}$, and $Z_{02}$ are purely imaginary, one has $$\Re(iK)=\Re(Z^{\prime}_{21}).$$ Moreover, we need the following lemma. Lemma III.11. There exists $\sigma^{*}\in\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right]$ such that if $\sigma\in\left(\frac{1}{\sqrt{2}},\sigma^{*}\right)$, then $$\Re(Z^{\prime}_{21})<0,$$ for all $\omega$ in an open neighbourhood of $\omega_{0}$. Proof.
First of all, recall that $\xi_{T}=2\sigma\sqrt{1-\sigma^{2}}\omega_{T}$; then one can compute $$\kappa=-(\Psi,J\Psi)_{L^{2}}=\frac{i}{4\pi\sqrt{\omega_{T}}}\left(\frac{1}{\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}-\frac{(\sqrt{1-\sigma^{2}}-1)^{2}}{\sigma^{2}}\frac{1}{\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}}\right)=$$ (46) $$=\frac{i}{4\pi\sqrt{\omega_{T}}}\frac{\sigma^{2}\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}-(\sqrt{1-\sigma^{2}}-1)^{2}\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}{\sigma^{2}(2\sigma^{2}-1)}.$$ Since $\kappa$ is purely imaginary with positive imaginary part and $L_{T}^{-1}2P_{T}^{c}J$ is self-adjoint, for the first summand in the expression for $\Re(Z^{\prime}_{21})$ one gets $$\Re(q_{a_{11}}\overline{Z^{\prime}_{10}})=-2\Re\left(\frac{q_{L_{T}^{-1}2P_{T}^{c}JN_{2}(q_{\Psi},q_{\Psi^{*}})}\overline{JN_{2}(q_{\Psi},q_{\Psi^{*}})}}{\kappa}\right)=0.$$ Hence, $$\Re(Z^{\prime}_{21})=-2\Re\left(\frac{q_{a_{20}}\overline{JN_{2}(q_{\Psi})}}{\kappa}\right).$$ By direct computations one has $$a_{20}(x)=(L_{T}-(2i\xi_{T}+0))^{-1}H_{20}=A\frac{e^{-\sqrt{\omega_{T}+2\xi_{T}}|x|}}{4\pi|x|}\left(\begin{array}[]{cc}1\\ -i\end{array}\right)+C\frac{e^{-i\sqrt{-\omega_{T}+2\xi_{T}}|x|}}{4\pi|x|}\left(\begin{array}[]{cc}1\\ i\end{array}\right),$$ with $$\begin{array}[]{ll}A=-\frac{4\pi}{d}[((2\sigma+1)\sqrt{\omega_{T}}-i\sqrt{-\omega_{T}+2\xi_{T}})(H_{20})_{1}+(i\sqrt{\omega_{T}}+\sqrt{-\omega_{T}+2\xi_{T}})(H_{20})_{2}]\\ C=\frac{4\pi}{d}[((2\sigma+1)\sqrt{\omega_{T}}-\sqrt{\omega_{T}+2\xi_{T}})(H_{20})_{1}-(i\sqrt{\omega_{T}}-i\sqrt{\omega_{T}+2\xi_{T}})(H_{20})_{2}]\end{array},$$ where $d=2i(2\sigma+1)\omega_{T}+2(\sigma+1)\sqrt{\omega_{T}}\sqrt{-\omega_{T}+2\xi_{T}}-2i(\sigma+1)\sqrt{\omega_{T}}\sqrt{\omega_{T}+2\xi_{T}}-2\sqrt{\omega_{T}+2\xi_{T}}\sqrt{-\omega_{T}+2\xi_{T}}$.
From this it follows that $$q_{a_{20}}=\frac{4\pi}{d}\left[\left(\begin{array}[]{cc}(i\sqrt{-\omega_{T}+2\xi_{T}}-\sqrt{\omega_{T}+2\xi_{T}})(H_{20})_{1}\\ ((2\sigma+1)\sqrt{\omega_{T}}-\sqrt{\omega_{T}+2\xi_{T}}-\sqrt{-\omega_{T}+2\xi_{T}})(H_{20})_{1}\end{array}\right)+\right.$$ $$\left.+\left(\begin{array}[]{cc}-i(2\sqrt{\omega_{T}}+\sqrt{\omega_{T}+2\xi_{T}}+i\sqrt{-\omega_{T}+2\xi_{T}})(H_{20})_{2}\\ (-\sqrt{\omega_{T}+2\xi_{T}}+i\sqrt{-\omega_{T}+2\xi_{T}})(H_{20})_{2}\end{array}\right)\right].$$ Hence $$\begin{array}[]{ll}\Re((q_{a_{20}})_{1})=&\frac{16\pi}{|d|^{2}}[i(H_{20})_{1}(-(\sigma+1)\sqrt{\omega_{T}}\sqrt{\omega_{T}+2\xi_{T}}\sqrt{-\omega_{T}+2\xi_{T}}+((\sigma+1)\omega_{T}+\xi_{T})\sqrt{-\omega_{T}+2\xi_{T}})+\\ &(H_{20})_{2}(-(2(\sigma+1)\xi_{T}+(2\sigma+1)\omega_{T})\sqrt{\omega_{T}}+(\xi_{T}+(2\sigma+1)\omega_{T})\sqrt{\omega_{T}+2\xi_{T}})]\\ \Im((q_{a_{20}})_{2})=&\frac{16\pi}{|d|^{2}}[i(H_{20})_{1}((2(\sigma+1)^{2}\omega_{T}+\xi_{T})\sqrt{-\omega_{T}+2\xi_{T}}-(3\sigma+2)\sqrt{\omega_{T}}\sqrt{\omega_{T}+2\xi_{T}}\sqrt{-\omega_{T}+2\xi_{T}})+\\ &+(H_{20})_{2}((\sigma+1)\omega_{T}^{3/2}+((\sigma+1)\omega_{T}-\xi_{T})\sqrt{\omega_{T}+2\xi_{T}})].\end{array}$$ (47) Moreover, by (27) one gets $$JN_{2}(q_{\Psi})=\left(\begin{array}[]{cc}-2\sigma|q_{\omega_{T}}|^{2\sigma-1}(q_{\Psi})_{1}(q_{\Psi})_{2}\\ \sigma|q_{\omega_{T}}|^{2\sigma-1}(3(q_{\Psi})_{1}^{2}+(q_{\Psi})_{2}^{2})+2\sigma(\sigma-1)|q_{\omega_{T}}|^{2\sigma-1}(q_{\Psi})_{1}^{2}\end{array}\right)=$$ (48) $$=\left(\begin{array}[]{cc}-2i\sigma|q_{\omega_{T}}|^{2\sigma-1}\left(1-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\right)\left(1+\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\right)\\ 2\sigma|q_{\omega_{T}}|^{2\sigma-1}\left(1-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\right)\end{array}\right),$$ which implies $$H_{20}=(8\pi\sqrt{\omega_{T}}P^{c}_{T}JN_{2}(q_{\Psi})G_{\omega_{T}},G_{\omega_{T}})_{L^{2}}=$$ (49)
$$=JN_{2}(q_{\Psi})-\frac{(JN_{2}(q_{\Psi}))_{1}|q_{\omega_{T}}|}{16\pi\Delta\omega^{3/2}}\left(\frac{1}{\sigma}-1\right)\left(\begin{array}[]{cc}1\\ 0\end{array}\right)+$$ $$+\frac{\sqrt{\omega_{T}}}{\kappa}\left(\begin{array}[]{cc}-(JN_{2}(q_{\Psi}))_{2}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}\\ (JN_{2}(q_{\Psi}))_{1}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}+\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}\end{array}\right).$$ Let us notice that (48) and (49) imply $$\begin{array}[]{ll}i(H_{20})_{1}i(JN_{2}(q_{\Psi}))_{1}=&-\frac{1}{2\sigma-1}(JN_{2}(q_{\Psi}))_{1}^{2}+\\ &+\frac{\sqrt{\omega_{T}}}{4\pi i\kappa}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}i(JN_{2}(q_{\Psi}))_{1}(JN_{2}(q_{\Psi}))_{2}\\ (H_{20})_{2}i(JN_{2}(q_{\Psi}))_{1}=&(JN_{2}(q_{\Psi}))_{2}i(JN_{2}(q_{\Psi}))_{1}+\\ &-\frac{\sqrt{\omega_{T}}}{4\pi i\kappa}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}+\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}(JN_{2}(q_{\Psi}))_{1}^{2}\\ i(H_{20})_{1}(JN_{2}(q_{\Psi}))_{2}=&\frac{1}{2\sigma-1}i(JN_{2}(q_{\Psi}))_{1}(JN_{2}(q_{\Psi}))_{2}+\\ &+\frac{\sqrt{\omega_{T}}}{4\pi i\kappa}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}(JN_{2}(q_{\Psi}))_{2}^{2}\\ (H_{20})_{2}(JN_{2}(q_{\Psi}))_{2}=&(JN_{2}(q_{\Psi}))_{2}^{2}+\\ &+\frac{\sqrt{\omega_{T}}}{4\pi i\kappa}\left(\frac{1}{\sqrt{\omega_{T}-\xi_{T}}+\sqrt{\omega_{T}}}+\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\frac{1}{\sqrt{\omega_{T}+\xi_{T}}+\sqrt{\omega_{T}}}\right)^{2}i(JN_{2}(q_{\Psi}))_{1}(JN_{2}(q_{\Psi}))_{2},\end{array}$$ then by (46) and (47) it follows that $$\Re(Z^{\prime}_{21})=-2\Re\left(\frac{q_{a_{20}}\overline{JN_{2}(q_{\Psi})}}{\kappa}\right)=\frac{2}{i\kappa}(\Re((q_{a_{20}})_{1})i(JN_{2}(q_{\Psi}))_{1}+\Im((q_{a_{20}})_{2})(JN_{2}(q_{\Psi}))_{2})=$$ $$=\frac{128\pi\omega_{T}^{3/2}|q_{\omega_{T}}|^{4\sigma-2}}{i\kappa|d|^{2}}\sigma^{2}\left(1-\frac{\sqrt{1-\sigma^{2}}-1}{\sigma}\right)^{2}f(\sigma),$$ with $$f(\sigma)=\left(\left(2(1+\sigma)^{2}+2\sigma\sqrt{1-\sigma^{2}}\right)\sqrt{-1+4\sigma\sqrt{1-\sigma^{2}}}\right.-(2+3\sigma)\sqrt{-1+4\sigma\sqrt{1-\sigma^{2}}}\sqrt{1+4\sigma\sqrt{1-\sigma^{2}}}+$$ $$+\left(1+\frac{-1+\sqrt{1-\sigma^{2}}}{\sigma}\right)\left.\left((-1-\sigma)\sqrt{-1+16\sigma^{2}-16\sigma^{4}}+\left(1+\sigma+2\sigma\sqrt{1-\sigma^{2}}\right)\sqrt{-1+4\sigma\sqrt{1-\sigma^{2}}}\right)\right)\cdot$$ $$\cdot\left(\frac{1}{-1+2\sigma}-\frac{\left(\frac{1}{1+\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}-\frac{-1+\sqrt{1-\sigma^{2}}}{\sigma+\sigma\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}}\right)^{2}}{\frac{1}{\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}-\frac{\left(-1+\sqrt{1-\sigma^{2}}\right)^{2}}{\sigma^{2}\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}}}\right)+$$ $$+\left(1+\sigma+\left(1+\sigma-2\sigma\sqrt{1-\sigma^{2}}\right)\sqrt{1+4\sigma\sqrt{1-\sigma^{2}}}+\right.$$ $$+\left(1+\frac{-1+\sqrt{1-\sigma^{2}}}{\sigma}\right)\left.\left(-1-2\sigma+\left(-4\sigma-4\sigma^{2}\right)\sqrt{1-\sigma^{2}}+\left(1+2\sigma+2\sigma\sqrt{1-\sigma^{2}}\right)\sqrt{1+4\sigma\sqrt{1-\sigma^{2}}}\right)\right)\cdot$$ $$\cdot\left(1-\frac{\left(\frac{1}{1+\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}+\frac{-1+\sqrt{1-\sigma^{2}}}{\sigma+\sigma\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}}\right)^{2}}{\frac{1}{\sqrt{1-2\sigma\sqrt{1-\sigma^{2}}}}-\frac{\left(-1+\sqrt{1-\sigma^{2}}\right)^{2}}{\sigma^{2}\sqrt{1+2\sigma\sqrt{1-\sigma^{2}}}}}\right).$$ Notice that one has
$f(\sigma)\rightarrow\widetilde{f}>0$, $d\rightarrow\widetilde{d}\neq 0$, and $i\kappa\rightarrow-\infty$ as $\sigma\rightarrow 1/\sqrt{2}$; this implies $$\lim_{\sigma\rightarrow 1/\sqrt{2}}\Re(Z_{21}^{\prime})=\frac{128\sqrt{2}\omega_{T}^{3/2}|q_{\omega_{T}}|^{2\sqrt{2}-2}}{\pi|\widetilde{d}|^{2}}\widetilde{f}\lim_{\sigma\rightarrow 1/\sqrt{2}}\frac{1}{i\kappa}=0^{-}.$$ Hence there is a neighbourhood of $\frac{1}{\sqrt{2}}$ where $\Re(Z_{21}^{\prime})$ is strictly negative. A Mathematica plot of the function $f(\sigma)$ in the range $\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right)$ is given in figure 1. Summing up, one can conclude that there exists $\sigma^{*}\in\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right]$ such that $\Re(Z_{21}^{\prime})<0$ for $\sigma\in\left(\frac{1}{\sqrt{2}},\sigma^{*}\right)$. ∎ Remark III.12. The following reformulation of the equation for $z_{1}$ will be useful. First of all, if we denote $K_{T}=K(\omega_{T})$, then the ordinary differential equation for $z_{1}$ becomes $$\dot{z_{1}}=i\xi z_{1}+iK_{T}|z_{1}|^{2}z_{1}+\widehat{\widehat{Z}}_{R},$$ for some remainder $\widehat{\widehat{Z}}_{R}$. Secondly, let us notice that $z_{1}$ is oscillating while $y=|z_{1}|^{2}$ decreases at infinity. Hence, it is easier to deal with the variable $y$, which satisfies the equation $$\dot{y}=2\Re(iK_{T})y^{2}+Y_{R},$$ (50) where $Y_{R}$ is some suitable remainder. Remark III.13.
From Lemma II.2 we have $$|(g+P^{c}_{T}h_{1}+P^{c}_{T}k_{1},Z^{\prime}_{10})|\leq\mathcal{R}(\omega)(\|g\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}})\leq$$ $$\leq\mathcal{R}_{1}(\omega)(|\omega_{T}-\omega|\|h\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}),$$ hence $$|Y_{R}|=|\widehat{\widehat{Z}}_{R}||z|=|\widetilde{Z}_{R}+i(K-K_{T})|z_{1}|^{2}z_{1}||z|\leq$$ $$\leq\mathcal{R}_{1}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})|z|[(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}+|z||\omega_{T}-\omega|(|z|^{2}+\|h\|_{L^{\infty}_{w^{-1}}})+$$ $$+|z|\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}+|z|\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}].$$ IV Majorants In this section we exploit the so-called majorant method to prove large-time asymptotics for the solutions of the modulation equations. We first need some assumptions on the initial conditions. IV.1 Initial conditions Let us fix some $\epsilon>0$, to be chosen later in order to obtain a uniform control in the estimates. Then we assume that $$\begin{array}[]{ll}|z(0)|\leq\epsilon^{1/2}\\ \|f(0)\|_{L^{1}_{w}}\leq c\epsilon^{3/2},\end{array}$$ (51) where $c>0$ is some positive constant. From the definition of $z_{1}$ (see Proposition III.9) one has $$z_{1}-z=\mathcal{R}(\omega)|z|^{2}.$$ Then the following estimate holds $$y(0)=|z_{1}(0)|^{2}\leq|z(0)|^{2}+\mathcal{R}(\omega,|z(0)|)|z(0)|^{3}\leq\epsilon+\mathcal{R}(\omega,|z(0)|)\epsilon^{3/2}.$$ We also need an estimate for the initial datum of the function $h(t)$. For this purpose recall that, from the definitions of $f$ and $h$, one has the decomposition $h=f+(P^{d}-P^{d}_{T})f.$ Hence, $$\|h(0)\|_{L^{1}_{w}}\leq\|f(0)\|_{L^{1}_{w}}+\|(P^{d}-P^{d}_{T})f(0)\|_{L^{1}_{w}}\leq c\epsilon^{3/2}+\mathcal{R}_{1}(\omega)|\omega_{T}-\omega|\|f(0)\|_{L^{\infty}_{w^{-1}}},$$ for some constant $c>0$.
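The Duhamel-type estimates of this section repeatedly use the elementary convolution inequality $\int_{0}^{t}(1+t-s)^{-3/2}\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}ds\leq c\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}$, which appears explicitly in the proof of Lemma IV.7. A quick numerical sanity check of this bound, on an illustrative grid of values of $\epsilon$ and $t$ (a check, not a proof):

```python
import numpy as np

# Midpoint-rule evaluation of
#   I(t, eps) = int_0^t (1+t-s)^{-3/2} * (eps/(1+eps*s))^{3/2} ds,
# compared with the claimed majorant c * (eps/(1+eps*t))^{3/2}.
def lhs(t, eps, n=200_000):
    ds = t / n
    s = (np.arange(n) + 0.5) * ds
    return np.sum((1 + t - s) ** -1.5 * (eps / (1 + eps * s)) ** 1.5) * ds

ratios = []
for eps in (1e-3, 1e-2, 1e-1):
    for t in (1.0, 10.0, 100.0, 1000.0):
        ratios.append(lhs(t, eps) / (eps / (1 + eps * t)) ** 1.5)

c_max = max(ratios)
print(c_max)  # the ratio stays bounded by a modest constant on the whole grid
```

The point is that the dispersive kernel $(1+t-s)^{-3/2}$ is integrable in time, so convolving it against the majorant profile $(\epsilon/(1+\epsilon s))^{3/2}$ reproduces the same profile up to a constant.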
Thanks to the previous estimates, one can prove the following lemma. Lemma IV.1. Let us assume conditions (51) on the initial data. Then $$\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}\leq c\frac{|z(0)|^{2}}{(1+t)^{3/2}}\leq\frac{c\epsilon}{(1+t)^{3/2}},$$ for all $t\geq 0$. Proof. Let us denote $\zeta=\int_{0}^{t}\rho(\tau)d\tau$ (the quantity $\rho$ was defined in (33)). From the definition of the exponential and the idempotency of the projections one gets $$e^{i\zeta\Pi^{\pm}_{T}}=\Pi^{\pm}_{T}e^{i\zeta}+\Pi^{\mp}_{T}+P^{d}_{T}.$$ Then it follows that $$e^{i\zeta(\Pi^{+}_{T}-\Pi^{-}_{T})}=(\Pi^{+}_{T}e^{i\zeta}+\Pi^{-}_{T}+P^{d}_{T})(\Pi^{-}_{T}e^{-i\zeta}+\Pi^{+}_{T}+P^{d}_{T})=\Pi^{+}_{T}e^{i\zeta}+\Pi^{-}_{T}e^{-i\zeta}+P^{d}_{T}.$$ The lemma follows from the fact that $L_{T}$ commutes with the projectors $\Pi^{\pm}_{T}$, the definition (34) of the operator $L_{M}$, and the decay of the evolution of the functions $P_{T}^{c}a_{ij}$, $i$, $j=0$, $1$, $2$, stated in Lemma III.3, namely $$\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}=\left\|e^{\int_{0}^{t}L_{M}(\tau)d\tau}P^{c}_{T}k(0)\right\|_{L^{\infty}_{w^{-1}}}=$$ $$=\|e^{L_{T}t}P^{c}_{T}(e^{i\zeta}\Pi^{+}_{T}+e^{-i\zeta}\Pi^{-}_{T}+P^{d}_{T})(a_{20}z^{2}(0)+a_{11}z(0)\overline{z(0)}+a_{02}\overline{z(0)}^{2})\|_{L^{\infty}_{w^{-1}}}\leq$$ $$\leq c\frac{|z(0)|^{2}}{(1+t)^{3/2}}\leq\frac{c\epsilon}{(1+t)^{3/2}}.$$ ∎ IV.2 Definition of the majorants We are now in the position to define the majorants: $$\displaystyle M_{0}(T)=\max_{0\leq t\leq T}|\omega_{T}-\omega|\left(\frac{\epsilon}{1+\epsilon t}\right)^{-1}$$ (52) $$\displaystyle M_{1}(T)=\max_{0\leq t\leq T}|z(t)|\left(\frac{\epsilon}{1+\epsilon t}\right)^{-1/2}$$ (53) $$\displaystyle M_{2}(T)=\max_{0\leq t\leq T}\|P^{c}_{T}h_{1}(t)\|_{L^{\infty}_{w^{-1}}}\left(\frac{\epsilon}{1+\epsilon t}\right)^{-3/2}$$ (54) We shall use the following vector notation $$M=(M_{0},M_{1},M_{2}).$$ (55) Remark IV.2.
From the estimates on $g$ and $k_{1}$ and the definitions of the majorants it follows that $$\|f\|_{L^{\infty}_{w^{-1}}}=\|g+P^{c}_{T}h_{1}+P^{c}_{T}k+P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}\leq$$ $$\leq\mathcal{R}_{1}(\omega)\left(|\omega_{T}-\omega|+|z|^{2}+\frac{\epsilon}{(1+t)^{3/2}}+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}\right)\leq$$ $$\leq\frac{\epsilon}{1+\epsilon t}\mathcal{R}_{1}(\omega)(M_{1}^{2}+\epsilon^{1/2}M_{2}).$$ From the assumptions (51) on the initial data one obtains $$y(0)\leq\epsilon+\mathcal{R}(\epsilon^{1/2}M)\epsilon^{3/2}\leq\epsilon(1+\mathcal{R}(\epsilon^{1/2}M)\epsilon^{1/2}),$$ $$\|h(0)\|_{L^{1}_{w}}\leq c\epsilon^{3/2}+\mathcal{R}(\epsilon^{1/2}M)\epsilon^{2}M_{0}(1+M_{1}^{2}+\epsilon^{1/2}M_{2}).$$ IV.3 The equation for $y$ We aim at studying the asymptotic behavior of the solution of equation (50) for the variable $y$ introduced in Remark III.12. To do that we need the following lemma, which is the analogue of Lemma 4.1 in KKS . Lemma IV.3. The remainder $Y_{R}$ in equation (50) satisfies the estimate $$|Y_{R}|\leq\mathcal{R}(\epsilon^{1/2}M)\frac{\epsilon^{5/2}}{(1+\epsilon t)^{2}\sqrt{\epsilon t}}(1+|M|)^{5}.$$ Hence, equation (50) is of the form $$\dot{y}=2\Re(iK_{T})y^{2}+Y_{R},$$ (56) with $$\begin{array}[]{lll}\Re(iK_{T})<0,\\ y(0)\leq\epsilon y_{0},\\ |Y_{R}|\leq\overline{Y}\frac{\epsilon^{5/2}}{(1+\epsilon t)^{2}\sqrt{\epsilon t}},\end{array}$$ where $y_{0}$ and $\overline{Y}>0$ are some constants. Then we can apply Proposition 5.6 in BS and get the next lemma. Lemma IV.4. Assuming the initial condition and the source term of equation (50) are as above, the solution $y(t)$ is bounded as follows for any $t>0$ $$\left|y(t)-\frac{y(0)}{1+2\Im(K_{T})y_{0}t}\right|\leq c\overline{Y}\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2},$$ where $c=c(y_{0},\Im(K_{T}))$. IV.4 The equation for $P^{c}_{T}h_{1}$ As a first step let us estimate the remainders in equation (41) for $h_{1}$. This is done in the next two lemmas. Lemma IV.5.
The remainders $\widetilde{H}_{R}$ and $H^{\prime\prime}_{R}$ can be estimated as $$\|P^{c}_{T}\widetilde{H}_{R}\|_{L^{1}_{w}}\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}),$$ and $$\|P^{c}_{T}H^{\prime\prime}_{R}\|_{L^{1}_{w}}\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}).$$ Proof. From the estimate (37) on $\widetilde{H}_{R}$ one has $$\|P^{c}_{T}\widetilde{H}_{R}\|_{L^{1}_{w}}\leq\mathcal{R}_{2}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})[|z|^{3}+(|z|+|\omega_{T}-\omega|)(|z|^{2}+\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}+$$ $$+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}})+(|z|^{2}+\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}})^{2}]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left[\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}M_{1}^{3}+\right.$$ $$\left.+\left(\left(\frac{\epsilon}{1+\epsilon t}\right)^{1/2}M_{1}+\frac{\epsilon}{1+\epsilon t}M_{0}\right)\left(\frac{\epsilon}{1+\epsilon t}M_{1}^{2}+\frac{\epsilon}{(1+t)^{3/2}}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}M_{2}\right)+\right.$$ $$\left.+\left(\frac{\epsilon}{1+\epsilon t}M_{1}^{2}+\frac{\epsilon}{(1+t)^{3/2}}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}M_{2}\right)^{2}\right]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}).$$ The bound for $H^{\prime\prime}_{R}$ follows in the same way from the estimate (38). ∎ In the next lemma we estimate the evolution under the linear operator $L_{T}$ of the remainder $P^{c}_{T}\overline{H}_{R}$. Lemma IV.6. For any $t$, $s\geq 0$ the following estimate holds $$\|e^{L_{T}t}P^{c}_{T}\overline{H}_{R}(s)\|_{L^{\infty}_{w^{-1}}}(1+t)^{3/2}\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}(M_{1}^{3}+\epsilon^{1/2}(1+|M|)^{3}).$$ Proof.
From the analytic expression (42) of $\overline{H}_{R}$ and the estimates of the evolution of the functions $a_{20}$, $a_{11}$, and $a_{02}$ stated in Lemma III.3, one has $$\|e^{L_{T}t}P^{c}_{T}\overline{H}_{R}(s)\|_{L^{\infty}_{w^{-1}}}(1+t)^{3/2}\leq$$ $$\leq\mathcal{R}_{2}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})|z|[|z||\omega_{T}-\omega|+(|z|+\|k_{1}\|_{L^{\infty}_{w^{-1}}}+\|h_{1}\|_{L^{\infty}_{w^{-1}}})^{2}]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon s}\right)^{1/2}M_{1}\left[\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}M_{0}M_{1}+\right.$$ $$\left.+\left(\left(\frac{\epsilon}{1+\epsilon s}\right)^{1/2}M_{1}+\frac{\epsilon}{(1+s)^{3/2}}+\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}M_{2}\right)^{2}\right]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}(M_{1}^{3}+\epsilon^{1/2}(1+|M|)^{3}).$$ ∎ From the two previous lemmas we can get the following result. Lemma IV.7. Let us consider the equation for $P^{c}_{T}h_{1}$ $$\left(\frac{dP^{c}_{T}h_{1}}{dt},v\right)_{L^{2}}=Q_{L_{M}}(P^{c}_{T}h_{1},v)+(P^{c}_{T}\widehat{H}_{R},v)_{L^{2}}+(P^{c}_{T}H^{\prime\prime}_{R},q_{v}G_{\lambda})_{L^{2}},$$ with initial condition and source terms satisfying $$\|h_{1}(0)\|_{L^{1}_{w}}\leq\epsilon^{3/2}h_{0},$$ $$\widehat{H}_{R}=\widetilde{H}_{R}+\overline{H}_{R},$$ such that $$\|P^{c}_{T}\widetilde{H}_{R}\|_{L^{1}_{w}}\leq\overline{H}_{1}\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2},$$ $$\|P^{c}_{T}H^{\prime\prime}_{R}\|_{L^{1}_{w}}\leq\overline{H}_{2}\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2},$$ $$\|e^{L_{T}t}P^{c}_{T}\overline{H}_{R}(s)\|_{L^{\infty}_{w^{-1}}}(1+t)^{3/2}\leq\overline{H}_{3}\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2},$$ for some positive constants $h_{0}$, $\overline{H}_{1}$, $\overline{H}_{2}$, and $\overline{H}_{3}$.
Then its solution is bounded as follows $$\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}\leq c\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}(h_{0}+\overline{H}_{1}+\overline{H}_{2}+\overline{H}_{3}),$$ where $c=c(\omega_{T})>0$. Proof. By the Duhamel representation one has $$(P^{c}_{T}h_{1},v)_{L^{2}}=\left(e^{\int_{0}^{t}L_{M}(\tau)d\tau}h_{1}(0)+\int_{0}^{t}e^{\int_{s}^{t}L_{M}(\tau)d\tau}P^{c}_{T}\widehat{H}_{R}(s)ds,v\right)_{L^{2}}+$$ $$+\left(\int_{0}^{t}e^{\int_{s}^{t}L_{M}(\tau)d\tau}P^{c}_{T}H^{\prime\prime}_{R}(s)ds,q_{v}G_{\lambda}\right)_{L^{2}},$$ for all $v\in V$. Then, from the dispersive estimate in Theorem VI.3 and the estimates on the remainders proved above, in the duality pairing defined by the $L^{2}$ inner product one has $$\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}=\sup_{0\neq v\in L^{1}_{w}}\frac{(P^{c}_{T}h_{1},v)_{L^{2}}}{\|v\|_{V\cap L^{1}_{w}}}\leq$$ $$\leq c(\omega_{T})\left(\frac{1}{(1+t)^{3/2}}\|h_{1}(0)\|_{L^{1}_{w}}+\int_{0}^{t}\frac{1}{(1+t-s)^{3/2}}(\|P^{c}_{T}\widetilde{H}_{R}(s)\|_{L^{1}_{w}}+\|P^{c}_{T}H^{\prime\prime}_{R}(s)\|_{L^{1}_{w}})ds+\right.$$ $$\left.+\int_{0}^{t}\|e^{L_{T}(t-s)}P^{c}_{T}\overline{H}_{R}(s)\|_{L^{\infty}_{w^{-1}}}ds\right)\leq$$ $$\leq c(\omega_{T})\left(\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}h_{0}+\int_{0}^{t}\frac{1}{(1+t-s)^{3/2}}\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}ds(\overline{H}_{1}+\overline{H}_{2}+\overline{H}_{3})\right).$$ The lemma follows from the fact that $$\int_{0}^{t}\frac{1}{(1+t-s)^{3/2}}\left(\frac{\epsilon}{1+\epsilon s}\right)^{3/2}ds\leq c\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2},$$ for some constant $c>0$. ∎ IV.5 Uniform bounds for the majorants To prove that the majorants are uniformly bounded, the following lemma will be useful. Lemma IV.8.
For any $T>0$ the majorants $M_{0}$, $M_{1}$, and $M_{2}$ satisfy the following inequalities $$\begin{array}[]{lll}M_{0}(T)\leq\mathcal{R}(\epsilon^{1/2}M)[(1+M_{1})^{4}+\epsilon(1+|M|)^{2}],\\ (M_{1}(T))^{2}\leq\mathcal{R}(\epsilon^{1/2}M)[1+\epsilon^{1/2}(1+|M|)^{5}],\\ M_{2}(T)\leq\mathcal{R}(\epsilon^{1/2}M)[(1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}].\\ \end{array}$$ Proof. It follows from Lemmas IV.4 and IV.7, as in Lemma 4.6 of KKS , but we give the proof for the sake of completeness. Step 1. Let us begin by noting that $$|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}}\leq\mathcal{R}_{2}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})(|z|^{2}+\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}+\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}})\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{(1+t)^{3/2}}+\frac{\epsilon}{1+\epsilon t}M_{1}^{2}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}M_{2}\right)\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\frac{\epsilon}{1+\epsilon t}(1+M_{1}^{2}+\epsilon^{1/2}M_{2}).$$ Then by the definition of $M_{0}$ and the bound on $|\omega_{T}-\omega|$: $$M_{0}(T)\leq\max_{0\leq t\leq T}\left[\left(\frac{\epsilon}{1+\epsilon t}\right)^{-1}\mathcal{R}(\epsilon^{1/2}M)\left(\int_{t}^{T}\left(\frac{\epsilon}{1+\epsilon\tau}\right)^{2}(1+M_{1}(\tau)^{2}+\right.\right.$$ $$\left.\left.+\epsilon^{1/2}M_{2}(\tau))^{2}d\tau+\left(\frac{\epsilon}{1+\epsilon t}\right)^{2}(1+M_{1}^{2}+\epsilon^{1/2}M_{2})^{2}\right)\right]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)[(1+M_{1})^{4}+\epsilon(1+|M|)^{2}].$$ Step 2.
Since $y=|z_{1}|^{2}$, exploiting the inequality proved in Lemma IV.4, the fact that $\overline{Y}=\mathcal{R}(\epsilon^{1/2}M)(1+|M|)^{5}$, and $y(0)\leq\epsilon y_{0}$, one gets $$y\leq\mathcal{R}(\epsilon^{1/2}M)\left[\frac{\epsilon}{1+\epsilon t}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}(1+|M|)^{5}\right].$$ From this it follows that $$|z|^{2}\leq y+\mathcal{R}(\omega)|z|^{3}\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\left[\frac{\epsilon}{1+\epsilon t}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}(1+|M|)^{5}+\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}M_{1}^{3}\right]\leq\frac{\epsilon}{1+\epsilon t}\mathcal{R}(\epsilon^{1/2}M)[1+\epsilon^{1/2}(1+|M|)^{5}].$$ Step 3. Recall that $$\|h(0)\|_{L^{1}_{w}}\leq c\epsilon^{3/2}+\mathcal{R}(\epsilon^{1/2}M)\epsilon^{2}M_{0}(1+M_{1}^{2}+\epsilon^{1/2}M_{2}),$$ $$\overline{H}_{1}=\mathcal{R}(\epsilon^{1/2}M)((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}),$$ $$\overline{H}_{2}=\mathcal{R}(\epsilon^{1/2}M)((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}),$$ $$\overline{H}_{3}=\mathcal{R}(\epsilon^{1/2}M)(M_{1}^{3}+\epsilon^{1/2}(1+|M|)^{3}).$$ Hence from Lemma IV.7 it follows that $$\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}\leq\mathcal{R}(\epsilon^{1/2}M)\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2}((1+M_{1})^{3}+\epsilon^{1/2}(1+|M|)^{4}),$$ which implies the inequality for $M_{2}$. ∎ We are now in the position to prove the uniform boundedness of the majorants. Proposition IV.9. If $\epsilon>0$ is sufficiently small, there exists a positive constant $\overline{M}$, independent of $T$ and $\epsilon$, such that $$|M(T)|\leq\overline{M},$$ for all $T>0$. Proof. From the previous lemma it follows that $$|M|^{2}\leq\mathcal{R}(\epsilon^{1/2}M)[(1+M_{1})^{8}+\epsilon^{1/2}(1+|M|)^{8}]\leq\mathcal{R}(\epsilon^{1/2}M)(1+\epsilon^{1/2}F(M)),$$ where in the last inequality we have substituted the estimate for $M_{1}^{2}$, and $F(M)$ is a suitable polynomial. Furthermore, $M(0)$ is small and $M(T)$ is a continuous function of $T$.
Hence $|M|$ is bounded uniformly in $\epsilon\ll 1$. ∎ The next corollary summarizes the behavior of the functions $\omega(t)$, $z(t)$, $P^{c}_{T}h_{1}(t)$, and $f(t)$. Corollary IV.10. There exists a finite limit $\omega_{\infty}$ for the function $\omega(t)$ as $t\rightarrow+\infty$. Moreover the following holds for all $t>0$ $$\begin{array}[]{llll}|\omega_{\infty}-\omega(t)|\leq\overline{M}\frac{\epsilon}{1+\epsilon t},\\ |z(t)|\leq\overline{M}\left(\frac{\epsilon}{1+\epsilon t}\right)^{1/2},\\ \|P^{c}_{T}h_{1}(t)\|_{L^{\infty}_{w^{-1}}}\leq\overline{M}\left(\frac{\epsilon}{1+\epsilon t}\right)^{3/2},\\ \|f(t)\|_{L^{\infty}_{w^{-1}}}\leq\overline{M}\frac{\epsilon}{1+\epsilon t}.\\ \end{array}$$ V Large time behavior of the solution and scattering asymptotics V.1 Large time behavior of the solution of equation (1) The results of the previous section lead to the following theorem. Theorem V.1. Let $u(t)$ be a solution of equation (1) with initial datum $u_{0}\in V\cap L^{1}_{w}$ of the form $$u_{0}(x)=e^{i\theta_{0}}\Phi_{\omega_{0}}(x)+z_{0}\Psi(x)+\overline{z}_{0}\Psi^{*}(x)+f_{0}(x),$$ where $\theta_{0}\in\mathbb{R}$, $\omega_{0}>0$, $z_{0}\in\mathbb{C}$ with $$|z_{0}|\leq\epsilon^{1/2},\qquad\|f_{0}\|_{L^{1}_{w}}\leq c\epsilon^{3/2},$$ for some $\epsilon$, $c>0$.
Then, provided $\epsilon$ is small enough, there exist $\omega(t)$, $\gamma(t)$, $z(t)\in C^{1}([0,+\infty))$, solutions of the modulation equations (21)-(23), and two constants $\omega_{\infty}$, $\overline{M}>0$ such that $\displaystyle\omega_{\infty}=\lim_{t\rightarrow+\infty}\omega(t)$ and for all $t\geq 0$ $$u(t,x)=e^{i(\int_{0}^{t}\omega(s)ds+\gamma(t))}\left(\Phi_{\omega(t)}(x)+z(t)\Psi(t,x)+\overline{z(t)}\Psi^{*}(t,x)+f(t,x)\right),$$ where $$|\omega_{\infty}-\omega(t)|\leq\overline{M}\frac{\epsilon}{1+\epsilon t},\quad|z(t)|\leq\overline{M}\left(\frac{\epsilon}{1+\epsilon t}\right)^{1/2},\quad\|f(t)\|_{L^{\infty}_{w^{-1}}}\leq\overline{M}\frac{\epsilon}{1+\epsilon t}.$$ Proof. Let us recall that the decomposition of the function $f$ as $$f=g+h_{1}+k+k_{1}$$ depends on the quantity $\omega(T)$. On the other hand, Corollary IV.10 states that the function $\omega(t)$ converges to some $\omega_{\infty}>0$ as $t\rightarrow+\infty$. As a consequence, one can reformulate the decomposition by choosing $T=+\infty$. Moreover, all the estimates obtained before for finite $T$ can be extended to $T=+\infty$ without modification. This proves the theorem. ∎ The next goal is to construct precise asymptotic expressions for $\omega(t)$, $\gamma(t)$, and $z(t)$. For later convenience let us define (recall that $\xi$ depends explicitly on $\omega$, see (12); similarly for $K$, see (44) and the subsequent discussion, and for $\gamma$) $$\xi_{\infty}=\xi(\omega_{\infty}),$$ $$\gamma_{\infty}=\gamma(\omega_{\infty}),$$ $$K_{\infty}=K(\omega_{\infty}).$$ Lemma V.2. 
Under the assumptions of Theorem V.1, the functions $\omega(t)$, $\gamma(t)$, and $z(t)$ have the following asymptotic behavior as $t\rightarrow+\infty$: $$\omega(t)=\omega_{\infty}+\frac{q_{1}}{1+\epsilon k_{\infty}t}+\frac{q_{2}}{1+\epsilon k_{\infty}t}\cos(2\xi_{\infty}t+a_{1}\log(1+\epsilon k_{\infty}t)+a_{2})+O(t^{-3/2}),$$ $$\gamma(t)=\gamma_{\infty}+b_{1}\log(1+\epsilon k_{\infty}t)+O(t^{-1}),$$ $$z(t)=z_{\infty}\frac{e^{i\int_{0}^{t}\xi(\tau)d\tau}}{(1+\epsilon k_{\infty}t)^{\frac{1-i\delta}{2}}}+O(t^{-1}),$$ where $$z_{\infty}=z_{1}(0)+\int_{0}^{+\infty}e^{-i\int_{0}^{s}\xi(\tau)d\tau}(1+\epsilon k_{\infty}s)^{\frac{1-i\delta}{2}}Z_{1}(s)ds,$$ $k_{\infty}=2\Im(K_{\infty})y_{0}$, $\delta=\frac{\Re(K_{\infty})}{\Im(K_{\infty})}$, and $q_{1}$, $q_{2}$, $a_{1}$, $a_{2}$, $b_{1}$ are constants. Proof. We will prove the asymptotics for $z(t)$ only; the formulas for $\omega(t)$ and $\gamma(t)$ can be deduced as in Sections 6.1 and 6.2 of BS . To this end, let us recall that the equation for $z_{1}(t)$ can be written as $$\dot{z_{1}}=i\xi z_{1}+iK_{\infty}|z_{1}|^{2}z_{1}+\widehat{\widehat{Z}}_{R};$$ moreover, Remark III.13 and the inequalities satisfied by the majorants in Lemma IV.8 justify the following estimates on $\widehat{\widehat{Z}}_{R}$: $$|\widehat{\widehat{Z}}_{R}|\leq\mathcal{R}_{1}(\omega,|z|+\|f\|_{L^{\infty}_{w^{-1}}})[(|z|^{2}+\|f\|_{L^{\infty}_{w^{-1}}})^{2}+|z||\omega_{T}-\omega|(|z|^{2}+\|h\|_{L^{\infty}_{w^{-1}}})+$$ $$+|z|\|P^{c}_{T}k_{1}\|_{L^{\infty}_{w^{-1}}}+|z|\|P^{c}_{T}h_{1}\|_{L^{\infty}_{w^{-1}}}]\leq$$ $$\leq\mathcal{R}(\epsilon^{1/2}M)\frac{\epsilon^{2}}{(1+\epsilon t)^{3/2}\sqrt{\epsilon t}}(1+\overline{M}^{4})=O(t^{-2}),$$ as $t\rightarrow+\infty$. 
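Before estimating further, it may help to record an elementary decomposition of $K_{\infty}$ that underlies the computations below (a routine check, not part of the original text):

```latex
K_{\infty}=\Re(K_{\infty})+i\,\Im(K_{\infty})
          =i\,\Im(K_{\infty})\bigl(1-i\delta\bigr),
\qquad
\delta=\frac{\Re(K_{\infty})}{\Im(K_{\infty})},
```

so that $iK_{\infty}y(0)=-\Im(K_{\infty})\,y(0)\,(1-i\delta)$; this is what turns the cubic term $iK_{\infty}|z_{1}|^{2}z_{1}$ into the damping coefficient with real part $-\Im(K_{\infty})y(0)$ appearing in the equation for $z_{1}$ below.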
On the other hand, Lemma IV.4 implies $$y(t)=\frac{y(0)}{1+2\Im(K_{\infty})y(0)t}+O(t^{-3/2}),\qquad\textrm{as}\;t\rightarrow+\infty.$$ Let us note that $|z_{1}|$ satisfies the same bound as $|z|$, namely $$|z_{1}|\leq\overline{M}\left(\frac{\epsilon}{1+\epsilon t}\right)^{1/2};$$ then the equation for $z_{1}(t)$ can be rewritten in the form $$\dot{z_{1}}=i\xi z_{1}+iK_{\infty}\frac{y(0)}{1+2\Im(K_{\infty})y(0)t}z_{1}+Z_{1},$$ where $Z_{1}=O(t^{-2})$ as $t\rightarrow+\infty$. Since $y(0)=\epsilon y_{0}$, one has $\epsilon K_{\infty}y_{0}=\frac{i}{2}\epsilon k_{\infty}(1-i\delta)$ and the equation for $z_{1}(t)$ becomes $$\dot{z_{1}}=\left(i\xi-\frac{1}{2}\epsilon k_{\infty}(1-i\delta)\frac{1}{1+\epsilon k_{\infty}t}\right)z_{1}+Z_{1}.$$ Hence, one gets $$z_{1}(t)=\frac{e^{i\int_{0}^{t}\xi(\tau)d\tau}}{(1+\epsilon k_{\infty}t)^{\frac{1-i\delta}{2}}}\left(z_{1}(0)+\int_{0}^{t}e^{-i\int_{0}^{s}\xi(\tau)d\tau}(1+\epsilon k_{\infty}s)^{\frac{1-i\delta}{2}}Z_{1}(s)ds\right)=z_{\infty}\frac{e^{i\int_{0}^{t}\xi(\tau)d\tau}}{(1+\epsilon k_{\infty}t)^{\frac{1-i\delta}{2}}}+z_{R},$$ where $z_{\infty}$ is as in the statement of the lemma and $$z_{R}=-\int_{t}^{+\infty}e^{i\int_{s}^{t}\xi(\tau)d\tau}\left(\frac{1+\epsilon k_{\infty}s}{1+\epsilon k_{\infty}t}\right)^{\frac{1-i\delta}{2}}Z_{1}(s)ds.$$ The bound on $Z_{1}$ implies $z_{R}=O(t^{-1})$. Therefore $z(t)$ has the asymptotic behavior as $t\rightarrow+\infty$ stated in the lemma, because $$z(t)=z_{1}(t)+O(t^{-1})=z_{\infty}\frac{e^{i\int_{0}^{t}\xi(\tau)d\tau}}{(1+\epsilon k_{\infty}t)^{\frac{1-i\delta}{2}}}+O(t^{-1}).$$ ∎ V.2 Scattering asymptotics Let us make the following ansatz $$u(t,x)=s(t,x)+\zeta(t,x)+f(t,x),$$ where $$s(t,x)=e^{i\Theta(t)}\Phi_{\omega(t)}(x)$$ is the modulated soliton and $$\zeta(t,x)=e^{i\Theta(t)}[(z(t)+\overline{z}(t))\Psi_{1}(x)+i(z(t)-\overline{z}(t))\Psi_{2}(x)]$$ is the fluctuating component. 
Recall that the functions $\Phi_{\omega}$, $\Psi_{1}$ and $\Psi_{2}$ satisfy $$\omega\Phi_{\omega}=-H_{\alpha}\Phi_{\omega},$$ $$\omega\Psi_{1}=-i\xi\Psi_{2}-H_{\alpha_{1}}\Psi_{1},$$ $$\omega\Psi_{2}=i\xi\Psi_{1}-H_{\alpha_{2}}\Psi_{2}.$$ Therefore from equation (1) one gets $$\left(i\frac{df}{dt},v\right)_{L^{2}}=Q_{0}(f,v)-\nu(|q_{u}|^{2\sigma}q_{u}-|q% _{s}|^{2\sigma}q_{s}-\alpha_{1}q_{(z+\overline{z})\Psi_{1}}-\alpha_{2}q_{(z-% \overline{z})\Psi_{2}})\overline{q_{v}}+$$ $$+(\dot{\gamma}(s+\zeta)-i\dot{\omega}\frac{d}{d\omega}(s+\zeta)-ie^{i\Theta}[(% \dot{z}-i\xi z)(\Psi_{1}+i\Psi_{2})+(\dot{\overline{z}}-i\xi\overline{z})(\Psi% _{1}-i\Psi_{2})],v)_{L^{2}},$$ for all $v\in V$, where $Q_{0}$ is the quadratic form of the free Laplacian. Hence, as in ADFT , the solution $f(t)$ can be formally expressed as $$f(t,x)=U_{t}*f_{0}(x)+i\int_{0}^{t}U_{t-\tau}(x)q_{f}(\tau)d\tau-i\int_{0}^{t}% U_{t-\tau}*G(\tau)d\tau,$$ where we have denoted $$G(t)=\dot{\gamma}(t)(s(t)+\zeta(t))-i\dot{\omega}(t)\frac{d}{d\omega}(s(t)+% \zeta(t))+$$ $$-ie^{i\Theta(t)}[(\dot{z}(t)-i\xi z(t))(\Psi_{1}(t)+i\Psi_{2}(t))+(\dot{% \overline{z}}(t)-i\xi\overline{z}(t))(\Psi_{1}(t)-i\Psi_{2}(t))]$$ and $U_{t}(x)=\frac{e^{i\frac{|x|^{2}}{4t}}}{(4\pi it)^{3/2}}$ is the propagator of the free Laplacian in $\mathbb{R}^{3}$. In order to prove the asymptotic stability result we need the two following lemmas. Lemma V.3. If the assumptions of Theorem V.1 hold true, then $$\int_{0}^{t}U_{t-\tau}(x)q_{f}(\tau)d\tau=U_{t}*\int_{0}^{+\infty}U_{-\tau}(x)% q_{f}(\tau)d\tau-\int_{t}^{+\infty}U_{t-\tau}(x)q_{f}(\tau)d\tau=U_{t}*\phi_{0% }+r_{0},$$ where $\phi_{0}\in L^{2}(\mathbb{R}^{3})$ and $r_{0}=O(t^{-1/4})$ as $t\rightarrow+\infty$ in $L^{2}(\mathbb{R}^{3})$. Proof. 
The strategy is similar to the one exploited in the case without eigenvalues (see the proof of Theorem 7.1 in ADO ): one can write $\phi_{0}(x)=\frac{1}{(4\pi i)^{3/2}}\widetilde{\phi}_{0}\left(\frac{|x|^{2}}{4}\right)$, for some function $\widetilde{\phi}_{0}:\mathbb{R}^{+}\rightarrow\mathbb{C}$, so that $$\|\phi_{0}\|^{2}_{L^{2}}=\frac{1}{(4\pi)^{2}}\int_{0}^{+\infty}\left|\widetilde{\phi}_{0}\left(\frac{r^{2}}{4}\right)\right|^{2}r^{2}dr=\frac{1}{(2\pi)^{2}}\int_{0}^{+\infty}|\widetilde{\phi}_{0}(y)|^{2}\sqrt{y}dy.$$ Hence $\phi_{0}\in L^{2}(\mathbb{R}^{3})$ if and only if $\widetilde{\phi}_{0}\in L^{2}(\mathbb{R}^{+},\sqrt{y}dy)$. On the other hand, one can make the change of variables $u=\frac{1}{\tau}$ in the integral that defines the function $\widetilde{\phi}_{0}$ and get $$\widetilde{\phi}_{0}(y)=\int_{0}^{+\infty}e^{-iyu}\frac{1}{u}q_{f}\left(\frac{1}{u}\right)\sqrt{u}du,$$ so that $\widehat{\widetilde{\phi}_{0}}=\frac{1}{u}q_{f}\left(\frac{1}{u}\right)$. Moreover, by Corollary IV.10 one has $$\|\widehat{\widetilde{\phi}_{0}}\|_{L^{2}}^{2}=\int_{0}^{+\infty}\frac{1}{u^{2}}\left|q_{f}\left(\frac{1}{u}\right)\right|^{2}\sqrt{u}du\leq C\int_{0}^{+\infty}\frac{\sqrt{u}}{(u+\epsilon)^{2}}du\leq C,$$ for some constant $C>0$; hence the Plancherel identity implies $$\widetilde{\phi}_{0}\in L^{2}(\mathbb{R}^{+},\sqrt{y}dy).$$ In the same way, for any $t>0$ the following holds $$\|r_{0}\|_{L^{2}}^{2}=\frac{1}{(2\pi)^{2}}\left\|\frac{1}{u}q_{f}\left(t+\frac{1}{u}\right)\right\|_{L^{2}(\mathbb{R}^{+},\sqrt{u}du)}^{2}\leq C\frac{1}{\sqrt{1+\epsilon t}},$$ for some constant $C>0$ independent of $t$. This concludes the proof. ∎ The analogous result for the integral function $\int_{0}^{t}U_{t-\tau}*G(\tau)d\tau$ requires different tools. Lemma V.4. 
If the assumptions of Theorem V.1 hold true, then $$\int_{0}^{t}U_{t-\tau}*G(\tau)d\tau=U_{t}*\int_{0}^{+\infty}U_{-\tau}*G(\tau)d\tau-U_{t}*\int_{t}^{+\infty}U_{-\tau}*G(\tau)d\tau=U_{t}*\phi_{1}+r_{1},$$ where $\phi_{1}\in L^{2}(\mathbb{R}^{3})$ and $r_{1}=O(t^{-1/2})$ as $t\rightarrow+\infty$ in $L^{2}(\mathbb{R}^{3})$. Proof. We exploit the idea used in KKS to prove Lemma 5.5. Step 1: restriction to the leading terms. From the expansions (29), (30) and (31) for $\dot{\omega}(t)$, $\dot{\gamma}(t)$ and $\dot{z}(t)-i\xi z(t)$ it follows that the function $G(t)$ consists of a quadratic part, namely the terms multiplied by $e^{i\Theta(t)}z_{\infty}^{2}$, $e^{i\Theta(t)}\overline{z_{\infty}}^{2}$ or $e^{i\Theta(t)}|z_{\infty}|^{2}$, with $$z_{\infty}=\frac{e^{i\xi_{\infty}t}}{\sqrt{1+\epsilon k_{\infty}t}}$$ (with a slight abuse of the notation of Lemma V.2), which are of order $t^{-1}$, and a remainder of order $t^{-3/2}$. The convergence and the decay of the contribution of the remainder follow immediately from the unitarity of $U_{t}$. Furthermore, from the explicit definition of $G$ it follows that it is a complex linear combination of functions of the form $$Q(x)=\frac{e^{-\sqrt{\alpha}|x|}}{4\pi|x|},\qquad\alpha=\omega_{\infty},\omega_{\infty}+\nu_{\infty},\omega_{\infty}-\nu_{\infty}.$$ Hence it suffices to prove the lemma for the functions $\Pi(t)Q(x)$, where $\Pi(t)$ is one of $e^{i\Theta(t)}z_{\infty}^{2}$, $e^{i\Theta(t)}\overline{z_{\infty}}^{2}$ and $e^{i\Theta(t)}|z_{\infty}|^{2}$. Step 2: decomposition of $U_{t}*Q$. 
Let us note that we can rewrite the convolution product as follows: $$U_{t}*Q=\frac{e^{i\frac{|x|^{2}}{4t}}}{(4\pi it)^{3/2}}\int_{\mathbb{R}^{3}}e^{-i\frac{(x,y)}{2t}}Q(y)dy+\frac{e^{i\frac{|x|^{2}}{4t}}}{(4\pi it)^{3/2}}\int_{\mathbb{R}^{3}}e^{-i\frac{(x,y)}{2t}}(e^{i\frac{|y|^{2}}{4t}}-1)Q(y)dy=$$ $$=\frac{e^{i\frac{|x|^{2}}{4t}}}{(2it)^{3/2}}\widehat{Q}\left(\frac{x}{2t}\right)+\frac{e^{i\frac{|x|^{2}}{4t}}}{(2it)^{3/2}}\widehat{Q_{t}}\left(\frac{x}{2t}\right),$$ (57) where $Q_{t}(y)=(e^{i\frac{|y|^{2}}{4t}}-1)Q(y)$. Since $|e^{i\theta}-1|\leq|\theta|$ and the function $Q(y)$ is exponentially decaying as $|y|\rightarrow+\infty$, the $L^{2}$ norm of the second term of (57) can be estimated in the following way for any $t>1$ $$\frac{1}{(2t)^{3/2}}\left\|\widehat{Q_{t}}\left(\frac{\cdot}{2t}\right)\right\|_{L^{2}}=\|\widehat{Q_{t}}(\cdot)\|_{L^{2}}\leq\frac{1}{4t}\left(\int_{\mathbb{R}^{3}}|y|^{4}|Q(y)|^{2}dy\right)^{1/2}\leq\frac{C}{t},$$ for some constant $C>0$. Hence, recalling that $|\Pi(\tau)|\leq(1+\epsilon k_{\infty}\tau)^{-1}$, we obtain $$\int_{0}^{+\infty}\Pi(\tau)U_{\tau}*Q_{\tau}d\tau\in L^{2}(\mathbb{R}^{3}),$$ and $$\int_{t}^{+\infty}\Pi(\tau)U_{\tau}*Q_{\tau}d\tau=O(t^{-1}),$$ as $t\rightarrow+\infty$ in $L^{2}(\mathbb{R}^{3})$. Step 3: analysis of the first term in (57) in a particular case. Let us first show how to treat the terms with the phase $\Theta(t)$ replaced by $\omega_{\infty}t$. 
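The integrations by parts performed in the next steps all rest on the following elementary identity for the phase (a standard nonstationary-phase device, recorded here for convenience):

```latex
\frac{d}{d\tau}\,e^{i\left(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau}\right)}
 = i\,\frac{|x|^{2}+4\omega_{\infty}\tau^{2}}{4\tau^{2}}\,
   e^{i\left(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau}\right)},
\qquad\text{i.e.}\qquad
e^{i\left(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau}\right)}
 = \frac{4\tau^{2}}{i\left(|x|^{2}+4\omega_{\infty}\tau^{2}\right)}\,
   \frac{d}{d\tau}\,e^{i\left(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau}\right)} .
```

This explains the factor $\frac{\tau^{7/2}}{(|x|^{2}+4\alpha\tau^{2})(|x|^{2}+4\omega_{\infty}\tau^{2})}$ produced by one integration by parts, and also why the argument degenerates when the phase coefficient $\omega_{\infty}$ is replaced by $\omega_{\infty}-2\xi_{\infty}$, which is negative in the regime considered: the factor $|x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}$ then vanishes at a finite time.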
Note that $$\widehat{Q}(x)=\frac{1}{\alpha+|x|^{2}}.$$ Hence, in the case of the summands with $|z_{\infty}|^{2}$, it suffices to prove the integrability of the function $$I(x)=\int_{0}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\sqrt{\tau}}{(1+\epsilon k_{\infty}\tau)(|x|^{2}+4\alpha\tau^{2})}d\tau=$$ $$=A(x)\int_{0}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\left(\frac{\sqrt{\tau}}{(1+\epsilon k_{\infty}\tau)}-\frac{4\alpha}{\epsilon k_{\infty}}\frac{\tau\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}\right)d\tau+$$ $$+\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}A(x)\int_{0}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}d\tau=I_{1}(x)+I_{2}(x),$$ and the decay of $$I_{t}(x)=\int_{t}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\sqrt{\tau}}{(1+\epsilon k_{\infty}\tau)(|x|^{2}+4\alpha\tau^{2})}d\tau=$$ $$=A(x)\int_{t}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\left(\frac{\sqrt{\tau}}{(1+\epsilon k_{\infty}\tau)}-\frac{4\alpha}{\epsilon k_{\infty}}\frac{\tau\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}\right)d\tau+$$ $$+\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}A(x)\int_{t}^{\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}d\tau=I_{1,t}(x)+I_{2,t}(x),$$ where $A(x)=\frac{\epsilon^{2}k_{\infty}^{2}}{4\alpha+\epsilon^{2}k_{\infty}^{2}|x|^{2}}.$ For the function $I_{2}(x)$ one has $$|I_{2}(x)|\leq\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}A(x)\int_{0}^{\infty}\frac{\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}d\tau=C\frac{A(x)}{\sqrt{|x|}}\in L^{2}(\mathbb{R}^{3}).$$ With the same estimate it is straightforward to prove that $$I_{2,t}(x)=O(t^{-1/2})$$ as $t\rightarrow+\infty$, in $L^{2}(\mathbb{R}^{3})$. 
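The splitting $I=I_{1}+I_{2}$ used above rests on a partial-fraction identity which can be verified directly (with $u=\epsilon k_{\infty}\tau$, $D=|x|^{2}+4\alpha\tau^{2}$, and $A(x)$ as above; a routine check, not part of the original text):

```latex
A(x)\left[\frac{1}{1+u}+\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}\,\frac{1-u}{D}\right]
 = A(x)\,\frac{D+\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}\left(1-u^{2}\right)}{(1+u)\,D}
 = A(x)\,\frac{|x|^{2}+\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}}{(1+u)\,D}
 = \frac{1}{(1+\epsilon k_{\infty}\tau)\left(|x|^{2}+4\alpha\tau^{2}\right)} .
```

The middle equality uses $u^{2}=\epsilon^{2}k_{\infty}^{2}\tau^{2}$, so that $\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}(1-u^{2})=\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}-4\alpha\tau^{2}$ cancels the $4\alpha\tau^{2}$ in $D$, and the last one uses the definition of $A(x)$. Multiplying by $e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\sqrt{\tau}$ and integrating reproduces the decomposition, since $\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}\frac{(1-u)\sqrt{\tau}}{D}=\frac{4\alpha}{\epsilon^{2}k_{\infty}^{2}}\frac{\sqrt{\tau}}{D}-\frac{4\alpha}{\epsilon k_{\infty}}\frac{\tau\sqrt{\tau}}{D}$.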
In order to treat $I_{1}$, note that $$\frac{\sqrt{\tau}}{(1+\epsilon k_{\infty}\tau)}-\frac{4\alpha}{\epsilon k_{\infty}}\frac{\tau\sqrt{\tau}}{(|x|^{2}+4\alpha\tau^{2})}=-\frac{1}{\epsilon k_{\infty}\sqrt{\tau}(1+\epsilon k_{\infty}\tau)}+\frac{|x|^{2}}{\epsilon k_{\infty}\sqrt{\tau}(|x|^{2}+4\alpha\tau^{2})}.$$ Since $\frac{1}{\epsilon k_{\infty}\sqrt{\tau}(1+\epsilon k_{\infty}\tau)}=O(\tau^{-3/2})$ as $\tau\rightarrow+\infty$ and is integrable on $(0,+\infty)$, and $A(x)\in L^{2}(\mathbb{R}^{3})$, one has to prove $$\frac{|x|^{2}A(x)}{\epsilon k_{\infty}}\int_{0}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{1}{\sqrt{\tau}(|x|^{2}+4\alpha\tau^{2})}d\tau=$$ $$=A(x)\int_{0}^{+\infty}e^{i\omega_{\infty}(\tau-\frac{|x|^{2}}{4\omega_{\infty}\tau})}\frac{1}{\sqrt{\tau}}d\tau-4\alpha A(x)\int_{0}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{(|x|^{2}+4\alpha\tau^{2})}d\tau\in L^{2}(\mathbb{R}^{3}).$$ From formulas 3.871.3 and 3.871.4 in tavole one has $$A(x)\int_{0}^{+\infty}e^{i\omega_{\infty}(\tau-\frac{|x|^{2}}{4\omega_{\infty}\tau})}\frac{1}{\sqrt{\tau}}d\tau=\frac{e^{i\pi/4}}{\sqrt{\pi\omega_{\infty}}}A(x)|x|^{3/2}e^{-\sqrt{\omega_{\infty}}|x|}\in L^{2}(\mathbb{R}^{3}).$$ It remains to handle the second integral in the sum above, which can be done by integrating by parts in the following way: $$\left|A(x)\int_{0}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{(|x|^{2}+4\alpha\tau^{2})}d\tau\right|=$$ $$=4A(x)\left|\int_{0}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{d}{d\tau}\left[\frac{\tau^{7/2}}{(|x|^{2}+4\alpha\tau^{2})(|x|^{2}+4\omega_{\infty}\tau^{2})}\right]d\tau\right|\leq$$ $$\leq CA(x)\int_{0}^{+\infty}\frac{\tau^{5/2}}{(|x|^{2}+4\min\{\alpha,\omega_{\infty}\}\tau^{2})^{2}}d\tau\leq C\frac{A(x)}{\sqrt{|x|}}\in L^{2}(\mathbb{R}^{3}).$$ Then we are done. 
In order to estimate the decay of $I_{1,t}$ it suffices to study the decay of $$\frac{|x|^{2}A(x)}{\epsilon k_{\infty}}\int_{t}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{1}{\sqrt{\tau}(|x|^{2}+4\alpha\tau^{2})}d\tau=$$ $$=A(x)\int_{t}^{+\infty}e^{i\omega_{\infty}(\tau-\frac{|x|^{2}}{4\omega_{\infty}\tau})}\frac{1}{\sqrt{\tau}}d\tau-4\alpha A(x)\int_{t}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{(|x|^{2}+4\alpha\tau^{2})}d\tau,$$ which can be done by integrating by parts as before. Let us do that for the second term (the computations for the first one are analogous and simpler): $$\left|A(x)\int_{t}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{(|x|^{2}+4\alpha\tau^{2})}d\tau\right|\leq$$ $$\leq 4A(x)\left|\int_{t}^{+\infty}e^{i(\omega_{\infty}\tau-\frac{|x|^{2}}{4\tau})}\frac{d}{d\tau}\left[\frac{\tau^{7/2}}{(|x|^{2}+4\alpha\tau^{2})(|x|^{2}+4\omega_{\infty}\tau^{2})}\right]d\tau\right|+CA(x)t^{-1/2}\leq$$ $$\leq CA(x)\left[t^{-1/2}+\int_{t}^{+\infty}\frac{\tau^{5/2}}{(|x|^{2}+4\min\{\alpha,\omega_{\infty}\}\tau^{2})^{2}}d\tau\right]\leq$$ $$\leq CA(x)\left(1+\frac{1}{\sqrt{|x|}}\right)t^{-1/2},$$ where the boundary term produced by the integration by parts at $\tau=t$ accounts for the $t^{-1/2}$ contribution. The case of the summands with $z_{\infty}^{2}$ is analogous, while the case of $\overline{z_{\infty}}^{2}$ is more difficult because $|x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}=0$ for $$\tau=t^{*}=\frac{|x|}{2\sqrt{2\xi_{\infty}-\omega_{\infty}}}.$$ Let $g:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ be a continuous function with the properties $$0<g(t^{*})<t^{*}\quad\forall t^{*}>0,\qquad\textrm{and}\qquad A(x)g(t^{*})\in L^{2}(\mathbb{R}^{3}).$$ It follows that $g(t^{*})=O(t^{*})=O(|x|)$ as $|x|\rightarrow+\infty$. Hence, one can represent $(0,+\infty)=(0,t^{*}-g(t^{*})]\cup(t^{*}-g(t^{*}),t^{*}+g(t^{*})]\cup(t^{*}+g(t^{*}),+\infty)$. 
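In the estimates on these three regions it is convenient to keep in mind the factorization of the degenerate denominator (an elementary remark, not part of the original text; here $a=2\sqrt{2\xi_{\infty}-\omega_{\infty}}$, so that $t^{*}=|x|/a$):

```latex
\bigl||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}\bigr|
 = \bigl||x|^{2}-a^{2}\tau^{2}\bigr|
 = \left(|x|+a\tau\right)\bigl||x|-a\tau\bigr|,
\qquad a=2\sqrt{2\xi_{\infty}-\omega_{\infty}} ,
```

which vanishes linearly at $\tau=t^{*}$; the function $g$ is introduced precisely to excise a window around this zero, inside which the oscillatory factor is not exploited and the integrand is simply bounded by $C/\sqrt{\tau}$.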
Integrating by parts once more, one has $$\left|A(x)\int_{0}^{t^{*}-g(t^{*})}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)\left((t^{*}-g(t^{*}))^{-1/2}+\int_{0}^{t^{*}-g(t^{*})}\frac{\tau^{5/2}}{(|x|^{2}+4\alpha\tau^{2})||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|}d\tau+\right.$$ $$\left.+\int_{0}^{t^{*}-g(t^{*})}\frac{\tau^{9/2}}{(|x|^{2}+4\alpha\tau^{2})^{2}||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|}d\tau+\right.$$ $$\left.+\int_{0}^{t^{*}-g(t^{*})}\frac{\tau^{9/2}}{(|x|^{2}+4\alpha\tau^{2})||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|^{2}}d\tau\right)\leq$$ $$\leq CA(x)((t^{*}-g(t^{*}))^{-1/2}+(t^{*}-g(t^{*}))^{3/8})\in L^{2}(\mathbb{R}^{3}),$$ where the last inequality follows from formula 3.194.1 in tavole . In the same way (exploiting formula 3.194.2 instead of 3.194.1 in tavole ), one has $$\left|A(x)\int_{t^{*}+g(t^{*})}^{\infty}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)((t^{*}+g(t^{*}))^{-1/8}+(t^{*}+g(t^{*}))^{-9/8}+(t^{*}-g(t^{*}))^{-1/2})\in L^{2}(\mathbb{R}^{3}).$$ Finally, $$\left|A(x)\int_{t^{*}-g(t^{*})}^{t^{*}+g(t^{*})}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)\int_{t^{*}-g(t^{*})}^{t^{*}+g(t^{*})}\frac{1}{\sqrt{\tau}}d\tau\leq C\frac{A(x)g(t^{*})}{\sqrt{t^{*}-g(t^{*})}}\in L^{2}(\mathbb{R}^{3}).$$ Summing up, the integrability of the integral function $$\int_{0}^{+\infty}\Pi(\tau)U_{\tau}*Qd\tau$$ is established. 
It remains to study the decay of $$A(x)\int_{t}^{+\infty}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau.$$ First of all, let us note that, integrating by parts, one obtains $$\left|A(x)\int_{(0,t^{*}-g(t^{*})]\cap[t,+\infty)}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)\int_{t}^{t^{*}-g(t^{*})}\left|\frac{d}{d\tau}\frac{\tau^{7/2}}{(|x|^{2}+4\alpha\tau^{2})(|x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2})}\right|d\tau\leq$$ $$\leq CA(x)\left(t^{-1/2}+\int_{t}^{t^{*}-g(t^{*})}\frac{\sqrt{\tau}}{|x|^{2}+4\alpha\tau^{2}}d\tau+\int_{t}^{t^{*}-g(t^{*})}\frac{\sqrt{\tau}}{||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|}d\tau+\right.$$ $$\left.+\int_{t}^{t^{*}-g(t^{*})}\frac{\tau^{9/2}}{(|x|^{2}+4\alpha\tau^{2})||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|^{2}}d\tau\right).$$ The three integrals in the last inequality can be estimated in the following way: (i) $\int_{t}^{t^{*}-g(t^{*})}\frac{\sqrt{\tau}}{|x|^{2}+4\alpha\tau^{2}}d\tau\leq C\int_{t}^{+\infty}\tau^{-3/2}d\tau\leq Ct^{-1/2}$; (ii) $\int_{t}^{t^{*}-g(t^{*})}\frac{\sqrt{\tau}}{||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|}d\tau=\int_{t}^{t^{*}-g(t^{*})}\frac{\sqrt{\tau}}{(|x|+2\sqrt{2\xi_{\infty}-\omega_{\infty}}\tau)||x|-2\sqrt{2\xi_{\infty}-\omega_{\infty}}\tau|}d\tau$ $\leq Ct^{-1/2}\int_{0}^{t^{*}-g(t^{*})}\frac{1}{||x|-2\sqrt{2\xi_{\infty}-\omega_{\infty}}\tau|}d\tau\leq Ct^{-1/2}$; (iii) $\int_{t}^{t^{*}-g(t^{*})}\frac{\tau^{9/2}}{(|x|^{2}+4\alpha\tau^{2})||x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2}|^{2}}d\tau\leq Ct^{-1/2}\int_{0}^{t^{*}-g(t^{*})}\frac{\tau}{||x|-2\sqrt{2\xi_{\infty}-\omega_{\infty}}\tau|^{2}}d\tau$ $\leq Ct^{-1/2}\left(1+\left|\ln\bigl||x|-2\sqrt{2\xi_{\infty}-\omega_{\infty}}(t^{*}-g(t^{*}))\bigr|\right|\right)$. 
Hence, since $A(x)\ln\bigl||x|-2\sqrt{2\xi_{\infty}-\omega_{\infty}}(t^{*}-g(t^{*}))\bigr|\in L^{2}(\mathbb{R}^{3})$, one can conclude $$A(x)\int_{(0,t^{*}-g(t^{*})]\cap[t,+\infty)}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau=O(t^{-1/2})$$ as $t\rightarrow+\infty$, in $L^{2}(\mathbb{R}^{3})$. Let us now observe that $$\left|A(x)\int_{(t^{*}+g(t^{*}),+\infty)\cap[t,+\infty)}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)\int_{t}^{+\infty}\left|\frac{d}{d\tau}\frac{\tau^{7/2}}{(|x|^{2}+4\alpha\tau^{2})(|x|^{2}+4(\omega_{\infty}-2\xi_{\infty})\tau^{2})}\right|d\tau\leq$$ $$\leq B(x)A(x)\left(t^{-1/2}+\int_{t}^{+\infty}\frac{\sqrt{\tau}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right)\leq CB(x)A(x)t^{-1/2}\in L^{2}(\mathbb{R}^{3}),$$ where $B:\mathbb{R}^{3}\rightarrow\mathbb{R}^{+}$ is a continuous bounded function. Finally, $$\left|A(x)\int_{(t^{*}-g(t^{*}),t^{*}+g(t^{*})]\cap[t,+\infty)}e^{i((\omega_{\infty}-2\xi_{\infty})\tau-\frac{|x|^{2}}{4\tau})}\frac{\tau^{3/2}}{|x|^{2}+4\alpha\tau^{2}}d\tau\right|\leq$$ $$\leq CA(x)\int_{t}^{t^{*}+g(t^{*})}\frac{1}{\sqrt{\tau}}d\tau\leq CA(x)g(t^{*})t^{-1/2}\in L^{2}(\mathbb{R}^{3}).$$ Summing up, thanks to the unitarity of $U_{t}$, we have proved that $$U_{t}*\int_{t}^{+\infty}\Pi(\tau)U_{\tau}*Qd\tau=O(t^{-1/2}),$$ as $t\rightarrow+\infty$, in $L^{2}(\mathbb{R}^{3})$. Step 4: conclusion of the proof. The conclusions of the previous step hold true if the phase $\omega_{\infty}t$ is replaced by $\Theta(t)$. In fact, the estimates which involve the integral of the absolute value are totally unaffected by the change of phase, so it only remains to adjust the argument involving integration by parts. This can be done integrating by parts exactly as before, which leaves a factor $e^{i(\Theta(t)-\omega_{\infty}t)}$ in the integrand. 
Then, the boundary terms can be treated in the same way because $|e^{i(\Theta(t)-\omega_{\infty}t)}|=1$. Finally, the extra contribution to the integrand can be estimated as is done for the summand arising from the differentiation of $\tau^{7/2}$, since $|\dot{\Theta}(t)-\omega_{\infty}|\leq\frac{C}{1+\epsilon k_{\infty}t}$ for all $t>0$, where $C$ is a positive constant. ∎ Summing up, we have proved the following asymptotic stability result. Theorem V.5. Let $\sigma\in\left(\frac{1}{\sqrt{2}},\sigma^{*}\right)$, for a certain $\sigma^{*}\in\left(\frac{1}{\sqrt{2}},\frac{\sqrt{3}+1}{2\sqrt{2}}\right]$, and let $u(t)\in C(\mathbb{R}^{+},V)$ be a solution of equation (1) with $$u(0)=u_{0}=e^{i\gamma_{0}}\Phi_{\omega_{0}}+e^{i\gamma_{0}}[(z_{0}+\overline{z_{0}})\Psi_{1}+i(z_{0}-\overline{z_{0}})\Psi_{2}]+f_{0}\in V\cap L^{1}_{w}(\mathbb{R}^{3}),$$ for some $\omega_{0}>0$, $\gamma_{0}\in\mathbb{R}$, $z_{0}\in\mathbb{C}$ and $f_{0}\in L^{2}(\mathbb{R}^{3})\cap L^{1}_{w}(\mathbb{R}^{3})$. Furthermore, assume that the initial datum $u_{0}$ is close to a solitary wave, i.e. $$|z_{0}|\leq\epsilon^{1/2}\qquad\textrm{and}\qquad\|f_{0}\|_{L^{1}_{w}}\leq c\epsilon^{3/2},$$ where $c$, $\epsilon>0$. Then, if $\epsilon$ is sufficiently small, the solution $u(t)$ can be asymptotically decomposed as follows $$u(t)=e^{i\omega_{\infty}t+ib_{1}\log(1+\epsilon k_{\infty}t)}\Phi_{\omega_{\infty}}+U_{t}*\phi_{\infty}+r_{\infty}(t),\quad\textrm{as}\;\;t\rightarrow+\infty,$$ where $\omega_{\infty}$, $\epsilon k_{\infty}>0$, $b_{1}\in\mathbb{R}$ and $\phi_{\infty}$, $r_{\infty}(t)\in L^{2}(\mathbb{R}^{3})$ with $$\|r_{\infty}(t)\|_{L^{2}}=O(t^{-1/4})\quad\textrm{as}\;\;t\rightarrow+\infty.$$ Remark V.6. Numerical evidence (see Lemma III.11) suggests $\sigma^{*}=\frac{\sqrt{3}+1}{2\sqrt{2}}\simeq 0.96$. VI Appendices In the following appendices we collect auxiliary material on the model under study. 
In Appendices A and B we recall results from ADO regarding the resolvent, the spectrum, and the dispersive behavior of the linearization operator $L$ (see (11)), to keep the present paper self-contained. In the subsequent Appendices C, D, and E we state and prove further details regarding the spectral properties of the linearized operators, in particular the structure of the eigenfunctions associated to the discrete spectrum, the structure of the generalized eigenvectors, and the proof of Lemma II.8. VI.1 Resolvent and spectrum of the linearization We denote $$G_{\omega\pm i\lambda}(x)=\frac{e^{i\sqrt{-\omega\mp i\lambda}|x|}}{4\pi|x|}\qquad\omega>0,\lambda\in\mathbb{C},$$ (58) with the prescription $\Im{\sqrt{-\omega\pm i\lambda}}>0$. Furthermore, we make use of the notation $\langle g,h\rangle:=\int_{\mathbb{R}^{3}}g(x)h(x)\,dx$. The resolvent of the linearized operator $L$ is described in the following Theorem VI.1. The resolvent $R(\lambda)=(L-\lambda I)^{-1}$ of the operator $L$ is given by $$R(\lambda)=\left[\begin{array}[]{cc}-\lambda\mathcal{G}_{\lambda^{2}}*&-\Gamma_{\lambda^{2}}*\\ \Gamma_{\lambda^{2}}*&-\lambda\mathcal{G}_{\lambda^{2}}*\\ \end{array}\right]+\frac{4\pi}{W(\lambda^{2})}i\left[\begin{array}[]{cc}\Lambda_{1}&i\Sigma_{2}\\ -i\Sigma_{1}&\Lambda_{2}\\ \end{array}\right],$$ (59) where $$W(\lambda^{2})=32\pi^{2}\alpha_{1}\alpha_{2}-4i\pi(\alpha_{1}+\alpha_{2})\left(\sqrt{-\omega+i\lambda}+\sqrt{-\omega-i\lambda}\right)-2\sqrt{-\omega+i\lambda}\sqrt{-\omega-i\lambda},$$ and formula (59) holds for all $\lambda\in\mathbb{C}\setminus\{\lambda\in\mathbb{C}:\;W(\lambda^{2})=0,\;\;\textrm{or}\;\;\Re(\lambda)=0\;\textrm{and}\;|\Im(\lambda)|\geq\omega\}$. 
Furthermore, the symbol $*$ in (59) denotes the convolution and $$\mathcal{G}_{\lambda^{2}}(x)=\frac{1}{2i\lambda}\left(G_{\omega-i\lambda}(x)-G% _{\omega+i\lambda}(x)\right),\quad\Gamma_{\lambda^{2}}(x)=\frac{1}{2}\left(G_{% \omega-i\lambda}(x)+G_{\omega+i\lambda}(x)\right).$$ Finally, the entries of the second matrix are finite rank operators whose action on $f\in L^{2}(\mathbb{R}^{3})$ reads $$\Lambda_{1}f=[i\lambda(4\pi\alpha_{2}-i\sqrt{-\omega+i\lambda})\langle\mathcal% {G}_{\lambda^{2}},f\rangle-(4\pi\alpha_{1}-i\sqrt{-\omega+i\lambda})\langle% \Gamma_{\lambda^{2}},f\rangle]G_{\omega+i\lambda}+$$ (60) $$+[i\lambda(4\pi\alpha_{2}-i\sqrt{-\omega-i\lambda})\langle\mathcal{G}_{\lambda% ^{2}},f\rangle+(4\pi\alpha_{1}-i\sqrt{-\omega-i\lambda})\langle\Gamma_{\lambda% ^{2}},f\rangle]G_{\omega-i\lambda},$$ $$\Lambda_{2}f=[i\lambda(4\pi\alpha_{1}-i\sqrt{-\omega+i\lambda})\langle\mathcal% {G}_{\lambda^{2}},f\rangle-(4\pi\alpha_{2}-i\sqrt{-\omega+i\lambda})\langle% \Gamma_{\lambda^{2}},f\rangle]G_{\omega+i\lambda}+$$ $$+[i\lambda(4\pi\alpha_{1}-i\sqrt{-\omega-i\lambda})\langle\mathcal{G}_{\lambda% ^{2}},f\rangle+(4\pi\alpha_{2}-i\sqrt{-\omega-i\lambda})\langle\Gamma_{\lambda% ^{2}},f\rangle]G_{\omega-i\lambda},$$ $$\Sigma_{1}f=-[i\lambda(4\pi\alpha_{2}-i\sqrt{-\omega+i\lambda})\langle\mathcal% {G}_{\lambda^{2}},f\rangle-(4\pi\alpha_{1}-i\sqrt{-\omega+i\lambda})\langle% \Gamma_{\lambda^{2}},f\rangle]G_{\omega+i\lambda}+$$ $$+[i\lambda(4\pi\alpha_{2}-i\sqrt{-\omega-i\lambda})\langle\mathcal{G}_{\lambda% ^{2}},f\rangle+(4\pi\alpha_{1}-i\sqrt{-\omega-i\lambda})\langle\Gamma_{\lambda% ^{2}},f\rangle]G_{\omega-i\lambda},$$ $$\Sigma_{2}f=-[i\lambda(4\pi\alpha_{1}-i\sqrt{-\omega+i\lambda})\langle\mathcal% {G}_{\lambda^{2}},f\rangle-(4\pi\alpha_{2}-i\sqrt{-\omega+i\lambda})\langle% \Gamma_{\lambda^{2}},f\rangle]G_{\omega+i\lambda}+$$ $$+[i\lambda(4\pi\alpha_{1}-i\sqrt{-\omega-i\lambda})\langle\mathcal{G}_{\lambda% 
^{2}},f\rangle+(4\pi\alpha_{2}-i\sqrt{-\omega-i\lambda})\langle\Gamma_{\lambda^{2}},f\rangle]G_{\omega-i\lambda}.$$ The resolvent also determines the spectrum of the operator $L$, as given in the following Proposition VI.2. The spectrum of the linearized operator $L$ has the following structure: (a) $\sigma_{ess}(L)=\{\lambda\in\mathbb{C}:\;\Re(\lambda)=0\,\text{and}\,|\Im(\lambda)|\geq\omega\}$. (b) If $\sigma\in(0,1/\sqrt{2})$, the only eigenvalue of $L$ is $0$, with algebraic multiplicity $2$. (c) If $\sigma=1/\sqrt{2}$, $L$ has resonances $\pm i\omega$ at the border of the essential spectrum and the eigenvalue $0$ with algebraic multiplicity $2$. (d) If $\sigma\in(1/\sqrt{2},1)$, $L$ has two simple eigenvalues $\pm i\xi=\pm i2\sigma\sqrt{1-\sigma^{2}}\omega$ and the eigenvalue $0$ with algebraic multiplicity $2$. (e) If $\sigma=1$, the only eigenvalue of $L$ is $0$, with algebraic multiplicity $4$. (f) If $\sigma\in(1,+\infty)$, $L$ has two simple eigenvalues $\pm 2\sigma\sqrt{\sigma^{2}-1}\omega$ and the eigenvalue $0$ with algebraic multiplicity $2$. VI.2 Dispersive estimates We recall that the propagator $e^{-tL}$ is the inverse Laplace transform of the resolvent. Hence the dispersive behavior associated to the linearized dynamics projected on the continuous spectrum is controlled by the following result, proved in ADO , Theorem 4.8. Theorem VI.3. Let $\sigma\neq\frac{1}{\sqrt{2}}$. 
There exists a constant $C>0$ such that $$\left|\frac{1}{2\pi i}\int_{\mathbb{R}^{3}}\int_{\mathcal{C}_{+}\cup\mathcal{C}_{-}}(R(\lambda+0)-R(\lambda-0))(x)e^{-\lambda t}f(y)\,d\lambda dy\right|\leq C\left(1+\frac{1}{|x|}\right)t^{-\frac{3}{2}}\int_{\mathbb{R}^{3}}\left(1+\frac{1}{|y|}\right)|f(y)|dy$$ for any $f\in L^{1}_{w}(\mathbb{R}^{3})$, where $$\mathcal{C}_{+}=\{\lambda\in\mathbb{C}:\;\Re(\lambda)=0\,\textrm{and}\,\ \Im(\lambda)\geq\omega\},\ \ \ \mathcal{C}_{-}=\{\lambda\in\mathbb{C}:\;\Re(\lambda)=0\,\textrm{and}\,\ \Im(\lambda)\leq-\omega\}\ .$$ VI.3 Eigenfunctions associated to $\pm i\xi$ and generalized eigenfunctions VI.3.1 The eigenfunctions associated to $\pm i\xi$ Here we describe the eigenspaces associated to the simple purely imaginary eigenvalues $\pm i\xi=\pm i2\sigma\sqrt{1-\sigma^{2}}\omega$. Let us start with the eigenvalue $i\xi$. The following proposition holds true. Proposition VI.4. The eigenspace associated to $i\xi$ is spanned by $$\Psi(x)=\left(\begin{array}[]{cc}\Psi_{1}(x)\\ \Psi_{2}(x)\end{array}\right)=\frac{e^{-\sqrt{\omega-\xi}|x|}}{4\pi|x|}\left(\begin{array}[]{cc}1\\ i\end{array}\right)-\frac{\sqrt{1-\sigma^{2}}+1}{\sigma}\,\frac{e^{-\sqrt{\omega+\xi}|x|}}{4\pi|x|}\left(\begin{array}[]{cc}1\\ -i\end{array}\right).$$ Proof. In order to prove the proposition we need to solve the equation $$L\Psi=i\xi\Psi$$ in $D(L)$. 
For $x\neq 0$, the previous equation is equivalent to the system $$\left\{\begin{array}[]{ll}(-\triangle+\omega)^{2}\Psi_{1}-\xi^{2}\Psi_{1}=0\\ \Psi_{2}=\frac{i}{\xi}(-\triangle+\omega)\Psi_{1}\end{array}\right.,$$ from which it follows that $\Psi_{1}$ must belong to $L^{2}(\mathbb{R}^{3})$ and solve the equation $$(-\triangle+\omega-\xi)(-\triangle+\omega+\xi)\Psi_{1}=0.$$ Hence, the solutions in $L^{2}(\mathbb{R}^{3})$ are of the form $$\left\{\begin{array}[]{ll}\Psi_{1}(x)=A\frac{e^{-\sqrt{\omega-\xi}|x|}}{4\pi|x|}+B\frac{e^{-\sqrt{\omega+\xi}|x|}}{4\pi|x|}\\ \Psi_{2}(x)=iA\frac{e^{-\sqrt{\omega-\xi}|x|}}{4\pi|x|}-iB\frac{e^{-\sqrt{\omega+\xi}|x|}}{4\pi|x|}\end{array}\right.,$$ for any $A$, $B\in\mathbb{C}$. It remains to find $A$, $B\in\mathbb{C}$ such that $\Psi_{i}\in D(L_{i})$ for $i=1$, $2$, i.e. $$\left\{\begin{array}[]{ll}-\frac{\sqrt{\omega-\xi}}{4\pi}A-\frac{\sqrt{\omega+\xi}}{4\pi}B=-(2\sigma+1)\frac{\sqrt{\omega}}{4\pi}(A+B)\\ -i\frac{\sqrt{\omega-\xi}}{4\pi}A+i\frac{\sqrt{\omega+\xi}}{4\pi}B=-\frac{\sqrt{\omega}}{4\pi}(iA-iB)\end{array}\right..$$ Exploiting the fact that $\xi=2\sigma\sqrt{1-\sigma^{2}}\omega$, one can show that the two equations of the previous system are linearly dependent and $$B=-\frac{\sqrt{1-\sigma^{2}}+1}{\sigma}A.$$ The thesis follows by setting $A=1$. ∎ Let us note that in the previous proof we have chosen the constants in such a way that $\Psi_{1}(x)\in\mathbb{R}$ and $\Psi_{2}(x)\in i\mathbb{R}$ for any $x\in\mathbb{R}^{3}\setminus\{0\}$. This fact will be used to prove the next proposition. Proposition VI.5. The eigenspace associated to $-i\xi$ is spanned by $$\Psi^{*}=\left(\begin{array}[]{cc}\Psi_{1}\\ -\Psi_{2}\end{array}\right).$$ Proof. In the previous proposition we proved that $$\left\{\begin{array}[]{ll}L_{2}\Psi_{2}=i\xi\Psi_{1}\\ -L_{1}\Psi_{1}=i\xi\Psi_{2}\end{array}\right.,$$ with $\Psi_{1}$ real and $\Psi_{2}$ purely imaginary. 
Taking the conjugate of both equations and recalling that the operators $L_{i}$, $i=1$, $2$, act on the real and imaginary parts separately, one has $$\left\{\begin{array}{l}L_{2}(-\Psi_{2})=-i\xi\Psi_{1}\\ -L_{1}\Psi_{1}=-i\xi(-\Psi_{2})\end{array}\right.,$$ which is equivalent to $$L\Psi^{*}=-i\xi\Psi^{*},$$ because the operators $L_{i}$, $i=1$, $2$, are linear. The proof is complete. ∎ VI.3.2 The generalized eigenfunctions Our goal is to compute the generalized eigenfunctions associated to the continuous spectrum. In order to do that, we treat the two branches $\mathcal{C}_{+}$ and $\mathcal{C}_{-}$ of the continuous spectrum separately. Proposition VI.6. The generalized eigenfunctions associated to $\mathcal{C}_{+}$ are $$\Psi_{+}(x)=A\frac{e^{-\sqrt{\omega+\eta}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ -i\end{array}\right)+C\frac{e^{-i\sqrt{\eta-\omega}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right)+D\frac{e^{i\sqrt{\eta-\omega}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right),$$ for any $\eta\in[\omega,+\infty)$ and $D\in\mathbb{C}$, with $$\begin{array}{l}A=\frac{\sigma\sqrt{\omega}}{\sqrt{\omega+\eta}-(\sigma+1)\sqrt{\omega}}(C+D),\\ C=\frac{(2\sigma+1)\omega+(\sigma+1)\sqrt{\omega}(i\sqrt{\eta-\omega}-\sqrt{\eta+\omega})-i\sqrt{\eta^{2}-\omega^{2}}}{-(2\sigma+1)\omega+(\sigma+1)\sqrt{\omega}(i\sqrt{\eta-\omega}+\sqrt{\eta+\omega})-i\sqrt{\eta^{2}-\omega^{2}}}D.\end{array}$$ Proof. For any $\eta\in[\omega,+\infty)$, we need to solve the system $$L\Psi_{+}=i\eta\Psi_{+},$$ where $\Psi_{+}\in L^{\infty}(\mathbb{R}^{3})$ does not necessarily belong to $L^{2}(\mathbb{R}^{3})$.
As in the computation of the eigenfunctions at $\pm i\xi$, if $x\neq 0$ the former equation is equivalent to the system $$\left\{\begin{array}{l}(-\triangle+\omega-\xi)(-\triangle+\omega+\xi)(\Psi_{+})_{1}=0\\ (\Psi_{+})_{2}=\frac{i}{\xi}(-\triangle+\omega)(\Psi_{+})_{1}\end{array}\right.,$$ which leads to $$\Psi_{+}(x)=A\frac{e^{-\sqrt{\omega+\eta}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ -i\end{array}\right)+B\frac{e^{\sqrt{\omega+\eta}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right)+C\frac{e^{-i\sqrt{\eta-\omega}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right)+D\frac{e^{i\sqrt{\eta-\omega}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right),$$ for some $A$, $B$, $C$, $D\in\mathbb{C}$. Since we require $\Psi_{+}\in L^{\infty}(\mathbb{R}^{3})$, we get $B=0$. Moreover, the boundary conditions in the domains of the operators $L_{1}$ and $L_{2}$ must be satisfied by $(\Psi_{+})_{1}$ and $(\Psi_{+})_{2}$ respectively. Then $A$, $C$, and $D$ solve the system $$\left\{\begin{array}{l}-\frac{\sqrt{\omega+\eta}}{4\pi}A-i\frac{\sqrt{\eta-\omega}}{4\pi}C+i\frac{\sqrt{\eta-\omega}}{4\pi}D=-\frac{(2\sigma+1)\sqrt{\omega}}{4\pi}(A+C+D)\\ i\frac{\sqrt{\omega+\eta}}{4\pi}A+\frac{\sqrt{\eta-\omega}}{4\pi}C-\frac{\sqrt{\eta-\omega}}{4\pi}D=-\frac{\sqrt{\omega}}{4\pi}(-iA+iC+iD)\end{array}\right.,$$ which concludes the proof. ∎ In the same way, one can prove the analogous result for $\mathcal{C}_{-}$. Proposition VI.7.
The generalized eigenfunctions associated to $\mathcal{C}_{-}$ are $$\Psi_{-}(x)=A\frac{e^{-\sqrt{\omega-\eta}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ i\end{array}\right)+C\frac{e^{-i\sqrt{-(\eta+\omega)}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ -i\end{array}\right)+D\frac{e^{i\sqrt{-(\eta+\omega)}|x|}}{4\pi|x|}\left(\begin{array}{c}1\\ -i\end{array}\right),$$ for any $\eta\in(-\infty,-\omega]$, where $D\in\mathbb{C}$ and $$\begin{array}{l}A=\frac{\sigma\sqrt{\omega}}{\sqrt{\omega-\eta}-(\sigma+1)\sqrt{\omega}}(C+D),\\ C=\frac{(2\sigma+1)\omega+(\sigma+1)\sqrt{\omega}(i\sqrt{-(\eta+\omega)}-\sqrt{\omega-\eta})-i\sqrt{\eta^{2}-\omega^{2}}}{-(2\sigma+1)\omega+(\sigma+1)\sqrt{\omega}(i\sqrt{-(\eta+\omega)}+\sqrt{\omega-\eta})-i\sqrt{\eta^{2}-\omega^{2}}}D.\end{array}$$ It is easy to see that the projection operators from $L^{2}(\mathbb{R}^{3})$ onto $X^{0}$, $X^{1}$ and $X^{c}$ are given by $$\begin{array}{ll}\displaystyle P^{0}f=-\frac{2}{\Delta}\Omega\left(f,\frac{d\Phi_{\omega}}{d\omega}\right)J\Phi_{\omega}+\frac{2}{\Delta}\Omega\left(f,J\Phi_{\omega}\right)\frac{d\Phi_{\omega}}{d\omega},&\Delta=\frac{d}{d\omega}\|\Phi_{\omega}\|_{L^{2}},\\ \displaystyle P^{1}f=\frac{\Omega(f,\Psi)}{\kappa}\Psi+\frac{\Omega(f,\Psi^{*})}{\kappa}\Psi^{*},&\kappa=\Omega(\Psi,\Psi^{*}),\\ \displaystyle P^{c}f=f-P^{0}f-P^{1}f.\end{array}$$ Moreover, we denote by $\Pi^{\pm}$ the projections onto the branches $\mathcal{C}_{\pm}$ of the continuous spectrum. VI.4 Proof of Lemma II.8 In this appendix we prove Lemma II.8, whose statement is recalled for the reader's convenience. Lemma VI.8. There exists a constant $C>0$ such that for each $h\in X^{c}$ $$\left\|[P^{c}J-i(\Pi^{+}-\Pi^{-})]h\right\|_{L^{1}_{w}}\leq C\|h\|_{L^{\infty}_{w^{-1}}}.$$ Proof.
From the definitions of the operators $P^{c}$ and $\Pi^{\pm}$ one gets $$P^{c}J-i(\Pi^{+}-\Pi^{-})=\Pi^{+}(J-iI)+\Pi^{-}(J+iI)=$$ $$=\frac{1}{2\pi i}\left[\int_{\mathcal{C}_{+}}(R(\lambda+0)-R(\lambda-0))(J-iI)d\lambda+\int_{\mathcal{C}_{-}}(R(\lambda+0)-R(\lambda-0))(J+iI)d\lambda\right].$$ We estimate only the first integral, because the second one can be handled in the same way. Exploiting the explicit form of the resolvent (VI.1), it follows that $$R(\lambda)(J-iI)=\left(i\lambda\mathcal{G}_{\lambda^{2}}*+\Gamma_{\lambda^{2}}*\right)\left[\begin{array}{cc}1&i\\ -i&1\end{array}\right]+\frac{4\pi}{D(\lambda^{2})}\left[\begin{array}{cc}\Lambda_{1}+\Sigma_{2}&i(\Lambda_{1}+\Sigma_{2})\\ -i(\Lambda_{2}+\Sigma_{1})&\Lambda_{2}+\Sigma_{1}\end{array}\right]=$$ $$=R_{*}(\lambda)(J-iI)+R_{m}(\lambda)(J-iI),$$ where $R_{*}$ and $R_{m}$ correspond to the convolution term and the multiplicative term of the resolvent, respectively. Note that $$i\lambda\mathcal{G}_{\lambda^{2}}(x-y)+\Gamma_{\lambda^{2}}(x-y)=2G_{\omega-i\lambda}(x-y)=\frac{e^{i\sqrt{-\omega+i\lambda}|x-y|}}{2\pi|x-y|}$$ is continuous on $\mathcal{C}_{+}$. Hence, the integral on $\mathcal{C}_{+}$ of the convolution terms vanishes. Let us now consider the multiplicative terms in the integral on $\mathcal{C}_{+}$.
From the explicit formulas for $\Lambda_{1}$ and $\Sigma_{2}$ given in Proposition VI.1, one can compute $$(\Lambda_{1}+\Sigma_{2})(x,y)=8\pi(\alpha_{2}-\alpha_{1})G_{\omega-i\lambda}(y)G_{\omega+i\lambda}(x)+[8\pi(\alpha_{2}+\alpha_{1})-4i\sqrt{-\omega-i\lambda}]G_{\omega-i\lambda}(y)G_{\omega-i\lambda}(x)=$$ $$=4\sigma\sqrt{\omega}\frac{e^{i\sqrt{-\omega+i\lambda}|y|}}{4\pi|y|}\frac{e^{i\sqrt{-\omega-i\lambda}|x|}}{4\pi|x|}-[4(\sigma+1)\sqrt{\omega}-4i\sqrt{-\omega-i\lambda}]\frac{e^{i\sqrt{-\omega+i\lambda}(|x|+|y|)}}{(4\pi)^{2}|x||y|}.$$ Denote $$D_{\pm}(\lambda^{2})=D((\lambda\pm 0)^{2}).$$ Then it follows that $$\int_{\mathcal{C}_{+}}[(R_{m}(\lambda+0)-R_{m}(\lambda-0))(J-iI)]_{1,1}d\lambda=$$ $$=\int_{\mathcal{C}_{+}}\frac{\sigma\sqrt{\omega}e^{i\sqrt{-\omega+i\lambda}|y|}e^{-i\sqrt{-\omega-i\lambda}|x|}+((\sigma+1)\sqrt{\omega}-i\sqrt{-\omega-i\lambda})e^{i\sqrt{-\omega+i\lambda}(|x|+|y|)}}{\pi|x||y|D_{+}(\lambda^{2})}d\lambda+$$ $$-\int_{\mathcal{C}_{+}}\frac{\sigma\sqrt{\omega}e^{i\sqrt{-\omega+i\lambda}|y|}e^{i\sqrt{-\omega-i\lambda}|x|}+((\sigma+1)\sqrt{\omega}+i\sqrt{-\omega-i\lambda})e^{i\sqrt{-\omega+i\lambda}(|x|+|y|)}}{\pi|x||y|D_{-}(\lambda^{2})}d\lambda.$$ Performing the change of variable $k=\sqrt{-\omega-i\lambda}$ in the first integral of the last equality, and $k=-\sqrt{-\omega-i\lambda}$ in the second one, one has $$\left|\int_{\mathcal{C}_{+}}[(R_{m}(\lambda+0)-R_{m}(\lambda-0))(J-iI)]_{1,1}d\lambda\right|=$$ $$=\left|\frac{4i}{\pi|x||y|}\left(\int_{-\infty}^{+\infty}\sigma\sqrt{\omega}k\frac{e^{-\sqrt{k^{2}+2\omega}|y|}}{D(k)}e^{-ik|x|}dk+\int_{-\infty}^{+\infty}2ik^{2}\frac{e^{-\sqrt{k^{2}+2\omega}(|x|+|y|)}}{D(k)}dk\right)\right|\leq$$ $$\leq C\frac{e^{-\sqrt{2\omega}|y|}}{|y||x|}\min\left\{\frac{1}{|x|},e^{-\sqrt{2\omega}|x|}\right\}\leq C\frac{e^{-\sqrt{2\omega}|y|}}{|y|}\frac{e^{-\sqrt{2\omega}|x|}}{|x|},$$ where the first inequality is obtained by integrating by parts in both integrals.
The integral of the other three elements of the matrix operator $(R_{m}(\lambda+0)-R_{m}(\lambda-0))(J-iI)$ can be estimated in the same way, and this implies the statement of the lemma. ∎
Abstract We present an ultraviolet complete theory for the $R(D^{*})$ and $R(D)$ anomaly in terms of a low mass $W_{R}^{\pm}$ gauge boson of a class of left-right symmetric models. These models, which are based on the gauge symmetry $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$, utilize vector-like fermions to generate quark and lepton masses via a universal seesaw mechanism. A parity symmetric version as well as an asymmetric version are studied. A light sterile neutrino emerges naturally in this setup, which allows for new decay modes of the $B$-meson via right-handed currents. We show that these models can explain the $R(D^{*})$ and $R(D)$ anomaly while being consistent with LHC and LEP data as well as low energy flavor constraints arising from $K_{L}-K_{S}$, $B_{d,s}-\bar{B}_{d,s}$, $D-\bar{D}$ mixing, etc., but only for a limited range of the $W_{R}$ mass: $1.2\,(1.8)~{\rm TeV}\leq M_{W_{R}}\leq 3~{\rm TeV}$ for parity asymmetric (symmetric) Yukawa sectors. The light sterile neutrinos predicted by the model may be relevant for explaining the MiniBooNE and LSND neutrino oscillation results. The parity symmetric version of the model provides a simple solution to the strong CP problem without relying on the axion. It also predicts an isospin singlet top partner with a mass $M_{T}=(1.5-2.5)$ TeV. OSU-HEP-18-06, MI-TH-181, UMD-PP-018-08 A Theory of $R(D^{*},D)$ Anomaly With Right-Handed Currents K.S. Babu${}^{a}$ (E-mail: babu@okstate.edu), Bhaskar Dutta${}^{b}$ (E-mail: dutta@physics.tamu.edu) and Rabindra N.
Mohapatra${}^{c}$ (E-mail: rmohapat@physics.umd.edu) ${}^{a}$Department of Physics, Oklahoma State University, Stillwater, OK 74078, USA; ${}^{b}$Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics & Astronomy, Texas A & M University, College Station, TX 77845, USA; ${}^{c}$Maryland Center for Fundamental Physics, Department of Physics, University of Maryland, College Park, MD 20742, USA 1 Introduction The observations by the BaBar [1, 2], Belle [3, 4, 5] and LHCb [6] experiments of deviations in the ratios of $B$ meson decays $R(D^{*})=\frac{\Gamma(B\to D^{*}\tau\nu)}{\Gamma(B\to D^{*}\ell\nu)}$ and $R(D)=\frac{\Gamma(B\to D\tau\nu)}{\Gamma(B\to D\ell\nu)}$ from their standard model predictions at the $\sim 4\sigma$ level have posed quite a theoretical challenge. Recently LHCb has released its first measurement of the ratio of branching ratios $\frac{{\cal B}(B^{+}_{c}\to J/\psi\tau\nu)}{{\cal B}(B^{+}_{c}\to J/\psi\mu\nu)}$ [7], which also differs from its standard model prediction at the $2\sigma$ level, apparently supporting the above anomaly. An intriguing possibility discussed recently [8, 9, 10] is that there may be additional contributions only to the $D\tau\nu$ decay mode of the $B$-meson, mediated by a low mass $SU(2)_{L}$-singlet $W^{\prime}$ boson which couples exclusively to the $\bar{b}_{R}\gamma_{\mu}c_{R}$ and $\bar{\tau}_{R}\gamma_{\mu}\nu_{R}$ currents with a gauge coupling $g_{R}$. For $g_{R}$ equal to the weak $SU(2)_{L}$ gauge coupling $g_{L}$, and with no mixing angle suppression at the $\bar{b}_{R}\gamma_{\mu}c_{R}$ vertex, resolving the $R(D,D^{*})$ anomaly would require $M_{W^{\prime}}\simeq 700$ GeV. The question then is what kind of ultraviolet complete theory would lead to such an interaction. It would be of great interest to see whether such a $W^{\prime}$ can be identified with a low mass right-handed $W_{R}^{\pm}$ boson of the left-right symmetric theories [11] discussed extensively in the literature.
We explore this question in this paper. A major hurdle that any left-right symmetric embedding of low mass $W^{\prime}$ should overcome is the current lower bound on $M_{W_{R}}$ from flavor changing neutral current observables such as $K_{L}-K_{S}$, $B_{s}-\bar{B}_{s}$ and $B_{d}-\bar{B}_{d}$ mixings [12, 13], as well as the direct $W_{R}$ search limits at the LHC [14]. Furthermore, since in simple left-right models there is a relation between the masses of $W_{R}$ and $Z_{R}$, e.g. $M_{Z_{R}}\simeq 1.7\,(1.2)M_{W_{R}}$ in parity symmetric models with Higgs triplet (doublet) used for $SU(2)_{R}$ breaking, one has to reconcile a low mass $Z_{R}$ with current limits from LEP and LHC searches [15]. Indeed, consistency with these constraints would prevent an explanation of the anomaly in terms of $W_{R}$ from the standard formulation of left-right symmetric models with parity [11, 12, 13, 14] or without parity [16]. We will have more to say on this conclusion later (see discussions in Sec. 4.1). We focus here on a variant formulation of left-right (LR) symmetric models which introduces vector-like fermions for quark and lepton mass generation and show that such a setup can overcome the hurdles mentioned above. These models, which are based on the standard LR gauge group $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$, have a very simple Higgs sector – one $SU(2)_{L}$ doublet $\chi_{L}$ and one $SU(2)_{R}$ doublet $\chi_{R}$. Vector-like fermions $(U_{a},\,D_{a},\,E_{a},\,N_{a})$ with $a=1-3$ transforming as singlets of $SU(2)_{L}$ and $SU(2)_{R}$ and with electric charges $(2/3,\,-1/3,\,-1,\,0)$ are needed to generate fermion masses, which arise via a “universal seesaw” mechanism [17, 18, 19, 20]. These vector-like fermions can have gauge invariant bare masses, with $N_{La}$ and $N_{Ra}$ having both Dirac and Majorana masses. Although not very minimal in the fermionic content, these models do provide certain advantages. 
First, the Higgs sector is very minimal, with the physical spectrum consisting of only two neutral scalars, one of which is the 125 GeV Standard Model-like Higgs boson. Second, owing to the quadratic dependence of the light fermion masses on the Yukawa couplings $Y_{i}$, the values of $Y_{i}$ needed to explain the hierarchy in fermion masses can be in the range $Y_{i}=(10^{-3}-1)$, as opposed to $Y_{i}=(10^{-6}-1)$ in the standard left-right symmetric model (or in the standard model). This follows as the fermion masses in these models are given by $m_{i}\sim Y_{i}^{2}\kappa_{L}\kappa_{R}/M_{i}$, assuming parity, where $\kappa_{L,R}$ are the vacuum expectation values (VEVs) of the $SU(2)_{L,R}$ doublet fields $\chi_{L,R}$. Third, these models provide naturally light sterile neutrinos, which lead to the possibility of right-handed currents in meson decays, and which may play a role in understanding the MiniBooNE [21] and LSND [22] neutrino oscillation results. (Models with light sterile neutrinos introduced to explain the MiniBooNE and LSND anomalies would appear to be in conflict with the number of effective neutrino species inferred from $\Lambda$CDM cosmology, especially from Planck data [23]. A possible way around this within $\Lambda$CDM is to postulate secret self-interactions of these sterile neutrinos [24].) And fourth, these models provide a simple solution to the strong CP problem based on parity symmetry alone, without the need for an axion. The QCD $\theta$ parameter is zero, and the determinant of the tree-level quark mass matrix is real, both owing to parity symmetry [20]. A small and calculable $\overline{\theta}$ is induced in the model only at the two-loop level, which is consistent with neutron electric dipole moment constraints [20, 25]. While we emphasize the parity symmetric version of the universal seesaw models in addressing the $R(D^{*},D)$ anomaly, we shall also deviate from the requirement of exact parity.
Some, though not all, of the motivations quoted above will no longer apply in this parity asymmetric scenario. In this case we use a partial quark and lepton seesaw, where the seesaw is effective only for a subset of quark and charged lepton families [26]. This enables us to straightforwardly evade the most stringent flavor constraints and still be able to explain the $R(D^{*},D)$ results. The same result is also achieved in a parity symmetric scenario where parity is broken softly and spontaneously, without relying on a partial quark and lepton seesaw, but with a different flavor structure in the Yukawa sector. The main results of the paper are the following. (i) A low mass $W_{R}$ is needed to explain the $R(D^{*},D)$ anomaly consistent with LHC and LEP constraints, with the mass range given by $1.2\,(1.8)~{\rm TeV}\leq M_{W_{R}}\leq 3~{\rm TeV}$ in the parity asymmetric (symmetric) version. (ii) The widths of the $W_{R}$ and $Z_{R}$ turn out to be relatively large, $\Gamma(W_{R},Z_{R})/M_{W_{R},Z_{R}}\geq 20\%$, when the $R(D^{*},D)$ anomaly is explained, which helps reconcile their low masses with LHC searches. (iii) Explaining the $R(D^{*},D)$ observations imposes stringent constraints on the flavor structure of the model in the right-handed sector. (iv) In the parity symmetric version, the strong CP problem is solved without the need for an axion. This model predicts a vector-like top partner quark with a mass $M_{T}=(1.5-2.5)$ TeV. In the parity asymmetric case, the flavor structure we adopt leads to a limit $M_{i}<2.5$ TeV for several vector-like quarks and leptons. Our model setup differs significantly from that of Ref. [8], which also uses right-handed currents, in that all three families transform under $SU(2)_{R}$ in our case, as opposed to only the third family in Ref. [8]. The models of Ref. [9] have new vector-like fermions (and not the SM fermions) transforming under $SU(2)_{R}$. The model of Ref.
[10] also assumes vector-like fermions transforming under $SU(2)_{R}$, with the SM fermions acquiring $SU(2)_{R}$ charge only via mixing with these vector-like fermions. The universal seesaw setup that we pursue here is independently motivated, as noted earlier, especially for the solution it provides to the strong CP problem based on parity symmetry. There are of course other popular explanations for the $R(D^{*},D)$ anomaly, in terms of leptoquarks [27], a $W^{\prime}$ that couples to left-handed fermion fields [28, 29], supersymmetry [30] and extra dimensions [31]. Explanations in terms of additional scalars [32] appear to be in tension with the branching ratio constraint ${\cal B}(B_{c}\rightarrow\tau\nu)$ [33, 34], though the significance of the anomaly may still be reduced. This paper is organized as follows. In Sec. 2 we describe the details of the universal seesaw version of the LR model. In Sec. 3 we develop a parity asymmetric version of the model and identify a suitable flavor structure for the $R(D^{*},D)$ anomaly. Here we also discuss how the constraints on the model from flavor changing observables such as the $K_{L}-K_{S}$ mass difference, $D-\bar{D}$ and $B_{d,s}-\bar{B}_{d,s}$ transitions, electroweak precision data, etc., are satisfied. In Sec. 4 we develop a parity symmetric version, which also solves the strong CP problem, which we briefly review. In Sec. 5 we show how the model explains the $R(D^{*},D)$ anomaly. In Sec. 6 we discuss how the low mass $W_{R}$ and $Z_{R}$ required for explaining $R(D^{*},D)$ evade the LHC and LEP constraints. We comment on some cosmological and astrophysical constraints on the model in Sec. 7. Finally, in Sec. 8 we offer some theoretical comments on the model and conclude.
2 Left-right symmetric models with universal seesaw We focus on a class of left-right symmetric models based on the gauge group $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$ in which fermion masses are induced through a universal seesaw mechanism [17, 18, 19, 20]. This setup enables one to define Parity ($P$) as a spontaneously broken symmetry. Imposing $P$ strongly constrains the gauge and Yukawa couplings of the left-handed and right-handed fermions. We consider two versions of the model: one without parity, in which the couplings in the left-handed and right-handed fermion sectors are arbitrary and unrelated to each other; and a second one in which parity is a softly broken symmetry and the left-handed and right-handed Yukawa couplings are identified. We shall see that both versions can explain the $R(D^{*},D)$ anomaly, but with different choices of flavor structure. These models have the usual standard model fermions plus the right-handed neutrinos needed to complete the right-handed lepton doublets. In contrast with the usual left-right models, the universal seesaw version has four extra sets of vector-like fermions which are $SU(2)_{L}\times SU(2)_{R}$ singlets, denoted $(U_{a},D_{a},E_{a},N_{a})$.
The chiral fermions are assigned to the gauge group as follows ($i=1-3$ is the family index): $$Q_{L,i}\left(3,2,1,+\tfrac{1}{3}\right)=\left(\begin{array}{c}u_{L}\\ d_{L}\end{array}\right)_{i},~~~Q_{R,i}\left(3,1,2,+\tfrac{1}{3}\right)=\left(\begin{array}{c}u_{R}\\ d_{R}\end{array}\right)_{i},$$ $$\psi_{L,i}\left(1,2,1,-1\right)=\left(\begin{array}{c}\nu_{L}\\ e_{L}\end{array}\right)_{i},~~~\psi_{R,i}\left(1,1,2,-1\right)=\left(\begin{array}{c}\nu_{R}\\ e_{R}\end{array}\right)_{i}~.$$ (2.1) The three families ($a=1-3$) of vector-like fermions have the following gauge quantum numbers for both left-handed and right-handed chiralities: $$U_{a}\left(3,1,1,+\tfrac{4}{3}\right),~~~D_{a}\left(3,1,1,-\tfrac{2}{3}\right),~~~E_{a}(1,1,1,-2),~~~N_{a}(1,1,1,0)~.$$ (2.2) The Higgs sector is very simple, consisting of a left-handed and a right-handed doublet: $$\chi_{L}(1,2,1,+1)=\left(\begin{matrix}\chi_{L}^{+}\\ \chi_{L}^{0}\end{matrix}\right),~~~\chi_{R}(1,1,2,+1)=\left(\begin{matrix}\chi_{R}^{+}\\ \chi_{R}^{0}\end{matrix}\right)~.$$ (2.3) Note in particular that there are no bidoublet scalar fields in the model. The physical Higgs boson spectrum has just two neutral scalars, $\sigma_{L}={\rm Re}(\chi_{L}^{0})/\sqrt{2}$ and $\sigma_{R}={\rm Re}(\chi_{R}^{0})/\sqrt{2}$, which mix, with the SM-like Higgs boson of mass 125 GeV identified as primarily $\sigma_{L}$. The charged scalars $\chi_{L,R}^{\pm}$ and the neutral pseudoscalars ${\rm Im}(\chi_{L,R}^{0})/\sqrt{2}$ are eaten up by the $(W_{L}^{\pm},W_{R}^{\pm})$ and $(Z_{L}^{0},Z_{R}^{0})$ gauge bosons.
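As a cross-check of these assignments, the electric charge formula $Q=T_{3L}+T_{3R}+(B-L)/2$ reproduces the expected charge of every field. A short sketch (the field labels are ours, chosen for illustration):

```python
from fractions import Fraction as F

# Electric charge from Q = T3L + T3R + (B-L)/2 for the field content of
# Eqs. (2.1)-(2.3).  Each entry is (T3L, T3R, B-L), taken from the quantum
# numbers quoted in the text.
def charge(t3l, t3r, bml):
    return t3l + t3r + F(bml) / 2

fields = {
    "u_L": (F(1, 2), 0, F(1, 3)),   "d_L": (F(-1, 2), 0, F(1, 3)),
    "u_R": (0, F(1, 2), F(1, 3)),   "d_R": (0, F(-1, 2), F(1, 3)),
    "nu_L": (F(1, 2), 0, F(-1)),    "e_R": (0, F(-1, 2), F(-1)),
    "U": (0, 0, F(4, 3)),           "D": (0, 0, F(-2, 3)),
    "E": (0, 0, F(-2)),             "N": (0, 0, F(0)),
    "chi_L+": (F(1, 2), 0, F(1)),   "chi_R0": (0, F(-1, 2), F(1)),
}
expected = {
    "u_L": F(2, 3), "d_L": F(-1, 3), "u_R": F(2, 3), "d_R": F(-1, 3),
    "nu_L": F(0), "e_R": F(-1), "U": F(2, 3), "D": F(-1, 3),
    "E": F(-1), "N": F(0), "chi_L+": F(1), "chi_R0": F(0),
}
for name, qn in fields.items():
    assert charge(*qn) == expected[name], name
print("all charges consistent")
```

In particular, the vector-like singlets carry the whole electric charge in their $B-L$ quantum number, which is why $D_{a}$ must carry $B-L=-2/3$.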
We shall denote the vacuum expectation values of the neutral members of $\chi_{L,R}$ as $$\left\langle\chi^{0}_{L}\right\rangle=\kappa_{L};~~~\left\langle\chi^{0}_{R}\right\rangle=\kappa_{R}~$$ (2.4) with $\kappa_{L}\simeq 174$ GeV. Among the charged gauge bosons, $W_{L}^{\pm}$ and $W_{R}^{\pm}$ do not mix at tree-level. Their masses are given by $$M^{2}_{W^{\pm}_{L}}~=~\frac{g^{2}_{L}\kappa^{2}_{L}}{2},~~~M^{2}_{W^{\pm}_{R}}~=~\frac{g^{2}_{R}\kappa^{2}_{R}}{2}~.$$ (2.5) In the neutral gauge boson sector, the states $(W_{3L},\,W_{3R},\,B)$ mix (where $B$ denotes the $B-L$ gauge boson). The photon field $A_{\mu}$ remains massless, while the two orthogonal fields $Z_{L}$ and $Z_{R}$ mix. The compositions of these fields, in a convenient basis, take the form: $$A^{\mu}=\frac{g_{L}g_{R}B^{\mu}+g_{B}g_{R}W_{3L}^{\mu}+g_{L}g_{B}W_{3R}^{\mu}}{\sqrt{g_{B}^{2}(g_{L}^{2}+g_{R}^{2})+g_{L}^{2}g_{R}^{2}}},$$ $$Z_{R}^{\mu}=\frac{g_{B}B^{\mu}-g_{R}W_{3R}^{\mu}}{\sqrt{g_{R}^{2}+g_{B}^{2}}},$$ $$Z_{L}^{\mu}=\frac{g_{B}g_{R}B^{\mu}-g_{L}g_{R}\left(1+\frac{g_{B}^{2}}{g_{R}^{2}}\right)W_{3L}^{\mu}+g_{B}^{2}W_{3R}^{\mu}}{\sqrt{g_{B}^{2}+g_{R}^{2}}\sqrt{g_{B}^{2}+g_{L}^{2}+\frac{g_{B}^{2}g_{L}^{2}}{g_{R}^{2}}}}$$ (2.6) with the $Z_{L}-Z_{R}$ mixing matrix given by $$\mathcal{M}^{2}_{Z_{L}-Z_{R}}=\frac{1}{2}\,\left(\begin{matrix}(g_{Y}^{2}+g_{L}^{2})\,\kappa_{L}^{2}&g_{Y}^{2}\sqrt{\frac{g_{Y}^{2}+g_{L}^{2}}{g_{R}^{2}-g_{Y}^{2}}}\,\kappa_{L}^{2}\\ g_{Y}^{2}\sqrt{\frac{g_{Y}^{2}+g_{L}^{2}}{g_{R}^{2}-g_{Y}^{2}}}\,\kappa_{L}^{2}&\frac{g_{R}^{4}}{g_{R}^{2}-g_{Y}^{2}}\,\kappa_{R}^{2}+\frac{g_{Y}^{4}}{g_{R}^{2}-g_{Y}^{2}}\,\kappa_{L}^{2}\end{matrix}\right)~.$$ (2.7) Here $g_{R}$ and $g_{B}$ are the $SU(2)_{R}$ and $U(1)_{B-L}$ gauge couplings, which are related to the hypercharge
coupling $g_{Y}$ through the formula that embeds $Y$ within $SU(2)_{R}\times U(1)_{B-L}$: $$\frac{Y}{2}=T_{3R}+\frac{B-L}{2}~~~\Rightarrow~~~g_{Y}^{-2}=g_{R}^{-2}+g_{B}^{-2}~.$$ (2.8) We have eliminated $g_{B}$ in favor of $g_{Y}$ in Eq. (2.7). The physical states and their masses are given by $$Z_{1}=\cos\xi\,Z_{L}-\sin\xi\,Z_{R},~~~Z_{2}=\sin\xi\,Z_{L}+\cos\xi\,Z_{R},$$ $$M_{Z_{1}}^{2}\simeq\frac{1}{2}(g_{Y}^{2}+g_{L}^{2})\,\kappa_{L}^{2},~~~M_{Z_{2}}^{2}\simeq\frac{g_{R}^{4}}{g_{R}^{2}-g_{Y}^{2}}\,\kappa_{R}^{2}+\frac{g_{Y}^{4}}{g_{R}^{2}-g_{Y}^{2}}\,\kappa_{L}^{2}$$ (2.9) with the mixing angle $\xi$ given approximately by $$\xi\simeq\frac{g_{Y}^{2}}{g_{R}^{4}}\,\sqrt{(g_{L}^{2}+g_{Y}^{2})(g_{R}^{2}-g_{Y}^{2})}\,\frac{\kappa_{L}^{2}}{\kappa_{R}^{2}}~.$$ (2.10) Here $Z_{1}$ is identified as the $Z$ boson. As it turns out, $\xi$ is very small for the typical parameters used to explain the $R(D^{*},D)$ anomaly. A benchmark point is $M_{W^{\pm}_{R}}=2$ TeV and $g_{R}=2$. This corresponds to $\kappa_{R}=1.4$ TeV, in which case $\xi\simeq 1.8\times 10^{-4}$. If we choose instead $g_{R}=g_{L}\simeq 0.65$, then $\xi\simeq 4.7\times 10^{-4}$, again for $M_{W^{\pm}_{R}}=2$ TeV. Such a small value of $\xi$ has very little impact on our analysis. For example, the new contribution to the electroweak parameter $\alpha T$ is given by $\alpha T\simeq\xi^{2}M_{Z_{R}}^{2}/M^{2}_{Z_{L}}\simeq 1.6\times 10^{-5}$ (for $g_{R}=2$), well below the experimental limits. The decays of $Z_{2}$ into diboson channels, viz., $Z_{2}\rightarrow W_{L}^{+}W_{L}^{-}$ and $Z_{2}\rightarrow Z_{1}h$ (where $h$ is the 125 GeV Higgs boson), proceed through the $Z_{L}-Z_{R}$ mixing, yet with non-negligible partial widths. We shall take the $Z_{L}-Z_{R}$ mixing into account in discussing such diboson decays in Sec. 6.
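The quoted benchmark numbers can be reproduced from Eqs. (2.5) and (2.10). In the sketch below, $g_{L}\simeq 0.65$, $g_{Y}\simeq 0.36$ and $\kappa_{L}=174$ GeV are assumed SM-like inputs (not values fixed by the text):

```python
import math

# Assumed SM-like inputs (illustrative): weak and hypercharge couplings, EW VEV
gL, gY, kL = 0.65, 0.36, 174.0

def mixing_angle(gR, MWR):
    """kappa_R from M_WR^2 = gR^2 kR^2 / 2, then xi from Eq. (2.10)."""
    kR = math.sqrt(2.0) * MWR / gR
    xi = (gY**2 / gR**4) * math.sqrt((gL**2 + gY**2) * (gR**2 - gY**2)) \
         * kL**2 / kR**2
    return kR, xi

MWL = gL * kL / math.sqrt(2.0)           # Eq. (2.5): ~80 GeV, the W mass
kR, xi = mixing_angle(gR=2.0, MWR=2000.0)
print(MWL, kR, xi)                        # ~80 GeV, kappa_R ~ 1414 GeV, xi ~ 1.8e-4

_, xi2 = mixing_angle(gR=gL, MWR=2000.0)
print(xi2)                                # xi ~ 4.7e-4
```

The suppression $\xi\propto\kappa_{L}^{2}/\kappa_{R}^{2}$ is what keeps the electroweak $T$-parameter contribution negligible at these benchmarks.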
The Higgs potential of the model is given by $$V=-(\mu_{L}^{2}\chi_{L}^{\dagger}\chi_{L}+\mu_{R}^{2}\chi_{R}^{\dagger}\chi_{R})+\frac{\lambda_{1L}}{2}(\chi_{L}^{\dagger}\chi_{L})^{2}+\frac{\lambda_{1R}}{2}(\chi_{R}^{\dagger}\chi_{R})^{2}+\lambda_{2}(\chi_{L}^{\dagger}\chi_{L})(\chi_{R}^{\dagger}\chi_{R})~.$$ (2.11) If Parity symmetry is assumed, we would have $\lambda_{1L}=\lambda_{1R}\equiv\lambda_{1}$. We shall allow for soft breaking of $P$, in which case the quadratic terms satisfy $\mu_{L}^{2}\neq\mu_{R}^{2}$. (In Ref. [25] it has been shown that soft breaking of $P$ in the Higgs potential is not necessary if $\kappa_{R}\sim 10^{11}$ GeV. For explaining $R(D^{*},D)$ via $W_{R}$, our setup requires $\kappa_{R}\sim 2$ TeV, in which case soft breaking of $P$ would be needed.) The physical Higgs spectrum is obtained from the $\sigma_{L}-\sigma_{R}$ mixing matrix ($\sigma_{L}={\rm Re}(\chi_{L}^{0})/\sqrt{2}$, $\sigma_{R}={\rm Re}(\chi_{R}^{0})/\sqrt{2}$) given by $$\mathcal{M}^{2}_{\sigma_{L,R}}=\left[\begin{matrix}2\lambda_{1L}\kappa_{L}^{2}&2\lambda_{2}\kappa_{L}\kappa_{R}\\ 2\lambda_{2}\kappa_{L}\kappa_{R}&2\lambda_{1R}\kappa_{R}^{2}\end{matrix}\right]~.$$ (2.12) The eigenstates and the respective mass eigenvalues are given by $$h=\cos\zeta\,\sigma_{L}-\sin\zeta\,\sigma_{R},~~~H=\sin\zeta\,\sigma_{L}+\cos\zeta\,\sigma_{R},$$ $$M_{h}^{2}\simeq 2\lambda_{1L}\left(1-\frac{\lambda_{2}^{2}}{\lambda_{1L}\lambda_{1R}}\right)\kappa_{L}^{2},~~~M_{H}^{2}\simeq 2\lambda_{1R}\kappa_{R}^{2}$$ (2.13) with the mixing angle $\zeta$ given by $$\tan 2\zeta=\frac{2\lambda_{2}\kappa_{L}\kappa_{R}}{\lambda_{1R}\kappa_{R}^{2}-\lambda_{1L}\kappa_{L}^{2}}~.$$ (2.14) We note that boundedness of the potential requires $$\lambda_{1L}\geq 0,~~~\lambda_{1R}\geq 0,~~~\lambda_{2}\geq-\sqrt{\lambda_{1L}\lambda_{1R}}~.$$ (2.15) The mixing angle $\zeta$ will be relevant for the decays $Z_{2}\rightarrow Z_{1}+h$, $Z_{2}\rightarrow Z_{1}+H$, and $Z_{2}\rightarrow h+H$, the latter two when kinematically allowed. Turning to the fermion masses, the Yukawa couplings and the mass terms in the charged fermion sector have the form $${\cal L}_{\rm Yuk}=Y_{U}\overline{Q}_{L}\tilde{\chi}_{L}U_{R}+Y_{U}^{\prime}\overline{Q}_{R}\tilde{\chi}_{R}U_{L}+M_{U}\overline{U}_{L}U_{R}+Y_{D}\overline{Q}_{L}\chi_{L}D_{R}+Y_{D}^{\prime}\overline{Q}_{R}\chi_{R}D_{L}+M_{D}\overline{D}_{L}D_{R}+Y_{E}\overline{\psi}_{L}\chi_{L}E_{R}+Y_{E}^{\prime}\overline{\psi}_{R}\chi_{R}E_{L}+M_{E}\overline{E}_{L}E_{R}+{\rm h.c.}$$ (2.16) Here $\tilde{\chi}_{L,R}=i\tau_{2}\chi_{L,R}^{*}$. When Parity symmetry is imposed, the fermion and scalar fields transform under $P$ as follows: $$Q_{L}\leftrightarrow Q_{R},~~\psi_{L}\leftrightarrow\psi_{R},~~U_{L}\leftrightarrow U_{R},~~D_{L}\leftrightarrow D_{R},~~E_{L}\leftrightarrow E_{R},~~\chi_{L}\leftrightarrow\chi_{R}~.$$ (2.17) Simultaneously, $W_{L}\leftrightarrow W_{R}$. The parameters in Eq. (2.16) would then satisfy the following conditions: $$Y_{U}=Y^{\prime}_{U},~~~Y_{D}=Y^{\prime}_{D},~~~Y_{E}=Y^{\prime}_{E},~~~M_{U}=M_{U}^{\dagger},~~~M_{D}=M_{D}^{\dagger},~~~M_{E}=M_{E}^{\dagger}$$ (2.18) along with $g_{L}=g_{R}$ for the $SU(2)_{L}$ and $SU(2)_{R}$ gauge couplings. If $P$ is a softly broken symmetry, then the hermiticity conditions $M_{U,D,E}=M_{U,D,E}^{\dagger}$ are not required. In the $P$ symmetric models that we discuss, we shall take $M_{U,D,E}\neq M_{U,D,E}^{\dagger}$. The $6\times 6$ mass matrices in the up-quark, down-quark and charged lepton sectors arising from Eq.
(2.16) take the form $$\displaystyle{\cal M}_{U,D,E}~{}=~{}\left(\begin{array}[]{cc}0&Y_{U,D,E}\kappa% _{L}\\ Y^{\prime\dagger}_{U,D,E}\kappa_{R}&M_{U,D,E}\end{array}\right)~{}.$$ (2.19) Here the basis is $(u,c,t,U,C,T)$ in the up-quark sector, $(d,s,b,D,S,B)$ in the down-quark sector and $(e,\mu,\tau,E_{1},E_{2},E_{3})$ in the charged lepton sector, with the left-handed fields multiplying the matrix from the left and the right-handed fields multiplying from the right in Eq. (2.19). (We use capitalized $(U,C,T)$ for the heavy vector-like up-quarks and so forth.) If the determinants of the bare mass terms $M_{U,D,E}$ in Eq. (2.19) are all nonzero, the light eigenvalues will be given by $m_{i}\sim Y_{i}Y_{i}^{\prime}\kappa_{L}\kappa_{R}/M_{i}$ (ignoring generation mixing). This is the universal seesaw mechanism. If parity is imposed as a softly broken symmetry, then $Y^{\prime}_{U,D,E}=Y_{U,D,E}$ in Eq. (2.19). Note that $M_{U,D,E}$ can be non-hermitian as these terms break $P$ only softly. The QCD parameter $\theta$ can be set to zero by virtue of Parity. Furthermore, the VEVs $\kappa_{L,R}$ can be made real by $SU(2)_{L,R}$ gauge rotations. This is a crucial point possible only because the Higgs sector is very simple. With parity, then, we have the determinants of ${\cal M_{U}}$ and ${\cal M_{D}}$ being real. This leads to the result $\overline{\theta}=0$ at tree-level [20]. Even with $M_{U,D}\neq M_{U,D}^{\dagger}$, nonzero $\overline{\theta}$ is induced via two-loop diagrams, which turn out to be of order $10^{-10}$ [20, 25]. Thus the parity symmetric version of the model provides a simple solution to the strong CP problem without the need for an axion. The soft breaking of $P$ in the scalar mass terms of Eqs. (2.11) and in the fermion mass terms of (2.16) can be understood as a spontaneous breaking at a higher scale. A Parity odd real singlet scalar field $S$ can couple to the Higgs fields and the fermion fields [36]. Under $P$, $S\rightarrow-S$. 
These couplings, along with the $P$ symmetric bare couplings, are given by: $${\cal L}_{S}=\mu_{0}^{2}(\chi_{L}^{\dagger}\chi_{L}+\chi_{R}^{\dagger}\chi_{R})+\left\{M_{U}^{0}\overline{U}_{L}U_{R}+M_{D}^{0}\overline{D}_{L}D_{R}+M_{E}^{0}\overline{E}_{L}E_{R}+h.c.\right\}$$ $$+\mu_{1}S(\chi_{L}^{\dagger}\chi_{L}-\chi_{R}^{\dagger}\chi_{R})+S\left\{Y_{U}^{S}\overline{U}_{L}U_{R}+Y_{D}^{S}\overline{D}_{L}D_{R}+Y_{E}^{S}\overline{E}_{L}E_{R}+h.c.\right\}$$ (2.20) with $M_{U,D,E}^{0}=M_{U,D,E}^{0\dagger}$ and $Y_{U,D,E}^{S}=-Y_{U,D,E}^{S\dagger}$. Once $S$ acquires a vacuum expectation value, the mass parameters of Eq. (2.11) will be generated, with $\mu_{L}^{2}=\mu_{0}^{2}+\mu_{1}\left\langle S\right\rangle$ and $\mu_{R}^{2}=\mu_{0}^{2}-\mu_{1}\left\langle S\right\rangle$. Similarly, in Eq. (2.16) non-hermitian mass matrices will be generated, given by $M_{U,D,E}=M_{U,D,E}^{0}+Y_{U,D,E}^{S}\left\langle S\right\rangle$. Although we shall not explicitly make use of the parity odd singlet scalar $S$, this argument shows the consistency of treating $P$ as a softly broken symmetry. As for the neutrinos, the Yukawa Lagrangian is given by $${\cal L}_{\rm Yuk}^{\nu}=Y_{\nu}\overline{\psi}_{L}\tilde{\chi}_{L}N_{R}+Y_{\nu}^{\prime}\overline{\psi}_{R}\tilde{\chi}_{R}N_{L}+\tilde{Y}_{\nu}\overline{\psi}_{L}\tilde{\chi}_{L}N^{c}_{R}+\tilde{Y}^{\prime}_{\nu}\overline{\psi}_{R}\tilde{\chi}_{R}N^{c}_{L}$$ $$+M_{N}\overline{N}_{L}N_{R}+\mu_{L}N_{L}^{T}CN_{L}+\mu_{R}N_{R}^{T}CN_{R}+h.c.$$ (2.21) Note the presence of a Dirac mass term $M_{N}$ and Majorana mass terms $\mu_{L}$ and $\mu_{R}$. 
The $12\times 12$ Majorana mass matrix in the basis $(\nu_{i},\nu^{c}_{i},N_{i},N^{c}_{i})$ – with all fields taken to be left-handed and the matrices taken to be real for simplicity – is given by $${\cal M}_{\nu}=\left(\begin{matrix}0&0&Y_{\nu}\kappa_{L}&\tilde{Y}_{\nu}\kappa_{L}\\ 0&0&Y_{\nu}^{\prime}\kappa_{R}&\tilde{Y}_{\nu}^{\prime}\kappa_{R}\\ Y_{\nu}^{T}\kappa_{L}&Y_{\nu}^{\prime T}\kappa_{R}&\mu_{L}&M_{N}\\ \tilde{Y}_{\nu}^{T}\kappa_{L}&\tilde{Y}_{\nu}^{\prime T}\kappa_{R}&M_{N}^{T}&\mu_{R}\end{matrix}\right)~.$$ (2.22) Under parity, $N_{L}\leftrightarrow N_{R}$, which would imply $Y_{\nu}=Y_{\nu}^{\prime}$, $\tilde{Y}_{\nu}=\tilde{Y}_{\nu}^{\prime}$, $\mu_{L}=\mu_{R}$ and $M_{N}=M_{N}^{\dagger}$. The last two relations will not hold when $P$ is softly broken, and therefore will not be assumed. An interesting feature of this mass matrix is that it naturally leads to light sterile neutrinos. Thus this setup kinematically allows the decay of $B$ mesons into these sterile neutrinos. To see the emergence of light sterile neutrinos, consider decoupled generations and focus on one such generation. With $M_{N}\sim\mu_{L,R}$, two eigenvalues of the mass matrix in Eq. (2.22) will be of order $M_{N}$, while the $\nu^{c}$ state will have a mass of order $Y_{\nu}^{2}\kappa_{R}^{2}/M_{N}$. The lighter $\nu$ state has a mass of order $Y_{\nu}^{2}\kappa_{L}^{2}/M_{N}$. In order to explain the smallness of the light neutrino masses, $M_{N}\gg\kappa_{R}$ would be preferred. We see that with $\kappa_{R}\ll M_{N}$, the mass of $\nu_{R}$ is much smaller than $\kappa_{R}$, and can be in the sub-MeV range. It is also possible that $\mu_{L}\sim\mu_{R}\gg M_{N}$, with additional symmetries keeping the bare Dirac mass $M_{N}$ of order TeV, just as the charged fermion bare mass terms. The Majorana mass terms $\mu_{L,R}$, which could obey different selection rules, need not be protected by such symmetries and can be of order $10^{10}$ GeV or so. 
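The eigenvalue pattern claimed for the one-generation limit of Eq. (2.22) – two heavy states of order $M_{N}$, a sterile state of order $\kappa_{R}^{2}/M_{N}$, and an active state of order $\kappa_{L}^{2}/M_{N}$ – can be checked by direct diagonalization. A minimal sketch; all couplings and mass scales below are illustrative assumptions, not values taken from the text:

```python
import numpy as np

# One-generation sketch of Eq. (2.22); illustrative values, in TeV
Y, Yp, Yt, Ytp = 0.6, 0.5, 0.2, 0.4      # Y_nu, Y'_nu, Ytilde_nu, Ytilde'_nu
kL, kR = 0.174, 2.0                      # kappa_L, kappa_R
muL, muR, MN = 2000.0, 1500.0, 1000.0    # mu_L, mu_R ~ M_N >> kappa_R

M = np.array([[0.0,    0.0,    Y*kL,   Yt*kL ],
              [0.0,    0.0,    Yp*kR,  Ytp*kR],
              [Y*kL,   Yp*kR,  muL,    MN    ],
              [Yt*kL,  Ytp*kR, MN,     muR   ]])
ev = np.sort(np.abs(np.linalg.eigvalsh(M)))
# expected pattern: two heavy states ~ M_N, a sterile state ~ kR^2/M_N,
# and an active state further suppressed by ~ (kL/kR)^2
print(ev)
```

The two light eigenvalues indeed come out separated by roughly $(\kappa_{R}/\kappa_{L})^{2}$, as stated below.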
Again, the $\nu_{R}$ mass will be much smaller than $\kappa_{R}$ and may be in the sub-MeV or even in the eV range. It is intriguing to note that the $\nu_{L}$ to $\nu_{R}$ mass ratio is approximately given as $\kappa_{L}^{2}/\kappa_{R}^{2}$, provided that there is no special flavor structure in $M_{N}$, $\mu_{L}$ and $\mu_{R}$. From a fit to the $R(D^{*},D)$ anomaly we shall find the ratio $(\kappa_{L}/\kappa_{R})^{2}\sim 1/60$, in which case the sterile neutrino mass comes out to be near 3 eV, if we take the active neutrino mass to be 0.05 eV from atmospheric neutrino oscillation data, assuming normal mass ordering. This is in the right range for explaining the MiniBooNE and LSND neutrino oscillation data. (See however, the cosmological caveat noted in footnote 4.) We shall allow for this possibility, as well as the case where the $\nu_{R}$ states are heavier (except for $\nu_{\tau_{R}}$) as a result of possible structures in the bare mass terms in Eq. (2.22). It should be noted that there is an option to remove the singlet fields $N_{L}$ and $N_{R}$ from the theory, in which case the neutrinos would be pure Dirac particles [35]. It is also possible to give the light active neutrinos small Majorana masses by introducing a $\Delta_{L}(1,3,1,+2)$ field via the type-II seesaw mechanism. A parity partner $\Delta_{R}(1,1,3,+2)$ can acquire a small induced VEV and generate small Majorana masses for the $\nu_{R}$ fields [26]. For concreteness we shall adopt the mass matrix of Eq. (2.22) for neutrino mass generation, and not these variant schemes. 3 Parity asymmetric flavor structure without FCNC In this section we develop a scenario, without assuming parity symmetry, that explains the $R(D^{*},D)$ anomaly consistently with other flavor violation constraints. When parity symmetry is not assumed, the left-handed and right-handed fermions can have independent Yukawa couplings. 
Thus this version of the model has more freedom than the parity symmetric version that will be developed in the next section. We choose in this case a specific flavor structure motivated on the one hand by the $R(D^{*},D)$ anomaly and on the other hand by the need to eliminate the large flavor violation that could otherwise arise in this setup. Suppose that parity is not a good symmetry. Then the seesaw mechanism may be only partially effective, which happens when ${\rm Det}(M_{U,D,E})=0$. In this case, the seesaw formula breaks down for some fermions. To see this in detail, let us work in a basis where the fermion mass matrices are block-diagonal and $M_{U,D,E}$ are diagonal. If any one of the diagonal elements of $M_{U,D,E}$ is zero we have ${\rm Det}(M_{U,D,E})=0$. In that case, the fermion fields split into two groups: for a generation for which the vector-like bare mass term vanishes, there is a heavy fermion with mass $\sim Y^{\prime}\kappa_{R}$ which couples to $W_{R}$, and a light fermion with mass $\sim Y\kappa_{L}$ coupling only to $W_{L}$. For the generations for which $M_{F}\neq 0$, there is a light fermion whose mass is given by the seesaw formula $m_{i}\sim Y_{i}Y_{i}^{\prime}\kappa_{L}\kappa_{R}/M_{i}$ and which couples to both $W_{L}$ and $W_{R}$. It is this property of the partial seesaw that allows $W_{R}$ to couple exclusively to $\overline{b}_{R}\gamma_{\mu}c_{R}$ and $\overline{\tau}_{R}\gamma_{\mu}\nu_{R}$ in the parity asymmetric case. As an explicit example, we make the following choice. In the quark sector, for the various blocks of the mass matrices of Eq. 
(2.19) we choose: $$\displaystyle Y_{U}$$ $$\displaystyle=$$ $$\displaystyle V_{L}^{\dagger}Y_{U}^{\rm diag},~{}~{}~{}Y_{U}^{\prime}=V_{R}^{% \dagger}Y_{U}^{\prime\,{\rm diag}},~{}~{}M_{U}={\rm diag}(0,M_{2},0)$$ $$\displaystyle Y_{D}$$ $$\displaystyle=$$ $$\displaystyle Y_{D}^{\rm diag},~{}~{}~{}Y_{D}^{\prime}=Y_{D}^{\prime\,{\rm diag% }},~{}~{}~{}M_{D}={\rm diag}(0,0,M_{3})$$ (3.1) Here $Y_{U}^{\rm diag}={\rm diag}(Y_{1}^{u},Y_{2}^{u},Y_{3}^{u})$, $Y_{U}^{\prime\,{\rm diag}}={\rm diag}(Y_{1}^{\prime u},Y_{2}^{\prime u},Y_{3}^% {\prime u})$, $Y_{D}^{\rm diag}={\rm diag}(Y_{1}^{d},Y_{2}^{d},Y_{3}^{d})$ and $Y_{D}^{\prime\,{\rm diag}}={\rm diag}(Y_{1}^{\prime d},Y_{2}^{\prime d},Y_{3}^% {\prime d})$ are arbitrary diagonal matrices. $V_{L}$ is the left-handed CKM matrix, while $V_{R}$ is the right-handed CKM matrix, which is unrelated to $V_{L}$. $V_{L}$ is chosen to fit the CKM matrix elements, while we choose $V_{R}$ to have the form: $$\displaystyle~{}V_{R}=\left(\begin{array}[]{ccc}1&\epsilon_{1}&\epsilon_{2}\\ -\epsilon_{1}&\epsilon_{3}&1\\ -\epsilon_{2}&1&\epsilon_{4}\end{array}\right)~{}.$$ (3.2) This form of $V_{R}$ is motivated by the need to generate $\overline{c}_{R}\gamma_{\mu}b_{R}W_{R}^{\mu}$ coupling. Here $|\epsilon_{i}|\ll 1$ are small parameters needed only for cosmology. For collider phenomenology we could set $\epsilon_{i}$ to zero, but in this case there would be additional symmetries which would make some of the vector-like quarks absolutely stable. Tiny values of $\epsilon_{i}\sim 10^{-6}$ would lead to their decay at cosmologically acceptable time scales [37]. With this choice of Yukawa coupling and mass matrices, after rotating the fields to remove $V_{L}^{\dagger}$ and $V_{R}^{\dagger}$ in Eq. 
(3.1) so that they appear in the $W_{L}^{\pm}$ and $W_{R}^{\pm}$ interactions, the quark mass matrices become diagonal except in the $c-C$ and the $b-B$ sectors, where they are given by the matrices of the seesaw form: $${\cal M}_{c-C}=\left(\begin{matrix}0&Y_{2}^{u}\kappa_{L}\\ Y_{2}^{\prime u}\kappa_{R}&M_{2}\end{matrix}\right),\quad{\cal M}_{b-B}=\left(\begin{matrix}0&Y_{3}^{d}\kappa_{L}\\ Y_{3}^{\prime d}\kappa_{R}&M_{3}\end{matrix}\right)~.$$ (3.3) The light quark masses are then obtained to be $$m_{u}=Y_{1}^{u}\kappa_{L},\quad m_{c}\simeq\frac{Y_{2}^{u}Y_{2}^{\prime u}\kappa_{L}\kappa_{R}}{M_{2}},\quad m_{t}=Y_{3}^{u}\kappa_{L},$$ $$m_{d}=Y_{1}^{d}\kappa_{L},\quad m_{s}=Y_{2}^{d}\kappa_{L},\quad m_{b}\simeq\frac{Y_{3}^{d}Y_{3}^{\prime d}\kappa_{L}\kappa_{R}}{M_{3}}~.$$ (3.4) The heavy quark masses, on the other hand, are found to be: $$M_{U}=Y_{1}^{\prime u}\kappa_{R},\quad M_{C}\simeq M_{2},\quad M_{T}=Y_{3}^{\prime u}\kappa_{R},$$ $$M_{D}=Y_{1}^{\prime d}\kappa_{R},\quad M_{S}\simeq Y_{2}^{\prime d}\kappa_{R},\quad M_{B}\simeq M_{3}~.$$ (3.5) We choose the couplings $(Y_{1}^{u},Y_{3}^{u})$ and $(Y_{1}^{d},Y_{2}^{d})$ hierarchically to fit the masses of the $(u,t)$ and $(d,s)$ quarks. It is clear from Eqs. (3.4)-(3.5) that for the low value of $\kappa_{R}\simeq 2$ TeV needed to explain the $R(D^{*},D)$ anomaly, $(Y_{1}^{\prime u},Y_{3}^{\prime u})$ cannot be equal to $(Y_{1}^{u},Y_{3}^{u})$ – as that would lead to light vector-like fermions excluded by the LHC – and similarly for the down quark sector. Hence the need to assume parity violation in this type of flavor choice. The zeros in $M_{F}$ in Eq. (3.1) imply that $W_{R}$ couples only to the heavy quarks $(U,T,D,S)$ and not to the corresponding light quarks $(u,t,d,s)$. 
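The seesaw relations of Eqs. (3.3)-(3.5) can be verified numerically for the $c-C$ block; a minimal sketch with illustrative inputs ($Y_{2}^{u}$ is chosen here just to land near the charm mass scale; these numbers are assumptions, not fit values from the text):

```python
import numpy as np

# c-C seesaw block of Eq. (3.3); illustrative values in TeV, with M_2 >> Y' kappa_R
Y2u, Y2pu = 0.04, 1.0
kL, kR, M2 = 0.174, 2.0, 10.0

block = np.array([[0.0,     Y2u*kL],
                  [Y2pu*kR, M2    ]])
# singular values are the physical masses of the light (c) and heavy (C) states
m_light, M_heavy = np.sort(np.linalg.svd(block, compute_uv=False))

m_c_seesaw = Y2u*Y2pu*kL*kR/M2   # seesaw formula of Eq. (3.4), ~1.4e-3 TeV
print(m_light, M_heavy, m_c_seesaw)
```

The exact light singular value agrees with the seesaw formula, and the heavy state sits at $M_{C}\simeq M_{2}$, both to within a few percent for this hierarchy.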
On the other hand, for the bottom and charm quarks, the masses are given by the quark seesaw formula and therefore these light fields have both $W_{L}$ and $W_{R}$ interactions. The $W_{R}^{\pm}$ couplings to the physical quark fields are given by $${\cal L}^{q}_{W_{R}^{\pm}}=\frac{g_{R}}{\sqrt{2}}\left(\overline{U}_{R}~{}\overline{c}_{R}~{}\overline{T}_{R}\right)\gamma_{\mu}V_{R}\left(\begin{matrix}D_{R}\\ S_{R}\\ b_{R}\end{matrix}\right)\,W_{R}^{+\mu}+h.c.$$ (3.6) With the form of $V_{R}$ given in Eq. (3.2), this interaction clearly contains the desired term $\overline{c}_{R}\gamma_{\mu}b_{R}W_{R}^{+\mu}$ for explaining the $R(D^{*},D)$ anomaly, and no other term involving the light quarks that could lead to unacceptable flavor violation. In the charged lepton sector we choose for the matrix ${\cal M}_{E}$ in Eq. (2.19) $$Y_{E}={\rm diag}(Y_{1}^{e},Y_{2}^{e},Y_{3}^{e}),\quad Y_{E}^{\prime}={\rm diag}(Y_{1}^{\prime e},Y_{2}^{\prime e},Y_{3}^{\prime e}),\quad M_{E}={\rm diag}(0,0,M_{E})~.$$ (3.7) This leads to decoupled $e$ and $\mu$ fields, while the $\tau$ lepton mixes with the $E_{3}$ field via the seesaw mass matrix $${\cal M}_{\tau-E_{3}}=\left(\begin{matrix}0&Y_{3}^{e}\kappa_{L}\\ Y_{3}^{\prime e}\kappa_{R}&M_{E}\end{matrix}\right)~.$$ (3.8) The light and heavy lepton masses are then given by $$m_{e}=Y_{1}^{e}\kappa_{L},\quad m_{\mu}=Y_{2}^{e}\kappa_{L},\quad m_{\tau}\simeq\frac{Y_{3}^{e}Y_{3}^{\prime e}\kappa_{L}\kappa_{R}}{M_{E}},$$ $$M_{E_{1}}=Y_{1}^{\prime e}\kappa_{R},\quad M_{E_{2}}=Y_{2}^{\prime e}\kappa_{R},\quad M_{E_{3}}\simeq M_{E}~.$$ (3.9) This structure leads to the leptonic interactions of $W_{R}$ given by $${\cal L}^{\ell}_{W_{R}^{\pm}}=\frac{g_{R}}{\sqrt{2}}\left(\overline{E}_{1R}~{}\overline{E}_{2R}~{}\overline{\tau}_{R}\right)\gamma_{\mu}
\left(\begin{matrix}\nu_{e_{R}}\\ \nu_{\mu_{R}}\\ \nu_{\tau_{R}}\end{matrix}\right)\,W_{R}^{-\mu}+h.c.$$ (3.10) We see that the only interaction of $W_{R}$ with light leptons is of the form $\overline{\tau}_{R}\gamma_{\mu}\nu_{\tau_{R}}W_{R}^{-\mu}$, which is the desired coupling to explain $R(D^{*},D)$. Integrating out the $W_{R}$ field using Eq. (3.6) and Eq. (3.10) would induce a unique effective dimension-six operator involving light quarks and leptons given by ${\cal H}_{eff}\simeq\frac{g^{2}_{R}}{2M^{2}_{W_{R}}}\bar{b}_{R}\gamma_{\mu}c_{R}\bar{\nu}_{\tau_{R}}\gamma^{\mu}{\tau_{R}}+h.c.$ Its contribution to $R(D^{*},D)$ will be analyzed in Sec. 5. 3.1 Avoiding flavor changing neutral current constraints As is well known, the right-handed $W_{R}$ interactions contribute to flavor changing effects such as $K_{L}-K_{S}$, $B_{s}-\bar{B}_{s}$ and $B_{d}-\bar{B}_{d}$ mixings at the one loop level via box diagrams. The dominant new contributions arise from the $W_{L}-W_{R}$ mediated box graphs [12]. In the context of LR models without vector-like quarks, such constraints require $M_{W_{R}}/g_{R}\geq 2.5$ TeV, assuming that the left-handed CKM mixing matrix $V_{L}$ and its right-handed counterpart $V_{R}$ are equal. For $g_{R}\simeq 2$, which is what would be needed to explain the $R(D^{*},D)$ anomaly, the limit on the $W_{R}$ mass is then of order 5 TeV, much above the value needed to explain $R(D^{*},D)$. In this subsection, we show how the flavor structure for the Yukawa couplings and the mass matrices shown in Eqs. (3.1)-(3.3) completely evades these bounds. In the parity asymmetric version of the quark seesaw model, the dominant contributions to $\Delta F=2$ flavor changing effects arise from diagrams such as the one shown in Fig. 1. 
These amplitudes can be symbolically written as follows: $$\Delta M_{K}\propto(V_{L})_{is}(M_{U})_{ij}(V_{R})_{jd}^{*}(V_{R})_{\ell s}(M_{U})_{k\ell}(V_{L})_{kd}^{*}$$ $$\Delta M_{B_{s}}\propto(V_{L})_{is}(M_{U})_{ij}(V_{R})_{jb}^{*}(V_{R})_{\ell s}(M_{U})_{k\ell}(V_{L})_{kb}^{*}$$ $$\Delta M_{B_{d}}\propto(V_{L})_{ib}(M_{U})_{ij}(V_{R})_{jd}^{*}(V_{R})_{\ell b}(M_{U})_{k\ell}(V_{L})_{kd}^{*}$$ $$\Delta M_{D}\propto(V_{L})^{*}_{ci}(M_{D})_{ij}(V_{R})_{uj}(V_{R})^{*}_{c\ell}(M_{D})_{k\ell}(V_{L})_{uk}~.$$ (3.11) Note that by our choice of matrices, the elements $(M_{U}V_{R})_{id}$ and $(M_{U}V_{R})_{is}$ are of $O(\epsilon)$, where $\epsilon$ can be a very small number, of order $10^{-6}$ or so, so that each of the amplitudes above carries at least one suppressed factor. This removes the $K$ and $B_{d,s}$ meson mixing constraints from the dominant source. Furthermore, since $(M_{D}V_{R}^{T})_{iu}=O(\epsilon)$, this also removes the $D-\bar{D}$ mixing constraint. The absence of new contributions to $K-\overline{K}$, $B_{d,s}-\overline{B}_{d,s}$ and $D-\overline{D}$ mixing in the model can also be seen directly from the charged current $W_{R}$ interaction of Eq. (3.6), which is written in terms of physical mass eigenstates, along with the adopted form of $V_{R}$ of Eq. (3.2). These meson mixing diagrams simply do not connect. The mixing of $b-B$, $c-C$ and $\tau-E_{3}$ as given by Eq. (3.3) and Eq. (3.10) would imply that there is some amount of flavor violation in the model. While such mixings in the right-handed sector do not lead to modifications in the interactions of $b,c,\tau$ with the $Z$ boson – as these right-handed fields have identical SM gauge quantum numbers – the mixings of the left-handed fields will modify the interactions of $(b_{L},c_{L},\tau_{L})$ with respect to the Standard Model. 
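The $\epsilon$-suppression of the amplitudes in Eq. (3.11) can be made explicit numerically: with $M_{U}$ of Eq. (3.1) and $V_{R}$ of Eq. (3.2), the combinations entering the $K$ and $B_{d,s}$ amplitudes are of order $\epsilon M_{2}$. A sketch (vector-like masses are illustrative, and the $\epsilon_{i}$ are taken real and equal for simplicity):

```python
import numpy as np

eps = 1e-6                  # epsilon_i ~ 1e-6, as suggested by cosmology in the text
M2, M3 = 3.0, 3.0           # TeV; illustrative vector-like masses (assumption)

VR = np.array([[ 1.0,  eps,  eps],      # Eq. (3.2) with all eps_i = eps
               [-eps,  eps,  1.0],
               [-eps,  1.0,  eps]])
MU = np.diag([0.0, M2, 0.0])            # Eq. (3.1)

d, s, b = 0, 1, 2
A = MU @ VR                             # combination entering Eq. (3.11) (real V_R)
# The d and s columns are O(eps*M2): every K and B_{d,s} amplitude picks up
# at least one such suppressed factor.  The b column is unsuppressed, but it
# always multiplies one of the suppressed ones.
print(abs(A[:, d]).max(), abs(A[:, s]).max(), abs(A[:, b]).max())
```

This makes concrete why the box-diagram bounds on $M_{W_{R}}$ quoted above simply do not apply to this flavor structure.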
The charged current $W_{L}^{\pm}$ interactions in the physical mass eigenbasis for the quarks are modified and are given by $${\cal L}^{q}_{W_{L}^{\pm}}=\frac{g_{L}}{\sqrt{2}}\left(\overline{u}_{L}~{}\overline{c}_{L}~{}\overline{t}_{L}~{}\overline{C}_{L}\right)\gamma_{\mu}J_{L}\left(\begin{matrix}d_{L}\\ s_{L}\\ b_{L}\\ B_{L}\end{matrix}\right)\,W_{L}^{+\mu}+h.c.$$ (3.12) where $J_{L}$ is given by $$J_{L}=\left(\begin{matrix}V_{ud}&V_{us}&V_{ub}c_{b}&V_{ub}s_{b}\\ V_{cd}c_{c}&V_{cs}c_{c}&V_{cb}c_{c}c_{b}&V_{cb}c_{c}s_{b}\\ V_{td}&V_{ts}&V_{tb}c_{b}&V_{tb}s_{b}\\ V_{cd}s_{c}&V_{cs}s_{c}&V_{cb}s_{c}c_{b}&V_{cb}s_{c}s_{b}\end{matrix}\right)~.$$ (3.13) Here $V_{ij}$ stand for elements of the left-handed CKM matrix $V_{L}$, while $s_{c}=\sin\theta_{c}$ and $s_{b}=\sin\theta_{b}$ stand for the $c_{L}-C_{L}$ and $b_{L}-B_{L}$ mixing angles, with $c_{c}=\cos\theta_{c}$ and $c_{b}=\cos\theta_{b}$. These angles are given by (see Eq. (3.3)): $$\theta_{c}\simeq\frac{Y_{2}^{u}\kappa_{L}}{M_{2}}\simeq\frac{m_{c}}{Y_{2}^{\prime u}\kappa_{R}},\quad\theta_{b}\simeq\frac{Y_{3}^{d}\kappa_{L}}{M_{3}}\simeq\frac{m_{b}}{Y_{3}^{\prime d}\kappa_{R}}~.$$ (3.14) Eq. (3.13) takes into account these mixings, and the fact that the gauge eigenstates $C_{L}$ and $B_{L}$ have no direct couplings to $W_{L}$. In the second equalities of Eq. (3.14) we made use of the light eigenvalues for the $c$ and $b$ quarks given in Eq. (3.4). Note that these mixing angles can be as small as $10^{-3}$, corresponding to $\kappa_{R}=1.4$ TeV and $Y_{2}^{\prime u}\sim Y_{3}^{\prime d}\sim 1$. The modification of $W_{L}^{\pm}$ interactions with quarks will lead to some flavor violation. 
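The size estimate below Eq. (3.14) is easy to reproduce; a sketch, where the quark masses are illustrative PDG-like inputs rather than values quoted in the text:

```python
# Mixing angles of Eq. (3.14); all inputs in GeV
m_c, m_b = 1.27, 4.18          # charm and bottom masses (assumed PDG-like values)
kappa_R = 1400.0               # the kappa_R = 1.4 TeV benchmark
Y2pu = Y3pd = 1.0              # O(1) right-handed Yukawas, as in the text

theta_c = m_c / (Y2pu * kappa_R)   # c_L - C_L mixing, ~1e-3
theta_b = m_b / (Y3pd * kappa_R)   # b_L - B_L mixing, ~3e-3
print(theta_c, theta_b)
```

Both angles indeed come out at the $10^{-3}$ level, which is why the departures from unitarity of $J_{L}$ in Eq. (3.13), being quadratic in these angles, are so small.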
The Standard Model box diagram contribution to $K-\overline{K}$ mixing is now given by [38] $$H_{\rm eff}^{LL}=\frac{G_{F}}{\sqrt{2}}\frac{\alpha}{4\pi\sin^{2}\theta_{W}}\lambda_{i}\lambda_{j}\left[\left(1+\frac{x_{i}x_{j}}{4}\right)I_{2}(x_{i},x_{j},1)-2x_{i}x_{j}I_{1}(x_{i},x_{j},1)\right](\overline{s}_{L}\gamma_{\mu}d_{L})^{2}~.$$ (3.15) Here $x_{i}$, $\lambda_{i}$ and the loop functions $I_{i}$ are defined as $$x_{i}=\frac{m_{i}^{2}}{M_{W_{L}}^{2}},\quad i=u,c,t,C;\quad\lambda_{i}=(J_{L})_{is}^{*}(J_{L})_{id},$$ $$I_{1}(x_{i},x_{j},\eta)=\frac{\eta\,{\rm ln}(1/\eta)}{(1-\eta)(1-x_{i}\eta)(1-x_{j}\eta)}+\frac{x_{i}\,{\rm ln}\,x_{i}}{(x_{i}-x_{j})(1-x_{i})(1-x_{i}\eta)}+(i\rightarrow j),$$ $$I_{2}(x_{i},x_{j},\eta)=\frac{{\rm ln}(1/\eta)}{(1-\eta)(1-x_{i}\eta)(1-x_{j}\eta)}+\frac{x_{i}^{2}\,{\rm ln}\,x_{i}}{(x_{i}-x_{j})(1-x_{i})(1-x_{i}\eta)}+(i\rightarrow j),$$ $$I_{1}(x_{i},x_{j},1)={\rm lim}_{\eta\rightarrow 1}I_{1}(x_{i},x_{j},\eta)~.$$ (3.16) Similar expressions appear in $W_{L}-W_{R}$ exchange diagrams, where we shall define $\eta=M_{W_{L}}^{2}/M_{W_{R}}^{2}$. Analogous expressions can be written down for $B_{d,s}-\overline{B}_{d,s}$ mixing as well as $D-\overline{D}$ mixing by interchanging the flavor indices appropriately. Since $J_{L}$ is not unitary, the GIM cancellation is no longer effective. However, the deviations are quite small. For example, with $M_{C}=3$ TeV for the charm partner mass, the $K^{0}-\overline{K^{0}}$ mixing limit requires $\theta_{c}\leq 0.03$, which is well within the allowed range of the model. The constraint on $\theta_{c}$ from the $B_{d,s}-\overline{B}_{d,s}$ mass splitting is also of the same order. 
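The loop functions of Eq. (3.16) are straightforward to implement. The sketch below checks two properties used implicitly in Eq. (3.15): symmetry under $i\leftrightarrow j$, and finiteness of the $\eta\rightarrow 1$ limit; the inputs $x_{c}$ and $x_{t}$ are illustrative values built from assumed quark and $W$ masses:

```python
import math

def I1(xi, xj, eta):
    """First loop function of Eq. (3.16); the (i -> j) term is written explicitly."""
    t0 = eta*math.log(1/eta)/((1-eta)*(1-xi*eta)*(1-xj*eta))
    t1 = xi*math.log(xi)/((xi-xj)*(1-xi)*(1-xi*eta))
    t2 = xj*math.log(xj)/((xj-xi)*(1-xj)*(1-xj*eta))
    return t0 + t1 + t2

def I2(xi, xj, eta):
    """Second loop function of Eq. (3.16)."""
    t0 = math.log(1/eta)/((1-eta)*(1-xi*eta)*(1-xj*eta))
    t1 = xi**2*math.log(xi)/((xi-xj)*(1-xi)*(1-xi*eta))
    t2 = xj**2*math.log(xj)/((xj-xi)*(1-xj)*(1-xj*eta))
    return t0 + t1 + t2

# illustrative x_i = m_i^2 / M_WL^2 for charm and top (assumed masses in GeV)
xc, xt = (1.27/80.4)**2, (173.0/80.4)**2
sym_gap = abs(I1(xc, xt, 0.5) - I1(xt, xc, 0.5))          # i <-> j symmetry
lim_gap = abs(I1(xc, xt, 1-1e-7) - I1(xc, xt, 1-1e-6))    # eta -> 1 is smooth
```

The apparent $1/(1-\eta)$ pole cancels against $\ln(1/\eta)$, so $I_{1}(x_{i},x_{j},1)$ can be evaluated numerically by simply approaching $\eta=1$.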
3.2 Universality and other flavor constraints The mixing of the $c$-quark with the vector-like $C$-quark, of $b$ with $B$ and of $\tau$ with $E_{3}$ would imply some modifications of precision electroweak parameters and of universality in leptonic decays. As we shall see below, our benchmark points needed to explain the $R(D^{*},D)$ anomaly are fully consistent with these constraints. Lepton universality will be violated owing to $\tau_{L}-E_{3L}$ mixing. In the charged current interactions of $W_{L}^{\pm}$, this mixing will introduce a factor of $\cos\theta_{\tau}$ wherever $\tau_{L}$ appears, which would lead to the modified interaction $${\cal L}_{\tau}^{W_{L}}=\frac{g_{L}}{\sqrt{2}}\cos\theta_{\tau}\overline{\tau}_{L}\gamma^{\mu}\nu_{\tau L}W_{L}^{-}+h.c.$$ (3.17) The decay $\tau\rightarrow\pi+\nu_{\tau}$ will be modified, in relation to $\pi\rightarrow\mu\nu_{\mu}$. The ratio of the effective couplings, $A_{\pi}=G_{\tau\pi}^{2}/G_{F}^{2}$, provides the following constraint ($s_{\tau}=\sin\theta_{\tau}$): $$A_{\pi}=\frac{G_{\tau\pi}^{2}}{G_{F}^{2}}=1-s_{\tau}^{2}=1.0020\pm 0.0073~[39].$$ (3.18) Using the 1 sigma error, this would lead to the bound $s_{\tau}\leq 0.073$. This constraint, while nontrivial, is easily satisfied within the model, where $s_{\tau}$ is allowed to be as small as $0.001$. The interactions of the $Z$ boson with light fermions are modified because of their mixings with the vector-like fermions. However, in the right-handed fermion sector, there are no modifications, as the vector-like fermions have the same SM quantum numbers as the usual fermions. The interactions of $Z$ with light fermions are then modified to $${\cal L}^{Z}=\frac{g}{2c_{W}}\left[\overline{f}_{L}\left\{T_{3L}^{f}(1-s_{f}^{2})-Q_{f}s_{W}^{2}\right\}\gamma^{\mu}f_{L}+\overline{f}_{R}(-Q_{f}s_{W}^{2})\gamma^{\mu}f_{R}\right]Z^{\mu},$$ (3.19) where $s_{f}$ denotes the mixing of the left-handed fermion $f_{L}$ with a vector-like fermion. 
Here $s_{W}=\sin\theta_{W}$ is the weak mixing angle, and $c_{W}=\cos\theta_{W}$. The polarization asymmetry parameter $A_{f}$, measured at LEP and SLD from forward-backward asymmetry and left-right asymmetry, is now modified to $$A_{f}=A_{f}^{\rm SM}\left(1+\frac{\delta A_{f}}{A_{f}^{\rm SM}}\right)$$ (3.20) where $$\frac{\delta A_{f}}{A_{f}^{\rm SM}}\simeq\frac{-4\,Q_{f}^{2}\,s_{W}^{4}\,s_{f}% ^{2}\,\{T_{3L}^{f}-Q_{f}\,s_{W}^{2}\}}{\{T_{3L}^{f}-2\,Q_{f}\,s_{W}^{2}\}\{(T_% {3L}^{f})^{2}-2\,Q_{f}\,s_{W}^{2}\,T_{3L}^{f}+2\,Q_{f}^{2}\,s_{W}^{4}\}}~{}.$$ (3.21) Eq. (3.21), when applied to $c$ and $b$ quarks and $\tau$ lepton would lead to the following shifts (using $s_{W}^{2}=0.2315$): $$\displaystyle\frac{\delta A_{b}}{A_{b}^{\rm SM}}=-0.158\,s_{b}^{2},~{}~{}~{}% \frac{\delta A_{c}}{A_{c}^{\rm SM}}=-1.20\,s_{c}^{2},~{}~{}~{}\frac{\delta A_{% \tau}}{A_{\tau}^{\rm SM}}=-12.38\,s_{\tau}^{2}~{}.$$ (3.22) Using experimental values of $A_{b}$, $A_{c}$ and $A_{\tau}$, which are given by [39] $A_{b}=0.923\pm 0.020$, $A_{c}=0.670\pm 0.027$ and $A_{\tau}=0.1439\pm 0.0043$ (from $Z$ pole data at LEP), and the theoretical values based on SM given by $A_{b}^{\rm SM}=0.9347$, $A_{c}^{\rm SM}=0.6677$ with negligible errors, and $A_{\tau}=A_{\ell}=0.1469$, we obtain with 1 sigma error allowance the following limits on the mixing angles: $$s_{b}\leq 0.463,~{}~{}~{}s_{c}\leq 0.176,~{}~{}~{}s_{\tau}\leq 0.048~{}.$$ (3.23) If we use the SLD value of $A_{\tau}=0.136\pm 0.015$ instead [39], which is somewhat discrepant from the LEP value, we would get $s_{\tau}\leq 0.091$. 
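The numerical coefficients of Eq. (3.22) follow directly from Eq. (3.21) with $s_{W}^{2}=0.2315$, and the universality bound quoted below Eq. (3.18) follows from the $A_{\pi}$ measurement; a quick sketch:

```python
import math

sW2 = 0.2315   # sin^2(theta_W), the value used in the text

def dAf_coeff(T3, Q):
    """Coefficient of s_f^2 in delta A_f / A_f^SM, from Eq. (3.21)."""
    num = -4.0*Q**2*sW2**2*(T3 - Q*sW2)
    den = (T3 - 2.0*Q*sW2)*(T3**2 - 2.0*Q*sW2*T3 + 2.0*Q**2*sW2**2)
    return num/den

c_b   = dAf_coeff(-0.5, -1.0/3.0)   # bottom: ~ -0.158
c_c   = dAf_coeff( 0.5,  2.0/3.0)   # charm:  ~ -1.20
c_tau = dAf_coeff(-0.5, -1.0)       # tau:    ~ -12.38

# universality bound from Eq. (3.18): 1 - s_tau^2 >= 1.0020 - 0.0073
s_tau_max = math.sqrt(1.0 - (1.0020 - 0.0073))   # ~0.073
print(c_b, c_c, c_tau, s_tau_max)
```

The large coefficient for the $\tau$ reflects the near-cancellation in $T_{3L}-2Q\,s_{W}^{2}$ for charged leptons, which is why $A_{\tau}$ gives the strongest of the asymmetry limits in Eq. (3.23).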
The partial decay widths of the $Z$ boson into $b\overline{b}$, $c\overline{c}$ and $\tau^{+}\tau^{-}$ will deviate from their SM values by an amount given by $$\frac{\Delta\Gamma_{f}}{\Gamma_{f}^{\rm SM}}=\frac{2Q_{f}s_{W}^{2}T_{3L}^{f}s_{f}^{2}}{(T_{3L}^{f}-Q_{f}s_{W}^{2})^{2}+(Q_{f}s_{W}^{2})^{2}}~,$$ (3.24) leading to $$\frac{\Delta\Gamma_{b}}{\Gamma_{b}^{\rm SM}}=0.418\,s_{b}^{2},\quad\frac{\Delta\Gamma_{c}}{\Gamma_{c}^{\rm SM}}=1.077\,s_{c}^{2},\quad\frac{\Delta\Gamma_{\tau}}{\Gamma_{\tau}^{\rm SM}}=1.840\,s_{\tau}^{2}~.$$ (3.25) The ratio $\Gamma(Z\rightarrow\tau\tau)/\Gamma(Z\rightarrow ee)=1.0019\pm 0.0032$ is well measured experimentally. Compared to the SM, this ratio is modified by the factor $(1+1.840\,s_{\tau}^{2})$ of Eq. (3.25). Using the 1 sigma error, we find a limit $$s_{\tau}\leq 0.053~.$$ (3.26) Similarly, $R_{b}=\Gamma(Z\rightarrow bb)/\Gamma(Z\rightarrow{\rm hadrons})$ is modified from its SM value to $R_{b}=R_{b}^{\rm SM}(1+0.418\,s_{b}^{2})$. From the experimental value $R_{b}=0.21629\pm 0.00066$ [39], we obtain the limit $$s_{b}\leq 0.085~.$$ (3.27) The similarly defined ratio $R_{c}$ is modified to $R_{c}=R_{c}^{\rm SM}(1+1.077\,s_{c}^{2})$. Comparing with the experimental value $R_{c}=0.1721\pm 0.0030$, we obtain $$s_{c}\leq 0.127~.$$ (3.28) All these constraints are seen to be consistent with the model parameters required to explain $R(D^{*},D)$. We thus conclude that the model in its parity asymmetric form can lead to the desired flavor structure of the $W_{R}^{\pm}$ currents without inducing unwanted flavor violation in other sectors. In Sec. 5 we show how this flavor structure enables us to explain $R(D^{*},D)$ in terms of right-handed currents. Most of the constraints derived and found to be satisfied in this section also apply to the parity symmetric scenario discussed in the next section. 
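Similarly, the coefficients of Eq. (3.25) and the $R_{b}$ limit of Eq. (3.27) can be reproduced from Eq. (3.24). A sketch, taking $R_{b}^{\rm SM}$ equal to the experimental central value (an assumption consistent with the quoted bound):

```python
import math

sW2 = 0.2315   # sin^2(theta_W), as in the text

def dGf_coeff(T3, Q):
    """Coefficient of s_f^2 in Delta Gamma_f / Gamma_f^SM, from Eq. (3.24)."""
    return 2.0*Q*sW2*T3 / ((T3 - Q*sW2)**2 + (Q*sW2)**2)

g_b   = dGf_coeff(-0.5, -1.0/3.0)   # ~0.418
g_c   = dGf_coeff( 0.5,  2.0/3.0)   # ~1.077
g_tau = dGf_coeff(-0.5, -1.0)       # ~1.840

# R_b limit of Eq. (3.27): 0.418 * s_b^2 <= sigma / R_b  at 1 sigma
s_b_max = math.sqrt((0.00066/0.21629)/g_b)   # ~0.085
print(g_b, g_c, g_tau, s_b_max)
```

All four numbers match the values quoted in Eqs. (3.25) and (3.27) at the percent level.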
4 Parity symmetric flavor structure without FCNC In this section we develop a scenario which explains the $R(D^{*},D)$ anomaly via right-handed currents and is also Parity symmetric. Apart from its aesthetic appeal, such a scheme can also solve the strong CP problem using Parity symmetry, without the need for an axion [20]. Our setup is identical to the one discussed in the previous section, with the gauge symmetry being $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}$. This version of the LR model has been shown to solve the strong CP problem owing to the Parity invariant structure of the quark mass matrices. First of all, Parity sets $\theta_{QCD}$ to zero. Under $P$, fermions transform as $q_{L}\leftrightarrow q_{R}$, $\psi_{L}\leftrightarrow\psi_{R}$, $(U,D,E)_{L}\leftrightarrow(U,D,E)_{R}$, while the Higgs fields transform as $\chi_{L}\leftrightarrow\chi_{R}$. Simultaneously the gauge fields transform as $W_{L}\leftrightarrow W_{R}$. Consequently, the seesaw quark mass matrices take the form $${\cal M}_{U,D}=\left(\begin{matrix}0&Y_{U,D}\,\kappa_{L}\\ Y_{U,D}^{\dagger}\,\kappa_{R}&M_{U,D}\end{matrix}\right)$$ (4.1) with the condition $M_{U,D}^{\dagger}=M_{U,D}$. By separate $SU(2)_{L}$ and $SU(2)_{R}$ gauge rotations the VEVs of $\chi_{L}$ and $\chi_{R}$, $\kappa_{L}$ and $\kappa_{R}$, can be chosen to be real. The determinant of ${\cal M}_{U}\,{\cal M}_{D}$ is then real, implying that $\overline{\theta}=0$ at tree level. It has been shown that in this setup there is no induced $\overline{\theta}$ at the one loop level [20]. We shall briefly review this result in this section, where we show that soft breaking of $P$, which allows for $M_{U,D}\neq M_{U,D}^{\dagger}$, does not spoil this result. A small value of $\overline{\theta}$ is induced via two loop diagrams, estimated to be $\overline{\theta}\sim 10^{-10}$, which is consistent with, but not very far from, the limit obtained from the neutron electric dipole moment [20, 25]. 
It will be desirable to keep the solution to the strong CP problem of this setup and at the same time provide an explanation for the $R(D^{*},D)$ anomaly. This is what we take up in this section. Parity was explicitly broken in the discussion of Sec. 3, which therefore has no relevance to the strong CP solution. Recall that a flipping of $u_{R}$ and $U_{R}$ (and similarly other quark and lepton fields) played an important role in the discussion of Sec. 3. The bare mass terms for certain vector-like quarks were set to zero to achieve such flips. Parity then cannot be imposed, or else the masses of the $u$ and $U$ quarks would be in the ratio $\kappa_{L}/\kappa_{R}$, which should be of order $1/10-1/20$ in order to explain $R(D^{*},D)$. The resulting light vector-like quarks are not allowed by experimental limits. In the up-quark sector, consider the case where $Y_{U}$ is proportional to the identity matrix, and $M_{U}$ is an arbitrary non-hermitian matrix: $$Y_{U}=y_{u}\times{\rm diag}(1,1,1),\quad M_{U}=V_{R}^{0}\,{\rm diag}(M_{1}^{u},M_{2}^{u},M_{3}^{u})\,V_{L}^{0\dagger}~.$$ (4.2) One can remove the unitary matrices $V_{L}^{0}$ and $V_{R}^{0}$ appearing in Eq. (4.2) by the following field transformations: $$U_{L}=V_{R}^{0}U_{L}^{0},\quad U_{R}=V_{L}^{0}U_{R}^{0},\quad u_{R}=V_{R}^{0}u_{R}^{0},\quad u_{L}=V_{L}^{0}u_{L}^{0}~.$$ (4.3) This will induce a flavor structure $V_{L}^{0}$ in the $W_{L}$ and $V_{R}^{0}$ in the $W_{R}$ charged current interactions, with $V_{L}^{0}$ and $V_{R}^{0}$ approximately – but not exactly – being the left-handed and right-handed CKM matrices. Note that $V_{L}^{0}$ and $V_{R}^{0}$ are unrelated. 
In the new basis, the up-quark mass matrix becomes block-diagonal, with each block given by $$\displaystyle{\cal M}_{u_{i}}=\left(\begin{matrix}0&y_{u}\kappa_{L}\\ y_{u}\kappa_{R}&M_{i}^{u}\end{matrix}\right)~{}.$$ (4.4) For the up and charm quarks, with $M_{i}^{u}\gg y_{u}\kappa_{R}$, the eigenvalues are given by $$\displaystyle m_{u}\simeq\frac{y_{u}^{2}\kappa_{L}\kappa_{R}}{M_{1}^{u}},\quad M_{U}\simeq M_{1}^{u};\qquad m_{c}\simeq\frac{y_{u}^{2}\kappa_{L}\kappa_{R}}{M_{2}^{u}},\quad M_{C}\simeq M_{2}^{u}~{}.$$ (4.5) As for the top quark, the $t-T$ mixing in the right-handed sector cannot be too small, and hence the seesaw formula that applies to the $u$ and $c$ quarks is not applicable. The reason is that $M_{T}\equiv M_{3}^{u}$ cannot be taken much larger than $y_{u}\kappa_{R}$, or else the top quark mass would be suppressed compared to the electroweak scale $\kappa_{L}$. The physical top quark state and its partner $T$ quark state are given by ($c_{t}=\cos\theta_{t}$, $s_{t}=\sin\theta_{t}$; $t^{0}$ and $T^{0}$ are mass eigenstates) $$t_{R}^{0}=c_{t}t_{R}+s_{t}T_{R},\quad T_{R}^{0}=-s_{t}t_{R}+c_{t}T_{R}~{},$$ (4.6) with the $t_{R}-T_{R}$ mixing angle given by $$\tan\theta_{t}=\frac{y_{u}\kappa_{R}}{M_{3}^{u}}~{}.$$ (4.7) The analogous mixing in the $t_{L}-T_{L}$ sector is small, obtained by replacing $\kappa_{R}$ with $\kappa_{L}$ in $\tan\theta_{t}$. We shall take the limit $M_{3}^{u}\ll y_{u}\kappa_{R}$, so that the mass eigenvalues are $$m_{t}\simeq y_{u}\kappa_{L},\quad M_{T}\simeq y_{u}\kappa_{R}~{}.$$ This corresponds to a flip $t_{R}\leftrightarrow T_{R}$, implying that the light top $t_{R}$ will not have $W_{R}$ interactions. Such a choice, with $c_{t}\rightarrow 0$, helps suppress FCNC arising from $W_{L}-W_{R}$ mixed box diagrams, as discussed later. In the down quark mass matrix, the matrices ${Y}_{D}$ and ${M}_{D}$ of Eq. 
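The quality of the seesaw approximation in Eq. (4.5) is easy to check numerically. The sketch below diagonalizes one $2\times 2$ block of Eq. (4.4) and compares the light singular value with $y_{u}^{2}\kappa_{L}\kappa_{R}/M_{1}^{u}$; the numerical inputs ($y_{u}=1$, $\kappa_{L}=174$ GeV, $\kappa_{R}=2$ TeV, $M_{1}^{u}=50$ TeV) are illustrative assumptions, not fitted values from the text.

```python
import numpy as np

# Illustrative (assumed) parameters, in GeV
y_u, kL, kR, M1 = 1.0, 174.0, 2000.0, 50000.0

# One 2x2 seesaw block of Eq. (4.4)
M = np.array([[0.0,      y_u * kL],
              [y_u * kR, M1      ]])

# Singular values (physical masses), returned in descending order
heavy, light = np.linalg.svd(M, compute_uv=False)

# Seesaw approximation of Eq. (4.5)
m_seesaw = y_u**2 * kL * kR / M1

print(light, m_seesaw, heavy)  # light ~ 6.95 GeV vs 6.96 GeV estimate; heavy ~ M1
```

For this hierarchy the seesaw estimate reproduces the exact light eigenvalue to better than a percent, while the heavy state sits essentially at $M_{1}^{u}$.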
(4.1) are chosen as $$\displaystyle{Y}_{D}=\left(\begin{matrix}0&Y_{1}^{d}&0\\ Y_{2}^{d}&0&0\\ 0&0&Y_{3}^{d}\end{matrix}\right),\quad{M}_{D}=\left(\begin{matrix}0&M_{1}^{d}&0\\ M_{2}^{d}&0&0\\ 0&0&M_{3}^{d}\end{matrix}\right)~{}.$$ (4.8) This mass matrix consists of three $2\times 2$ block-diagonal matrices: $$\displaystyle{\cal L}_{\rm mass}^{d}=\left(\begin{matrix}\overline{d}_{1L}&\overline{D}_{1L}\end{matrix}\right)\left(\begin{matrix}Y_{1}^{d}\kappa_{L}&0\\ M_{1}^{d}&Y_{2}^{d}\kappa_{R}\end{matrix}\right)\left(\begin{matrix}D_{2R}\\ d_{2R}\end{matrix}\right)+\left(\begin{matrix}\overline{d}_{2L}&\overline{D}_{2L}\end{matrix}\right)\left(\begin{matrix}0&Y_{2}^{d}\kappa_{L}\\ Y_{1}^{d}\kappa_{R}&M_{2}^{d}\end{matrix}\right)\left(\begin{matrix}d_{1R}\\ D_{1R}\end{matrix}\right)+\left(\begin{matrix}\overline{d}_{3L}&\overline{D}_{3L}\end{matrix}\right)\left(\begin{matrix}0&Y_{3}^{d}\kappa_{L}\\ Y_{3}^{d}\kappa_{R}&M_{3}^{d}\end{matrix}\right)\left(\begin{matrix}d_{3R}\\ D_{3R}\end{matrix}\right)+h.c.$$ (4.9) The third block is the usual seesaw matrix, identified as the $b-B$ sector. Its eigenvalues are given approximately by $$m_{b}\simeq\frac{(Y_{3}^{d})^{2}\kappa_{L}\kappa_{R}}{M_{3}^{d}},\quad m_{B}\simeq M_{3}^{d}~{}.$$ (4.10) The first block in Eq. (4.9) turns out to be the $s-S$ sector. We take $M_{1}^{d}\sim Y_{2}^{d}\kappa_{R}$ in this block, so that the $d_{2R}-D_{2R}$ mixing is significant. We shall further take the limit $M_{1}^{d}\rightarrow 0$, in which case the light state will be composed of $D_{2R}$, with $d_{2R}$ belonging to the heavy state. 
Analogous to the $t-T$ sector, we identify the physical states as $$s_{R}^{0}=c_{s}s_{R}+s_{s}S_{R},\quad S_{R}^{0}=-s_{s}s_{R}+c_{s}S_{R}~{},$$ (4.11) with the $s_{R}-S_{R}$ mixing angle given by $$\tan\theta_{s}=\frac{Y_{2}^{d}\kappa_{R}}{M_{1}^{d}}~{}.$$ (4.12) The eigenvalues $m_{s}$ and $m_{S}$ of the first block matrix are $$m_{s}\simeq Y_{1}^{d}\kappa_{L},\quad M_{S}\simeq Y_{2}^{d}\kappa_{R}.$$ (4.13) Note that this flips $d_{2R}$ with $D_{2R}$. That is, $d_{2R}$ is the heavy state that couples to $W_{R}$, while $D_{2R}$ is the light state with no coupling to $W_{R}$. For the second block matrix in Eq. (4.9), we take $M_{2}^{d}\gg Y_{1}^{d}\kappa_{R}$, leading to the eigenvalues $$m_{d}\simeq\frac{Y_{1}^{d}Y_{2}^{d}\kappa_{L}\kappa_{R}}{M_{2}^{d}},\quad m_{D}\simeq M_{2}^{d}~{}.$$ This lighter eigenvalue is smaller than the lighter eigenvalue of the first block, $m_{s}\simeq Y_{1}^{d}\kappa_{L}$ of Eq. (4.13), and therefore should be identified with the $d$-quark. Thus $d_{R}$ couples to $W_{R}$. The identification in the limit $\cos\theta_{s}\rightarrow 0$ is this: $d_{1R}=d_{R}$, $d_{2L}=d_{L}$, $d_{1L}=s_{L}$, $D_{2R}=s_{R}$. The flip $d_{2L}\leftrightarrow d_{1L}$ is of no concern, since it can be compensated by the arbitrary form of $V_{L}^{0}$ in the $W_{L}$ charged current. In fact, with this interchange implemented, $V_{L}^{0}$ will be identified with the left-handed CKM matrix $V_{L}$. Suppose the form of $V_{R}^{0}$ in Eq. (4.2) is $$\displaystyle V_{R}^{0}=\left(\begin{matrix}0&1&0\\ 0&0&1\\ 1&0&0\end{matrix}\right)~{}.$$ (4.14) This form of $V_{R}^{0}$ is motivated by maximizing new contributions to $R(D^{*},D)$, with the (2,3) entry being 1. The flipping $s_{R}\leftrightarrow S_{R}$ helps suppress the decay $\tau\rightarrow K\nu_{\tau}$, which would set significant constraints on new contributions to $R(D^{*},D)$ were it allowed. 
The $\overline{u}_{R}\gamma^{\mu}d_{2R}W_{R}^{\mu}$ coupling will now involve the heavy $D_{2R}$ state and will not lead to $\tau\rightarrow K\nu$ decay. This form of the right-handed CKM matrix $V_{R}^{0}$ is chosen to fit the $R(D^{*},D)$ anomaly via right-handed currents while suppressing new contributions to $K^{0}-\overline{K^{0}}$, $B_{d,s}-\overline{B}_{d,s}$ and $D^{0}-\overline{D^{0}}$ mixing mediated by $W_{L}-W_{R}$ mixed box diagrams. The amplitude for such mixed box diagrams, while suppressed by a factor of $(g_{R}^{2}/g_{L}^{2})(M_{W_{L}}^{2}/M_{W_{R}}^{2})$, is enhanced by a numerical factor of about $10^{3}$, arising from a combinatorial factor of 8, an enhanced matrix element (for the case of $K^{0}-\overline{K^{0}}$ mixing) of order 20, and a factor ${\rm ln}(m_{c}^{2}/M_{W_{R}}^{2})\simeq 8$ [12, 40]. Thus, suppression of these mixed box diagram contributions is essential for explaining the $R(D^{*},D)$ anomaly. In addition to the form of $V_{R}^{0}$ given in Eq. (4.14), a second form can in principle be considered, with the first and second columns of Eq. (4.14) interchanged. However, this case, while consistent with the FCNC induced by box diagrams, would lead to the decay $\tau\rightarrow\pi\nu_{\tau}$, producing universality violation at a level that would make new contributions to $R(D^{*},D)$ insignificant. We shall therefore not consider such a form. With the form of $V_{R}^{0}$ of Eq. (4.14), constraints from $K^{0}-\overline{K^{0}}$, $B_{d,s}-\overline{B}_{d,s}$ and $D^{0}-\overline{D^{0}}$ mixing can be readily satisfied, as we shall see. Such a form of $V_{R}^{0}$ would lead to excessive meson mixing in the standard formulation of left-right symmetric models, but not in the quark seesaw version. Including the large $t_{R}-T_{R}$ mixing as well as the $s_{R}-S_{R}$ mixing, the right-handed CKM matrix given in Eq. 
(4.14) appears in the charged current interactions as $$\displaystyle{\cal L}_{W_{R}}=\frac{g_{R}}{\sqrt{2}}\left(\overline{u^{0}_{R}},~\overline{c_{R}^{0}},~\overline{t_{R}^{0}},~\overline{T_{R}^{0}}\right)\left(\begin{matrix}0&c_{s}&0&-s_{s}\\ 0&0&1&0\\ c_{t}&0&0&0\\ -s_{t}&0&0&0\end{matrix}\right)\gamma^{\mu}\left(\begin{matrix}d_{R}^{0}\\ s_{R}^{0}\\ b_{R}^{0}\\ S_{R}^{0}\end{matrix}\right)W_{R}^{+\mu}+h.c.$$ (4.15) The $4\times 4$ mixing matrix appearing in Eq. (4.15) will be denoted $V_{R}$. Unlike the light quark partners, the top-quark partner (and the strange quark partner) must be relatively light. Note that there is no light vector-like fermion even with $M_{3}^{u}=0$, since the Yukawa coupling $y_{u}$ is of order one. This choice, however, predicts $M_{T}/m_{t}=\kappa_{R}/\kappa_{L}$, which is about $10-20$ for explaining the $R(D^{*},D)$ anomaly. Thus the mass of the top partner is in the range $(1.5-2.5)$ TeV in this scenario. The mixing angle $\theta_{t}\rightarrow\pi/2$ in this limit, which means that $\cos\theta_{t}\rightarrow 0$. All entries in the third row of the $4\times 4$ matrix $V_{R}$ in Eq. (4.15) vanish for this choice. Similarly, in the limit $c_{s}\rightarrow 0$, all entries in the second column of $V_{R}$ in Eq. (4.15) vanish. As already noted, this prevents the decay $\tau\rightarrow K\nu_{\tau}$. As a result of $c_{t}\rightarrow 0$, box diagrams involving $W_{L}-W_{R}$ exchange are suppressed, thus evading stringent flavor constraints from meson-antimeson oscillations. To see the suppression of the $W_{L}-W_{R}$ box diagrams shown for this case in Fig. 2, we note that their amplitudes are given as in standard LR models, but with the internal $T$ quark included. 
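The vanishing of the third row of $V_{R}$ as $c_{t}\to 0$ and of its second column as $c_{s}\to 0$ can be made concrete in a few lines; the sketch below simply re-types the matrix of Eq. (4.15) as a function of the two mixing angles and checks both limits.

```python
import numpy as np

def V_R(theta_t, theta_s):
    """4x4 right-handed mixing matrix of Eq. (4.15)."""
    ct, st = np.cos(theta_t), np.sin(theta_t)
    cs, ss = np.cos(theta_s), np.sin(theta_s)
    return np.array([[0.0,  cs, 0.0, -ss],
                     [0.0, 0.0, 1.0,  0.0],
                     [ ct, 0.0, 0.0,  0.0],
                     [-st, 0.0, 0.0,  0.0]])

# The limit c_t -> 0, c_s -> 0 (theta_t, theta_s -> pi/2)
V = V_R(np.pi / 2, np.pi / 2)

assert np.allclose(V[2, :], 0.0)  # third row vanishes: light t_R decouples from W_R
assert np.allclose(V[:, 1], 0.0)  # second column vanishes: no tau -> K nu_tau
```

Note that $V_{R}$ is not unitary by itself: it is a $4\times 4$ sub-block of the full mixing among light and heavy states, so only the structure of its zeros, not unitarity, is being checked here.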
The effective Hamiltonian for $K^{0}-\overline{K^{0}}$ mixing is given by [40] $$\displaystyle H_{\rm eff}^{LR}=\frac{G_{F}}{\sqrt{2}}\frac{\alpha}{4\pi s_{W}^{2}}\lambda_{i}\lambda_{j}2\eta(x_{i}x_{j})^{1/2}\left[(4+x_{i}x_{j}\eta)I_{1}(x_{i},x_{j},\eta)-(1+\eta)I_{2}(x_{i},x_{j},\eta)\right](\overline{s}_{R}d_{L})(\overline{s}_{L}d_{R})$$ (4.16) where $\eta=M_{W_{L}}^{2}/M_{W_{R}}^{2}$, $x_{i}=m_{i}^{2}/M_{W_{L}}^{2}$ for $i=u,c,t,T$, and the functions $I_{1}$ and $I_{2}$ are defined in Eq. (3.16). The parameters $\lambda_{i}$ are defined as $$\lambda_{i}\equiv(V_{L})_{is}^{*}(V_{R})_{id}~{}.$$ (4.17) With $V_{R}$ given by the $4\times 4$ matrix of Eq. (4.15), and with $c_{t}\rightarrow 0$, the new contributions to $K^{0}-\overline{K^{0}}$ mixing vanish. The $W_{L}-W_{R}$ box diagram would require chirality flips on the internal $T$-quark lines; however, $T_{L}$ has no coupling to $W_{L}$, being a singlet of $SU(2)_{L}$, and thus there is no contribution to $K^{0}-\overline{K^{0}}$ mixing. Contributions to $B_{d}-\overline{B}_{d}$ mixing, being proportional to $m_{c}M_{T}$, also vanish for the same reason. New contributions to $B_{s}-\overline{B}_{s}$ mixing vanish as well, since the second column of $V_{R}$ is all zero in the limit $c_{s}\rightarrow 0$. Similarly, new contributions to $D^{0}-\overline{D^{0}}$ mixing vanish, since they would require a chirality flip of the $S$ quark. Thus, the flavor structure in the quark sector is consistent with the most stringent constraints from FCNC. It should be pointed out that the form of the right-handed CKM matrix given in Eq. (4.14) can also be realized in the standard left-right symmetric models without parity symmetry. A related possibility interchanges the first and second columns of Eq. (4.14); such models, however, cannot explain the $R(D^{*},D)$ anomaly. Indeed, if we adopt a form for $V_{R}^{0}$ with the first and second columns interchanged in Eq. 
(4.14), there would be a new contribution to $B_{s}-\overline{B_{s}}$ mixing which goes as $[(V_{L})_{tb}^{*}(V_{R})_{ts}]^{2}m_{t}^{2}$. We find the resulting constraint on the $W_{R}$ mass to be $M_{W_{R}}\geq 40$ TeV for $g_{R}=g_{L}$, and even stronger for $g_{R}>g_{L}$. Similarly, with the form of $V_{R}^{0}$ as given, $K^{0}-\overline{K^{0}}$ mixing would receive a contribution proportional to $[(V_{L})_{ts}^{*}(V_{R})_{td}]^{2}m_{t}^{2}$, which leads to a constraint $M_{W_{R}}\geq 70$ TeV for $g_{R}=g_{L}$. New contributions to $B_{d}-\overline{B}_{d}$ mixing would go as $[(V_{L})_{tb}^{*}(V_{R})_{td}]^{2}m_{t}^{2}$; here we obtain a stringent limit of $M_{W_{R}}\geq 225$ TeV. (In all cases where the $W_{L}-W_{R}$ diagram gives nonzero contributions, we have followed the matrix element evaluations compiled in the first of Ref. [13] to obtain the limits quoted here.) It is clear that these constraints would contradict the $W_{R}$ mass of order 2 TeV and $g_{R}=2$ needed to explain $R(D^{*},D)$. These contributions are absent in the $P$ symmetric universal seesaw model when $c_{t}$ and $c_{s}$ in Eq. (4.15) are small. The standard LR model also does not allow for a suppressed coupling of $Z_{R}$ to the electron, which is needed for consistency with LEP bounds. With the form of the $W_{R}^{\pm}$ interaction given in Eq. (4.15), $W_{R}^{\pm}$ will not be produced resonantly at hadron colliders by $u-s$ fusion when $c_{s}\rightarrow 0$. The interactions of Eq. (4.15) are exactly of the form needed to explain the $R(D^{*},D)$ anomaly. For this purpose we should also specify the couplings of $W_{R}^{\pm}$ to leptons, to which we now turn. In the charged lepton sector the seesaw mass matrix has the form given in Eq. (2.19). Here again, as in the quark sector, we shall assume that Parity is softly broken in the bare mass terms of the vector-like $E$ fields. As a result, $M_{E}$ is not hermitian. 
This soft breaking in the leptonic sector will help suppress the $Z_{R}$ coupling to electrons, which is strongly constrained by LEP data. The suppression is achieved by flipping the $e_{R}$ field with a vector-like lepton field, as discussed below. Flipping of the $e_{R}$ field with one of the $E_{R}$ fields can be achieved by the following choice for the block mass matrices $Y_{E}$ and $M_{E}$ in Eq. (2.19): $$\displaystyle Y_{E}=\left(\begin{matrix}*&*&Y_{1}^{e}\\ *&*&Y_{2}^{e}\\ *&*&Y_{3}^{e}\end{matrix}\right),\quad M_{E}=\left(\begin{matrix}M_{11}&*&*\\ *&*&M_{23}\\ *&M_{32}&*\end{matrix}\right)$$ (4.18) where a $*$ indicates a small entry. When the $*$ entries are ignored, all three chiral families would be massless. Thus the couplings $Y_{i}^{e}$ are not constrained by the light lepton masses, and can be of order one. With all the $*$ entries set to zero, this matrix can be exactly diagonalized by the following basis transformations: $$\psi_{R}^{0}=U_{R}\psi_{R},\quad\psi_{L}^{0}=U_{L}\psi_{L},$$ (4.19) which read more explicitly as $$\displaystyle\left(\begin{matrix}e_{1R}^{0}\\ e_{2R}^{0}\\ e_{3R}^{0}\\ E_{1R}^{0}\\ E_{2R}^{0}\\ E_{3R}^{0}\end{matrix}\right)=\left[\begin{matrix}c_{\alpha_{R}}c_{\theta}&c_{\alpha_{R}}s_{\theta}c_{\phi}&c_{\alpha_{R}}s_{\theta}s_{\phi}&0&-s_{\alpha_{R}}&0\\ 0&s_{\phi}&-c_{\phi}&0&0&0\\ s_{\theta}&-c_{\theta}c_{\phi}&-c_{\theta}s_{\phi}&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&0&1\\ s_{\alpha_{R}}c_{\theta}&s_{\alpha_{R}}s_{\theta}c_{\phi}&s_{\alpha_{R}}s_{\theta}s_{\phi}&0&c_{\alpha_{R}}&0\end{matrix}\right]\left(\begin{matrix}e_{1R}\\ e_{2R}\\ e_{3R}\\ E_{1R}\\ E_{2R}\\ E_{3R}\end{matrix}\right)~{}.$$ (4.20) Here $e_{i}^{0}$ and $E_{i}^{0}$ refer to mass eigenstates. The matrix $U_{L}$ is obtained from the matrix above by replacing $\alpha_{R}$ with $\alpha_{L}$ and interchanging the fifth and sixth rows. 
Here we have defined $$\displaystyle Y_{1}^{e}=Y^{e}\cos\theta,\quad Y_{2}^{e}=Y^{e}\sin\theta\cos\phi,\quad Y_{3}^{e}=Y^{e}\sin\theta\sin\phi,\qquad\tan\alpha_{R}=\frac{\kappa_{R}Y^{e}}{M_{32}},\quad\tan\alpha_{L}=\frac{\kappa_{L}Y^{e}}{M_{23}}~{}.$$ (4.21) In Eq. (4.20), $c_{\alpha_{R}}=\cos\alpha_{R}$, $c_{\theta}=\cos\theta$, $s_{\phi}=\sin\phi$, and so forth. The Lagrangian for the lepton masses reads $${\cal L}_{\rm mass}^{\rm lep}=M_{11}\overline{E^{0}}_{1L}E^{0}_{1R}+\frac{M_{23}}{c_{\alpha_{L}}}\overline{E^{0}}_{2L}E^{0}_{2R}+\frac{M_{32}}{c_{\alpha_{R}}}\overline{E^{0}}_{3L}E^{0}_{3R}+h.c.$$ (4.22) We see that in this limit all chiral leptons are massless, even when the Yukawa coupling $Y^{e}$ is of order one. Furthermore, the angle $\alpha_{R}$ can be of order one, while $\alpha_{L}$ is much smaller. In the limit $M_{32}\rightarrow 0$, and with $\sin\theta=0$, $e_{1R}$ and $E_{2,3R}$ will be flipped. That is, $e_{1R}^{0}=-E_{2R}$ and $E_{3R}^{0}=e_{1R}$. Note that the mass of $E_{3}$, which is $M_{32}/c_{\alpha_{R}}=Y^{e}\kappa_{R}$ in this limit, can be of order TeV. This means that the mass of the vector-like partner of the electron is less than about 4.5 TeV. However, if $Y^{e}$ is of order one, $e_{L}$ can potentially mix with $E_{2L}$, with the mixing angle given by $Y_{1}^{e}\kappa_{L}/M_{23}$. Lepton universality requires this mixing angle to be $\leq 0.03$ or so, which can be satisfied by choosing $M_{23}$ of order 10 TeV. Note that had we imposed Parity on the mass terms, $M_{23}=M_{32}^{*}$, and this solution for the $e_{R}\leftrightarrow E_{2R}$ flipping would be unavailable. Once the small entries denoted by $*$ in Eq. (4.18) are included, small masses for $e$, $\mu$ and $\tau$ will be generated. Care should be taken to ensure that the flipping indeed corresponds to $e_{R}\rightarrow E_{2R}$ and not $\mu_{R}\rightarrow E_{2R}$. 
There is enough freedom in the model to ensure this condition. In what follows, we shall assume that such $e_{R}\rightarrow E_{2R}$ flipping has been done. As for flavor violation, the discussion of Sec. 3 applies to the parity symmetric version as well. The $b_{L}-B_{L}$ mixing angle is given by $\theta_{b}\simeq m_{b}/(Y_{3}^{d}\kappa_{R})$, which can be as small as 0.001, thus satisfying constraints from $R_{b}$. Similarly, $s_{\tau}$, $s_{c}$, etc., can be small enough to satisfy their experimental limits. As for lepton non-universality in $B$-meson decay, we note that if $\nu_{eR}$ and $\nu_{\mu R}$ are heavy, then the new decays $b\rightarrow c\,\ell\,\overline{\nu}_{eR}$ and $b\rightarrow c\,\ell\,\overline{\nu}_{\mu R}$ will be kinematically forbidden, while the decay $b\rightarrow c\,\ell\,\overline{\nu}_{\tau R}$ will be allowed provided that $\nu_{\tau R}$ is light (which we assume). This scenario can then explain the $R(D^{*},D)$ anomaly. 4.1 A complete theory with Parity A complete theory with Parity symmetry should explain why $g_{R}\neq g_{L}$, as needed for the $R(D^{*},D)$ anomaly. This can happen in a variety of ways. Parity symmetry may be spontaneously broken (without breaking $SU(2)_{R}$ symmetry) at a high scale $\Lambda$. This can lead to a spectrum that is asymmetric under $SU(2)_{L}$ and $SU(2)_{R}$ in the energy interval $M_{I}\leq\mu\leq\Lambda$, explaining why $g_{L}\neq g_{R}$ at $M_{I}$. The scales $\Lambda$ and $M_{I}$ may be identified with the GUT scale and an intermediate scale where the asymmetric matter sector acquires its masses. Alternatively, the full gauge symmetry could be $SU(3)_{c}\times SU(2)_{L}\times SU(2)_{R}\times SU(2)_{D}\times U(1)_{B-L}$, where all fermion fields are neutral under $SU(2)_{D}$. 
A self-dual bifundamental Higgs field $\Phi_{L}(1,2,1,2,0)$ spontaneously breaks $SU(2)_{L}\times SU(2)_{D}$ down to its diagonal subgroup $SU(2)_{\rm weak}$, which is identified as the weak interaction gauge symmetry. This field is accompanied by a right-handed partner field $\Phi_{R}(1,1,2,2,0)$, which is assumed to have no vacuum expectation value. Such an embedding would lead to the relation $$g_{w}^{-2}=g_{L}^{-2}+g_{D}^{-2}~{},$$ (4.23) where $g_{w}$ is the weak $SU(2)_{L}$ gauge coupling and $g_{D}$ is the “dark” $SU(2)_{D}$ gauge coupling. Even with $g_{L}=g_{R}$, one obtains $g_{w}\neq g_{R}$ this way, and Parity is maintained above this symmetry breaking scale. If $SU(2)_{D}$ is broken near the TeV scale, this dark sector can also provide interesting dark matter candidates. In particular, the $\Phi_{R}(1,1,2,2,0)$ field, which does not acquire a VEV and whose existence is required by parity symmetry, is an interesting dark matter candidate. Once $SU(2)_{L}\times SU(2)_{D}$ breaks down to the diagonal $SU(2)_{w}$ by the VEV of $\Phi_{L}$, the field $\Phi_{R}$ transforms under $SU(3)_{c}\times SU(2)_{w}\times SU(2)_{R}\times U(1)_{B-L}$ as a $(1,2,2,0)$ scalar. This self-dual field has the quantum numbers of a weak doublet, which turns out to be inert. Thus, a complete parity embedding leads naturally to an inert doublet dark matter model [41], which has been widely studied. It should be remarked that for $\Phi_{R}$ to be a dark matter candidate, the allowed quartic coupling $\chi_{L}^{\dagger}\Phi_{L}\Phi_{R}\chi_{R}$ should be absent, which can be arranged by a discrete symmetry. No other couplings affect the stability of the $\Phi_{R}$ dark matter. 4.2 Solving the strong CP problem Here we briefly review how Parity symmetry solves the strong CP problem in the universal quark seesaw framework [20, 25]. We have already noted that parity symmetry sets $\theta_{QCD}$ to zero. Furthermore, ${\rm Det}({\cal M}_{U}{\cal M}_{D})$ (see Eq. 
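The matching relation in Eq. (4.23) is easy to check numerically. In the sketch below the inputs $g_{L}=g_{R}=2$ (as suggested by the $R(D^{*},D)$ fit, with Parity enforcing $g_{L}=g_{R}$) and a target weak coupling $g_{w}\simeq 0.65$ are illustrative assumptions used to solve for the dark coupling $g_{D}$ that the embedding would require.

```python
import math

# Illustrative (assumed) inputs: Parity gives g_L = g_R above the breaking scale;
# g_w is the observed SM weak coupling to be reproduced at low energies.
g_L = g_R = 2.0
g_w = 0.65

# Solve Eq. (4.23), 1/g_w^2 = 1/g_L^2 + 1/g_D^2, for the dark coupling g_D.
g_D = 1.0 / math.sqrt(1.0 / g_w**2 - 1.0 / g_L**2)

# Consistency check: reconstruct g_w from (g_L, g_D).
g_w_check = 1.0 / math.sqrt(1.0 / g_L**2 + 1.0 / g_D**2)
print(g_D, g_w_check)  # g_D ~ 0.69; g_w recovered, with g_w < g_R as desired
```

The diagonal-subgroup matching always yields $g_{w}$ smaller than both parent couplings, which is exactly the mechanism the text invokes to reconcile $g_{w}\neq g_{R}$ with unbroken Parity at high scales.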
(4.1)) is real, so that there is no tree-level contribution to $\overline{\theta}$. If $\overline{\theta}$ were induced at one loop, it would typically be too large compared to the experimental limit of $\overline{\theta}\leq 10^{-10}$ arising from the neutron electric dipole moment. This is not an issue in our model, as the one-loop contributions to $\overline{\theta}$ are all zero. This is true even when parity is softly broken in the bare quark mass matrices $M_{U,D}$ in Eq. (4.1), as shown in Ref. [20]. We shall briefly review this result here. Following Ref. [20], we write the up-quark mass matrix including loop corrections as $${\cal M}_{U}={\cal M}_{U}^{0}(1+C)~{}.$$ (4.24) Then the contribution of the up-type quarks to $\overline{\theta}$ is given by $$\displaystyle\overline{\theta}={\rm Arg}\,{\rm Det}(1+C)={\rm Im}\,{\rm Tr}\,\ln(1+C)\simeq{\rm Im}\,{\rm Tr}\,C_{1}$$ (4.25) where $C=C_{1}+C_{2}+...$ is the loop expansion. If the loop corrections to ${\cal M}_{U}$ are written as $$\displaystyle\delta{\cal M}_{U}=\left[\begin{matrix}\delta M_{LL}^{U}&\delta M_{LH}^{U}\\ \delta M_{HL}^{U}&\delta M_{HH}^{U}\end{matrix}\right],$$ (4.26) then $\overline{\theta}$ is given by $$\displaystyle\overline{\theta}={\rm Im}\,{\rm Tr}\left[-\frac{1}{\kappa_{L}\kappa_{R}}\delta M_{LL}^{U}(Y_{U}^{\dagger})^{-1}M_{U}Y_{U}^{-1}+\frac{1}{\kappa_{L}}\delta M_{LH}^{U}Y_{U}^{-1}+\frac{1}{\kappa_{R}}\delta M_{HL}^{U}(Y_{U}^{\dagger})^{-1}\right]~{}.$$ (4.27) Note that the correction term $\delta M^{U}_{HH}$ does not appear in $\overline{\theta}$ at the one-loop level. The one-loop diagrams that generate corrections to the up-quark mass matrix are shown in Fig. 3. In evaluating these diagrams we shall treat the mass matrix as part of the interaction Lagrangian, in which case the cross on the internal fermion line stands for all possible tree-level diagrams in which an initial $f_{L}$ becomes an $f_{R}$. 
Defining $F_{L,R}=(u,U)_{L,R}$, the tree-level mass matrix can be written as $\overline{F}_{L}{\cal M}_{U}^{0}F_{R}$ in the Lagrangian. The full tree-level propagator with all possible mass insertions is then $$\displaystyle\overline{F}_{R}\left[{\cal M}_{U}^{0\dagger}\frac{k^{2}}{k^{2}-{\cal M}_{U}^{0}{\cal M}_{U}^{0\dagger}}\right]F_{L}~{}.$$ (4.28) Now define the inverse matrix $$\displaystyle\left[{\cal M}_{U}^{0\dagger}\frac{k^{2}}{k^{2}-{\cal M}_{U}^{0}{\cal M}_{U}^{0\dagger}}-k^{2}\right]^{-1}=\left[\begin{matrix}X(k^{2})&Y(k^{2})\\ Y^{\dagger}(k^{2})&Z(k^{2})\end{matrix}\right]$$ (4.29) with $X=X^{\dagger}$ and $Z=Z^{\dagger}$. Ordinary matrix multiplication determines $X,Y,Z$ as $$\displaystyle(\kappa_{R}^{2}Y_{U}^{\dagger}Y_{U}+M_{U}M_{U}^{\dagger}-k^{2})Y^{\dagger}=-\kappa_{L}M_{U}Y_{U}^{\dagger}X,\qquad\kappa_{L}Y_{U}Y_{U}^{\dagger}X+Y_{U}M_{U}^{\dagger}Y^{\dagger}=\frac{1}{\kappa_{L}}(I+k^{2}X),\qquad Y=-\kappa_{L}HY_{U}M_{U}^{\dagger}Z$$ (4.30) where $$H=(\kappa_{L}^{2}Y_{U}Y_{U}^{\dagger}-k^{2})^{-1}=H^{\dagger}~{}.$$ (4.31) The interaction corresponding to the cross on the internal fermion lines of Fig. 3 can be read off from $$\displaystyle-{\cal L}_{\rm eff}^{\rm tree}=\overline{U}_{R}\left[\frac{k^{4}}{\kappa_{L}}Y_{U}^{-1}Y(k^{2})\right]U_{L}+\overline{u}_{R}\left[k^{2}Y_{U}\kappa_{R}Z(k^{2})\right]U_{L}+\overline{U}_{R}\left[\frac{k^{2}}{\kappa_{L}}Y_{U}^{-1}\{I+k^{2}X(k^{2})\}\right]u_{L}+\overline{u}_{R}\left[k^{2}Y_{U}\kappa_{R}Y^{\dagger}(k^{2})\right]u_{L}+h.c.$$ (4.32) Consider the scalar exchange diagram of Fig. 3 (a). 
Its amplitude is given by $$\displaystyle\delta M_{LL}^{U}=\int\frac{d^{4}k}{(2\pi)^{4}}Y_{U}\frac{1}{\kappa_{L}}Y_{U}^{-1}\frac{k^{2}Y(k^{2})Y_{U}^{\dagger}\lambda_{2}\kappa_{L}\kappa_{R}}{[(p-k)^{2}-M_{\sigma_{L}}^{2}][(p-k)^{2}-M_{\sigma_{R}}^{2}]}~{}.$$ (4.33) Its contribution to $\overline{\theta}$, given by Eq. (4.27), is $$\displaystyle-{\rm Im}\,{\rm Tr}\left[\frac{\lambda_{2}}{\kappa_{L}}\int\frac{d^{4}k}{(2\pi)^{4}}\frac{k^{2}Y(k^{2})M_{U}(Y_{U})^{-1}}{[(p-k)^{2}-M_{\sigma_{L}}^{2}][(p-k)^{2}-M_{\sigma_{R}}^{2}]}\right]~{}.$$ (4.34) We can evaluate the trace before performing the momentum integration, which yields $${\rm Tr}[Y(k^{2})M_{U}Y_{U}^{-1}]=-\kappa_{L}{\rm Tr}[(Y_{U}^{\dagger}Y_{U}\kappa_{L}^{2}-k^{2})^{-1}M_{U}^{\dagger}Z(k^{2})M_{U}]~{}.$$ (4.35) Since the right-hand side is the trace of a product of two hermitian matrices, it is real. Hence we conclude that the contribution from Fig. 3 (a) to $\overline{\theta}$ is zero. The gauge contribution from Fig. 3 (b) has the same flavor structure as Fig. 3 (a), viz., $Y(k^{2})Y_{U}^{\dagger}$. Therefore, the contribution from Fig. 3 (b) to $\overline{\theta}$ is also zero. The off-diagonal contributions from Figs. 3 (c) and (d) have the matrix structures $$\displaystyle{\rm Fig.~3~(c)}:~~\left[I+k^{2}X(k^{2})\right]Y_{U},\qquad{\rm Fig.~3~(d)}:~~\left[I+k^{2}X(k^{2})\right](Y_{U}^{\dagger})^{-1}~{}.$$ (4.36) After multiplying by $Y_{U}^{-1}$, the relevant traces for $\overline{\theta}$ are found to involve $(I+k^{2}X)$ and $(I+k^{2}X)(Y_{U}Y_{U}^{\dagger})^{-1}$. Both these traces are real, since $X$ is hermitian. Finally, the contribution from Fig. 3 (e) is proportional to $Y_{U}^{\dagger}Y_{U}Z(k^{2})Y_{U}^{\dagger}$ and that from Fig. 3 (f) to $Z(k^{2})Y_{U}^{\dagger}$. These contributions to $\overline{\theta}$ also vanish. 
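The lemma used repeatedly above, that ${\rm Tr}(AB)$ is real whenever $A$ and $B$ are both hermitian (since ${\rm Tr}(AB)^{*}={\rm Tr}(B^{\dagger}A^{\dagger})={\rm Tr}(BA)={\rm Tr}(AB)$), can be verified numerically with random matrices; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random complex n x n hermitian matrix."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = random_hermitian(3), random_hermitian(3)

# Tr(AB)* = Tr(B^dag A^dag) = Tr(BA) = Tr(AB)  =>  Im Tr(AB) = 0
assert abs(np.trace(A @ B).imag) < 1e-12

# By contrast, the trace against a generic (non-hermitian) matrix is complex;
# it is precisely such structures that would feed a nonzero theta-bar.
C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
print(np.trace(A @ C).imag)  # generically nonzero
```

This is the entire content of the one-loop cancellation: every flavor structure appearing in Eqs. (4.33)-(4.36) reduces, inside the trace, to a product of hermitian factors.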
Thus we see that all one-loop contributions to $\overline{\theta}$ are zero, even with the bare mass terms $M_{U,D}$ being non-hermitian. There are two-loop diagrams that generate a nonzero $\overline{\theta}$, which has been estimated to be of order $10^{-10}$ [20, 25], consistent with neutron EDM limits. Thus, this class of LR models provides a solution to the strong CP problem without invoking the axion. 5 Explaining the $R(D^{*},D)$ anomaly As mentioned in the introduction, the BaBar, Belle and LHCb collaborations have measured $R(D)$ and $R(D^{\ast})$ with high precision. The combined experimental values are [42]: $$\displaystyle R(D)_{\text{Exp}}=0.407\pm 0.039\pm 0.024,$$ (5.1) $$\displaystyle R(D^{\ast})_{\text{Exp}}=0.306\pm 0.013\pm 0.007.$$ (5.2) We see that $R(D)$ and $R(D^{*})$ exceed the SM predictions by 2.3$\sigma$ and 3.0$\sigma$ respectively; the combined anomaly is about 3.78$\sigma$. The SM prediction for $R(D^{\ast})$ [42], an arithmetic average of the theory calculations of Refs. [43, 44, 45], is $$\displaystyle R(D^{\ast})_{\text{SM}}=0.258\pm 0.005,$$ (5.3) while the SM prediction for $R(D)$ from the FLAG working group [46] is $$\displaystyle R(D)_{\text{SM}}=0.300\pm 0.008.$$ (5.4) Refs. [43, 44, 45] show that the SM error can be reduced to 0.003. The significance of the discrepancy does not change appreciably between these two values of the SM error. We will quote results for both cases in the results section. 
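The quoted individual significances can be reproduced by adding statistical, systematic, and SM-prediction errors in quadrature. The sketch below does this; note that its naive quadrature combination neglects the experimental correlation between $R(D)$ and $R(D^{*})$, which is why it lands slightly above the quoted combined 3.78$\sigma$.

```python
import math

def pull(exp, stat, syst, sm, sm_err):
    """Deviation in units of sigma, adding all errors in quadrature."""
    return (exp - sm) / math.hypot(stat, syst, sm_err)

# Eqs. (5.1)-(5.4)
sD  = pull(0.407, 0.039, 0.024, 0.300, 0.008)  # R(D):  ~2.3 sigma
sDs = pull(0.306, 0.013, 0.007, 0.258, 0.005)  # R(D*): ~3.1 sigma

# Naive quadrature combination; the quoted 3.78 sigma additionally
# accounts for the experimental R(D)-R(D*) correlation.
combined = math.hypot(sD, sDs)
print(sD, sDs, combined)
```

(The three-argument form of `math.hypot` requires Python 3.8 or later.)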
In our model $W_{R}$ couples to both the $\bar{b}_{R}c_{R}$ current and the $\bar{\tau}_{R}\nu_{\tau R}$ current, leading to the effective operator $$\displaystyle{\cal H}_{eff}\simeq\frac{g^{2}_{R}}{2M^{2}_{W_{R}}}\,\bar{b}_{R}\gamma_{\mu}c_{R}\,\bar{\nu}_{\tau_{R}}\gamma^{\mu}{\tau_{R}}+h.c.$$ (5.5) In the parity asymmetric model, we found that the implication of the above flavor choice is that only the $b$ and $c$ quarks undergo the quark seesaw. The resulting $W_{R}$ interaction with quarks is given in Eq. (3.6). Similarly, in the lepton sector only the tau-lepton field undergoes the seesaw, which leads to the lepton non-universal interaction of $W_{R}$ given in Eq. (3.10); this is what allows us to explain the $R(D^{*},D)$ anomaly. In the parity symmetric model, the $W_{R}$ coupling to the $\overline{b}_{R}\gamma_{\mu}c_{R}$ current arises from Eq. (4.15), while in the leptonic sector only $\overline{\tau}_{R}\gamma_{\mu}\nu_{\tau_{R}}$ is kinematically allowed. This is the case when $\nu_{\mu R}$ is heavier than 200 MeV or so. The $\nu_{eR}$ field couples to heavy leptons and $W_{R}$, and thus is not relevant for the $R(D^{*},D)$ discussion. To see whether the interaction of Eq. (5.5) can explain the $R(D^{*},D)$ anomaly, we vary $g_{R}$ and $M_{W_{R}}$ and calculate $R(D^{*},D)$. A scatter plot of the points (in gray) that explain the anomaly is shown in Fig. 4. The allowed ranges of the $R(D)$ and $R(D^{*})$ anomalies are enclosed by the black and blue lines, respectively. In Fig. 5 we show $g_{R}$ as a function of the $W_{R}$ mass in the 1$\sigma$ allowed overlap region (between the top blue and bottom black curves) arising from the simultaneous explanation of the $R(D^{*},D)$ anomalies. As can be seen in this figure, as $g_{R}$ increases, $M_{W_{R}}$ takes larger values. 6 Collider constraints: LHC and LEP Let us first focus on the constraints arising from the low mass $Z_{R}$ boson predicted by the model. 
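For orientation, the size of the new amplitude relative to the SM one can be estimated directly from Eq. (5.5). The sketch below assumes that the right-handed neutrino is light and distinct from the SM neutrino, so the new contribution adds incoherently (rates scale as $1+r^{2}$, with $r$ the ratio of Wilson coefficients); the inputs $g_{R}=2$, $M_{W_{R}}=1.5$ TeV and $|V_{cb}|=0.0405$ are illustrative assumptions, not fit results from the text.

```python
import math

# Illustrative (assumed) inputs
G_F  = 1.1664e-5   # Fermi constant, GeV^-2
V_cb = 0.0405      # assumed CKM element
g_R  = 2.0         # assumed SU(2)_R coupling
M_WR = 1500.0      # assumed W_R mass, GeV

# Ratio of the coefficient in Eq. (5.5) to the SM (V-A)x(V-A) coefficient,
# which is 2*sqrt(2)*G_F*V_cb.
r = (g_R**2 / (2.0 * M_WR**2)) / (2.0 * math.sqrt(2.0) * G_F * V_cb)

# Light nu_R: no interference with the SM amplitude, so the rates add.
enhancement = 1.0 + r**2
print(r, enhancement)  # r ~ 0.67, i.e. a few tens of percent in R(D), R(D*)
```

An $O(30-40)\%$ enhancement is indeed the size needed to move $R(D)$ and $R(D^{*})$ from their SM values toward the measurements, which is why the fit in Figs. 4 and 5 favors $g_{R}\sim 2$ with a $W_{R}$ around 2 TeV.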
The coupling of the $Z_{R}$ gauge boson to fermions is given by the Lagrangian (ignoring the small $Z_{L}-Z_{R}$ mixing) $${\cal L}_{Z_{R}}=\frac{g_{R}^{2}}{\sqrt{g_{R}^{2}-g_{Y}^{2}}}\,\overline{f}_{L,R}\,\gamma_{\mu}\left[T_{3R}-\frac{Y_{L,R}}{2}\,\frac{g_{Y}^{2}}{g_{R}^{2}}\right]f_{L,R}\,Z_{R}^{\mu}~{}.$$ (6.1) Here $g_{R}$ is the $SU(2)_{R}$ gauge coupling and $g_{Y}$ is the hypercharge coupling, given by $g_{Y}^{2}=4\pi\alpha/(1-s_{W}^{2})=0.1279$ (using the values $s_{W}^{2}(M_{Z})=0.2315$ for the weak mixing angle and $\alpha(m_{Z})=1/127.9$). $T_{3R}=\pm\frac{1}{2}$ or $0$ for $SU(2)_{R}$ doublets and singlets, respectively. In the model under discussion, all left-handed fermions have $T_{3R}=0$. $Y_{L,R}$ refer to the hypercharges of $f_{L,R}$, with the normalization $Y(e_{R})=-2$. The $B-L$ gauge coupling $g_{B}$ appearing in the interactions has been replaced by the hypercharge coupling $g_{Y}$ using the formula that embeds $Y$ within $SU(2)_{R}\times U(1)_{B-L}$, see Eq. (2.8). We shall treat $g_{R}$ as a variable parameter, but note that $g_{R}^{2}\geq g_{Y}^{2}$ is required for consistency of Eq. (2.8). We shall also demand $g_{R}^{2}\leq 4\pi$ to stay within perturbative limits. The decay width for $Z_{R}\rightarrow\overline{f}f$ into fermions of mass $m_{f}$ is given by $$\displaystyle\Gamma(Z_{R}\rightarrow\overline{f}f)=\frac{g_{R}^{4}}{g_{R}^{2}-g_{Y}^{2}}\,\frac{M_{Z_{R}}}{48\pi}\,\beta\left[\frac{3-\beta^{2}}{2}\,a_{f}^{2}+\beta^{2}\,b_{f}^{2}\right]$$ (6.2) where $$\beta=\sqrt{1-\frac{4m_{f}^{2}}{M_{Z_{R}}^{2}}},\quad a_{f}=T_{3R}-\frac{Y_{L}+Y_{R}}{2}\,\frac{g_{Y}^{2}}{g_{R}^{2}},\quad b_{f}=T_{3R}-\frac{Y_{R}-Y_{L}}{2}\,\frac{g_{Y}^{2}}{g_{R}^{2}}~{}.$$ (6.3) In addition, $Z_{R}$ can decay into a $W_{L}^{+}W_{L}^{-}$ pair through the small $Z_{L}-Z_{R}$ mixing and the SM $ZW^{+}W^{-}$ vertex. 
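Eqs. (6.2)-(6.3) are straightforward to code. The sketch below evaluates the partial width into $\tau^{+}\tau^{-}$ for a doublet $\tau_{R}$ ($T_{3R}=-1/2$, $Y_{L}=-1$, $Y_{R}=-2$ in the $Y(e_{R})=-2$ normalization stated above); the benchmark values $g_{R}=2$ and $M_{Z_{R}}=1$ TeV are illustrative assumptions.

```python
import math

G_Y2 = 0.1279  # hypercharge coupling squared, g_Y^2, from the text

def gamma_ff(g_R, M_ZR, T3R, Y_L, Y_R, m_f, N_c=1):
    """Partial width Gamma(Z_R -> f fbar) from Eqs. (6.2)-(6.3), in GeV."""
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / M_ZR**2)
    a = T3R - 0.5 * (Y_L + Y_R) * G_Y2 / g_R**2
    b = T3R - 0.5 * (Y_R - Y_L) * G_Y2 / g_R**2
    pref = g_R**4 / (g_R**2 - G_Y2) * M_ZR / (48.0 * math.pi)
    return N_c * pref * beta * (0.5 * (3.0 - beta**2) * a**2 + beta**2 * b**2)

# tau_R in an SU(2)_R doublet: T3R = -1/2, Y_L = -1, Y_R = -2
w_tau = gamma_ff(g_R=2.0, M_ZR=1000.0, T3R=-0.5, Y_L=-1.0, Y_R=-2.0, m_f=1.777)
print(w_tau)  # ~12 GeV for these inputs
```

Summing such partial widths over all open channels (with $N_{c}=3$ for quarks) is what produces the branching ratios and $\Gamma_{\rm total}/M_{Z_{R}}$ quoted in Table 1, and it makes explicit why singlet fermions ($T_{3R}=0$), coupling only through the $g_{Y}^{2}/g_{R}^{2}$ piece, contribute so little at large $g_{R}$.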
Although this partial decay width is suppressed by $\sin^{2}\xi$ ($\xi$ being the $Z_{L}-Z_{R}$ mixing angle), it is enhanced by a factor $(M_{Z_{R}}/M_{W_{L}})^{4}$, and could be significant. The decay width is given by [47] $$\displaystyle\Gamma(Z_{2}\rightarrow W^{+}W^{-})=\frac{g_{L}^{2}\sin^{2}\xi}{192\pi c_{W}^{2}}M_{Z_{2}}\left[\frac{M_{Z_{2}}}{M_{W}}\right]^{4}\left[1-\frac{4M_{W}^{2}}{M_{Z_{2}}^{2}}\right]^{3/2}\left[1+20\frac{M_{W}^{2}}{M_{Z_{2}}^{2}}+12\frac{M_{W}^{4}}{M_{Z_{2}}^{4}}\right]~{}.$$ (6.4) $Z_{2}$ can also decay into $h+Z$. The interaction Lagrangian for this decay in our model is given by $${\cal L}_{Z-Z_{2}-h}=g_{Y}^{2}\sqrt{\frac{g_{Y}^{2}+g_{L}^{2}}{g_{R}^{2}-g_{Y}^{2}}}\,\frac{1}{\sqrt{2}}\,\kappa_{L}Z_{1}^{\mu}Z_{2\mu}\,h\equiv f_{Z_{1}Z_{2}h}Z_{1}^{\mu}Z_{2\mu}h$$ (6.5) and the partial width is given by $$\Gamma(Z_{2}\rightarrow Z+h)=\frac{\left|f_{Z_{1}Z_{2}h}/M_{Z_{1}}\right|^{2}}{192\pi}M_{Z_{2}}\,\lambda^{1/2}\!\left[1,\frac{M_{Z_{1}}^{2}}{M_{Z_{2}}^{2}},\frac{M_{h}^{2}}{M_{Z_{2}}^{2}}\right]\left\{\lambda\!\left[1,\frac{M_{Z_{1}}^{2}}{M_{Z_{2}}^{2}},\frac{M_{h}^{2}}{M_{Z_{2}}^{2}}\right]+12\frac{M_{Z_{1}}^{2}}{M_{Z_{2}}^{2}}\right\}~{}.$$ (6.6) Here $\lambda(a,b,c)\equiv a^{2}+b^{2}+c^{2}-2ab-2ac-2bc$. In Eqs. (6.4)-(6.6), $Z_{1}$ can be identified with the SM $Z$ and $Z_{2}$ with the heavy $Z_{R}$. The branching ratios to the various fermions follow from Eq. (6.2), and the total width of $Z_{R}$ as a function of $g_{R}$ can also be computed. We consider two specific scenarios, one Parity asymmetric and one Parity symmetric. 6.1 Parity asymmetric scenario Here we focus on the case where all exotic fermions have masses larger than $M_{Z_{R}}/2$, so that $Z_{R}$ decays only into SM fermions and the three species of $\nu_{R}$, which are assumed to be light. Furthermore, as discussed in Sec. 
3, we shall assume a flipped scenario with respect to $SU(2)_{R}$, where the light chiral fermions $u_{R}$, $t_{R}$, $d_{R}$, $s_{R}$, $e_{R}$, $\mu_{R}$ are $SU(2)_{R}$ singlets (with $T_{3R}=0$), while $c_{R}$, $b_{R}$, $\tau_{R}$ as well as the three flavors of $\nu_{R}$ belong to $SU(2)_{R}$ doublets with $T_{3R}=\pm 1/2$. Numerical values of the branching ratios, defined as $$B_{\ell}=\frac{\Gamma(e^{+}e^{-})+\Gamma(\mu^{+}\mu^{-})}{\Gamma_{\rm total}},~{}~{}~{}B_{\tau}=\frac{\Gamma(\tau^{+}\tau^{-})}{\Gamma_{\rm total}},~{}~{}~{}B_{\nu}=\frac{3\Gamma(\nu_{L}\bar{\nu}_{L})+3\Gamma(\nu_{R}\bar{\nu}_{R})}{\Gamma_{\rm total}},~{}~{}~{}B_{\rm jet}=\frac{\Gamma(u\bar{u})+\Gamma(d\bar{d})+\Gamma(s\bar{s})+\Gamma(c\bar{c})+\Gamma(b\bar{b})}{\Gamma_{\rm total}},~{}~{}~{}B_{t}=\frac{\Gamma(t\bar{t})}{\Gamma_{\rm total}}$$ (6.7) as well as the total width over mass $(\Gamma_{\rm total}/M_{Z_{R}})$ are presented for this scenario in Table 1 for five different values of $g_{R}$ of interest. The branching ratio of $Z_{R}$ into di-bosons is less than 1% in the $R(D,D^{\ast})$ allowed parameter space. As the value of $g_{R}$ increases, $B_{\ell}$ decreases dramatically, reaching $B_{\ell}=1.1\times 10^{-3}$ for $g_{R}=2$. This occurs due to the flipping of $e_{R}$ and $\mu_{R}$ with $E_{1R}$ and $E_{2R}$ under $SU(2)_{R}$, a possibility facilitated by their common SM quantum numbers. This flipping means that $e_{R}$ and $\mu_{R}$ carry zero $T_{3R}$ quantum number, so they couple to $Z_{R}$ with strength proportional to $g_{Y}^{2}/g_{R}$, see Eq. (6.1). Among the light fermions, $W_{R}$ couples only to $b_{R}$, $c_{R}$, $\tau_{R}$ and $\nu_{\tau R}$, with a coupling given by $g_{R}/\sqrt{2}$.
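The suppression of $B_{\ell}$ just described can be checked directly from Eqs. (6.2)-(6.3). The following sketch is our own numerical cross-check (not code from the paper); the fermion assignments follow the Parity asymmetric scenario, and the function name is illustrative:

```python
import math

ALPHA_MZ = 1 / 127.9                        # alpha(m_Z), as quoted in the text
SW2 = 0.2315                                # s_W^2(M_Z)
GY2 = 4 * math.pi * ALPHA_MZ / (1 - SW2)    # hypercharge coupling squared, ~0.1279

def gamma_zr_ff(m_zr, g_r, t3r, y_l, y_r, m_f=0.0, n_c=1):
    """Partial width Gamma(Z_R -> f fbar), Eqs. (6.2)-(6.3); n_c is the color factor."""
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / m_zr**2)
    a_f = t3r - (y_l + y_r) / 2.0 * GY2 / g_r**2
    b_f = t3r - (y_r - y_l) / 2.0 * GY2 / g_r**2
    prefactor = g_r**4 / (g_r**2 - GY2) * m_zr / (48.0 * math.pi)
    return n_c * prefactor * beta * ((3.0 - beta**2) / 2.0 * a_f**2 + beta**2 * b_f**2)

# Parity asymmetric scenario, g_R = 2: e_R is an SU(2)_R singlet (T_3R = 0),
# while tau_R sits in a doublet (T_3R = -1/2); hypercharges Y_L = -1, Y_R = -2.
gamma_e = gamma_zr_ff(2000.0, 2.0, t3r=0.0, y_l=-1, y_r=-2)
gamma_tau = gamma_zr_ff(2000.0, 2.0, t3r=-0.5, y_l=-1, y_r=-2)
print(gamma_tau / gamma_e)   # tau channel larger by roughly two orders of magnitude
```

This mirrors the drop in $B_{\ell}$ seen in Table 1; the absolute branching ratios of course depend on the full fermion content entering $\Gamma_{\rm total}$.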
The decay width of $W_{R}$ is found to be $$\frac{\Gamma_{\rm total}}{M_{W_{R}}}=\{2.6\%,\,6\%,\,11\%,\,16.6\%,\,24\%\}~~{\rm corresponding~to}~g_{R}=(1,\,1.5,\,2.0,\,2.5,\,3.0)~{}.$$ (6.8) 6.1.1 LEP constraints $e^{+}e^{-}$ collisions at LEP above the $Z$ boson mass provide significant constraints on contact interactions involving $e^{+}e^{-}$ and any fermion pair. As it turns out, in this Parity asymmetric scenario the couplings of $Z_{R}$ to the electron (as well as the muon) are highly suppressed, and the LEP constraints are automatically satisfied for a TeV-scale $Z_{R}$. To see this, consider first the effective Lagrangian involving $(e^{+}e^{-})$ and $(\mu^{+}\mu^{-})$, which can be read off from Eq. (6.1): $${\cal L}_{\rm eff}=-\frac{g_{Y}^{4}}{g_{R}^{2}-g_{Y}^{2}}\frac{1}{M_{Z_{R}}^{2}}\frac{1}{\{1+(\Gamma_{\rm total}/M_{Z_{R}})^{2}\}^{1/2}}\left[\bar{e}_{R}\gamma_{\mu}e_{R}+\frac{1}{2}\bar{e}_{L}\gamma_{\mu}e_{L}\right]\left[\bar{\mu}_{R}\gamma^{\mu}\mu_{R}+\frac{1}{2}\bar{\mu}_{L}\gamma^{\mu}\mu_{L}\right]~{}.$$ (6.9) While the larger $T_{3R}$ contribution is absent for $e^{+}e^{-}\rightarrow e^{+}e^{-},\,\mu^{+}\mu^{-}$, it is present at the $\tau$ vertex in the process $e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}$. The LEP constraint on the scale of the contact interaction from this process is $\Lambda^{-}_{RR}>8.7$ TeV. This translates into a limit on the $Z_{R}$ mass given by $$M_{Z_{R}}>\{573,\,600,\,607,\,607,\,603\}~{\rm GeV}~~{\rm corresponding~to}~g_{R}=(1,\,1.5,\,2.0,\,2.5,\,3.0)~{}.$$ (6.10) We see that the constraints are rather weak and are automatically satisfied for a TeV-scale $Z_{R}$. Since the $\mu^{+}\mu^{-}$ and $\tau^{+}\tau^{-}$ couplings are not universal, we cannot use the simultaneous $\mu,\,\tau$ fit limits; but even if we had, the resulting constraints would still be weaker than the $Z_{R}$ mass required to explain the $R(D,D^{*})$ anomaly for a given $g_{R}$.
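The limits in Eq. (6.10) can be reproduced to good accuracy from the contact-interaction bound alone. The sketch below is our own back-of-the-envelope estimate: it assumes the conventional $4\pi/\Lambda^{2}$ normalization of the contact operator and neglects the width factor appearing in Eq. (6.9):

```python
import math

ALPHA_MZ, SW2 = 1 / 127.9, 0.2315
GY2 = 4 * math.pi * ALPHA_MZ / (1 - SW2)      # hypercharge coupling squared

def mzr_limit_from_lep(g_r, lam_rr_gev=8.7e3):
    """Lower bound on M_{Z_R} (GeV) from Lambda^-_RR > 8.7 TeV in e+e- -> tau+tau-."""
    pref = g_r**2 / math.sqrt(g_r**2 - GY2)
    c_e = pref * (GY2 / g_r**2)                # e_R: T_3R = 0, Y_R = -2
    c_tau = pref * (-0.5 + GY2 / g_r**2)       # tau_R: T_3R = -1/2, Y_R = -2
    # matching |c_e * c_tau| / M^2 = 4*pi / Lambda^2 (assumed LEP convention)
    return lam_rr_gev * math.sqrt(abs(c_e * c_tau) / (4.0 * math.pi))

for g_r in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(g_r, mzr_limit_from_lep(g_r))   # within a few GeV of Eq. (6.10)
```

The residual few-GeV differences at larger $g_{R}$ are consistent with the width factor dropped here.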
Other processes, such as $e^{+}e^{-}\rightarrow\bar{c}c$ and $e^{+}e^{-}\rightarrow\bar{b}b$, provide somewhat weaker constraints than the ones quoted in Eq. (6.10). 6.1.2 LHC constraints Important constraints on this model arise from the resonant production of $Z_{R}$ and $W_{R}$ at the LHC. Consider first $Z_{R}$ production. Due to the flavor structure, the couplings of $Z_{R}$ to the $u$ and $d$ quarks are suppressed, going as $g_{Y}^{2}/g_{R}$ (see Eq. (6.1)). With these suppression factors, the production cross-section of $Z_{R}$ at the LHC is smaller than that of the $Z^{\prime}$ of the sequential standard model. The cross-sections are shown in Table 2, computed with a K-factor of 1.3. The dominant constraint arises from the dilepton (with $e$ and $\mu$) final states; the branching ratios are shown in Table 1. Combining the branching ratio with the production cross-section, we find that for $g_{R}=1$ the cross-section exceeds the experimental constraint. However, for $g_{R}=1.5$, $\sigma\times Br(Z_{R}\rightarrow l^{+}l^{-})$ is $2\times 10^{-4}$ pb, well below the experimental bound of $4\times 10^{-4}$ pb [49, 50]. We find that the parameter space with $Z_{R}$ mass $>1.2$ TeV and $g_{R}>1.2$ is allowed by the current LHC constraint. The dijet resonance cross-section $\sigma\times Br(Z_{R}\rightarrow jj)$ is 0.29 pb (for $g_{R}=1.5$), to be compared with the experimental bound of 0.6 pb [52, 53]; therefore $M_{Z_{R}}>1$ TeV with $g_{R}>1$ is allowed. The search for $W_{R}$ is difficult in this model, since it does not couple to the first generation quarks. However, it can still be produced via gluon-$b$ and gluon-$c$ fusion, as shown in Ref. [29], since $W_{R}$ couples only to the $\overline{b}_{R}\gamma_{\mu}c_{R}$ current in the quark sector. In this case the cross-section is suppressed compared to the case where $W_{R}$ couples to the $u,\,d$ partons in the proton.
For example, for $g_{R}=1$ and $M_{W_{R}}=1$ TeV, the $W_{R}$ production cross-section is 0.5 pb, which is allowed by the direct search ($\sim 0.6$ pb) in dijet final states [52, 53]. A similar conclusion holds for resonance searches in the $\tau\nu$ final state [54, 55]. In Fig. 6 we show that the LHC requires $M_{W_{R}}\geq 1.2$ TeV in the $R(D,D^{\ast})$ allowed region. 6.2 Parity symmetric scenario In this case we again assume that all the exotic fermions have masses large enough to kinematically forbid $Z_{R}$ from decaying into those states. $Z_{R}$ can then decay only into SM fermion pairs, as well as into pairs of the three $\nu_{R}$ species, which are assumed to be light. In this case $(u_{R},c_{R})$ as well as $(d_{R},b_{R})$ are taken to be members of $SU(2)_{R}$ doublets with $T_{3R}=\pm 1/2$, as are the $(\mu_{R},\tau_{R})$ leptons. On the other hand, $e_{R}$ is an $SU(2)_{R}$ singlet with $T_{3R}=0$, a possibility which arises from the flipping of $e_{R}$ and $E_{1R}$. Similarly, $t_{R}$ and $s_{R}$ are $SU(2)_{R}$ singlets. For this scenario, the branching ratios for $Z_{R}$ decays into various channels, as well as the total width to mass ratio of $Z_{R}$, are listed in Table 3 as functions of $g_{R}$. The branching ratio of $Z_{R}$ into di-bosons is again less than 1% in the $R(D,D^{\ast})$ allowed parameter space. As can be seen from Table 3, the branching ratio for $Z_{R}$ decaying into leptons is relatively stable under variations of $g_{R}$. While $Z_{R}\rightarrow e^{+}e^{-}$ decreases drastically with increasing $g_{R}$, the corresponding branching ratio for $Z_{R}\rightarrow\mu^{+}\mu^{-}$ does not change much and contributes dominantly to $B_{\ell}$. This has to do with the flipping of $e_{R}$ with $E_{1R}$, without flipping $\mu_{R}$ with $E_{2R}$ as was done in the Parity asymmetric scenario of Table 1.
Among the light fermions, $W_{R}$ couples to $\overline{c}_{R}\gamma_{\mu}b_{R}$ as well as to $\overline{\mu}_{R}\gamma_{\mu}\nu_{\mu_{R}}$ and $\overline{\tau}_{R}\gamma_{\mu}\nu_{\tau_{R}}$. The decay width of $W_{R}$ is found to be $$\frac{\Gamma_{\rm total}}{M_{W_{R}}}=\{3.3\%,\,7.5\%,\,13.3\%,\,20.7\%,\,29.8\%\}~~{\rm for}~g_{R}=(1,\,1.5,\,2.0,\,2.5,\,3.0)~{}.$$ (6.11) 6.2.1 LEP constraints In this scenario the LEP constraints are slightly stronger than those obtained in the Parity asymmetric scenario, although the difference is small. Since the $Z_{R}$ couplings to $\mu$ and $\tau$ are the same, we use the simultaneous $\mu,\,\tau$ fit limits, which provide the strongest bound. The LEP limit on the scale of the contact interaction from the process $e^{+}e^{-}\rightarrow l^{+}l^{-}$ is $\Lambda^{-}_{RR}>9.3$ TeV, which implies the following limits on the $Z_{R}$ gauge boson mass: $$M_{Z_{R}}>\{611,\,634,\,638,\,624,\,598\}~{\rm GeV}~~{\rm corresponding~to}~g_{R}=(1,\,1.5,\,2.0,\,2.5,\,3.0)~{}.$$ (6.12) These constraints are slightly more stringent than the ones obtained from $e^{+}e^{-}\rightarrow\tau^{+}\tau^{-}$ in the Parity asymmetric scenario (see Eq. (6.10)), but not by much. All other LEP processes give weaker constraints. We conclude that a $Z_{R}$ mass of order 1 TeV is fully consistent with LEP data in this Parity symmetric scenario as well. It is to be noted that this weakened constraint is a result of the flipping of $e_{R}$ with $E_{1R}$. 6.2.2 LHC constraints In the Parity symmetric model, $Z_{R}$ (and $W_{R}$ for $V_{R}$ of the form in Eq. (4.14)) couple to the first generation quarks with sizable strength, which makes their production cross-sections large at the LHC. However, due to the large values of $g_{R}$, the model has large decay widths for $Z_{R}$ and $W_{R}$ when $g_{R}\geq 1$, see Table 3 and Eq. (6.11). This complicates the extraction of constraints from LHC searches.
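The large width-to-mass ratios of Eqs. (6.8) and (6.11), which underlie this difficulty, follow from simple channel counting: each massless fermion doublet contributes $g_{R}^{2}M_{W_{R}}/(48\pi)$ to the $W_{R}$ width, with a color factor of 3 for quarks. A minimal sketch (our own check, neglecting final-state masses):

```python
import math

def wr_width_over_mass(g_r, n_channels):
    """Gamma_total/M_WR for W_R decaying to n_channels massless fermion doublets
    (quark channels counted three times for color)."""
    return n_channels * g_r**2 / (48.0 * math.pi)

# Parity asymmetric: (c,b) quarks (x3 colors) + (tau, nu_tauR)            -> 4 channels
# Parity symmetric:  (c,b) quarks (x3 colors) + (mu, nu) + (tau, nu)      -> 5 channels
asym = [100 * wr_width_over_mass(g, 4) for g in (1.0, 1.5, 2.0, 2.5, 3.0)]
sym = [100 * wr_width_over_mass(g, 5) for g in (1.0, 1.5, 2.0, 2.5, 3.0)]
print(asym)   # close to {2.6, 6, 11, 16.6, 24}% of Eq. (6.8)
print(sym)    # close to {3.3, 7.5, 13.3, 20.7, 29.8}% of Eq. (6.11)
```

The symmetric-case numbers are reproduced essentially exactly; the small offsets in the asymmetric case are at the level expected from the neglected final-state masses.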
The dilepton resonance search analyses which provide the best constraints on the $Z_{R}$ mass at the LHC [49, 50] are based on narrow resonances. In this final state, the maximum values of $\Gamma/M_{Z_{R}}$ used in the analyses are 10% for CMS [51] and $\sim$30% for ATLAS [49]. For larger $\Gamma/M_{Z_{R}}$, the constraint on the production cross-section is relaxed compared to the narrow resonance case; e.g., Ref. [49] shows that the cross-section limit weakens by a factor of 2 for the maximum $\Gamma/M_{Z_{R}}\sim 30$% that has been investigated. The dijet resonance search analyses also constrain the $Z_{R}$ and $W_{R}$ masses [52, 53]; however, the LHC constraints exist only for $\Gamma/M_{Z_{R},W_{R}}\leq 30\%$. The constraint on the production cross-section relaxes as $\Gamma/M_{Z_{R},W_{R}}$ increases; e.g., for $\Gamma/M_{Z_{R},W_{R}}\sim 30\%$, the cross-section limit weakens by an order of magnitude [53]. No LHC analysis exists for any final state with $\Gamma/M_{Z_{R},W_{R}}>30$%, which occurs when $g_{R}>2.2$. From Fig. 5 we see that $g_{R}>2.2$ can occur for $M_{W_{R}}>1.8$ TeV. A large-width resonance is difficult to extract from the continuum background unless the experimental analysis is able to reduce the background yield to a negligible level; a new analysis is imperative to search for such large decay width scenarios. In Fig. 7 we show the region of parameter space allowed by the current LHC data. We see from this that $M_{W_{R}}\geq 1.8$ TeV is allowed. 7 Cosmological and astrophysical constraints In this section we comment on various cosmological and astrophysical constraints that should be satisfied by the model. Some of the constraints arise from the light $\nu_{\tau R}$ needed for the $R(D^{*},D)$ anomaly in our framework, while others have to do with the adopted flavor structure.
7.1 Supernova constraints A light $\nu_{R}$ may be produced inside the supernova core if its mass is below about 100 MeV. This is indeed the case for the $\nu_{\tau R}$ needed in our model for the $R(D^{*},D)$ anomaly. If the interactions of the light $\nu_{R}$ with the supernova matter are too weak, the $\nu_{R}$ will escape, contradicting the observation of the neutrino burst from SN1987a. If the $\nu_{R}$ interacts sufficiently with supernova matter it may be trapped inside, in which case the constraints are relaxed. Here we follow the crude analytic model studied in Ref. [56] to derive the allowed parameter space from SN1987a observations. The model of Ref. [56] assumes a constant core density of $\rho_{C}\simeq 8\times 10^{14}$ g/cm${}^{3}$, corresponding to a total mass of $M\simeq 1.4M_{\rm Sun}$, a radius $R_{C}\simeq 10^{6}$ cm, and a temperature $T_{C}=(30-70)$ MeV. For the calculation of the right-handed neutrino sphere, the density profile outside the core was assumed to be $\rho(R)=\rho_{C}(R_{C}/R)^{m}$ with $m=3-7$. This uncertainty in the density profile, as well as the uncertainty in the core temperature, leads to considerable uncertainty in the $\nu_{R}$ interaction strength allowed by SN1987a observations. Ref. [56] also assumes that the energy loss in $\nu_{R}$ emission should be less than about 20 times the energy loss in $\nu_{L}$ emission. Under these assumptions the following region in an effective mass $M_{N}$ was found to be excluded: $$(2.4-4.3)M_{W_{L}}\leq M_{N}\leq(7.5-40)M_{W_{L}}~{}.$$ (7.1) This limit arises from the neutral current process $e^{+}e^{-}\rightarrow\overline{\nu}_{R}\nu_{R}$, whose cross section was parametrized as $$\sigma(e^{+}e^{-}\rightarrow\overline{\nu}_{R}\nu_{R})=\frac{G_{F}^{2}s}{12\pi}\left[\frac{M_{W_{L}}}{M_{N}}\right]^{4}~{}.$$ (7.2) In our model, the neutral current process $e^{+}e^{-}\rightarrow\overline{\nu}_{R}\nu_{R}$ does occur.
The cross section for this process, in both the Parity symmetric and asymmetric versions, is given by $$\sigma(e^{+}e^{-}\rightarrow\overline{\nu}_{R}\nu_{R})=\left(\frac{5}{16}\right)\frac{1}{48\pi}\frac{g_{Y}^{4}g_{R}^{4}}{(g_{R}^{2}-g_{Y}^{2})^{2}}\frac{s}{M^{4}_{Z_{R}}}~{}.$$ (7.3) The exclusion region is then obtained for various values of $g_{R}$ as $$(239-429)~{\rm GeV}\leq M_{Z_{R}}\leq(748-3890)~{\rm GeV}~~~~(g_{R}=2)$$ $$(252-452)~{\rm GeV}\leq M_{Z_{R}}\leq(788-4203)~{\rm GeV}~~~~(g_{R}=1)~{}.$$ (7.4) Of these exclusion regions one should take the weaker limit, which is found to be consistent with the range of parameters needed for explaining the $R(D^{*},D)$ anomaly. We note in passing that the charged current $W_{R}$ interactions do not lead to the neutronization process $e_{R}p\rightarrow\nu_{R}n$ in our model, since $W_{R}^{\pm}$ has no coupling to the electron. 7.2 Other constraints In the Parity asymmetric model, in the heavy quark sector, we see that in the limit $\epsilon_{i}\to 0$ in Eq. (3.2) the lightest of the $(U,D,S,T)$ quarks remains stable and does not annihilate fast enough, so that it can over-close the universe. The reason is that for TeV mass colored particles, the only annihilation channel at temperatures $T\leq M_{Q}$ is into two light quarks, i.e. $Q\bar{Q}\to q\bar{q}$ via gluon exchange. This cross section goes as $\sigma_{Q\bar{Q}}\sim\frac{\alpha^{2}_{s}}{M^{2}_{Q}}$, which is $\lesssim{\rm pb}$. This implies that these quarks would either form the dark matter of the universe, which is unacceptable since they are colored particles, or, worse, over-close the universe. We therefore need the lightest of the heavy quarks to decay. Once $\epsilon_{1,2}$ are turned on in our model, the lightest heavy quark can decay to $b$ and $c$ quarks, which in turn decay via the left-handed CKM matrix $V_{L}$ to leptons and follow the usual cosmology.
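The size of the mixing $\epsilon$ needed for these decays to happen early enough can be estimated by comparing the decay rate with the Hubble rate $H\sim g_{*}^{1/2}T^{2}/M_{\rm Pl}$. The parameter choices in the sketch below ($g_{R}=2$, $M_{W_{R}}=M_{Q}=2$ TeV, $g_{*}=100$) are our own illustrative assumptions, not values fixed by the paper:

```python
import math

M_PLANCK = 1.22e19   # GeV
G_STAR = 100.0       # relativistic degrees of freedom (assumed)

def decay_temperature(eps, g_r=2.0, m_wr=2000.0, m_q=2000.0):
    """Temperature T_d (GeV) at which Gamma_Q equals the Hubble rate,
    with Gamma_Q ~ g_R^4 eps^2 M_Q^5 / (192 pi^3 M_WR^4)."""
    gamma_q = g_r**4 * eps**2 * m_q**5 / (192.0 * math.pi**3 * m_wr**4)
    return math.sqrt(gamma_q * M_PLANCK / math.sqrt(G_STAR))

print(decay_temperature(1e-9))   # a few GeV, so eps >~ 1e-9 gives decay above T = 1 GeV
```

Since $T_{d}\propto\epsilon$, the quoted lower bound of $10^{-9}$ emerges directly from requiring $T_{d}>1$ GeV under these assumptions.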
The typical decay rate for these fermions can be estimated as $\Gamma_{Q}\sim\frac{g^{4}_{R}}{192\pi^{3}M^{4}_{W_{R}}}\epsilon^{2}M^{5}_{Q}$, and the temperature at which they decay can be estimated by using $$\Gamma_{Q}\sim g^{1/2}_{*}\frac{T^{2}_{d}}{M_{Pl}}~{}.$$ (7.5) For these decays to happen above a temperature of the universe $T>1$ GeV, we need $\epsilon_{1,2}\geq 10^{-9}$ [37]. This is a rather weak constraint and is easily satisfied without contradicting any other phenomenology. Similarly, in the lepton sector we can introduce small mixings among the right-handed leptons to make the heavy neutral and charged leptons decay above $T\sim 1$ GeV, avoiding conflict with BBN requirements. The existence of light $\nu_{R}$ states can modify big bang nucleosynthesis. If the $\nu_{R}$ decouple from the plasma above the QCD phase transition temperature, then their contribution to the effective number of neutrino species is about 0.1 per $\nu_{R}$ species. Even with all three $\nu_{R}$ being light, this excess is consistent with BBN constraints. As noted in footnote 4, if the light $\nu_{R}$ also play a role in the short baseline neutrino anomalies from LSND and MiniBooNE, then large scale structure formation constraints become important [23] within the $\Lambda$CDM paradigm. Secret neutrino interactions can potentially relax these limits [24]; we have not explored this possibility here. 8 Discussion and conclusion Before concluding, we make a few observations of a theoretical nature on the model presented here. 1. In the Parity asymmetric model, several vector-like fermions acquire masses from the right-handed Higgs mechanism. As seen from Eq. (3.5) and Eq. (3.9), the masses of the $U,T,D,S$ quarks as well as the $E_{1}$ and $E_{2}$ leptons arise from $Y_{i}\kappa_{R}$. Perturbativity of the Yukawa couplings then implies that these vector-like fermions have masses not much above $\kappa_{R}\simeq(1.5-2.5)$ TeV.
This can be made more precise by examining partial wave unitarity in the process $\overline{f}f\rightarrow\overline{f}f$ mediated by the $Z_{R}$ and $W_{R}$ gauge bosons. Such an analysis in the context of the SM leads to a limit of 550 GeV on the mass of a fourth generation quark [57]. For $N$ degenerate generations of quarks, this is strengthened by a factor of $1/\sqrt{N}$. These results can be readily scaled up to the masses of the vector-like quarks of our model. For four degenerate quarks we find $M_{Q}\leq 2.24$ TeV, for $M_{W_{R}}=2$ TeV and $g_{R}=2.0$. Other processes, such as $\overline{f}f\rightarrow W_{R}^{+}W_{R}^{-}$, can also yield useful limits. Using the results of Ref. [58] we obtain $M_{Q}\leq 5.6$ TeV, which is somewhat weaker. In the Parity symmetric scenario the mass of the vector-like partner of the electron is given by $Y\kappa_{R}$. The partial wave unitarity limit on a fourth generation SM lepton mass is 1 TeV, which can be scaled to obtain a limit of $M_{E}\leq 4.5$ TeV for the vector-like partner of the electron. In the $P$ asymmetric case, since two such vector-like leptons acquire their masses from $\kappa_{R}$, the partial wave unitarity limit on their (common) mass is $M_{E}<3.2$ TeV. 2. The boundedness of the Higgs potential of Eq. (2.11) places an upper limit on the masses of fermions generated by the Higgs mechanism. In the Parity asymmetric model, four quarks and two leptons acquire such masses. If these Yukawa couplings are large, the quartic coupling $\lambda_{1R}$ will turn negative at higher energies; this should not happen for at least an order of magnitude above the symmetry breaking scale. Demanding this leads to an upper limit on the vector-like fermion masses.
To see this, we can examine the renormalization group evolution equation for $\lambda_{1R}$, which is given by $$16\pi^{2}\frac{d\lambda_{1R}}{dt}=12\lambda_{1R}^{2}+4\lambda_{2}^{2}-\lambda_{1R}(3g_{B}^{2}+9g_{R}^{2})+\frac{3}{4}g_{B}^{4}+\frac{3}{2}g_{B}^{2}g_{R}^{2}+\frac{9}{4}g_{R}^{4}+\lambda_{1R}{\rm Tr}\left(3Y_{U}^{\prime\dagger}Y_{U}^{\prime}+3Y_{D}^{\prime\dagger}Y_{D}^{\prime}+Y_{E}^{\prime\dagger}Y_{E}^{\prime}\right)-4{\rm Tr}\left(3(Y_{U}^{\prime\dagger}Y_{U}^{\prime})^{2}+3(Y_{D}^{\prime\dagger}Y_{D}^{\prime})^{2}+(Y_{E}^{\prime\dagger}Y_{E}^{\prime})^{2}\right)~{}.$$ (8.1) The full set of RGEs for the Yukawa couplings in a closely related universal seesaw model can be found in Ref. [59]. With four degenerate quark and two lepton fields, demanding that $\lambda_{1R}$ remain positive up to a scale of $10\kappa_{R}$ gives a limit on these fermion masses of about 2.5 TeV. This limit depends on the initial value of $\lambda_{1R}$: the upper limit on the vector-like fermion masses is $M_{F}\leq(1.6,\,1.9,\,2.2,\,2.5)$ TeV, corresponding to the initial values $\lambda_{1R}=(1.0,\,2.0,\,3.0,\,4.0)$ (keeping $\lambda_{2}$ fixed at $0.7$). 3. We have used relatively large values of the $SU(2)_{R}$ gauge coupling $g_{R}$. Perturbation theory is nevertheless still valid, as the theory is asymptotically free. If the Higgs fields of the model are not present, the $SU(2)_{R}$ theory is one with $N_{f}=6$ (that is, with twelve doublets), which has been studied non-perturbatively on the lattice [60, 61]. The phase diagram of such a theory appears to be emerging, with $N_{f}=6$ lying close to the boundary of the conformal window. Since we Higgs the theory, the gauge coupling $g_{R}$ increases in running from higher to lower energies, until the Higgsing occurs. A fixed point value of $g_{*}^{2}\simeq 14.5$ was found in Ref. [61].
Just before the theory would acquire this fixed point value, we assume that spontaneous symmetry breaking occurs. A semi-perturbative value of $g_{R}\sim(2.0-3.0)$ appears quite reasonable in this case. 4. Our model (and universal seesaw models in general) does not grand unify into conventional GUT groups such as $SU(5)$ or $SO(10)$. However, these models can be embedded into grand unified symmetries based on $SU(5)\times SU(5)$ or $SO(10)\times SO(10)$. For the former possibility, and as one example of how unification works in such models, see Ref. [62]. The unification of gauge couplings occurs in multiple steps and is therefore somewhat nontrivial. Proton decay mediated by the gauge bosons in such models leads to the dominance of the $p\rightarrow e^{+}\pi^{0}$ decay mode, with a lifetime estimated to be near the current experimental limit. In summary, we have presented a UV complete theory that resolves the $R(D^{*},D)$ anomaly, based on left-right gauge symmetry with a low mass $W_{R}$ and a relatively large $g_{R}$. Two versions of the theory were developed, one with softly broken parity symmetry and one without parity. In the former case the model solves the strong CP problem with parity symmetry, without invoking the Peccei-Quinn symmetry and the resulting axion. In each case we have presented flavor structures that lead to a consistent explanation of the $R(D^{*},D)$ anomaly in terms of right-handed currents, compatible with low energy flavor violation constraints. The charged $W_{R}^{\pm}$ that mediates new contributions to $B$ decays is accompanied by a neutral $Z_{R}^{0}$, nearly degenerate in mass with the $W_{R}^{\pm}$. LEP and LHC experiments provide stringent limits on these relatively low mass gauge bosons. Their discovery would be somewhat challenging, since their total widths turn out to be 20% or more of their masses once the $R(D^{*},D)$ anomaly is explained.
The parity asymmetric version of the model has several vector-like quarks that acquire masses via the Higgs mechanism. These masses cannot be greater than about 2.5 TeV, to be consistent with perturbative unitarity and an understanding of the $R(D^{*},D)$ anomaly. In the parity symmetric version, the top quark partner is predicted to have a mass $M_{T}=(1.5-2.5)$ TeV. A vector-like electron partner with a mass less than 4.5 TeV is expected in both cases. Along with the gauge bosons, these vector-like fermions provide a rich spectrum waiting to be explored at the LHC. Acknowledgements We would like to thank Julian Calle, Bogdan Dobrescu, Ricardo Eusebi, George Fleming, Sudip Jana, Teruki Kamon and Jure Zupan for very helpful discussions. The work of KSB is supported by U.S. Department of Energy Grant No. DE-SC0016013 and by a Fermilab Distinguished Scholar program. The work of BD is supported by DOE Grant No. DE-SC0010813. The work of RNM is supported by the US National Science Foundation under Grant No. PHY-1620074. KSB is thankful to the Mitchell Institute at Texas A&M University for hospitality during the workshop on “Collider, Dark Matter and Neutrino Physics, 2018”. KSB and RNM acknowledge the hospitality of the Bethe Center for Theoretical Physics, University of Bonn during the workshop on “Grand Unification and the Real World”. KSB and BD are thankful to the Theory Group at Fermilab for hospitality during a summer visit. References [1] BaBar Collaboration, J. P. Lees et al., Phys. Rev. Lett. 109, 101802 (2012), 1205.5442. [2] BaBar Collaboration, J. P. Lees et al., Phys. Rev. D88, 072012 (2013), 1303.0571. [3] Belle Collaboration, M. Huschle et al., Phys. Rev. D92, 072014 (2015), 1507.03233. [4] Belle Collaboration, A. Abdesselam et al., (2016), 1603.06711. [5] Belle Collaboration, A. Abdesselam et al., (2016), 1608.06391. [6] LHCb Collaboration, R. Aaij et al., Phys. Rev. Lett. 115, 111803 (2015), 1506.08614, [Addendum: Phys. Rev. Lett. 115, no.15, 159901 (2015)]. [7] R. Aaij et al.
[LHCb Collaboration], LHCB-PAPER-2017-035, CERN-EP-2017-275, [arXiv:1711.05623]. [8] X. G. He and G. Valencia, Phys. Rev. D 87, no. 1, 014014 (2013) [arXiv:1211.0348 [hep-ph]]; X. G. He and G. Valencia, Phys. Lett. B 779, 52 (2018) [arXiv:1711.09525 [hep-ph]]. [9] A. Greljo, D. J. Robinson, B. Shakya and J. Zupan, arXiv:1804.04642 [hep-ph]; D. Robinson, B. Shakya and J. Zupan, arXiv:1807.04753 [hep-ph]. [10] P. Asadi, M. R. Buckley and D. Shih, JHEP 1809, 010 (2018) [arXiv:1804.04135 [hep-ph]]; P. Asadi, M. R. Buckley and D. Shih, arXiv:1810.06597 [hep-ph]. [11] J. C. Pati and A. Salam, Phys. Rev. D 10, 275 (1974); R. N. Mohapatra and J. C. Pati, Phys. Rev. D 11, 566, 2558 (1975); G. Senjanović and R. N. Mohapatra, Phys. Rev. D 12, 1502 (1975). [12] G. Beall, M. Bander and A. Soni, Phys. Rev. Lett. 48, 848 (1982). [13] See for e.g., Y. Zhang, H. An, X. Ji and R. N. Mohapatra, Nucl. Phys. B 802, 247 (2008) [arXiv:0712.4218 [hep-ph]]; A. Maiezza, M. Nemevsek, F. Nesti and G. Senjanovic, Phys. Rev. D 82, 055022 (2010). [14] See for e.g.: M. Nemevsek, F. Nesti, G. Senjanovic and Y. Zhang, Phys. Rev. D 83, 115014 (2011) [arXiv:1103.1627 [hep-ph]]. [15] For relaxing constraints on $Z^{\prime}$ versus $W^{\prime}$ see: B. A. Dobrescu and P. J. Fox, JHEP 1605, 047 (2016) [arXiv:1511.02148 [hep-ph]]; P. S. B. Dev, R. N. Mohapatra and Y. Zhang, Nucl. Phys. B 923, 179 (2017) [arXiv:1703.02471 [hep-ph]]. [16] P. Langacker and S. U. Sankar, Phys. Rev. D 40, 1569 (1989). [17] Z. G. Berezhiani, Phys. Lett. B 129, 99 (1983). [18] D. Chang and R. N. Mohapatra, Phys. Rev. Lett. 58, 1600 (1987). [19] A. Davidson and K. C. Wali, Phys. Rev. Lett. 59, 393 (1987); S. Rajpoot, Mod. Phys. Lett. A 2, 307 (1987). [20] K. S. Babu and R. N. Mohapatra, Phys. Rev. Lett. 62, 1079 (1989); K. S. Babu and R. N. Mohapatra, Phys. Rev. D 41, 1286 (1990). [21] A. A. Aguilar-Arevalo et al. [MiniBooNE Collaboration], arXiv:1805.12028 [hep-ex]; A. A. Aguilar-Arevalo et al. [MiniBooNE Collaboration], Phys.
Rev. Lett. 110, 161801 (2013) [arXiv:1303.2588 [hep-ex]]. [22] C. Athanassopoulos et al. [LSND Collaboration], Phys. Rev. Lett. 81, 1774 (1998) [nucl-ex/9709006]; A. Aguilar-Arevalo et al. [LSND Collaboration], Phys. Rev. D 64, 112007 (2001) [hep-ex/0104049]. [23] P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 571, A16 (2014) [arXiv:1303.5076 [astro-ph.CO]]; N. Aghanim et al. [Planck Collaboration], arXiv:1807.06209 [astro-ph.CO]. [24] K. S. Babu and I. Z. Rothstein, Phys. Lett. B 275, 112 (1992); B. Dasgupta and J. Kopp, Phys. Rev. Lett. 112, no. 3, 031803 (2014) [arXiv:1310.6337 [hep-ph]]; S. Hannestad, R. S. Hansen and T. Tram, Phys. Rev. Lett. 112, no. 3, 031802 (2014) [arXiv:1310.5926 [astro-ph.CO]]; J. F. Cherry, A. Friedland and I. M. Shoemaker, arXiv:1605.06506 [hep-ph]. [25] For a recent study see: L. J. Hall and K. Harigaya, arXiv:1803.08119 [hep-ph]. [26] K. S. Babu, D. Eichler and R. N. Mohapatra, Phys. Lett. B 226, 347 (1989). [27] Y. Sakaki, M. Tanaka, A. Tayduganov and R. Watanabe, Phys. Rev. D 88, no. 9, 094012 (2013) [arXiv:1309.0301 [hep-ph]]; B. Gripaios, M. Nardecchia and S. A. Renner, JHEP 1505, 006 (2015) [arXiv:1412.1791 [hep-ph]]; S. Sahoo and R. Mohanta, Phys. Rev. D 91, no. 9, 094019 (2015) [arXiv:1501.05193 [hep-ph]]; L. Calibbi, A. Crivellin and T. Ota, Phys. Rev. Lett. 115, 181801 (2015) [arXiv:1506.02661 [hep-ph]]; R. Alonso, B. Grinstein and J. Martin Camalich, JHEP 1510, 184 (2015); M. Bauer and M. Neubert, Phys. Rev. Lett. 116, no. 14, 141802 (2016) [arXiv:1511.01900 [hep-ph]]; S. Fajfer and N. Košnik, Phys. Lett. B 755, 270 (2016) [arXiv:1511.06024 [hep-ph]]; R. Barbieri, G. Isidori, A. Pattori and F. Senia, Eur. Phys. J. C 76, no. 2, 67 (2016) [arXiv:1512.01560 [hep-ph]]; D. Das, C. Hati, G. Kumar and N. Mahajan, Phys. Rev. D 94, 055034 (2016) [arXiv:1605.06313 [hep-ph]]; D. Bečirević, S. Fajfer, N. Košnik and O. Sumensari, Phys. Rev. D 94, no. 11, 115021 (2016) [arXiv:1608.08501 [hep-ph]]; X. Q. Li, Y. D.
Yang and X. Zhang, JHEP 1608, 054 (2016) [arXiv:1605.09308 [hep-ph]]; C. H. Chen, T. Nomura and H. Okada, Phys. Lett. B 774, 456 (2017) [arXiv:1703.03251 [hep-ph]]; S. Sahoo, R. Mohanta and A. K. Giri, Phys. Rev. D 95, no. 3, 035027 (2017) [arXiv:1609.04367 [hep-ph]]; D. A. Faroughy, A. Greljo and J. F. Kamenik, Phys. Lett. B 764, 126 (2017) [arXiv:1609.07138 [hep-ph]]; G. Hiller, D. Loose and K. Schönwald, JHEP 1612, 027 (2016) [arXiv:1609.08895 [hep-ph]]; O. Popov and G. A. White, Nucl. Phys. B 923, 324 (2017) [arXiv:1611.04566 [hep-ph]]; B. Bhattacharya, A. Datta, J. P. Guévin, D. London and R. Watanabe, JHEP 1701, 015 (2017) [arXiv:1609.09078 [hep-ph]]; A. Crivellin, D. Müller and T. Ota, JHEP 1709, 040 (2017) [arXiv:1703.09226 [hep-ph]]; L. Di Luzio and M. Nardecchia, Eur. Phys. J. C 77, no. 8, 536 (2017); M. Blanke and A. Crivellin, Phys. Rev. Lett.  121, no. 1, 011801 (2018) [arXiv:1801.07256 [hep-ph]]; A. Monteux and A. Rajaraman, arXiv:1803.05962 [hep-ph]; D. Becirevic, I. Dorsner, S. Fajfer, N. Kosnik, D. A. Faroughy and O. Sumensari, Phys. Rev. D 98, no. 5, 055003 (2018) [arXiv:1806.05689 [hep-ph]]; A. Biswas, D. Kumar Ghosh, N. Ghosh, A. Shaw and A. K. Swain, arXiv:1808.04169 [hep-ph]; A. Angelescu, D. Becirevic, D. A. Faroughy and O. Sumensari, arXiv:1808.08179 [hep-ph]; J. Heeck and D. Teresi, arXiv:1808.07492 [hep-ph]; S. Bansal, R. M. Capdevilla and C. Kolda, arXiv:1810.11588 [hep-ph]. [28] B. Bhattacharya, A. Datta, D. London and S. Shivashankara, Phys. Lett. B 742, 370 (2015) [arXiv:1412.7164 [hep-ph]]; A. Greljo, G. Isidori and D. Marzocca, JHEP 1507, 142 (2015) [arXiv:1506.01705 [hep-ph]]; S. Bhattacharya, S. Nandi and S. K. Patra, Phys. Rev. D 93, no. 3, 034011 (2016) [arXiv:1509.07259 [hep-ph]]; S. M. Boucenna, A. Celis, J. Fuentes-Martin, A. Vicente and J. Virto, Phys. Lett. B 760, 214 (2016) [arXiv:1604.03088 [hep-ph]]; JHEP 1612, 059 (2016) [arXiv:1608.01349 [hep-ph]]; D. Bardhan, P. Byakti and D. 
Light-curve and spectral properties of ultra-stripped core-collapse supernovae leading to binary neutron stars Takashi J. Moriya${}^{1,2}$, Paolo A. Mazzali${}^{3,4}$, Nozomu Tominaga${}^{5,6}$, Stephan Hachinger${}^{7}$, Sergei I. Blinnikov${}^{8,9,6}$, Thomas M. Tauris${}^{10,2}$, Koh Takahashi${}^{11}$, Masaomi Tanaka${}^{1,6}$, Norbert Langer${}^{2}$, and Philipp Podsiadlowski${}^{12,2}$ ${}^{1}$ Division of Theoretical Astronomy, National Astronomical Observatory of Japan, National Institutes of Natural Sciences, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan ${}^{2}$ Argelander Institute for Astronomy, University of Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany ${}^{3}$ Astrophysics Research Institute, Liverpool John Moores University, IC2, Liverpool Science Park, 146 Brownlow Hill, Liverpool L3 5RF, UK ${}^{4}$ Max Planck Institute for Astrophysics, Karl-Schwarzschild-Straße 1, D-85748 Garching, Germany ${}^{5}$ Department of Physics, Faculty of Science and Engineering, Konan University, 8-9-1 Okamoto, Kobe, Hyogo 658-8501, Japan ${}^{6}$ Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8583, Japan ${}^{7}$ Leibniz Supercomputing Centre (LRZ), Bavarian Academy of Sciences and Humanities, Boltzmannstraße 1, D-85748 Garching, Germany ${}^{8}$ Institute for Theoretical and Experimental Physics, Bolshaya Cheremushkinskaya ulitsa 25, 117218 Moscow, Russia ${}^{9}$ All-Russia Research Institute of Automatics, Sushchevskaya ulitsa 22, 127055 Moscow, Russia ${}^{10}$ Max Planck Institute for Radio Astronomy, Auf dem Hügel 69, D-53121 Bonn, Germany ${}^{11}$ Department of Astronomy, Graduate School of Science, The University of Tokyo, Hongo 7-3-1, Bunkyo, Tokyo 113-0033, Japan ${}^{12}$ Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK takashi.moriya@nao.ac.jp (Accepted 2016 December
8. Received 2016 December 8; in original form 2016 August 25) Abstract We investigate light-curve and spectral properties of ultra-stripped core-collapse supernovae. Ultra-stripped supernovae are the explosions of heavily stripped massive stars which lost their envelopes via binary interactions with a compact companion star. They eject only $\sim 0.1~{}\mathrm{M}_{\odot}$ and may be the main channel for forming double neutron-star systems which eventually merge, emitting strong gravitational waves. We follow the evolution of an ultra-stripped supernova progenitor until iron core collapse and perform explosive nucleosynthesis calculations. We then synthesize light curves and spectra of ultra-stripped supernovae using the nucleosynthesis results and present their expected properties. Ultra-stripped supernovae synthesize $\sim 0.01~{}\mathrm{M}_{\odot}$ of radioactive ${}^{56}\mathrm{Ni}$, and their typical peak luminosity is around $10^{42}~{}\mathrm{erg~{}s^{-1}}$, or $-16$ mag. Their typical rise time is $5-10$ days. Comparing synthesized and observed spectra, we find that SN 2005ek, some of the so-called calcium-rich gap transients, and SN 2010X may be related to ultra-stripped supernovae. If these supernovae are indeed ultra-stripped supernovae, their event rate is expected to be about 1 per cent of core-collapse supernovae. By comparing the double neutron-star merger rate obtained from future gravitational-wave observations with the ultra-stripped supernova rate obtained from optical transient surveys, with candidates identified using our synthesized light-curve and spectral models, we will be able to judge whether ultra-stripped supernovae are actually a major contributor to the binary neutron star population and provide constraints on binary stellar evolution.
keywords: supernovae: general — supernovae: individual: SN 2005ek — supernovae: individual: SN 2010X — supernovae: individual: PTF10iuv — gravitational waves 1 Introduction Multiplicity of massive stars plays an essential role in determining stellar structure at the time of core collapse and thus supernova (SN) properties (e.g., Langer, 2012; Yoon, 2015; Vanbeveren & Mennekens, 2015; Marchant et al., 2016). In particular, the lack of hydrogen-rich layers in the progenitors of stripped-envelope core-collapse SNe (i.e., Type IIb/Ib/Ic SNe) is often attributed to binary interaction (e.g., Wheeler & Levreault, 1985; Ensman & Woosley, 1988; Podsiadlowski, Joss, & Hsu, 1992; Nomoto et al., 1994; Shigeyama et al., 1994; Woosley et al., 1994; Bersten et al., 2012, 2014; Fremling et al., 2014; Ergon et al., 2015; Eldridge et al., 2015; Lyman et al., 2016a). The small typical ejecta mass estimated from light curves (LCs) of stripped-envelope SNe ($\simeq 1-5~{}\mathrm{M}_{\odot}$, e.g., Sauer et al. 2006; Drout et al. 2011; Taddia et al. 2015; Lyman et al. 2016a), and the nucleosynthetic signatures estimated from their spectral modeling (e.g., Jerkstrand et al., 2015), support progenitors with relatively small zero-age main-sequence (ZAMS) masses. Stars with such small ZAMS masses must shed their hydrogen-rich envelopes through mass loss caused by binary interactions, because their radiation-driven winds are too weak to do so (e.g., Podsiadlowski, Joss, & Hsu, 1992; Nomoto, Iwamoto, & Suzuki, 1995; Podsiadlowski et al., 2004b; Izzard, Ramirez-Ruiz, & Tout, 2004; Yoon, Woosley, & Langer, 2010; Eldridge, Langer, & Tout, 2011; Benvenuto, Bersten, & Nomoto, 2013; Lyman et al., 2016a; Eldridge & Maund, 2016).
It is also known that mass loss caused by binary interaction is essential to explain the observational ratio of stripped-envelope SNe to hydrogen-rich SNe (e.g., Eldridge, Izzard, & Tout, 2008; Eldridge et al., 2013; Smith et al., 2011). Some Type Ib/Ic SNe are known to have a much faster LC evolution than others. While typical Type Ib/Ic SNe reach their peak luminosity in $\sim 20$ days (e.g., Drout et al., 2011; Prentice et al., 2016), rapidly-evolving SN LCs rise in less than $\sim 10$ days and decline quickly on a similar timescale (e.g., Poznanski et al., 2010; Perets et al., 2010; Kawabata et al., 2010; Kasliwal et al., 2010; Ofek et al., 2010; Kasliwal et al., 2012; Drout et al., 2013, 2014; Inserra et al., 2015). The simplest way to interpret the rapid LC evolution of some Type Ib/Ic SNe is that their ejecta mass is much smaller than in the more slowly evolving SNe (see also, e.g., Kleiser & Kasen 2014; Drout et al. 2014; Tanaka et al. 2016). LC evolution becomes faster with smaller ejecta mass because of the smaller diffusion timescale. This is roughly proportional to $(M_{\mathrm{ej}}^{3}/E_{\mathrm{ej}})^{1/4}$, where $M_{\mathrm{ej}}$ is the ejecta mass and $E_{\mathrm{ej}}$ is the kinetic energy (e.g., Arnett, 1982). The ejecta mass of rapidly-evolving SNe is typically estimated to be $\sim 0.1~{}\mathrm{M}_{\odot}$, which is an order of magnitude smaller than that of typical Type Ib/Ic SNe (e.g., Poznanski et al., 2010; Kasliwal et al., 2012; Drout et al., 2013). Type Ib/Ic SNe with rapidly-evolving LCs show diversity in their peak luminosities and spectra. SN 2002bj is among the first observed rapidly-evolving Type Ib/Ic SNe and among the brightest with its peak magnitude at around $-18$ mag (Poznanski et al., 2010). Many rapidly-evolving SNe have their peak magnitudes at around $-16$ mag with different spectral properties. 
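The diffusion-timescale scaling quoted above, $t_{\mathrm{d}}\propto(M_{\mathrm{ej}}^{3}/E_{\mathrm{ej}})^{1/4}$, can be illustrated numerically. The reference values below (a $\sim 20$-day rise for $M_{\mathrm{ej}}\approx 2~\mathrm{M}_{\odot}$, $E_{\mathrm{ej}}\approx 1$ B) are assumptions chosen to match the typical numbers quoted in the text, used only to set the normalization of the relative scaling:

```python
# Sketch of the Arnett (1982) scaling t_d ~ (M_ej^3 / E_ej)^(1/4).
# The reference point (20 d rise for M_ej = 2 M_sun, E_ej = 1 B) is an
# assumed normalization, not a value from this paper.

def diffusion_timescale_days(m_ej_msun, e_ej_bethe,
                             m_ref=2.0, e_ref=1.0, t_ref_days=20.0):
    """Scale an assumed reference rise time with (M_ej^3 / E_ej)^(1/4)."""
    return t_ref_days * ((m_ej_msun / m_ref) ** 3
                         * (e_ref / e_ej_bethe)) ** 0.25

typical = diffusion_timescale_days(2.0, 1.0)        # 20 d by construction
ultra_stripped = diffusion_timescale_days(0.2, 0.25)  # ~0.1 M_sun-class ejecta
print(f"typical Ib/Ic: {typical:.1f} d, ultra-stripped: {ultra_stripped:.1f} d")
```

With $M_{\mathrm{ej}}=0.2~\mathrm{M}_{\odot}$ and $E_{\mathrm{ej}}=0.25$ B this gives a rise time of about 5 days, consistent with the $5-10$ day range quoted in the abstract.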
One example of rapidly-evolving SNe in this luminosity range is the so-called “Ca-rich gap transients.” They are optical transients whose peak luminosities lie between those of classical novae and SNe, and they show strong Ca lines, especially in the nebular phase (e.g., Kasliwal et al., 2012). Although they are called “Ca-rich” transients, they may not necessarily be Ca-rich. What their spectra indicate is that they have a larger Ca-to-O ratio than typical Type Ib/Ic SNe. Several progenitor scenarios have been suggested for these events (e.g., Perets et al., 2010; Kawabata et al., 2010), but their nature is not yet clear. There also exist rapidly-evolving SNe with luminosities similar to the Ca-rich gap transients but with very different spectral features, such as SN 2005ek (Drout et al., 2013) and SN 2010X (Kasliwal et al., 2010). Another kind of rapidly-evolving SNe are SN 2002cx-like SNe, also known as Type Iax SNe; they are characterized by strong Si and S features with relatively low photospheric velocities and by a wide peak luminosity range, from $\sim-14$ mag to $\sim-19$ mag (e.g., Foley et al., 2013). A number of possibilities have been suggested to obtain a small ejecta mass to explain the rapidly-evolving SNe, including some not related to the core collapse of massive stars (e.g., Moriya & Eldridge, 2016; Dessart & Hillier, 2015; Kashiyama & Quataert, 2015; Kleiser & Kasen, 2014; Shen et al., 2010; Moriya et al., 2010; Kitaura, Janka, & Hillebrandt, 2006). In particular, Tauris et al. (2013); Tauris, Langer, & Podsiadlowski (2015) proposed an “ultra-stripped” core-collapse SN scenario to explain small ejecta masses.
They showed that tight helium star–neutron star (NS) binary systems, presumably created in the common-envelope phase from high-mass X-ray binaries (Tauris & van den Heuvel, 2006), can lead to extreme stripping of the helium envelope and result in SNe with ejecta masses of the order of 0.1 $\mathrm{M}_{\odot}$ or less. The SN ejecta mass from these systems is even smaller than that typically obtained for SN progenitors of the first exploding star ($\sim 1~{}\mathrm{M}_{\odot}$) during binary evolution (e.g., Yoon, Woosley, & Langer 2010; Lyman et al. 2016a). Several studies have investigated the observational properties of stripped-envelope SNe with ejecta masses larger than 1 $\mathrm{M}_{\odot}$ from progenitors obtained from binary stellar evolution (e.g., Dessart et al., 2015). However, few LC and spectral studies have been carried out for ultra-stripped SNe. A previous study of ultra-stripped SNe (Tauris et al., 2013) only provided LC models. Because there are several proposed ways to make SNe with rapidly-evolving LCs, LC information alone is not sufficient to identify ultra-stripped SNe observationally. In addition, the ${}^{56}\mathrm{Ni}$ mass was treated as a free parameter in that study. In this paper, we investigate not only the LCs but also the spectral properties of ultra-stripped SNe by performing explosive nucleosynthesis calculations, which provide an appropriate estimate of the ${}^{56}\mathrm{Ni}$ mass synthesized during the explosion, so that we can better understand their observational signatures and identify ultra-stripped SNe observationally. Ultra-stripped SNe are closely connected to the formation of double NS systems. It was argued by Tauris et al. (2013); Tauris, Langer, & Podsiadlowski (2015) that all double NS systems formed in the Galactic disk (i.e., outside dense environments such as globular clusters) which are tight enough to merge within a Hubble time must have been produced from an ultra-stripped SN.
Therefore, ultra-stripped SNe are related to sources of gravitational waves detected by LIGO (e.g., Abadie et al., 2010). In addition, double NS systems which merge are suggested to cause short gamma-ray bursts (Blinnikov et al., 1984; Paczynski, 1986; Narayan, Paczynski, & Piran, 1992) and $r$-process element nucleosynthesis (e.g., Rosswog et al., 1999; Argast et al., 2004; Hirai et al., 2015), possibly followed by a so-called “kilonova” (e.g., Metzger et al., 2010; Barnes & Kasen, 2013; Tanaka & Hotokezaka, 2013). However, to produce a double NS system a massive binary must survive two SN explosions. A binary system is likely to be disrupted by the first SN explosion because of a combination of sudden mass loss and a kick imparted to the newborn NS (Brandt & Podsiadlowski, 1995). Ultra-stripped SNe, on the other hand, eject very little mass and are unlikely to have large NS kicks in general because of their rapid explosions (Podsiadlowski et al., 2004a; Suwa et al., 2015; Tauris, Langer, & Podsiadlowski, 2015). Thus, ultra-stripped SNe avoid these two major obstacles in forming double NS systems and are therefore likely to produce systems which lead to merger events if the post-SN orbital period is short enough. Future constraints on the observed rate of ultra-stripped SNe can be directly compared to the double NS merger rate determined from gravitational wave observations (Tauris, Langer, & Podsiadlowski, 2015). This can be used to verify their evolutionary connection. The merger rate of double NS systems will be known within a few years when LIGO/VIRGO reach full design sensitivity. We refer to Abadie et al. (2010); Berry et al. (2015); Abbott et al. (2016b) for detailed reviews on the expected merger rates. This paper is organized as follows. Section 2 presents the numerical methods we adopt in our study. Our synthetic LC and spectral models are presented in Section 3. We compare our results with observations in Section 4. 
We discuss our results in Section 5 and conclude this paper in Section 6. 2 Method 2.1 Progenitor We investigate the observational properties of explosions originating from the ultra-stripped SN progenitor presented in Tauris et al. (2013) (see also Tauris, Langer, & Podsiadlowski 2015). This is a typical model of ultra-stripped SNe. The total progenitor mass at explosion is 1.50 $\mathrm{M}_{\odot}$ with a carbon+oxygen core of 1.45 $\mathrm{M}_{\odot}$. The helium ZAMS mass of the progenitor was 2.9 $\mathrm{M}_{\odot}$. Tauris et al. (2013) evolved the progenitor until shortly after the onset of off-center oxygen burning using the BEC binary stellar evolution code (Yoon, Woosley, & Langer 2010 and references therein). Although the remaining time to collapse is estimated to be about 10 years and the final progenitor mass does not change until the collapse, the density in the inner layers of the progenitor increases significantly during this time. As the final density structure is critical for the explosive nucleosynthesis results, we need to follow the progenitor evolution until core collapse, for which the BEC code is not suitable. In order to obtain a SN progenitor evolved until core collapse, we therefore used the public stellar evolution code MESA (Paxton et al., 2011, 2013, 2015), with the same physical parameters as in the BEC calculations of Tauris et al. (2013). In particular, we used a mixing length parameter of 2 with a semiconvection efficiency parameter of 1. As the core mass is rather small, weak reactions play an important role in the evolution of the core approaching collapse (e.g., Takahashi, Yoshida, & Umeda, 2013; Jones et al., 2013; Schwab, Quataert, & Bildsten, 2015). Therefore we used the large nuclear network “mesa151.net” provided in MESA, which includes 151 nuclei up to ${}^{65}$Ni with important weak reactions such as electron capture by ${}^{33}$S and ${}^{35}$Cl.
We stopped the calculation when the stellar core started to infall with a speed of 1,000 $\mathrm{km~{}s^{-1}}$. First, we evolved a star starting from the helium ZAMS as is done by Tauris et al. (2013), but as a single star. Because the mass lost through binary interactions such as Roche-lobe overflow is known from the binary calculation of Tauris et al. (2013), we imposed their mass loss in our single-star evolution calculation. Thus, our single-star evolution calculation takes mass loss caused by binary stellar evolution into account in a simplified way. Tauris et al. (2013) obtained a 1.50 $\mathrm{M}_{\odot}$ helium star with a carbon+oxygen core of 1.45 $\mathrm{M}_{\odot}$ from a helium ZAMS star of 2.90 $\mathrm{M}_{\odot}$ which suffers from Roche-lobe overflow to a NS with an initial orbital period of 0.1 days. We slightly increased the initial helium mass to 2.949 $\mathrm{M}_{\odot}$ in order to obtain the same carbon+oxygen core mass after core helium burning in our MESA calculation. The evolution of the central density and temperature is presented in Fig. 1. We find that the internal evolution of our progenitor is essentially the same as that in Tauris et al. (2013) up to the point they were able to follow. The star forms an Fe core and collapses, as was presumed by Tauris et al. (2013); Tauris, Langer, & Podsiadlowski (2015). It has been suggested that low-mass core-collapse SN progenitors experience violent silicon flashes shortly before core collapse (Woosley & Heger, 2015), but we do not find such a flash in our model. If such a flash occurs, it may result in the creation of a dense helium-rich circumstellar medium around the progenitor, and the SN may be observed as Type Ibn (e.g., Moriya & Maeda, 2016). The final density structure of the progenitor at collapse is presented in Fig. 2. The abundance of the Fe-group elements sharply increases at an enclosed mass of 1.35 $\mathrm{M}_{\odot}$, while the electron fraction sharply decreases there.
Thus, we estimate that the final iron-core mass at the time of collapse is 1.35 $\mathrm{M}_{\odot}$. This iron-core mass is comparable to that obtained recently by Suwa et al. (2015) (1.33 $\mathrm{M}_{\odot}$) from a progenitor model with a similar carbon+oxygen core mass as ours (1.45 $\mathrm{M}_{\odot}$). We compare our progenitor with that of Suwa et al. (2015) in Fig. 2. The internal structure of the collapsing models is similar. We also show for comparison the density structure obtained with the BEC code at a time $\sim\!10\;{\rm years}$ prior to core collapse. 2.2 Nucleosynthesis The collapsing progenitor described in the previous section was then used to follow the explosive nucleosynthesis. Numerical calculations of explosive nucleosynthesis were performed with the same numerical code as in previous studies of explosive nucleosynthesis in core-collapse SNe (e.g., Nakamura et al., 2001; Tominaga, Umeda, & Nomoto, 2007). It is a one-dimensional explicit Lagrangian hydrodynamics code in which a piece-wise parabolic method is adopted (Colella & Woodward, 1984). The $\alpha$-network is coupled with hydrodynamics and detailed nucleosynthesis calculations are performed as post-processing modelling. We used a reaction network including 280 isotopes up to ${}^{79}$Br (Table 1 in Umeda & Nomoto 2005). 2.3 Explosive hydrodynamics and light curves Synthetic LCs were numerically obtained using the one-dimensional multi-group radiation hydrodynamics code STELLA (Blinnikov & Bartunov, 1993; Blinnikov et al., 1998; Blinnikov & Sorokina, 2004; Blinnikov et al., 2006; Baklanov, Blinnikov, & Pavlyuk, 2005; Sorokina et al., 2015). STELLA has been used to model SN LCs of various kinds, including ultra-stripped SNe (Tauris et al., 2013). We take the progenitor structure above a mass cut and inject thermal energy at the bottom of the structure to initiate the explosion. 
The amount of thermal energy injected is $E_{\mathrm{ej}}+E_{\mathrm{bind}}$, where $E_{\mathrm{bind}}$ is the total binding energy of the progenitor. We used the chemical composition obtained from explosive nucleosynthesis. We show LC and spectral models obtained using two different mass cuts, at 1.30 $\mathrm{M}_{\odot}$ and 1.35 $\mathrm{M}_{\odot}$, respectively. A mass cut of 1.35 $\mathrm{M}_{\odot}$ corresponds to the final iron-core mass of the progenitor model. Suwa et al. (2015) obtained a final NS baryonic mass of 1.35 $\mathrm{M}_{\odot}$ from the explosion of a progenitor with the same carbon+oxygen core mass. Tauris et al. (2013) used a mass cut of 1.30 $\mathrm{M}_{\odot}$. Both the small Fe-core mass of our progenitor model and the simulations by Suwa et al. (2015) suggest that the mass cut is likely to be small. 2.4 Spectra The spectral properties of ultra-stripped SNe have been investigated using the Monte Carlo spectral synthesis code developed by Mazzali & Lucy (1993). We refer to Mazzali & Lucy (1993); Lucy (1999); Mazzali (2000); Tanaka et al. (2011) for details. This code has been used for many SN spectral synthesis studies (e.g., Mazzali, Lucy, & Butler, 1992; Mazzali et al., 1993; Mazzali, Iwamoto, & Nomoto, 2000; Tanaka et al., 2011). The code is applicable in the early phases of SNe, when a photosphere exists in the ejecta, from which photons are assumed to be emitted with a blackbody spectrum. The code requires a density structure, abundances, the position of the photosphere, and the emerging SN luminosity to synthesize spectra. We used the average abundances of the models obtained from the nucleosynthesis calculations (Table 1) in our spectral modelling. We did not assume stratification of chemical elements in the SN ejecta because the ejecta mass is small and the ejecta are likely to be well mixed (e.g., Hachisu et al., 1991). We took the density structure and luminosity from STELLA.
The spectral code assumes homologous expansion of the SN ejecta, which is satisfied in every model we present in this paper. A converged model is obtained by changing the photospheric velocity and temperature in the spectral synthesis code. Our models contain a large fraction of helium, and some rapidly-evolving SNe are of Type Ib. However, the code we mainly use does not include non-thermal excitation of helium, which is essential for modelling helium features in SN spectra (e.g., Lucy, 1991; Mazzali & Lucy, 1998; Hachinger et al., 2012). We investigate the effect of non-thermal helium excitation using a code developed by Hachinger et al. (2012) and find that non-thermal excitation does not have a strong effect on the spectra we present in this study (Section 3.3.3). 3 Results 3.1 Nucleosynthesis We calculated explosive nucleosynthesis for three different explosion energies: 0.10, 0.25, and 0.50 B $(1~{}\mathrm{B}\equiv 10^{51}~{}\mathrm{erg})$. We investigated these small explosion energies based on the recent explosion simulations of ultra-stripped SNe by Suwa et al. (2015). They found that ultra-stripped SNe explode via the neutrino-driven mechanism and have small explosion energies, of the order of 0.1 B. The results of our nucleosynthesis calculation for the case of $E_{\mathrm{ej}}=0.25~{}\mathrm{B}$ are shown in Fig. 3. Table 1 shows the final average abundances in the ejecta for all energies and mass cuts we applied. One of the most important quantities determining SN properties is the mass of ${}^{56}\mathrm{Ni}$ synthesised in the explosion. We find that the ${}^{56}\mathrm{Ni}$ masses range from 0.021 to 0.034 $\mathrm{M}_{\odot}$, depending on explosion energy and mass cut (Table 1). 3.2 Light curves Figure 4 shows the bolometric LCs of ultra-stripped SNe obtained in this study. After shock breakout, the bolometric LCs decline during the first day owing to adiabatic cooling of the ejecta.
Then, when the heating from ${}^{56}\mathrm{Ni}$ decay becomes dominant, the bolometric luminosity starts to increase again. The peak luminosity is approximately proportional to the initial ${}^{56}\mathrm{Ni}$ mass. In most cases, the peak luminosity does not exactly match that expected from a simple estimate based on Arnett (1982): it is larger by up to 50 per cent. The rise time and peak luminosity are consistent with those estimated in Tauris, Langer, & Podsiadlowski (2015) using an analytic approach. Figure 5 shows multi-color LCs of the same models with several Bessell filters (Bessell, 1990). The optical LCs presented in Fig. 5 show some differences from the bolometric LCs. In particular, the optical LCs show a first peak at times when the bolometric LCs monotonically decline. This is a consequence of the cooling of the ejecta, which shifts the spectral peak to longer wavelengths as time goes on. Therefore, LCs in redder bands peak later. In the models presented so far, we have used the chemical structure from the explosive nucleosynthesis modeling (cf. Fig. 3) and did not take into account the effect of mixing. To demonstrate the effect of mixing on the LCs, we show a LC in which ${}^{56}\mathrm{Ni}$ is uniformly mixed in the entire ejecta (Fig. 6). Because of the presence of ${}^{56}\mathrm{Ni}$ in the outer layers, heating by ${}^{56}\mathrm{Ni}$ is more efficient early on and the rise time becomes shorter in the mixed model. However, at late phases, the gamma-rays in the outer layers are less well trapped because of the smaller optical depth. Thus, the luminosity of the mixed model is less than that of the non-mixed model by about 50 per cent. The decline rate after the LC peak is not strongly affected by mixing. 3.3 Spectra 3.3.1 Peak spectra We present synthetic spectra at maximum light in Fig. 7. We focus on the spectra at this epoch because this is when ultra-stripped SNe are most likely to be observed.
We compare these synthetic spectra with observations in Section 4.2. Spectral features primarily depend on elemental abundances, photospheric temperature, and photospheric velocity. We used the mixed composition in Table 1 for our spectral modelling. Figure 8 shows the photospheric velocities and temperatures obtained by fitting the blackbody function to the spectral energy distributions (SEDs) obtained from STELLA. The photosphere here is defined as the location where the Rosseland mean optical depth is 2/3. The circles in Fig. 8 represent the values in the converged spectral models of $M_{\mathrm{ej}}=0.20~{}\mathrm{M}_{\odot}$ and $E_{\mathrm{ej}}=0.25~{}\mathrm{B}$ shown in Fig. 9. The photospheric temperature and velocity from the LC code and the spectral code match within 10 per cent. Line shifts and broadening in spectra depend on the photospheric velocity. Because velocity is proportional to $(E_{\mathrm{ej}}/M_{\mathrm{ej}})^{1/2}$, $E_{\mathrm{ej}}/M_{\mathrm{ej}}$ is a good indicator of these properties. The models with $E_{\mathrm{ej}}=0.25~{}\mathrm{B}$ have $E_{\mathrm{ej}}/M_{\mathrm{ej}}\simeq 1~{}\mathrm{B}/\mathrm{M}_{\odot}$, which is similar to typical SNe Ia. There are several notable features in our synthetic peak spectra. First of all, there are relatively strong Si ii features, particularly Si ii $\lambda 6355$. In some models, especially in the spectra with relatively small explosion energy, we can also see the C ii $\lambda 6582$ feature next to Si ii $\lambda 6355$. We can find some S ii features between 5000 Å and 6000 Å. O i $\lambda 7774$ and Ca ii IR triplet around 8000 Å are also seen. The strong Ca feature in the $M_{\mathrm{ej}}=0.15~{}\mathrm{M}_{\odot}$ and $E_{\mathrm{ej}}=0.50~{}\mathrm{B}$ model is due to Ca ionization as is discussed in the next section. 
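The velocity argument above can be made concrete with the characteristic ejecta velocity $\sqrt{2E_{\mathrm{ej}}/M_{\mathrm{ej}}}$. This standard kinetic-energy estimate is our own illustrative normalization, not a formula quoted in the paper; it shows why the $E_{\mathrm{ej}}=0.25$ B, $M_{\mathrm{ej}}=0.20~\mathrm{M}_{\odot}$ model, with $E_{\mathrm{ej}}/M_{\mathrm{ej}}\simeq 1~\mathrm{B}/\mathrm{M}_{\odot}$, shows SN Ia-like line widths despite its tiny ejecta mass:

```python
import math

M_SUN = 1.989e33  # solar mass [g]
BETHE = 1e51      # 1 B [erg]

# Characteristic ejecta velocity sqrt(2 E_ej / M_ej); only the E/M ratio
# matters for line shifts and broadening.  This is a standard order-of-
# magnitude estimate assumed here, not the paper's own formula.
def char_velocity_kms(e_ej_bethe, m_ej_msun):
    return math.sqrt(2.0 * e_ej_bethe * BETHE / (m_ej_msun * M_SUN)) / 1e5

v = char_velocity_kms(0.25, 0.20)  # E/M ~ 1 B/M_sun, as for typical SNe Ia
print(f"characteristic velocity ~ {v:.0f} km/s")
```

The result, roughly $10^{4}~\mathrm{km~s^{-1}}$, is indeed comparable to the photospheric velocities of normal SNe Ia near maximum light.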
3.3.2 Temporal evolution We show the temporal evolution of our synthetic spectra with $M_{\mathrm{ej}}=0.20~{}\mathrm{M}_{\odot}$ and $E_{\mathrm{ej}}=0.25~{}\mathrm{B}$ in Fig. 9. We first present the spectrum at 0.5 days after the explosion, which is still in the cooling phase after shock breakout (Fig. 4). This epoch roughly corresponds to the time when the optical LCs show the first peak (Fig. 5). The spectra during these very early epochs are similar to those of broad-line Type Ic SNe (SNe Ic-BL) because the photosphere is located in the high-velocity outer layers at that time (cf. Fig. 8). Up to about 2.5 days after the explosion, a strong Ca absorption is observed between 7000 Å and 8000 Å. These broad features disappear by the time of LC peak, when the photospheric temperature becomes hot enough to change the Ca ionization level. The broad feature starts to appear again about 9 days after the explosion as the photosphere cools. There is no significant evolution in the spectra around LC peak, and the spectra quickly evolve to become nebular soon thereafter. A further study of the late-time spectra is beyond the scope of this paper. 3.3.3 Helium features Our progenitor contains about 0.03 $\mathrm{M}_{\odot}$ of helium. This is well below the maximum helium mass that can be hidden without showing significant spectral features in relatively low-mass SNe Ib/Ic (about 0.1 $\mathrm{M}_{\odot}$ of He; Hachinger et al., 2012). It is important to judge whether ultra-stripped SNe from our progenitor are expected to be observed as Type Ib or Type Ic. However, the spectral synthesis code we have used so far does not take into account the non-thermal excitation required to produce helium features. Here, we show a spectral model in which the effect of non-thermal excitation is taken into account. The spectrum was computed with the spectral synthesis code described in Hachinger et al. (2012). Figure 10 shows the spectrum including non-thermal excitation.
In the optical range the spectrum is not strongly affected by non-thermal effects. Helium lines remain invisible except for the intrinsically strong $\lambda 10830$ multiplet. Therefore, non-thermal excitation does not seem to cause significant changes to the spectra of ultra-stripped SNe. The fact that He i remains practically invisible in our spectra, while 0.1 $\mathrm{M}_{\odot}$ of helium could be detected in the SN Ib/Ic models of Hachinger et al. (2012), is not only due to the somewhat smaller amount of ${}^{56}\mathrm{Ni}$ in our models. It can also be traced back to the fact that spectral temperatures are relatively high in our models, such that non-thermal excitation results in a high ionization fraction (i.e., dominance of He ii) rather than in large occupation numbers of the He i excited states that could generate lines. This situation is somewhat similar to that of SLSNe (Mazzali et al., 2016). 4 Comparison with observations 4.1 Light curves Figure 11 shows a comparison between our synthetic LCs and observational data of rapidly-evolving SNe. The comparison is shown both in bolometric luminosity and in the $B$ band. Overall, our LCs are consistent with faint rapidly-evolving SNe with peak luminosities of $\sim 10^{42}~{}\mathrm{erg~{}s^{-1}}$ (i.e., $\sim-16$ mag). Luminous SNe like SN 2002bj require more than 0.1 $\mathrm{M}_{\odot}$ of ${}^{56}\mathrm{Ni}$ to explain their peak luminosity by ${}^{56}\mathrm{Ni}$ decay, which is inconsistent with the small amount of ${}^{56}\mathrm{Ni}$ ($\sim 0.01~{}\mathrm{M}_{\odot}$) we expect in our ultra-stripped SNe. Ultra-stripped SN LCs are consistent with those of rapidly-evolving faint SNe (e.g., SN 2010X, PS1-12bb, and SN 2008ha), the so-called Ca-rich gap transients (e.g., PTF10iuv and SN 2005E), and SN 2005ek.
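The ${}^{56}\mathrm{Ni}$ mass required to reach a given peak luminosity can be estimated with Arnett's rule, which equates the peak luminosity to the instantaneous radioactive heating rate at the peak epoch. The sketch below is illustrative only (it is not the analysis of Poznanski et al. 2010); the peak luminosity of $8\times 10^{42}~\mathrm{erg~s^{-1}}$ and rise time of 7 days are assumed round numbers for a luminous, SN 2002bj-like event:

```python
import math

M_SUN = 1.989e33          # g
EPS_NI = 3.9e10           # erg/s per g of 56Ni (decay heating)
EPS_CO = 6.8e9            # erg/s per g of 56Co (decay heating)
TAU_NI = 8.8              # days, 56Ni e-folding time
TAU_CO = 111.3            # days, 56Co e-folding time

def ni56_mass_msun(l_peak_erg_s, t_rise_days):
    """Arnett's rule: L_peak equals the radioactive heating rate at peak."""
    q = (EPS_NI * math.exp(-t_rise_days / TAU_NI)
         + EPS_CO * (math.exp(-t_rise_days / TAU_CO)
                     - math.exp(-t_rise_days / TAU_NI)))  # erg/s per gram of 56Ni
    return l_peak_erg_s / q / M_SUN

# Assumed SN 2002bj-like numbers (illustrative only)
m_ni = ni56_mass_msun(8e42, 7.0)
print(f"{m_ni:.2f} Msun")  # ~0.19 Msun: far more 56Ni than an ultra-stripped SN supplies
```

A luminosity an order of magnitude lower, as for the models here, correspondingly lowers the required ${}^{56}\mathrm{Ni}$ mass by the same factor.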
4.2 Spectra We compare our models with observed near-LC-peak spectra of some rapidly-evolving SNe, i.e., SN 2005ek, two Ca-rich gap transients (PTF10iuv and SN 2005E), SN 2010X, SN 2002bj, and the SN 2002cx-like SN 2007qd. The observed spectra are taken from WISeREP (http://wiserep.weizmann.ac.il; Yaron & Gal-Yam, 2012). We use the synthetic spectra of the 0.25 B models for comparison, as in most spectra the difference in explosion energy causes no significant differences in spectral features apart from the velocities (Fig. 7). We correct the observed spectra for extinction using the Galactic extinction law of Cardelli, Clayton, & Mathis (1989), assuming $R_{V}=3.1$. The redshifts and extinctions applied are $z=0.017$ and $E(B-V)=0.21~{}\mathrm{mag}$ for SN 2005ek (Drout et al., 2011), $z=0.02$ and $E(B-V)=0~{}\mathrm{mag}$ for PTF10iuv (Kasliwal et al., 2012), $z=0.0090$ and $E(B-V)=0.098~{}\mathrm{mag}$ for SN 2005E (Perets et al., 2010), $z=0.015$ and $E(B-V)=0.146~{}\mathrm{mag}$ for SN 2010X (Kasliwal et al., 2010), $z=0.012$ and $E(B-V)=0~{}\mathrm{mag}$ for SN 2002bj (Poznanski et al., 2010), and $z=0.043$ and $E(B-V)=0.035~{}\mathrm{mag}$ for SN 2007qd (McClelland et al., 2010). 4.2.1 SN 2005ek SN 2005ek was suggested to be a SN whose progenitor experienced extensive mass stripping by binary interactions (Drout et al., 2013). Its LC was shown to be consistent with our ultra-stripped SN model (Tauris et al., 2013), but its spectral properties have not yet been compared. Figure 12 shows a comparison between synthetic spectra obtained from our ultra-stripped SN models and the spectrum of SN 2005ek one day before LC peak. Overall, the continuum features and velocity shifts of the synthetic spectra match the observed spectrum well. In particular, the characteristic features in the red part of the spectrum of SN 2005ek, i.e., C ii $\lambda 6582$, O i $\lambda 7774$, and the Ca ii IR triplet, are all well reproduced by the $M_{\mathrm{ej}}=0.20~{}\mathrm{M}_{\odot}$ model.
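The dereddening step described above can be sketched with the optical-band polynomial of the Cardelli, Clayton, & Mathis (1989) law (a minimal illustration of the procedure, not the code actually used for the figures; valid for $1.1\le 1/\lambda\le 3.3~\mu\mathrm{m}^{-1}$):

```python
def ccm89_alam_av(wavelength_aa, r_v=3.1):
    """A_lambda / A_V for the CCM89 optical/NIR law (roughly 3030-9090 AA)."""
    x = 1.0e4 / wavelength_aa   # inverse wavelength in 1/micron
    y = x - 1.82
    a = (1.0 + 0.17699 * y - 0.50447 * y**2 - 0.02427 * y**3 + 0.72085 * y**4
         + 0.01979 * y**5 - 0.77530 * y**6 + 0.32999 * y**7)
    b = (1.41338 * y + 2.28305 * y**2 + 1.07233 * y**3 - 5.38434 * y**4
         - 0.62251 * y**5 + 5.30260 * y**6 - 2.09002 * y**7)
    return a + b / r_v

def deredden(wavelength_aa, flux, ebv, r_v=3.1):
    """Remove Galactic extinction: multiply the flux by 10**(0.4 * A_lambda)."""
    a_lam = ccm89_alam_av(wavelength_aa, r_v) * r_v * ebv  # A_V = R_V * E(B-V)
    return flux * 10.0 ** (0.4 * a_lam)

# Example with the SN 2005ek value E(B-V) = 0.21 mag at 5500 AA (~V band):
# A_V = 3.1 * 0.21 ~ 0.65 mag, i.e. the flux is boosted by a factor of ~1.8
print(deredden(5500.0, 1.0, 0.21))
```

The observed wavelengths are then shifted to the rest frame by dividing by $(1+z)$.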
Our models predict stronger Si ii features than seen in SN 2005ek. We find that we can obtain a better match by reducing the Si abundance from 0.053 to 0.003 and increasing the O abundance from 0.18 to 0.23 (Fig. 13). The strong Si features in the original ultra-stripped SN model may also be due to our assumption of fully-mixed ejecta. The average Si abundance used in the spectral modelling is around 0.05, while most of the outer layers have smaller fractions initially (Fig. 3). The smaller abundances may result in weaker Si features especially at early times. Thus, our assumption of full mixing may be too simplistic and SN 2005ek might have had a rather smaller degree of mixing. This may also explain the significant drop in flux below $\sim 3500$ Å in the model: a larger degree of mixing results in more Fe-group elements in the outer layers, leading to more line blocking which suppresses the NUV flux. 4.2.2 Ca-rich gap transients (PTF10iuv and SN 2005E) Fig. 11 shows the LCs of two Ca-rich gap transients, PTF10iuv and SN 2005E. Their peak luminosity is slightly smaller than that of our ultra-stripped SNe, but the timescales of the LC evolution as well as the LC decline rates are similar. Efficient mixing in the ejecta may make the luminosity of our models slightly lower (Fig. 6), which motivates us to investigate their spectra. Figure 14 shows a spectral comparison near peak luminosity. The spectral features including velocity shifts of PTF10iuv (top panels of Fig. 14) match our models well. Prominent features in PTF10iuv, such as Si ii $\lambda 6347$, O i $\lambda 7772$, and Ca ii $\lambda 8542$ are also well reproduced. We find relatively strong Si and Ca lines from our core-collapse progenitors because of the small values of the explosion energy and the progenitor mass. The overall color of PTF10iuv is consistent with our model. In summary, the LC and spectral properties of PTF10iuv are overall consistent with our ultra-stripped SN models. 
On the other hand, our ultra-stripped SN spectral models do not match the spectral features of the other Ca-rich gap transient, SN 2005E, very well, as shown in the bottom panels of Fig. 14. This indicates that there may be several kinds of progenitors for Ca-rich gap transients, of which some could be related to ultra-stripped SNe. Ca-rich gap transients are typically found at locations remote from the centers of their host galaxies (Kasliwal et al., 2012; Lyman et al., 2014, 2016b; Foley, 2015). For example, PTF10iuv was 40 kpc away from the closest host galaxy candidate (Kasliwal et al., 2012). This fact has been used to relate Ca-rich gap transients to explosive events involving white dwarfs. However, Ca-rich transients are also found in the intergalactic space of merging galaxies (e.g., Foley, 2015), where star formation is likely to take place (e.g., Mullan et al., 2011). Some massive stars are also known to exist very far from apparent star-forming regions (e.g., Smith, Andrews, & Mauerhan, 2016). It is also interesting to note that white dwarf mergers may actually end up as core-collapse SNe and could lead to SNe with properties similar to our ultra-stripped SNe (e.g., Nomoto & Iben, 1985; Schwab, Quataert, & Kasen, 2016). As we suggest here, Ca-rich gap transients can have different origins. Those found far from any host galaxy may be unrelated to ultra-stripped SNe. The faint nature of ultra-stripped SNe may also prevent them from being found in bright star-forming regions such as galactic disks. 4.2.3 SN 2010X SN 2010X was a rapidly-evolving Type Ib SN. Its ejecta mass was estimated to be $\sim 0.16$ $\mathrm{M}_{\odot}$ and its ${}^{56}\mathrm{Ni}$ mass to be $\sim 0.02$ $\mathrm{M}_{\odot}$ (Kasliwal et al., 2010). These properties, as well as the LC evolution, are similar to those of our ultra-stripped SNe (Fig. 11). SN 2010X had spectral characteristics similar to SN 2002bj, which we discuss in the next section, but it was much fainter.
We compare our spectra with that of SN 2010X near LC peak in Fig. 15. The overall spectral features match well, especially for the model with $M_{\mathrm{ej}}=0.15~{}\mathrm{M}_{\odot}$. The Si ii $\lambda 6355$ line is predicted to be slightly too strong, but a small reduction of the Si abundance or a slightly smaller degree of mixing could reduce the strength of this line, as was discussed for SN 2005ek. Therefore, SN 2010X may also be related to ultra-stripped SNe. 4.2.4 SN 2002bj SN 2002bj was one of the first rapidly-evolving SNe to be reported (Poznanski et al., 2010). If SN 2002bj was powered by ${}^{56}\mathrm{Ni}$, the amount of ${}^{56}\mathrm{Ni}$ required to explain its peak luminosity is $0.15-0.25~{}\mathrm{M}_{\odot}$ (Poznanski et al. 2010; see also Fig. 11). This is much larger than what we expect from ultra-stripped SNe; indeed, this ${}^{56}\mathrm{Ni}$ mass alone is comparable to the total ejecta mass expected from ultra-stripped SNe. This suggests that SN 2002bj is unlikely to be related to ultra-stripped SNe. Nonetheless, we show the comparison between our synthetic spectra and that of SN 2002bj. As expected, the match is not very good (Fig. 16). 4.2.5 SN 2002cx-like (Type Iax) SNe SN 2002cx-like SNe, which are often referred to as Type Iax SNe, are a peculiar type of SN Ia with fainter peak luminosities. Their origin is still under discussion (e.g., Foley et al., 2016). Although many of them have peak magnitudes brighter than $-17$ mag (see Foley et al. 2016 for a summary) and are thus too bright to be ultra-stripped SNe, some of them do have fainter peak luminosities (e.g., Stritzinger et al., 2014; McClelland et al., 2010). We take one faint SN 2002cx-like SN, SN 2007qd (McClelland et al., 2010), which is in the expected peak luminosity range for ultra-stripped SNe, and compare its spectrum at peak luminosity with our synthetic spectra. Figure 17 shows the comparison.
SN 2002cx-like SNe have much lower velocities than our synthetic ultra-stripped SN spectra. A low photospheric velocity is a commonly observed feature of faint SN 2002cx-like SNe. Our model with the smallest explosion energy (0.10 B) may have a photospheric velocity similar to that of some SN 2002cx-like SNe (Fig. 8), but it still seems to fail to reproduce crowded, unblended line features. In addition, the spectra of ultra-stripped SNe evolve quickly to the nebular regime, while the spectra of SN 2002cx-like SNe do not (e.g., Sahu et al., 2008; Foley et al., 2016). 4.3 Summary Table 2 summarizes the results of the comparison between our ultra-stripped SN models and possible observational counterparts. We find that SN 2005ek, PTF10iuv (Ca-rich gap transient), and SN 2010X match our synthetic ultra-stripped SN LCs and spectra. 5 Discussion 5.1 Diversity We have investigated the observational properties of an ultra-stripped SN explosion from a progenitor with a mass of 1.50 $\mathrm{M}_{\odot}$. Depending on the initial binary parameters and the helium ZAMS mass of the progenitor, the final total mass and core mass vary considerably (Tauris, Langer, & Podsiadlowski, 2015). However, recent numerical simulations of explosions of ultra-stripped core-collapse SNe find that the explosion energy and the synthesized ${}^{56}\mathrm{Ni}$ mass do not change significantly among progenitors of different core masses (Suwa et al., 2015). Thus, the explosion energy and ${}^{56}\mathrm{Ni}$ mass values investigated in this paper are likely to remain similar even in different ultra-stripped SN progenitors. The low spread in the ${}^{56}\mathrm{Ni}$ masses found in our models suggests that the peak luminosity should be roughly similar in all ultra-stripped SNe, although the LC rise time can vary depending on the ejecta mass (Tauris, Langer, & Podsiadlowski, 2015). In our one-dimensional model the ${}^{56}\mathrm{Ni}$ mass ejected depends on the mass cut, but Suwa et al. 
(2015) obtained similar ${}^{56}\mathrm{Ni}$ masses in their multi-dimensional explosion simulations. Thus, the spectral signatures we find in this study for a particular progenitor model are likely to be generic for ultra-stripped SNe. A possible major source of diversity in ultra-stripped SN properties, caused by differences in their progenitors, is the SN spectral type, which is determined by the ejected helium mass. In the progenitor model we used in this study, there are 0.03 $\mathrm{M}_{\odot}$ of helium in an ejected mass of $0.15-0.20~{}\mathrm{M}_{\odot}$. Although helium features are not expected to be observed significantly in our model and we expect the explosion to be a SN Ic (Section 3.3.3), the helium mass as well as the ejecta mass can change depending on the initial binary configurations (Tauris, Langer, & Podsiadlowski, 2015). Many progenitors are expected to have a helium mass above the critical helium mass ($\sim 0.1~{}\mathrm{M}_{\odot}$; Hachinger et al. 2012) required to observe optical helium features (Tauris, Langer, & Podsiadlowski, 2015), and thus may be observed as SNe Ib like SN 2010X. To summarize, the expected luminosity range of ultra-stripped SNe is similar to that obtained with the model used in this paper. The diversity in SN ejecta masses caused by different progenitor systems could lead to diversity in the rise times of ultra-stripped SNe. We indicate the expected location of ultra-stripped SNe in the phase diagram of transients (e.g., Kulkarni, 2012) in Fig. 18. It is likely that there is a smooth transition between ultra-stripped SNe and stripped-envelope SNe. The classical Type Ic SN 1994I, which had an ejecta mass of only 1 $\mathrm{M}_{\odot}$ (e.g., Iwamoto et al., 1994; Sauer et al., 2006), may be an example of a SN Ic located between typical stripped-envelope SNe and ultra-stripped SNe. Also, the low-mass Type Ic SN 2007gr (Hunter et al., 2009; Mazzali et al., 2010) may be an even more extreme case.
5.2 Event rates Based on our spectral analysis, we have shown that SN 2005ek and some of the Ca-rich gap transients, as well as SN 2010X, might be related to ultra-stripped SNe. The event rate of rapidly-evolving SNe, including SN 2005ek, SN 2010X, and SN 2002bj, is estimated to be at least 1-3 per cent of the SN Ia rate (Drout et al., 2013). The total event rate of the Ca-rich gap transients is estimated to be at least a few per cent of the SN Ia rate (Kasliwal et al., 2012; Perets et al., 2010). Given that some of the transients included in these two rate estimates actually correspond to ultra-stripped SNe, we can roughly estimate that the event rate of ultra-stripped SNe inferred from SN observations is at least a few per cent of the SN Ia rate. Because the volumetric rates of SNe Ia and SNe Ib/Ic are similar (e.g., Li et al. 2011), ultra-stripped SN rates are presumed to be at least a few per cent of SNe Ib/Ic. This rate corresponds to roughly 1 per cent of all core-collapse SNe including SNe II (Li et al., 2011) and matches the rough event rate estimate by Tauris et al. (2013) ($0.1-1$ per cent of core-collapse SNe). Further observational and theoretical studies to better estimate the event rate of ultra-stripped SNe are encouraged. 5.3 Synergy with gravitational-wave astronomy Because ultra-stripped SNe are presumed to lead to double-NS systems, but not all double-NS systems merge within a Hubble time, the observed rate of ultra-stripped SNe is expected to be larger than the rate of double-NS mergers. Furthermore, ultra-stripped SNe may also originate in binary systems with a white dwarf or a black hole accretor (Tauris et al., 2013). After the recent success in detecting gravitational-wave signals from merging compact objects with advanced LIGO (Abbott et al., 2016a), it is expected that gravitational waves from merging double-NS systems will be detected in the near future, with advanced LIGO or other detectors like advanced VIRGO and KAGRA.
Thus, it will soon be possible to obtain NS merger rate constraints from gravitational-wave observations. This should place a lower limit on the rate of ultra-stripped SNe. Comparing the rates of ultra-stripped SNe and NS mergers, we will be able to test whether the ultra-stripped SN channel is actually the major path to form double-NS systems, and thus to constrain binary stellar evolution using gravitational waves. The current constraint on the NS merger rate from LIGO is less than $12600~{}\mathrm{Gpc^{-3}~{}yr^{-1}}$ (Abbott et al., 2016b). Assuming a Galactic core-collapse SN rate of $\sim 0.01~{}\mathrm{yr^{-1}}$ and an ultra-stripped SN fraction of $0.1-1$ per cent of core-collapse SNe, we obtain an expected Galactic ultra-stripped SN rate of $\sim 10^{-5}-10^{-4}~{}\mathrm{yr^{-1}}$. If we adopt a Milky Way equivalent galaxy density of $0.01~{}\mathrm{Mpc^{-3}}$ (Kopparapu et al., 2008), we expect an ultra-stripped SN rate of $\sim 100-1000~{}\mathrm{Gpc^{-3}~{}yr^{-1}}$. This is significantly lower than the current upper limit on the NS-NS merger rate from LIGO. Finally, it should be noted that the distribution of eccentricities in current observations of NS-NS binaries (Martinez et al., 2015; Lazarus et al., 2016) supports the idea that the second SN in these systems was indeed ultra-stripped, and that these SNe often (but not always) impart small kicks. Out of 12 NS-NS systems, 9 have an eccentricity of less than 0.3, even though relatively small NS kicks of $50\;{\rm km\,s}^{-1}$ can already result in post-SN eccentricities of more than $0.5$, depending on the orbital period (Tauris et al. in preparation). 5.4 Observations of ultra-stripped SN progenitors While it has been argued that the descendants of ultra-stripped SNe are often close double-NS binaries, it is much more difficult to find direct observational evidence for the progenitors of ultra-stripped SNe.
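The volumetric rate estimate in Section 5.3 above is a straightforward unit conversion; spelling out the arithmetic:

```python
# Galactic core-collapse SN rate and assumed ultra-stripped fraction (as in the text)
ccsn_rate_per_galaxy = 0.01          # yr^-1 per Milky Way equivalent galaxy
ussn_fraction = (0.001, 0.01)        # 0.1-1 per cent of core-collapse SNe
mweg_density_mpc3 = 0.01             # Milky Way equivalent galaxies per Mpc^3
MPC3_PER_GPC3 = 1.0e9

# Volumetric ultra-stripped SN rate in Gpc^-3 yr^-1
ussn_volumetric = [ccsn_rate_per_galaxy * f * mweg_density_mpc3 * MPC3_PER_GPC3
                   for f in ussn_fraction]
print(ussn_volumetric)  # ~[100, 1000] Gpc^-3 yr^-1, well below the LIGO
                        # upper limit of 12600 Gpc^-3 yr^-1 quoted above
```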
Arguably, a fraction of the high-mass X-ray binaries currently containing a NS and an OB star will eventually evolve into double-NS systems; others will merge or be disrupted in the process. In between these two stages, the immediate progenitors of ultra-stripped SNe would be found either as post-common-envelope naked helium stars (Wolf-Rayet stars) or in an X-ray binary during the subsequent so-called Case BB Roche-lobe overflow. However, these phases are short-lived (especially the latter, which typically lasts $<10^{5}\;{\rm yr}$; Tauris, Langer, & Podsiadlowski 2015), and thus the chances of detecting such systems are small, even though the helium star is rather luminous ($>10^{4}\;L_{\odot}$). One possibly related observed system is Cygnus X-3, a close binary consisting of a NS or BH and a Wolf-Rayet star, but the mass of the Wolf-Rayet star (about 10 $\mathrm{M}_{\odot}$; Zdziarski, Mikołajewska, & Belczyński 2013) is too high for it to be an ultra-stripped SN progenitor. 6 Conclusions We have presented synthetic LCs and spectral properties of ultra-stripped SNe. We evolved the ultra-stripped SN progenitor presented previously (Tauris et al., 2013) until core collapse, calculated its explosive nucleosynthesis, and then synthesized LCs and spectra. Our ultra-stripped SNe have explosion energies of $1-5\times 10^{50}~{}\mathrm{erg}$ and ejecta masses of $\sim 0.1$ $\mathrm{M}_{\odot}$. Explosive nucleosynthesis calculations show that they produce about $0.03~{}\mathrm{M}_{\odot}$ of ${}^{56}\mathrm{Ni}$. We also found that ultra-stripped SNe have rise times of $5-10$ days and peak luminosities of $\sim-16$ mag or $10^{42}~{}\mathrm{erg~{}s^{-1}}$. Several types of transients have been found that have rise times and peak luminosities similar to those expected for ultra-stripped SNe. They show diverse spectral properties.
We compared our synthetic spectra with those of rapidly-evolving transients whose LCs are similar to our synthetic ultra-stripped SN LCs, and found that the spectra of SN 2005ek, some of the so-called Ca-rich gap transients, and SN 2010X match our synthetic ultra-stripped SN spectra reasonably well. Not all Ca-rich gap transients have properties similar to our ultra-stripped SNe, indicating that this group of transients may include events with different origins. For example, the spectra of PTF10iuv are consistent with our ultra-stripped SN spectra, while those of SN 2005E are not. If all the transients mentioned above are actually ultra-stripped SNe, the event rate of ultra-stripped SNe would be about one per cent of all stripped-envelope SNe. It has been suggested that ultra-stripped SNe may be a major evolutionary path to form double-NS systems that could merge within a Hubble time, and that double-NS systems left by ultra-stripped SNe may dominate the population of merging double-NS systems expected to be observed by gravitational-wave observatories (Tauris et al., 2013; Tauris, Langer, & Podsiadlowski, 2015). If this is true, we expect the NS merger rate to be comparable to or somewhat smaller than that of ultra-stripped SNe. Acknowledgments TJM thanks Yudai Suwa and Markus Kromer for helpful discussions. TJM is supported by the Japan Society for the Promotion of Science Postdoctoral Fellowships for Research Abroad (26·51) and by the Grant-in-Aid for Research Activity Start-up of the Japan Society for the Promotion of Science (16H07413). The work of S. Blinnikov on the development of the STELLA code is supported by Russian Science Foundation grant 14-12-00203. The work has also been supported by a Humboldt Research Award to PhP at the University of Bonn. Numerical computations were partially carried out on the Cray XC30 and the PC cluster at the Center for Computational Astrophysics, National Astronomical Observatory of Japan.
The numerical calculations were also partly carried out on Cray XC40 at Yukawa Institute for Theoretical Physics in Kyoto University. We made use of the Weizmann interactive supernova data repository - http://wiserep.weizmann.ac.il. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. References Abadie et al. (2010) Abadie J., et al., 2010, CQGra, 27, 173001 Abbott et al. (2016a) Abbott B. P., et al., 2016a, PhRvL, 116, 061102 Abbott et al. (2016b) Abbott B. P., et al., 2016b, arXiv, arXiv:1607.07456 Argast et al. (2004) Argast D., Samland M., Thielemann F.-K., Qian Y.-Z., 2004, A&A, 416, 997 Arnett (1982) Arnett W. D., 1982, ApJ, 253, 785 Baklanov, Blinnikov, & Pavlyuk (2005) Baklanov P. V., Blinnikov S. I., Pavlyuk N. N., 2005, AstL, 31, 429 Barnes & Kasen (2013) Barnes J., Kasen D., 2013, ApJ, 775, 18 Benvenuto, Bersten, & Nomoto (2013) Benvenuto O. G., Bersten M. C., Nomoto K., 2013, ApJ, 762, 74 Berry et al. (2015) Berry C. P. L., et al., 2015, ApJ, 804, 114 Bersten et al. (2014) Bersten M. C., et al., 2014, AJ, 148, 68 Bersten et al. (2012) Bersten M. C., et al., 2012, ApJ, 757, 31 Bessell (1990) Bessell M. S., 1990, PASP, 102, 1181 Blinnikov & Bartunov (1993) Blinnikov S. I., Bartunov O. S., 1993, A&A, 273, 106 Blinnikov et al. (1984) Blinnikov S. I., Novikov I. D., Perevodchikova T. V., Polnarev A. G., 1984, SvAL, 10, 177 Blinnikov & Sorokina (2004) Blinnikov S., Sorokina E., 2004, Ap&SS, 290, 13 Blinnikov et al. (1998) Blinnikov S. I., Eastman R., Bartunov O. S., Popolitov V. A., Woosley S. E., 1998, ApJ, 496, 454 Blinnikov et al. (2006) Blinnikov S. I., Röpke F. K., Sorokina E. I., Gieseler M., Reinecke M., Travaglio C., Hillebrandt W., Stritzinger M., 2006, A&A, 453, 229 Brandt & Podsiadlowski (1995) Brandt N., Podsiadlowski P., 1995, MNRAS, 274, 461 Brott et al. 
(2011) Brott I., et al., 2011, A&A, 530, A115 Cardelli, Clayton, & Mathis (1989) Cardelli J. A., Clayton G. C., Mathis J. S., 1989, ApJ, 345, 245 Colella & Woodward (1984) Colella P., Woodward P. R., 1984, JCoPh, 54, 174 Dessart & Hillier (2015) Dessart L., Hillier D. J., 2015, MNRAS, 447, 1370 Dessart et al. (2015) Dessart L., Hillier D. J., Woosley S., Livne E., Waldman R., Yoon S.-C., Langer N., 2015, MNRAS, 453, 2189 Drout et al. (2014) Drout M. R., et al., 2014, ApJ, 794, 23 Drout et al. (2013) Drout M. R., et al., 2013, ApJ, 774, 58 Drout et al. (2011) Drout M. R., et al., 2011, ApJ, 741, 97 Eldridge et al. (2015) Eldridge J. J., Fraser M., Maund J. R., Smartt S. J., 2015, MNRAS, 446, 2689 Eldridge et al. (2013) Eldridge J. J., Fraser M., Smartt S. J., Maund J. R., Crockett R. M., 2013, MNRAS, 436, 774 Eldridge, Izzard, & Tout (2008) Eldridge J. J., Izzard R. G., Tout C. A., 2008, MNRAS, 384, 1109 Eldridge, Langer, & Tout (2011) Eldridge J. J., Langer N., Tout C. A., 2011, MNRAS, 414, 3501 Eldridge & Maund (2016) Eldridge J. J., Maund J. R., 2016, MNRAS, 461, L117 Ensman & Woosley (1988) Ensman L. M., Woosley S. E., 1988, ApJ, 333, 754 Ergon et al. (2015) Ergon M., et al., 2015, A&A, 580, A142 Foley (2015) Foley R. J., 2015, MNRAS, 452, 2463 Foley et al. (2016) Foley R. J., Jha S. W., Pan Y.-C., Zheng W. K., Bildsten L., Filippenko A. V., Kasen D., 2016, MNRAS, 461, 433 Foley et al. (2013) Foley R. J., et al., 2013, ApJ, 767, 57 Foley et al. (2009) Foley R. J., et al., 2009, AJ, 138, 376 Fong & Berger (2013) Fong W., Berger E., 2013, ApJ, 776, 18 Fremling et al. (2014) Fremling C., et al., 2014, A&A, 565, A114 Hachinger et al. (2012) Hachinger S., Mazzali P. A., Taubenberger S., Hillebrandt W., Nomoto K., Sauer D. N., 2012, MNRAS, 422, 70 Hachisu et al. (1991) Hachisu I., Matsuda T., Nomoto K., Shigeyama T., 1991, ApJ, 368, L27 Heger, Langer, & Woosley (2000) Heger A., Langer N., Woosley S. E., 2000, ApJ, 528, 368 Hirai et al. 
(2015) Hirai Y., Ishimaru Y., Saitoh T. R., Fujii M. S., Hidaka J., Kajino T., 2015, ApJ, 814, 41 Hunter et al. (2009) Hunter D. J., et al., 2009, A&A, 508, 371 Inserra et al. (2015) Inserra C., et al., 2015, ApJ, 799, L2 Iwamoto et al. (1994) Iwamoto K., Nomoto K., Höflich P., Yamaoka H., Kumagai S., Shigeyama T., 1994, ApJ, 437, L115 Izzard, Ramirez-Ruiz, & Tout (2004) Izzard R. G., Ramirez-Ruiz E., Tout C. A., 2004, MNRAS, 348, 1215 Jerkstrand et al. (2015) Jerkstrand A., Ergon M., Smartt S. J., Fransson C., Sollerman J., Taubenberger S., Bersten M., Spyromilio J., 2015, A&A, 573, A12 Jones et al. (2013) Jones S., et al., 2013, ApJ, 772, 150 Kashiyama & Quataert (2015) Kashiyama K., Quataert E., 2015, MNRAS, 451, 2656 Kasliwal et al. (2012) Kasliwal M. M., et al., 2012, ApJ, 755, 161 Kasliwal et al. (2010) Kasliwal M. M., et al., 2010, ApJ, 723, L98 Kawabata et al. (2010) Kawabata K. S., et al., 2010, Nature, 465, 326 Kitaura, Janka, & Hillebrandt (2006) Kitaura F. S., Janka H.-T., Hillebrandt W., 2006, A&A, 450, 345 Kleiser & Kasen (2014) Kleiser I. K. W., Kasen D., 2014, MNRAS, 438, 318 Kopparapu et al. (2008) Kopparapu R. K., Hanna C., Kalogera V., O’Shaughnessy R., González G., Brady P. R., Fairhurst S., 2008, ApJ, 675, 1459-1467 Kulkarni (2012) Kulkarni S. R., 2012, IAUS, 285, 55 Langer (2012) Langer N., 2012, ARA&A, 50, 107 Lazarus et al. (2016) Lazarus P., et al., 2016, arXiv, arXiv:1608.08211 Li et al. (2011) Li W., et al., 2011, MNRAS, 412, 1441 Lyman et al. (2016a) Lyman J. D., Bersier D., James P. A., Mazzali P. A., Eldridge J. J., Fraser M., Pian E., 2016a, MNRAS, 457, 328 Lyman et al. (2014) Lyman J. D., Levan A. J., Church R. P., Davies M. B., Tanvir N. R., 2014, MNRAS, 444, 2157 Lyman et al. (2016b) Lyman J. D., Levan A. J., James P. A., Angus C. R., Church R. P., Davies M. B., Tanvir N. R., 2016b, MNRAS, 458, 1768 Lucy (1999) Lucy L. B., 1999, A&A, 345, 211 Lucy (1991) Lucy L. B., 1991, ApJ, 383, 308 Marchant et al. 
(2016) Marchant P., Langer N., Podsiadlowski P., Tauris T. M., Moriya T. J., 2016, A&A, 588, A50 Martinez et al. (2015) Martinez J. G., et al., 2015, ApJ, 812, 143 Mazzali (2000) Mazzali P. A., 2000, A&A, 363, 705 Mazzali, Iwamoto, & Nomoto (2000) Mazzali P. A., Iwamoto K., Nomoto K., 2000, ApJ, 545, 407 Mazzali & Lucy (1993) Mazzali P. A., Lucy L. B., 1993, A&A, 279, 447 Mazzali & Lucy (1998) Mazzali P. A., Lucy L. B., 1998, MNRAS, 295, 428 Mazzali, Lucy, & Butler (1992) Mazzali P. A., Lucy L. B., Butler K., 1992, A&A, 258, 399 Mazzali et al. (1993) Mazzali P. A., Lucy L. B., Danziger I. J., Gouiffes C., Cappellaro E., Turatto M., 1993, A&A, 269, 423 Mazzali et al. (2001) Mazzali P. A., Nomoto K., Patat F., Maeda K., 2001, ApJ, 559, 1047 Mazzali et al. (2010) Mazzali P. A., Maurer I., Valenti S., Kotak R., Hunter D., 2010, MNRAS, 408, 87 Mazzali et al. (2016) Mazzali P. A., Sullivan M., Pian E., Greiner J., Kann D. A., 2016, MNRAS, 458, 3455 McClelland et al. (2010) McClelland C. M., et al., 2010, ApJ, 720, 704 Metzger et al. (2010) Metzger B. D., et al., 2010, MNRAS, 406, 2650 Modjaz et al. (2015) Modjaz M., Liu Y. Q., Bianco F. B., Graur O., 2015, arXiv, arXiv:1509.07124 Moriya & Eldridge (2016) Moriya T. J., Eldridge J. J., 2016, MNRAS, 461, 2155 Moriya & Maeda (2016) Moriya T. J., Maeda K., 2016, ApJ, 824, 100 Moriya et al. (2010) Moriya T., Tominaga N., Tanaka M., Nomoto K., Sauer D. N., Mazzali P. A., Maeda K., Suzuki T., 2010, ApJ, 719, 1445 Mullan et al. (2011) Mullan B., et al., 2011, ApJ, 731, 93 Nakamura et al. (2001) Nakamura T., Umeda H., Iwamoto K., Nomoto K., Hashimoto M.-a., Hix W. R., Thielemann F.-K., 2001, ApJ, 555, 880 Narayan, Paczynski, & Piran (1992) Narayan R., Paczynski B., Piran T., 1992, ApJ, 395, L83 Nomoto & Iben (1985) Nomoto K., Iben I., Jr., 1985, ApJ, 297, 531 Nomoto, Iwamoto, & Suzuki (1995) Nomoto K. 
Cherenkov Radiation from $e^{+}e^{-}$ Pairs and Its Effect on $\nu_{e}$ Induced Showers Sourav K. Mandal    Spencer R. Klein    J. David Jackson Lawrence Berkeley National Laboratory Berkeley, CA 94720 Abstract We calculate the Cherenkov radiation from an $e^{+}e^{-}$ pair at small separations, as occurs shortly after a pair conversion. The radiation is reduced (compared to that from two independent particles) when the pair separation is smaller than the wavelength of the emitted light. We estimate the reduction in light in large electromagnetic showers, and discuss the implications for detectors that observe Cherenkov radiation from showers in the Earth’s atmosphere, as well as in oceans and Antarctic ice. I Introduction Cherenkov radiation from relativistic particles has been known for over 70 years [1]. However, to date, almost all studies have concentrated on the radiation from individual particles. Frank [2], Eidman[3] and Balazs [4] considered the Cherenkov radiation from electric and magnetic dipoles, but only in the limit of vanishing separations $d$. Their work was nicely reviewed by Jelley [5]. Several more recent calculations have considered Cherenkov radiation from entire electromagnetic showers, in the coherent or almost coherent limit [6]. The fields from the $e^{+}$ and $e^{-}$ largely cancel, and the bulk of the coherent radiation is due to the net excess of $e^{-}$ over $e^{+}$ (the Askaryan effect) [7]. Hadronic showers produce radiation through the same mechanism [8]. Coherent radiation occurs when the wavelength of the radiation is large compared to the radial extent of the shower; for real materials, this only occurs for radio waves. Here, we consider another case, the reduction of radiation from slightly-separated oppositely-charged co-moving pairs. This includes $e^{+}e^{-}$ pairs produced by photon conversion. When high-energy photons convert to $e^{+}e^{-}$ pairs, the pair opening angle is small and the $e^{+}$ and $e^{-}$ separate slowly. 
Near the pair, the electric and magnetic fields from the $e^{+}$ and $e^{-}$ must be considered separately. However, for an observer far away from the pair (compared to the pair separation $d$), the electric and magnetic fields from the $e^{+}$ and $e^{-}$ largely cancel. Cherenkov radiation is produced at a distance of the order of the photon wavelength $\Lambda$ from the charged particle trajectory. So, for $d<\Lambda$, cancellation reduces the Cherenkov radiation from a pair to below that for two independent particles. For a typical pair opening angle $m/k$, where $k$ is the photon energy and $m$ the electron mass, without multiple scattering, $\Lambda>d$ for a distance $k\Lambda/m$. For blue light ($\Lambda=400$ nm) from a 1 TeV pair, the radiation is reduced until the pair travels a distance of 40 cm (neglecting multiple scattering). A similar cancellation effect was observed for energetic ($\sim 100$ GeV) $e^{+}e^{-}$ pairs in nuclear emulsions [9]. Ionization from newly created $e^{+}e^{-}$ pairs is reduced when the pair separation is less than the screening distance for ionization in the target. In this paper, we calculate the Cherenkov radiation from $e^{+}e^{-}$ pairs, simulate optical radiation from pairs following realistic trajectories, and consider the radiation from electromagnetic showers. We consider two classes of experiments: underwater/in-ice neutrino observatories and air Cherenkov telescopes.
In Fourier space, the charge density $\rho$ and current density $\vec{J}$ from a point charge $ze$ propagating with speed $v$ in the $x_{1}$ direction can be written as $$\rho(\vec{k},\omega)=\frac{ze}{2\pi}\delta(\omega-k_{1}v)\,,\qquad\vec{J}(\vec{k},\omega)=\vec{v}\,\rho(\vec{k},\omega)$$ (1) where $\vec{k}$ is the wave vector and $\omega$ the photon energy. This current deposits energy into the medium through electromagnetic interactions. We use Maxwell’s equations beyond a radius $a$ around the particle track, where $a$ is comparable to the average atomic separation. Then, by conservation of energy, the Cherenkov radiation power is equal to the energy flow through a cylinder of this radius, giving $$\left(\frac{dE}{dx}\right)=-ca\,\mathrm{Re}\int_{0}^{\infty}B_{3}^{\ast}(\omega)E_{1}(\omega)\,d\omega\;.$$ (2) $E_{1}$ is the component of $\vec{E}$ parallel to the particle track, and $B_{3}$ is the component of $\vec{B}$ in the $x_{3}$ direction, both evaluated at impact parameter $b$, i.e. at a point with $x_{2}=b$, $x_{3}=0$. We omit the time-phase factors for brevity. Using the wave equations in a dielectric medium and the definition of fields, then integrating over momenta, which eliminates the space-phase factors, one finds $$E_{1}(\omega)=-\frac{ize\omega}{v^{2}}\left(\frac{2}{\pi}\right)^{1/2}\left[\frac{1}{\epsilon(\omega)}-\beta^{2}\right]K_{0}(\lambda b)$$ (3) where $$\lambda^{2}=\frac{\omega^{2}}{v^{2}}[1-\beta^{2}\epsilon(\omega)]\;.$$ Similarly, $$E_{2}(\omega)=\frac{ze}{v}\left(\frac{2}{\pi}\right)^{1/2}\frac{\lambda}{\epsilon(\omega)}K_{1}(\lambda b)\,,\qquad B_{3}(\omega)=\epsilon(\omega)\beta E_{2}(\omega)\;.$$ (4) $K_{0}$ and $K_{1}$ are the zeroth and first order modified Bessel functions of the second kind. The far-field radiation depends on the asymptotic form of the energy deposition at $|\lambda a|\gg 1$.
For $\beta>1/\sqrt{\epsilon(\omega)}$ with real $\epsilon(\omega)$, $\lambda$ is purely imaginary. The asymptotic contribution of the Bessel functions in the integrand of $dE/dx$ is finite, giving the well-known expression for the Cherenkov radiation $$\left(\frac{dE}{dx}\right)=\frac{(ze)^{2}}{c^{2}}\int_{\epsilon(\omega)>1/\beta^{2}}\omega\left(1-\frac{1}{\beta^{2}\epsilon(\omega)}\right)d\omega\;.$$ (5) Note how $a$ has dropped out [10, Ch. 13]. The derivation of this Cherenkov radiation may be expanded to give the field from a pair. The radiation from an $e^{+}e^{-}$ pair depends on two parameters: the separation $d$ and the angle between the direction of motion and the orientation of the pair. For relativistic pairs created by photon conversion, only the transverse (to the direction of motion) separation is important; the longitudinal separation of a highly relativistic pair can be neglected, due to Lorentz length contraction. Balazs [4] provided an expression for Cherenkov radiation from an infinitesimal dipole $D$ oriented transverse to its momentum. The fields are approximated by a linear Taylor expansion of the corresponding point-charge fields: $$E_{1}^{(D)}(\omega)=-d\frac{\partial E_{1}(\omega)}{\partial x_{2}}\;;\qquad B_{3}^{(D)}(\omega)=-d\frac{\partial B_{3}(\omega)}{\partial x_{2}}$$ where $d$ is the effective pair separation, so $D=zed$. Then, following the same steps as in the point-charge case, Balazs finds $$\left(\frac{dE}{dx}\right)=\frac{1}{2}\frac{D^{2}}{c^{4}}\int_{\epsilon(\omega)>1/\beta^{2}}\epsilon(\omega)\,\omega^{3}\left(1-\frac{1}{\beta^{2}\epsilon(\omega)}\right)^{2}d\omega\;.$$ (6) For a point dipole oriented parallel to its direction of motion, the radiation is negligible for $\beta\lesssim 1$ [5].
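As a numerical cross-check of the point-charge result of Eq. (5), the standard recasting per unit wavelength for a frequency-independent index $n$ gives the familiar yield of a few hundred photons per cm in water over the 300–800 nm band used later in the paper. A minimal sketch (the function name and the band edges are our choices, not the paper's):

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

def cherenkov_photons_per_cm(n, beta=1.0, lam_min_nm=300.0, lam_max_nm=800.0, z=1):
    """Frank-Tamm photon yield per cm of track for a constant index n,
    Eq. (5) rewritten per unit wavelength: dN/dx = 2*pi*alpha*z^2*sin^2(theta_C)
    * (1/lam_min - 1/lam_max)."""
    sin2_thc = 1.0 - 1.0 / (beta * n) ** 2  # sin^2 of the Cherenkov angle
    if sin2_thc <= 0:
        return 0.0  # below Cherenkov threshold: no radiation
    nm_to_cm = 1e-7
    return 2 * math.pi * ALPHA * z**2 * sin2_thc * (
        1 / (lam_min_nm * nm_to_cm) - 1 / (lam_max_nm * nm_to_cm))
```

For $n=1.33$ and $\beta\simeq 1$ this gives roughly 400 photons per cm, the standard water-Cherenkov figure.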
To compute the Cherenkov radiation for finite separations $d$, let us consider a pair moving in the $+x$ direction. The pair lies entirely in the transverse $y$-$z$ plane, with the line between the two charges making an angle $\alpha$ with respect to the $y$-axis. Then, generalizing Eq. (1), the charge density from the pair is $$\rho(\vec{k},\omega)=\frac{ze}{2\pi}\delta(\omega-k_{1}v)\big{[}e^{-i(k_{2}y_{+}-k_{3}z_{+})}-e^{-i(k_{2}y_{-}-k_{3}z_{-})}\big{]}.$$ The two charges have positions relative to the center of mass $$y_{+}=\frac{d}{2}\cos\alpha\,,\quad z_{+}=-\frac{d}{2}\sin\alpha\,,\quad y_{-}=-\frac{d}{2}\cos\alpha\,,\quad z_{-}=\frac{d}{2}\sin\alpha\;.$$ The angle $\alpha$ is the relative azimuth between the line connecting the two charges and the azimuth of observation. The generalization of $E_{1}(\omega)$ of Eq. (3) is $$E_{1}(\omega)=\frac{-ize\omega}{v^{2}}\left(\frac{2}{\pi}\right)^{1/2}\left(\frac{1}{\epsilon(\omega)}-\beta^{2}\right)\left[K_{0}(\lambda b_{-})-K_{0}(\lambda b_{+})\right]$$ (7) where $$b_{\pm}=\sqrt{\frac{d^{2}}{4}\sin^{2}\alpha+\left(b\pm\frac{d}{2}\cos\alpha\right)^{2}}\;.$$ As before, we take $|\lambda a|\gg 1$ and $a<b$, so we need only consider $d\ll b$; there is little interference for $d\gtrsim b$. Therefore, we can simplify using $$b_{\pm}\simeq b\pm\frac{d}{2}\cos\alpha\;.$$ Then, as before, considering purely imaginary $\lambda$ and $|\lambda a|\gg 1$, $$E_{1}(\omega)=\frac{2ze\omega}{c^{2}}\left(1-\frac{1}{\beta^{2}\epsilon(\omega)}\right)\sqrt{\frac{i}{|\lambda|}}\,\frac{e^{i|\lambda|b}}{\sqrt{b}}\,\sin\left[\frac{d}{2}|\lambda|\cos\alpha\right]$$ (8) and a similar expression for $B_{3}(\omega)$. Here we have taken $b_{-}\simeq b_{+}\simeq b$ in the denominator. At $\alpha=\pm\pi/2$, $E_{1}(\omega)=0$.
The Cherenkov radiation is no longer symmetric about the direction of motion, and vanishes at right angles to the direction of the dipole. As the charge separation increases (or the wavelength decreases), the angular distribution evolves from two wide lobes into a many-lobed structure, as shown in Fig. 1. After integration over even a narrow range of $\omega$ or $d$, the angular distribution becomes an almost-complete disk, with two narrow zeroes remaining in the direction perpendicular to the dipole vector. After assembling the pieces, and averaging over $\alpha$, we find the generalization of Eq. (5), $$\left(\frac{dE}{dx}\right)=\frac{(ze)^{2}}{c^{2}}\int_{\epsilon(\omega)>1/\beta^{2}}\omega\left(1-\frac{1}{\beta^{2}\epsilon(\omega)}\right)\times 2\left[1-J_{0}(\lambda d)\right]d\omega\;.$$ (9) Here $J_{0}$ is the zeroth order Bessel function of the first kind. For $\lambda d\ll 1$, this reproduces Eq. (6). For $\lambda d\gg 1$, the $dE/dx$ is twice that expected for an independent particle (Eq. (5)). The transition is shown in Fig. 2. As the emission wavelength $\Lambda$ approaches $d$, the pair spectrum converges to the point-charge spectrum in an oscillatory fashion, characteristic of the Bessel function. For certain values of $\lambda d$, the radiation exceeds that of two independent charged particles. For the remainder of the paper, we assume that media satisfy $\sqrt{\epsilon(\omega)}=n$, where $n$ is independent of frequency. In realistic detection media, any variation of $n$ with frequency is small, and would have little effect on Cherenkov radiation from relativistic particles. With real $e^{+}e^{-}$ pairs, two effects should be considered. First, electromagnetic radiation is not emitted instantaneously, but occurs while the radiating particles travel a distance known as the formation length, $l_{f}$.
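The interference factor in Eq. (9) is easy to explore numerically. The sketch below (helper names are ours; $J_{0}$ is evaluated from its integral representation to stay dependency-free) returns the ratio of the pair's $dE/dx$ to that of two independent charges, which vanishes as $\lambda d\to 0$, exceeds unity where $J_{0}(\lambda d)<0$, and tends to one for $\lambda d\gg 1$:

```python
import math

def bessel_j0(x, n=2000):
    """J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt, by the midpoint rule;
    accurate to far better than we need for x up to ~100."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def pair_to_independent_ratio(lam, d):
    """Ratio of the pair dE/dx of Eq. (9) to that of two independent
    charges, Eq. (5): 2*[1 - J0(lam*d)] / 2 = 1 - J0(lam*d)."""
    return 1.0 - bessel_j0(lam * d)
```

The oscillatory overshoot above unity is exactly the "broad excursions" seen later in the shower simulation.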
For Cherenkov radiation, $l_{f}=\Lambda/\sin^{2}(\theta_{C})=\Lambda\epsilon\beta^{2}/(\epsilon\beta^{2}-1)$ [11] depends only on the Cherenkov emission angle $\theta_{C}$ and the photon wavelength; $l_{f}$ depends only slightly (through $\beta$) on the electron energy. While the pair is covering the distance $l_{f}$, the pair separation will change by an amount $\Delta d=l_{f}\sin{(\theta)}$, where $\theta$ is the angle between the $e^{+}$ and $e^{-}$ velocity vectors. Since $\theta$ is of order $1/\gamma$, $\Delta d/d\ll 1$, so the change in separation is not significant. Second, the Cherenkov radiation produced at a point ($x$-coordinate) depends on the fields emitted by the charged particles at earlier times, when $d$ may be different from its value at the point of radiation. For full rigor, these retarded separations should be used in the calculation. Again, this has a negligible effect on the results. III Radiation from $e^{+}e^{-}$ pairs in showers Many experiments study Cherenkov radiation from large electromagnetic showers. The radiation from a shower may be less than would be expected if every particle were treated as independent. We use a simple simulation to consider 300 to 800 nm radiation from electromagnetic showers. This frequency range is typical for photomultiplier-based Cherenkov detectors; at longer wavelengths there is little radiation, while shorter wavelength light is absorbed by the glass in the phototube. We simulated 1000 $\gamma$ conversions to $e^{+}e^{-}$ pairs with total energies from ${10}^{8}$ to ${10}^{20}$ eV. Pairs were produced with the energy partitioned between the $e^{+}$ and $e^{-}$ following the Bethe-Heitler differential cross section $d\sigma\approx E_{\pm}(1-E_{\pm})$, where $E_{\pm}$ is the electron (or positron) energy [13]. At high energies in dense media (above ${10}^{16}$ eV in water or ice), the LPM effect becomes important, and more asymmetric pairs predominate [12].
The pairs are generated with an initial opening angle of $m/k$; the fixed angle is a simplification, but the pair separation is dominated by multiple scattering, so it has little effect on our results. The $e^{-}$ and $e^{+}$ are tracked through a water medium (with $n=\sqrt{\epsilon}=1.3$) in steps of $0.02X_{0}$, where $X_{0}$ is the radiation length, 36.1 cm in water. At each step, the particles multiple-scatter, following a Gaussian approximation [14, Ch. 27]. The particles radiate bremsstrahlung photons, using a simplified model where photon emission follows a Poisson distribution, with mean free path $X_{0}$. Although this model has almost no soft bremsstrahlung, soft emission has little effect on Cherenkov radiation, since the electron or positron velocity is only slightly affected. At each step, we compute the Cherenkov radiation for each pair. The pairs are treated coherently when $d<2\Lambda$; at larger separations the particles radiate independently. As shown in Fig. 3, the particles in lower energy pairs ($<{10}^{10}$ eV) radiate almost independently. In contrast, the radiation from very high energy pairs ($>{10}^{15}$ eV) is largely suppressed. The broad excursions slightly above unity occur when $J_{0}(\lambda d)<0$ for many of the scattered pairs. IV Implications for experiments At least two types of astrophysical observatories depend on Cherenkov radiation. Water and ice based neutrino observatories observe Cherenkov radiation from the charged particles produced in neutrino interactions, and air Cherenkov telescopes look for $\gamma$-ray induced electromagnetic showers in the Earth’s atmosphere. Current neutrino observatories can search for electron neutrinos with energies above 50 TeV (for $\nu_{\mu}$, the threshold is much lower) [16]. They use large arrays of photomultiplier tubes to observe the Cherenkov radiation from $\nu_{e}$ induced showers. For water, with $n\approx 1.3$, Fig. 3 shows that $\lambda d<1$ while the pair travels significant distances.
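To illustrate how slowly the separation grows, the stepping scheme described in the previous section can be sketched in a stripped-down form. This is not the paper's simulation: we assume a symmetric energy split, track a single transverse plane, use the Highland multiple-scattering width without its logarithmic correction, and omit bremsstrahlung; all names are ours.

```python
import math
import random

X0_CM = 36.1  # radiation length of water, cm, as in the text

def pair_separation(k_eV, depth_x0=2.0, step_x0=0.02, seed=1):
    """Toy transverse separation d(x) of an e+e- pair, tracked in steps of
    step_x0 radiation lengths.  Returns the separation (cm) after each step."""
    rng = random.Random(seed)
    m = 0.511e6                      # electron mass, eV
    E = k_eV / 2.0                   # energy per leg (symmetric split)
    theta = [m / k_eV, -m / k_eV]    # initial half-angles of the two legs
    y = [0.0, 0.0]                   # transverse positions, cm
    seps = []
    for _ in range(round(depth_x0 / step_x0)):
        for i in range(2):
            # Gaussian scattering per step: theta_0 ~ 13.6 MeV / E * sqrt(step)
            theta[i] += rng.gauss(0.0, 13.6e6 / E * math.sqrt(step_x0))
            y[i] += theta[i] * step_x0 * X0_CM
        seps.append(abs(y[0] - y[1]))
    return seps
```

For TeV-scale pairs the separations stay at the micron scale over a significant fraction of a radiation length, which is the origin of the suppression discussed in the text.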
Ice is similar to water, with a slightly lower density; $n$ of ice depends on its structure, and is typically $\approx 1.29$ [18]. To quantify the effect of Cherenkov radiation from $\nu_{e}$ interactions, we use a toy model of an electromagnetic shower. The shower evolves through generations, with each generation having twice as many particles as the preceding generation, each with half the energy. Each generation evolves over a distance of $X_{0}$; other simulations have evolved generations over a shorter distance $(\ln{2})X_{0}$, leading to a more compact shower [17]. In these showers, most of the particles are produced in the last few radiation lengths. Fig. 4 shows the Cherenkov radiation expected from a model ${10}^{20}$ eV shower with coherent Cherenkov radiation (solid line) and in a model where all particles radiate independently (dotted line). This model does not include the LPM effect, so it should be considered only illustrative. The LPM effect lengthens the high-energy (above a few ${10}^{15}$ eV) portion of the shower. By spreading the shower longitudinally, the LPM effect will give the electrons and positrons more time to separate, and so will somewhat lessen the difference between the two results. However, it is clear from Fig. 4 that coherence has a significant effect for the first $\approx 22$ generations. Since the front of the shower contains relatively few particles, the suppression will not affect the measured energy; the change in the number of radiated photons (and hence in the energy measurement) should be less than 1%. However, the suppression will affect the apparent length of the shower. For the first $\approx 8$ generations, the shower will emit less light than a single charged particle. Because of the LPM effect, each of these generations (with mean particle energy $E_{g}$ above a few ${10}^{15}$ eV) develops over a distance $X=X_{0}\sqrt{E_{g}/(5E_{LPM})}$, where $E_{LPM}=278$ TeV is the effective LPM energy for water [17], greatly elongating the shower.
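The generation-length bookkeeping just described (a factor-of-two cascade whose early, LPM-affected generations are stretched) can be sketched as follows; the function name and the 25-generation cut-off are our choices:

```python
import math

X0 = 36.1        # radiation length in water, cm
E_LPM = 278e12   # effective LPM energy for water, eV (value from the text)

def generation_lengths(E0_eV, n_gen=25):
    """Toy shower: generation g contains 2**g particles of energy E0/2**g and
    develops over one radiation length, stretched to X0*sqrt(Eg/(5*E_LPM))
    when the LPM elongation quoted in the text makes that longer."""
    lengths = []
    for g in range(n_gen):
        Eg = E0_eV / 2**g
        lengths.append(X0 * max(1.0, math.sqrt(Eg / (5 * E_LPM))))
    return lengths
```

For a $10^{20}$ eV shower this puts the bulk of the shower length into the first several generations, which is why hiding their light makes the shower look compact.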
So, the first 8 generations include most of the length of the shower, and the suppression of Cherenkov radiation hides the initial shower development, making the shower appear considerably more compact. The reduction in early-stage radiation should help in separating electron cascades from muon-related backgrounds, especially muons that undergo hard interactions and lose a large fraction of their energy. Atmospheric Cherenkov telescopes like the Whipple observatory study astrophysical $\gamma$-rays with energies from 100 GeV to 10 TeV. These telescopes observe Cherenkov radiation from pairs in the upper atmosphere; for a 1 TeV shower, the maximum particle density occurs at an altitude of 8 km above sea level (asl) [15], where the density is about $1/3$ that at sea level. Since $n-1$ depends linearly on the density, at 8 km asl $n-1\approx 1\times{10}^{-4}$, so for 500 nm photons radiated from ultra-relativistic particles, $\lambda d<1$ only for $d<6\;\mu\mbox{m}$. In this low-density medium, the effect of the pair opening angle is significant and multiple scattering is less important. Pairs with $k<1$ TeV will separate by 30 $\mu$m in a distance less than 30 meters; at 8 km asl, this is 3% of a radiation length. This distance is too short to affect the radiation pattern from the shower. Cherenkov radiation is also used in lead-glass block calorimetry, and in Cherenkov counters for particle identification; their response to photon conversions may be affected by this coherence. Although the reactions are slightly different, a similar analysis applies to the reduction of ionization by $e^{+}e^{-}$ pairs. Perkins observed that the ionization from pairs with mean energy 180 GeV in emulsion was suppressed for the first $\approx 250$ $\mu$m after the pairs were created [9]. With $X_{0}=3$ cm (typical for emulsion), the $e^{+}$ and $e^{-}$ trajectories will be about 4 nm apart after travelling $250$ $\mu$m.
For relativistic particles, the screening distance (effective range for $dE/dx$) is determined by the plasma frequency of the medium, $\omega_{p}$. For silver bromide, the dominant component of emulsion, $\hbar\omega_{p}=48$ eV [10] (in a complete emulsion, $\hbar\omega_{p}$ will be slightly lower). This yields a screening distance $c/\omega_{p}=4$ nm, which is very close to the calculated separation. V Conclusion We have calculated the Cherenkov radiation from $e^{+}e^{-}$ pairs as a function of the pair separation $d$. When $d^{2}<v^{2}/(\omega^{2}|1-\beta^{2}\epsilon(\omega)|)$, the radiation is suppressed compared to that from two independent particles. This suppression affects the radiation from electromagnetic showers in dense media. Although the total radiation from a shower is not affected, emission from the front part of the shower is greatly reduced; this will affect studies of the shower development, and may affect measurements of the position of the shower. This work was funded by the U.S. National Science Foundation under Grant number OPP-0236449 and the U.S. Department of Energy under contract number DE-AC-76SF00098. References [1] P. A. Cherenkov, Dokl. Akad. Nauk. SSR 2, 451 (1934); S. I. Vavilov, Dokl. Akad. Nauk. SSR 2, 457 (1934); I. M. Frank and Ig. Tamm, Dokl. Akad. Nauk. SSR 14, 109 (1937). [2] I. M. Frank, Zh. fiz. SSR 7, 49 (1943). [3] V. Eidman, Transactions of the Gorky Research Physiotechnical Institute and Radiophysical Faculty of the Gorky State University, Scientific Notes 30 (1956). [4] N. L. Balazs, Phys. Rev. 104, 1220 (1956). [5] J. V. Jelley, Cherenkov Radiation and its applications (Pergamon Press, 1958). [6] E. Zas, F. Halzen and T. Stanev, Phys. Rev. D45, 362 (1992); J. Alvarez-Muniz et al., Phys. Rev. D68, 043001 (2003); S. Razzaque et al., Phys. Rev. D69, 047101 (2004); A. R. Beresnyak, astro-ph/0310295. [7] G. A. Askaryan and B. A. Dolgoshein, JETP Lett. 25, 213 (1977). [8] J. Alvarez-Muniz and E. Zas, Phys. Lett.
B434, 396 (1998) . [9] D. H. Perkins, Phil. Mag. 46, 1146 (1955). [10] J.D. Jackson, Classical Electrodynamics, 3rd edition (John Wiley & Sons, New York, 1998). [11] M.S. Zolotorev and K.T. McDonald, preprint physics/0003096. [12] S. Klein, Rev. Mod. Phys. 71, 1501 (1999); S. Klein, hep-ex/0402028. [13] H. A. Bethe and W. Heitler, Proc. R. Soc. London, Ser A 146, 83 (1934). [14] S. Eidelman et al., Phys. Lett. B592, 1 (2004). [15] C. M. Hoffman, C. Sinnis, P. Fleury and M. Punch, Rev. Mod. Phys. 71, 897 (1999). [16] M. Ackermann, Astropart. Phys. 22, 127 (2004); the Icecube collaboration, IceCube Preliminary Design Document, Oct., 2001. Available at www.icecube.wisc.edu [17] S. Klein, astro-ph/0412546. [18] P. B. Price, K. Woschnagg, Astropart. Phys. 15, 97 (2001).
Resonance structures in kink-antikink collisions in a deformed sine-Gordon model Patrick Dorey$^{a}$, Anastasia Gorina$^{b}$, Ilya Perapechka$^{b}$, Tomasz Romańczukiewicz$^{c}$ and Yakov Shnir$^{d}$ $^{a}$Department of Mathematical Sciences, Durham University, UK; $^{b}$Department of Theoretical Physics and Astrophysics, Belarusian State University, Minsk 220004, Belarus; $^{c}$Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Kraków, Poland; $^{d}$BLTP, JINR, Dubna 141980, Moscow Region, Russia p.e.dorey@durham.ac.uk nastya.gorina.2931@gmail.com jonahex111@outlook.com tomasz.romanczukiewicz@uj.edu.pl shnir@theor.jinr.ru Abstract We study kink-antikink collisions in a model which interpolates smoothly between the completely integrable sine-Gordon theory, the $\phi^{4}$ model, and a $\phi^{6}$-like model with three degenerate vacua. We find a rich variety of behaviours, including integrability breaking, resonance windows with increasingly irregular patterns, and new types of windows near the $\phi^{6}$-like regime. False vacua, extra kink modes and kink fragmentation play important roles in the explanations of these phenomena. Our numerical studies are backed up by detailed analytical considerations. Keywords: soliton collisions, resonant structure, fractal 1 Introduction The static and dynamic behaviour of kinks, that is, topological solitons in 1+1 dimensional classical scalar field theories, has attracted much attention over the years; see e.g. Manton:2004tk ; Vachaspati ; Shnir2018 . In the classical sine-Gordon theory, the complete integrability of the field equations leads to constraints on the dynamics of these solitons via the infinite number of integrals of motion, as reviewed in babelon .
As a result, the collision of sine-Gordon solitons cannot excite radiation modes, and the asymptotic structure of the final configuration only differs from what would have been found in the absence of any interaction by a set of phase shifts. However, integrable models are exceptional, and most physical theories do not belong to this class. Usually, an integrable model can be considered as an approximation to a more realistic theory, and while some features of the integrable model may still persist in a deformed non-integrable system, others are lost, often in interesting ways. Aspects of this interplay between integrable and quasi-integrable models have attracted a lot of attention recently, see e.g. Ferreira:2012sd ; Ferreira:2010gh ; Uiimo . One example is the modified sine-Gordon model of Peyrard1 ; Peyrard2 which still supports kink solutions, though their scattering is much more complicated than in the original integrable model. The presence of defects (or impurities) and boundaries can also destroy the complete integrability of the sine-Gordon theory in a variety of ways malomed ; malomed2 ; Arthur:2015mva . The $\phi^{4}$ theory is similar to the sine-Gordon model in that they both support topological kinks, but the $\phi^{4}$ theory is not exactly integrable, and there is radiative energy loss in kink-antikink ($K\bar{K}$) collisions. Associated with this is the resonant energy exchange mechanism and the resulting chaotic dynamics of kinks, see e.g. Anninos:1991un ; Campbell:1983xu ; Goodman:2005 ; Makhankov:1978rg ; Moshir:1981ja ; Dorey:2011yw . Numerical simulations of the $\phi^{4}$ theory reveal that for certain initial velocities, the kink and antikink collide, become separated by a finite distance, then collide a second time before finally escaping to infinity. This is known as a (two-bounce) resonance window Anninos:1991un ; Campbell:1983xu . 
Solitons can also escape after three or more consecutive collisions, leading to an intricate nested structure of multi-bounce windows Anninos:1991un ; Campbell:1983xu . These resonance windows are related to the reversible exchange of energy between the translational mode of the $\phi^{4}$ kink and its internal mode Anninos:1991un ; Campbell:1983xu . This mechanism can be reasonably well approximated in a truncated model, which takes into account only two dynamical degrees of freedom, the collective coordinates of the kink and the internal mode Anninos:1991un ; Goodman:2005 ; Takyi:2016tnc ; Weigel:2013kwa . This approach is qualitatively effective, but attempts to derive it from first principles have been plagued by ambiguities and discrepancies, see Weigel:2013kwa ; Weigel:2018xng ; Adam:2018tnv , although recent developments Manton:2020onl ; Manton:2021ipk show some promising results. Furthermore, some modifications of the $\phi^{4}$ model, see e.g. Christ-Lee ; Simas:2016hoo ; Demirkaya:2017euk , allow for the existence of towers of internal modes localized on the kinks. Clearly, in such cases a simple resonance energy exchange mechanism cannot be applied. Resonance structures can also be observed in the triply-degenerate $\phi^{6}$ model Dorey:2011yw . The kinks in this model do not support localized modes, and the resonance effects are instead determined by collective modes trapped by the $K\bar{K}$ system. A modification of the collective coordinate approach reproduces the resonance dynamics of the $\phi^{6}$ $K\bar{K}$ collisions qualitatively well Takyi:2016tnc ; Weigel:2013kwa . The collective coordinate approximation is limited by the assumption that the modes which lead to resonances, and the modes of the continuum, are separated.
However, as the frequency of the internal mode of a kink is shifted towards the mass threshold, an excitation of this mode may also excite the radiative modes of the continuous spectrum, affecting the resonance exchange mechanism. This motivates the current paper, an investigation of $K\bar{K}$ collisions in a deformation of the sine-Gordon theory, other aspects of which were recently studied in Dorey:2019uap . The deformation lifts the infinite degeneracy of the sine-Gordon vacuum while leaving a $\mathbb{Z}_{2}$ symmetry unbroken. For small values of the deformation parameter, close to sine-Gordon, the model possesses two true vacua and a large number of false vacua. These open new channels of $K\bar{K}$ collisions via the excitation of a bubble of false vacuum and its subsequent decay, allowing a smooth transition from the kink-antikink reflection that occurs in the deformed theory to the kink-antikink transmission found in sine-Gordon. At larger values of the deformation parameter a rich variety of behaviours is found: first, a simple deformation of the $\phi^{4}$ pattern of resonance windows, but then, as the kinks of the model acquire more localised modes, a much less regular pattern of windows together with transient structures we call pseudowindows. Finally, close to the critical coupling at which the model has three degenerate vacua, the kinks and antikinks partially fragment into subkinks and their scattering decomposes into a series of pairwise interactions, leading to a novel pattern of windows akin to recent studies of the scattering of wobbling kinks in the $\phi^{4}$ theory Alonso-Izquierdo:2020ldi . The paper is organised as follows. In Section 2 we review the deformed sine-Gordon model of Dorey:2019uap . We also discuss the dependence of the spectral structure of linear perturbations of the solutions on the value of the deformation parameter, and the fragmentation of kinks and antikinks which occurs as the $\phi^{6}$-like limit is approached.
Section 3 explains the numerical methods we used to investigate kink scattering in this model, and gives a bird's-eye view of our results. In Section 4 we present our detailed results for kink-antikink collisions in the first regime of interest, where the model transitions from sine-Gordon to $\phi^{4}$ behaviour, and discuss the resonance structures we observed. The collisions of kinks in the second regime, where the $\phi^{4}$ model moves to a $\phi^{6}$-like model, are discussed in Section 5. Conclusions and further remarks are contained in the last section. 2 The Model 2.1 Model structure We consider a simple theory of a real scalar field in 1+1 space-time dimensions specified by the Lagrangian $$L=\frac{1}{2}\partial^{\mu}\phi\partial_{\mu}\phi-U(\phi)$$ (1) where the self-interaction potential $U(\phi)$ is $$U(\phi)=(1-\epsilon)\left(1-\cos\phi\right)+\frac{\epsilon\phi^{2}}{8\pi^{2}}\left(\phi-2\pi\right)^{2}\,,$$ (2) and $\epsilon$ is a real positive parameter. This model was proposed in Dorey:2019uap to investigate the decay of large amplitude breather-like states; our goal here is rather to study the collisional dynamics of its solitons, which turns out to be surprisingly rich. The parameter $\epsilon$ controls the deformation away from the infinitely-degenerate periodic sine-Gordon potential, which is recovered in the limit $\epsilon=0$, as shown in figure 1. For any non-zero value of $\epsilon$, the infinite dihedral group symmetry of the initial potential is broken down to $\mathbb{Z}_{2}$, generated by a reflection around $\phi=\pi$. For $\epsilon<0$ the energy is not bounded from below, and we therefore exclude these values. In addition, for $\epsilon>\epsilon_{cr}$, where $$\epsilon_{cr}=\frac{16}{16-\pi^{2}}\approx 2.609945762,$$ (3) the potential has a single global minimum at $\phi=\pi$ and the model does not support static kink solutions, so we exclude these values too, and confine our attention to the range $0\leq\epsilon\leq\epsilon_{cr}$.
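A quick numerical check of the potential (2) and of the critical coupling (3) (helper names are ours): $\epsilon_{cr}$ is precisely the coupling at which $U(\pi)$ drops to zero, i.e. at which the local extremum at $\phi=\pi$ becomes degenerate with the true vacua.

```python
import math

def U(phi, eps):
    """Deformed sine-Gordon potential, Eq. (2)."""
    return (1 - eps) * (1 - math.cos(phi)) \
        + eps * phi**2 * (phi - 2 * math.pi)**2 / (8 * math.pi**2)

# Eq. (3): demanding U(pi) = 0, i.e. 2(1-eps) + eps*pi^2/8 = 0, gives eps_cr.
eps_cr = 16 / (16 - math.pi**2)
```

Both true vacua sit at $U=0$ for every $\epsilon$, which is what keeps the perturbative mass about them fixed.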
For all $\epsilon$ in the interior of this range, the potential has just two global minima, at $\phi=0$ and $\phi=2\pi$, and the model is parametrised in such a way that small perturbations around these true vacua always have mass equal to $1$. In addition to these perturbative states, the model has kink and antikink solutions which interpolate between the global minima, the kinks interpolating between $\phi=0$ at $x=-\infty$ and $\phi=2\pi$ at $x=+\infty$ and the antikinks doing the opposite. (We will often be a little loose in our language and refer to topological lumps of either kind as kinks.) Some examples are shown in figure 2, and our aim in the following is to analyse how they scatter. In addition to the true vacua, the model’s potential has a number of local minima corresponding to false vacua, which can influence kink scattering in important ways. It is possible to consider perturbations around these false vacua, as long as they remain small. In such cases, for any local minimum $\psi$ of the potential with $U^{\prime}(\psi)=0$, the square of the mass of a small perturbation is $U^{\prime\prime}(\psi)$. The false vacua vanish when this mass tends to zero. These critical points are therefore defined by the pair of nonlinear equations (for $\phi$ and the parameter $\epsilon$) $$\frac{\partial U}{\partial\phi}=0,\qquad\frac{\partial^{2}U}{\partial\phi^{2}}=0\,.$$ (4) These critical points allow us to determine the vacuum structure of the model. For example, for $0.00105685<\epsilon<0.0074737$ there are four false vacua, near $\phi\approx-4\pi,-2\pi,4\pi,6\pi$, while for $0.0074737<\epsilon<0.0507066$ there are two false vacua (near $\phi\approx-2\pi$ and $4\pi$), and for $0.0507066<\epsilon<2$ there are only the two true vacua ($\phi=0$ and $\phi=2\pi$). Setting $\epsilon=1$ yields a shifted and rescaled potential of the $\phi^{4}$ model.
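The vacuum counting quoted above is easy to reproduce by brute force: locate the sign changes of $\partial U/\partial\phi$ from negative to positive on a grid wide enough to cover the false vacua near $\phi\approx-4\pi,\ldots,6\pi$. A sketch (helper names and the scan window are ours):

```python
import math

def dU(phi, eps):
    """dU/dphi for the potential of Eq. (2); cf. the field equation (6)."""
    return (1 - eps) * math.sin(phi) \
        + eps * phi * (phi - math.pi) * (phi - 2 * math.pi) / (2 * math.pi**2)

def count_minima(eps, lo=-6 * math.pi, hi=8 * math.pi, n=20000):
    """Count local minima of U (true plus false vacua) on [lo, hi] by
    locating sign changes of dU from negative to positive."""
    count = 0
    h = (hi - lo) / n
    for k in range(n):
        if dU(lo + k * h, eps) < 0 <= dU(lo + (k + 1) * h, eps):
            count += 1
    return count
```

The counts reproduce the three regimes listed in the text (six, four and two minima), as well as the extra false vacuum at $\phi=\pi$ appearing for $\epsilon>2$.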
Increasing $\epsilon$ further decreases the height of the potential barrier between the two vacua, and for $\epsilon>2$ the local maximum at $\phi=\pi$ splits into two humps located in the intervals $\pi<\phi<2\pi$ and $0<\phi<\pi$, with a local minimum between them at $\phi=\pi$. For $2<\epsilon<\epsilon_{cr}$ this minimum corresponds to a false vacuum, and for $\epsilon$ in this range the potential (2) has a very similar shape to that of the $\phi^{6}$ model considered many years ago by Christ and Lee Christ-Lee . The mass of small fluctuations about this false vacuum is $$m_{1}(\epsilon)=\sqrt{U^{\prime\prime}(\pi)}=\sqrt{(\epsilon-2)/2}\,.$$ (5) Finally, at $\epsilon=\epsilon_{cr}$ the vacuum at $\phi=\pi$ becomes degenerate with those at $\phi=0$ and $2\pi$, and the model resembles the triply-degenerate $\phi^{6}$ model that was found to exhibit resonant scattering in Dorey:2011yw . The field equation of the model (1) can be written as $$\phi_{tt}-\phi_{xx}+(1-\epsilon)\sin\phi+\frac{\epsilon\phi}{2\pi^{2}}\left(\phi-\pi\right)\left(\phi-2\pi\right)=0\,.$$ (6) For all $\epsilon\in[0,\epsilon_{cr})$, the static kink $\phi_{K}(x;\epsilon)$ depends parametrically on $\epsilon$ and interpolates between the vacua $\phi=0$ as $x\to-\infty$ and $\phi=2\pi$ as $x\to\infty$. In two cases its form is known analytically: $$\phi_{K}(x;\epsilon=0)=4\arctan e^{x-x_{0}}\,;\qquad\phi_{K}(x;\epsilon=1)=\pi\left(\tanh\frac{x-x_{0}}{2}+1\right).$$ (7) There is little visual difference between these two solutions, as can be seen from figure 2. Despite their similarity, the dynamical properties of these kinks are very distinct, as will be seen below. 
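Both profiles in (7) can be checked against the first-order equation $\phi_{x}=\sqrt{2U(\phi)}$ obeyed by static kinks (eq. (9) below). A short Python check (illustrative only; helper names are ours):

```python
import math

def U(phi, eps):
    """Deformed potential, eq. (2)."""
    return ((1 - eps) * (1 - math.cos(phi))
            + eps * phi**2 * (phi - 2 * math.pi)**2 / (8 * math.pi**2))

def kink_sg(x):    # eps = 0 kink of eq. (7), with x0 = 0
    return 4 * math.atan(math.exp(x))

def kink_phi4(x):  # eps = 1 kink of eq. (7), with x0 = 0
    return math.pi * (math.tanh(x / 2) + 1)

def bps_residual(profile, eps, xs, h=1e-5):
    """max |phi'(x) - sqrt(2 U(phi))| over xs, using central differences."""
    worst = 0.0
    for x in xs:
        dphi = (profile(x + h) - profile(x - h)) / (2 * h)
        worst = max(worst, abs(dphi - math.sqrt(2 * U(profile(x), eps))))
    return worst

xs = [-6 + 0.1 * i for i in range(121)]
res_sg = bps_residual(kink_sg, 0.0, xs)
res_phi4 = bps_residual(kink_phi4, 1.0, xs)
```

Both residuals vanish to numerical precision, confirming that (7) solves the static field equation in these two special cases.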
Stronger deformations of the potential (2) significantly affect not just the scattering but also the form of the topological solitons, and as the deformation parameter approaches the critical value $\epsilon_{cr}$, they become almost decomposed into pairs of subkinks interpolating between the true vacua at $\phi=0$ and $2\pi$ via the false vacuum at $\phi=\pi$, as also shown in figure 2. In the limit $\epsilon\to\epsilon_{cr}$ the potential becomes triply degenerate and the kinks split into two subkinks, living in different topological sectors. The mass of the kink, $$M(\epsilon)=\int_{0}^{2\pi}\sqrt{2U(\phi)}\,d\phi\,,$$ (8) can be obtained from the standard Bogomol’nyi trick. It decreases from $M(0)=8$ through $M(1)=\frac{2\pi^{2}}{3}\approx 6.5797$ to $M(\epsilon_{cr})\approx 2.6363$ (cf. figure 3). Note that this limiting mass as $\epsilon\to\epsilon_{cr}$ is exactly twice the mass of a ‘true’ single kink of the model at $\epsilon=\epsilon_{cr}$. The Bogomol’nyi trick also reduces the second order static field equation to the first order Bogomol’nyi–Prasad–Sommerfield (BPS) equation $$\phi_{x}=\sqrt{2U(\phi)}$$ (9) which is much easier to solve and requires only a single integration constant. The profiles assume the vacuum values at spatial infinities. 2.2 Spectral structure We now consider the spectrum of linear perturbations of a single kink by setting $\phi(x,t)=\phi_{K}(x;\epsilon)+\eta(x,t)$ where $\eta(x,t)=\xi(x)e^{i\omega t}$. The corresponding equation for the fluctuation eigenfunctions is $$\left(-\frac{d^{2}}{dx^{2}}+V(x)\right)\xi(x)=\omega^{2}\xi(x)$$ (10) where $$V(x)=U^{\prime\prime}(\phi_{K}(x))=(1-\epsilon)\cos\phi_{K}(x)+\frac{\epsilon}{2\pi^{2}}\left(3(\phi_{K}(x)-\pi)^{2}-\pi^{2}\right)$$ (11) is the corresponding effective potential for the linear excitations, displayed in figure 4 for various representative values of $\epsilon$.
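The quoted masses are reproduced by direct quadrature of (8); an illustrative Python sketch using the trapezoidal rule (our own helper names):

```python
import math

def U(phi, eps):
    """Deformed potential, eq. (2)."""
    return ((1 - eps) * (1 - math.cos(phi))
            + eps * phi**2 * (phi - 2 * math.pi)**2 / (8 * math.pi**2))

def kink_mass(eps, n=100000):
    """M(eps) = integral_0^{2 pi} sqrt(2 U) dphi, eq. (8), trapezoidal rule."""
    h = 2 * math.pi / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        # max(..., 0) guards against tiny negative rounding at the zeros of U
        s += w * math.sqrt(max(2 * U(i * h, eps), 0.0))
    return s * h

eps_cr = 16 / (16 - math.pi**2)
```

Evaluating at $\epsilon=0,1,\epsilon_{cr}$ recovers $8$, $2\pi^{2}/3$ and approximately $2.6363$ respectively.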
Note that the parameters of the model (1) are fixed in such a way that the masses of the excitations around the true vacua $\phi=0$ and $\phi=2\pi$ remain the same and equal to $1$, as in the original sine-Gordon model. We can solve (10) numerically for each value of the deformation parameter $\epsilon$. For $\epsilon=0$ the sine-Gordon model is undeformed, and the spectrum of linearized kink excitations does not contain any internal modes, but rather just the usual translational zero mode and the states of the continuum. These modes are not affected by the variation of the parameter $\epsilon$, and exist for all of its values. However, as $\epsilon$ increases from zero an internal mode appears, its eigenfrequency $\omega_{1}$ smoothly decreasing from the mass threshold with increasing $\epsilon$ as shown in figure 5, where we used the transformed parameter $$\beta=\tanh^{-1}(\epsilon/\epsilon_{cr})$$ (12) for better visibility of the spectrum near the critical value $\epsilon=\epsilon_{cr}$. For $\epsilon=1$ the effective potential (11) reduces to the well-known Pöschl-Teller potential for the $\phi^{4}$ theory. In this case the corresponding spectrum of fluctuations contains the translational zero mode $\omega_{0}=0$, one internal mode of the kink with frequency $\omega_{1}=\sqrt{3}/2$, and the continuum modes $\omega_{k}=\sqrt{k^{2}+1}$. For $\epsilon>1$ more internal modes appear, their number tending to infinity as $\epsilon$ approaches $\epsilon_{cr}$. Recall that for $2<\epsilon<\epsilon_{cr}$ there is a false vacuum at $\phi=\pi$. As can be seen in figure 2 and as analysed in more detail in the next subsection, kinks in this region come to look like a pair of bound subkinks, the first interpolating $0\to\pi$ and the second $\pi\to 2\pi$, rather than a single kink $0\to 2\pi$. Each of these subkinks generates its own potential well, but these do not support localised modes. 
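The value $\omega_{1}=\sqrt{3}/2$ at $\epsilon=1$ can be verified directly: with $\phi_{K}-\pi=\pi\tanh\frac{x}{2}$, the potential (11) becomes the Pöschl-Teller well $V(x)=1-\frac{3}{2}\,\mathrm{sech}^{2}\frac{x}{2}$, whose first excited bound state is $\xi\propto\tanh\frac{x}{2}\,\mathrm{sech}\frac{x}{2}$. A short Python check (illustrative; finite differences on a grid):

```python
import math

def sech(x):
    return 1.0 / math.cosh(x)

def V(x):
    """Effective potential (11) at eps = 1: the Poschl-Teller well of the phi^4 kink."""
    return 1.0 - 1.5 * sech(x / 2) ** 2

def xi(x):
    """Candidate internal-mode eigenfunction (unnormalised)."""
    return math.tanh(x / 2) * sech(x / 2)

# Residual of eq. (10) with omega^2 = 3/4, via a second-order finite difference.
h = 1e-4
worst = 0.0
for i in range(-800, 801):
    x = 0.01 * i
    d2 = (xi(x + h) - 2.0 * xi(x) + xi(x - h)) / (h * h)
    worst = max(worst, abs(-d2 + V(x) * xi(x) - 0.75 * xi(x)))
```

The residual `worst` is at the level of discretisation noise, confirming $\omega_{1}^{2}=3/4$.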
However, the presence of the false vacuum yields an almost flat potential between the subkinks, as shown in figure 4. As $\epsilon\to\epsilon_{cr}$ the distance between the two subkinks tends to infinity. This limit corresponds to a very wide potential well, within which many bound modes can be trapped. At $\epsilon=\epsilon_{cr}$, the false vacuum $\phi=\pi$ becomes a true vacuum and the subkinks become independent topological defects. These new, smaller kinks are not symmetric: they interpolate between one of the original two vacua $\phi=0$ or $\phi=2\pi$, for which the mass of the linearized perturbations is $m=1$, and the new third vacuum at $\phi=\pi$. The mass of excitations about this vacuum is different, $m_{1}(\epsilon_{cr})=\sqrt{\frac{\pi^{2}-8}{16-\pi^{2}}}\approx 0.5522<1$. As can be seen by comparing figure 30 below with the corresponding plots in Dorey:2011yw , the collisional dynamics of these new kinks is close to the resonant $K\bar{K}$ scattering observed in the $\phi^{6}$ model, where the resonance structure appears not because of the presence of internal modes of the kinks but rather from collective modes trapped between them. 2.3 The double kink A kink can fragment into subkinks when one (or more) false vacua lie between two true vacua. This occurs in our model as $\epsilon\to\epsilon_{cr}$, the single kink fragmenting into two subkinks separated by a region of false vacuum. The field of the subkinks situated at $x=\pm L$ approaches the false vacuum $\psi(\epsilon)$ as $$\phi(x)-\psi(\epsilon)\approx\pm\frac{B}{m_{1}(\epsilon)}e^{\pm m_{1}(\epsilon)(x\mp L)}.$$ (13) The constant $B$ can be obtained from the numerical solution of the BPS equation (9). Its value depends on how we choose to define the positions of the subkinks. One natural choice is the point at which the field reaches the local maximum of the potential. For $\epsilon=\epsilon_{cr}$ this is at $\phi=1.367646$ and $\phi=2\pi-1.367646=4.915539$, for which $B=1.210404$.
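Numbers such as $\phi=1.367646$ follow from one-dimensional searches on $U$. For instance, a simple ternary search (illustrative Python; the helper name is ours) recovers the barrier top between $\phi=0$ and $\phi=\pi$ at $\epsilon=\epsilon_{cr}$:

```python
import math

def U(phi, eps):
    """Deformed potential, eq. (2)."""
    return ((1 - eps) * (1 - math.cos(phi))
            + eps * phi**2 * (phi - 2 * math.pi)**2 / (8 * math.pi**2))

eps_cr = 16 / (16 - math.pi**2)

def barrier_top(lo=0.5, hi=2.5, iters=100):
    """Ternary search for the local maximum of U(., eps_cr) in (0, pi);
       U is unimodal on this bracket, so the search converges."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if U(m1, eps_cr) < U(m2, eps_cr):
            lo = m1
        else:
            hi = m2
    return 0.5 * (lo + hi)
```

The mirror point $2\pi-1.367646=4.915539$ follows from the $\mathbb{Z}_{2}$ reflection symmetry about $\phi=\pi$.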
The force acting on the subkinks is a sum of the standard kink-kink repulsion (see for example Manton:2004tk ) and the negative pressure of the false vacuum: $$F=2B^{2}e^{-2m_{1}(\epsilon)L}-U(\psi(\epsilon)).$$ (14) In this case $\psi(\epsilon)=\pi$ for all $\epsilon\in(2,\epsilon_{cr})$ and $$U(\psi(\epsilon))=\frac{16-\pi^{2}}{8}(\epsilon_{cr}-\epsilon)\,.$$ (15) Static solutions balance the scalar repulsion against the false vacuum pressure, and the condition $F=0$ gives the subkink positions $x=\pm L(\epsilon)$, with $$L(\epsilon)=-\frac{1}{2m_{1}(\epsilon)}\log\left[\frac{U(\psi(\epsilon))}{2B^{2}}\right]\,.$$ (16) Keeping only the lowest-order terms in $\epsilon_{cr}-\epsilon$ and using $m_{1}(\epsilon_{cr})=\sqrt{\frac{\pi^{2}-8}{16-\pi^{2}}}$ we obtain $$L(\epsilon)=-\frac{1}{2}\sqrt{\frac{16-\pi^{2}}{\pi^{2}-8}}\log\left[\frac{16-\pi^{2}}{16B^{2}}(\epsilon_{cr}-\epsilon)\right]$$ (17) or, in terms of $\beta$, where near the critical value of $\epsilon$ we can use $\epsilon_{cr}-\epsilon\approx 2\epsilon_{cr}e^{-2\beta}$, $$L(\beta)=\frac{1}{2}\sqrt{\frac{16-\pi^{2}}{\pi^{2}-8}}\left[2\beta-\log\frac{2}{B^{2}}\right].$$ (18) This formula approximates the positions of the subkinks very well for large values of $\beta$. 2.4 Small perturbations of the double kink The static solution representing a double kink balances the force repelling the subkinks and the attractive force due to the false vacuum trapped between them. When the subkinks are moved away from their equilibrium positions, say by some value $\delta L$, they will start to oscillate around these positions.
Small displacements result in a returning force $$\delta F=\frac{dF}{dL}\delta L=2B^{2}e^{-2m_{1}L}(-2m_{1})\delta L$$ (19) but since at equilibrium $F=0$ we can eliminate the constant $B$ and write $$\delta F=-2m_{1}U(\psi)\delta L.$$ (20) Taking into account that the mass of a subkink is half that of the full kink, $M_{1/2}=M(\epsilon)/2$, we can find the frequency of the small oscillations: $$\omega^{2}=\frac{4m_{1}U(\psi)}{M(\epsilon)}.$$ (21) The leading term as $\epsilon\to\epsilon_{cr}$ is $$\omega=\sqrt{\frac{m_{1}(\epsilon_{cr})(16-\pi^{2})(\epsilon_{cr}-\epsilon)}{2M(\epsilon_{cr})}}\approx 0.801\sqrt{\epsilon_{cr}-\epsilon}\approx 1.83\,e^{-\beta}.$$ (22) These oscillations move the subkinks closer together and then further apart, corresponding to an antisymmetric excitation of the translational modes of the subkinks. This motion corresponds to the excitation of the first non-zero mode $\omega_{1}$ of the full double kink, shown as a solid red curve in figure 5. We have checked numerically within the range $\epsilon_{cr}-\epsilon\in[10^{-5},10^{-2}]$, and $\omega_{1}(\epsilon)$ indeed scales as the square root of the difference $\epsilon_{cr}-\epsilon$ in this region, its numerical value agreeing with the approximation to at least three significant figures. 2.5 Further static solutions: the unstable lump The presence of a false vacuum allows for another class of static solutions, albeit with infinite energy. These involve subkinks and subantikinks which interpolate between neighbouring true and false vacua. In scalar field theories a kink and an antikink always attract each other. However, if they are surrounded by a false vacuum with a true vacuum trapped in between, there is an additional force trying to separate them and thereby expand the region of true vacuum. For wide separations this force is constant, depending only on the difference between the field theoretic potentials in the false and true vacua.
Since the attractive kink-antikink force decays exponentially, this opens the possibility that there is a single distance at which the two forces balance. A perturbation in either direction grows, however, destroying the static state: the solution is unstable. In contrast to the situation for the double kinks, neighbouring true and false vacua are found both for $\epsilon$ small and for $\epsilon\to\epsilon_{cr}$. Suppose the relevant false vacuum is at $\phi=\psi$. In order to find the unstable solution we multiply the static version of (6) by $\phi_{x}$ and integrate once, imposing the false vacuum boundary condition $\phi=\psi$ at spatial infinity to find the BPS-like equation $$\phi_{x}=\pm\sqrt{2[U(\phi)-U(\psi)]}\,.$$ (23) Solutions of this equation on the full line have zero topological charge and cannot be kinks: rather, they are unstable lumps, which can be viewed as static (sub-) kink and antikink pairs, as described in the opening paragraph of this section. The derivative $\phi_{x}$ vanishes before the field reaches either of the true vacua; imposing this condition at the centre of the solution defines its turning-point value: $$U(\phi(0))=U(\psi)\,.$$ (24) This is a transcendental equation for $\phi(0)$ which has to be solved numerically. Given the value $\phi(0)$, we can integrate equation (23) with different signs for $x>0$ and $x<0$. Example solutions are shown in figure 6. Note that both for small (left panel) and near-critical (right panel) values of $\epsilon$, the unstable solution develops an antikink-kink structure as the potential difference between the false and true vacua decreases. For large separations (small values of $\epsilon$ or $\epsilon_{cr}-\epsilon$) we can use a similar approximation to that for the double kink and find the positions of the kink and antikinks with a few substitutions.
The false vacuum is now on the outside, pulling the kink and antikink apart, and the true vacuum is trapped in between. Therefore in equation (16) $m_{1}$ has to be replaced by the mass of the true vacuum $m=1$. For small values of $\epsilon$ the vacuum at $\phi=4\pi$ is raised so that $U(4\pi)=8\pi^{2}\epsilon$. Since the profiles of the kinks are known in the unperturbed model we can calculate $B=4$ explicitly. This leads to $$L(\epsilon)=-\frac{1}{2}\log\left(\frac{\pi^{2}\epsilon}{4}\right)$$ (25) which agrees well with the numerically found profiles. In the near-critical case $U(\psi(\epsilon))$ is also known (15), but $B=3.003373$ has to be determined numerically, giving $$L(\epsilon)=-\frac{1}{2}\log\left[\frac{16-\pi^{2}}{16B^{2}}(\epsilon_{cr}-\epsilon)\right]=\frac{1}{2}\left[2\beta-\log\frac{2}{B^{2}}\right].$$ (26) In the case of unstable lumps the forces act in the opposite directions to the double-kink case (the true vacuum is inside and the false one outside), so equation (21) for the frequencies acquires the opposite sign, with $m_{1}$ replaced by $1$, yielding $$\omega^{2}=-\frac{4U(\psi)}{M(\epsilon)}.$$ (27) For $\epsilon\to 0$ we can use the mass of the sine-Gordon kink $M=8$ and the limiting eigenfrequency of the unstable mode is therefore $\omega^{2}=-4\pi^{2}\epsilon$. For $\epsilon\to\epsilon_{cr}$ we rather obtain $$\omega^{2}=-\frac{16-\pi^{2}}{M(\epsilon_{cr})}(\epsilon_{cr}-\epsilon)\approx-2.325(\epsilon_{cr}-\epsilon)\approx-12.14\,e^{-2\beta}\,.$$ (28) On their own, these unstable lumps have infinite energy, because of the false vacuum surrounding them. However, as will be seen below, at least for $\epsilon\to\epsilon_{cr}$ they can still play an important role in the dynamics of the system when considered locally.
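The various frequency formulas of this subsection are easy to cross-check numerically. The Python sketch below (illustrative; all helper names are ours) verifies the $0.801$ coefficient of the double-kink mode via (21), the exact relation $U(4\pi)=8\pi^{2}\epsilon$ behind the small-$\epsilon$ lump frequency (27), and the $-2.325$ coefficient of (28):

```python
import math

def U(phi, eps):
    """Deformed potential, eq. (2)."""
    return ((1 - eps) * (1 - math.cos(phi))
            + eps * phi**2 * (phi - 2 * math.pi)**2 / (8 * math.pi**2))

def kink_mass(eps, n=50000):
    """M(eps) by trapezoidal quadrature of eq. (8)."""
    h = 2 * math.pi / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.sqrt(max(2 * U(i * h, eps), 0.0))
    return s * h

eps_cr = 16 / (16 - math.pi**2)

# Double kink: eq. (21) close to eps_cr should reproduce the 0.801 coefficient.
eps = eps_cr - 1e-4
m1 = math.sqrt((eps - 2) / 2)                                    # eq. (5)
omega_dk = math.sqrt(4 * m1 * U(math.pi, eps) / kink_mass(eps))  # eq. (21)
ratio = omega_dk / (0.801 * math.sqrt(eps_cr - eps))

# Unstable lump, small eps: the sine term of (2) vanishes at phi = 4*pi, so
# U(4*pi) = 8*pi^2*eps exactly, and eq. (27) with M = 8 gives omega^2 = -4*pi^2*eps.
def lump_omega_sq_small(eps):
    return -4 * U(4 * math.pi, eps) / 8.0

# Near-critical lump: the coefficient in eq. (28) is (16 - pi^2)/M(eps_cr).
coef = (16 - math.pi**2) / kink_mass(eps_cr)
```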
Note that for $\epsilon\to 0$ there are further unstable lump solutions to the static field equations corresponding to a region of false vacuum embedded in a region of even more false (higher energy) vacuum. These can be found and analysed exactly as above, but as we have not yet found a role for them in the scattering processes under consideration in this paper we will not discuss them further here. 3 Numerical methods and overall results We used the method of lines to solve the second order evolution equation (6), discretizing the spatial part using the standard five-point stencil on a uniform grid. Given the even parity of our initial conditions we solved on a half interval, implementing the appropriate boundary condition $\phi(-x)=\phi(x)$ at $x=0$. For the second boundary condition we used the standard reflecting boundary, making sure to stop the evolution before any radiation could be reflected from the boundary and return to the centre of collision to interfere with the results. (Placing the second boundary at $x=T_{s}/1.8$, with $T_{s}$ the total run-time of the simulation, was typically enough for this.) For the time stepping function we used a fourth-order symplectic method, which conserves energy and is reliable even for long time evolution of isolated Hamiltonian systems. The spatial distance between the grid points was usually $dx=0.05$ but we checked smaller values to verify the consistency of the results. The time step was usually set as $0.4dx$, but again, we also checked other values to confirm the convergence of our results. Figure 7 displays our results for $\bar{K}K$ collisions with initial conditions taken to be an antikink at $x=-x_{0}=-20$ and a kink at $x=x_{0}=20$, with velocities $v_{i}$ and $-v_{i}$ respectively, for the full range of values of $\epsilon\in[0,\epsilon_{cr}]$. The figure shows the post-collision value of the field at the centre of collision, as a function of $v_{i}$ and the perturbation parameter $\epsilon$.
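The evolution scheme described above can be sketched in a few lines of Python. This is an illustration rather than the production setup: for brevity it uses second-order velocity-Verlet time stepping instead of the fourth-order symplectic method, evolves the full line rather than a half interval, and simply freezes the outermost grid points. As a basic sanity check, a static $\epsilon=1$ kink should stay static:

```python
import math

def dUdphi(phi, eps):
    """U'(phi), the force term in the field equation (6)."""
    return ((1 - eps) * math.sin(phi)
            + eps * phi * (phi - math.pi) * (phi - 2 * math.pi) / (2 * math.pi**2))

def accel(f, eps, h):
    """phi_xx - U'(phi), with the five-point second-derivative stencil."""
    a = [0.0] * len(f)
    for i in range(2, len(f) - 2):
        lap = (-f[i-2] + 16*f[i-1] - 30*f[i] + 16*f[i+1] - f[i+2]) / (12 * h * h)
        a[i] = lap - dUdphi(f[i], eps)
    return a

def evolve(phi, eps, h, dt, steps):
    """Velocity-Verlet (symplectic) evolution; the two points at each edge stay frozen."""
    v = [0.0] * len(phi)
    a = accel(phi, eps, h)
    for _ in range(steps):
        for i in range(len(phi)):
            phi[i] += dt * (v[i] + 0.5 * dt * a[i])
        a_new = accel(phi, eps, h)
        for i in range(len(phi)):
            v[i] += 0.5 * dt * (a[i] + a_new[i])
        a = a_new
    return phi

h, dt = 0.1, 0.04
xs = [-20.0 + h * i for i in range(401)]
init = [math.pi * (math.tanh(x / 2) + 1) for x in xs]  # static eps = 1 kink, eq. (7)
final = evolve(list(init), 1.0, h, dt, steps=250)
drift = max(abs(p - q) for p, q in zip(init, final))
```

The drift stays at the level of the discretisation error, as it should for a static solution.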
The field is measured $300$ units of time after the moment of intersection of the initial antikink and kink trajectories, that is at $300+x_{0}/v_{i}$ units of time from the start of the simulation. Since the spectrum of linearized perturbations about a single kink depends strongly on the value of the deformation parameter $\epsilon$, it is no surprise that the collisional dynamics of the kinks revealed in the figure has an intricate and varied structure. This structure is particularly rich near $\epsilon=\epsilon_{cr}$, and zoomed-in plots of this region are shown in figures 20 and 21 below. (For the last of these, figure 21, the increased widths of the antikink and kink in the regime plotted made it necessary to set $x_{0}=40$ to ensure that they were initially well separated.) Figure 7 naturally splits into two parts: regime 1, $0\leq\epsilon\lesssim 1.45$, which includes the interpolation between $\phi^{4}$ and sine-Gordon like scattering, and regime 2, $1.45\lesssim\epsilon\leq\epsilon_{cr}$, where the false vacuum at $\phi=\pi$ enters the picture and comes to play an important rôle as $\epsilon$ approaches $\epsilon_{cr}$. In the following two sections we will analyse these regimes in turn. 4 The first transition: from sine-Gordon to $\phi^{4}$ scattering 4.1 False vacuum effects for small $\epsilon$ At $\epsilon=0$ the model is integrable and scattering is perfectly elastic, and the vacua at $0$ and $2\pi$ are exactly degenerate with further vacua at $-2\pi$, $4\pi$ and so on. An initial kink - antikink pair interpolating from $0$ to $2\pi$ and then back to $0$ scatters to an oppositely-ordered antikink - kink pair for which the intermediate vacuum is $-2\pi$. For any $\epsilon>0$ such a final state is impossible, as the value of the potential (2) at $\phi=-2\pi$ is no longer degenerate with that at $\phi=0$.
Nevertheless, for small values of the parameter $\epsilon\lesssim 0.05$ the potential retains a false vacuum at $\phi\approx-2\pi$ (along with its $\mathbb{Z}_{2}$ reflection at $\phi\approx 4\pi$), as illustrated in figure 1. The presence of such false vacua significantly affects the dynamics of the kinks Ashcroft:2016tgj ; Gomes:2018heu . In the model under consideration the collision channels associated with them enable a smooth transition from the situation for $\epsilon>0$ back to sine-Gordon scattering at $\epsilon=0$. Figure 8 shows a collection of space-time maps of the scalar field through the collision process. As the impact velocity becomes large enough to overcome the potential barrier, a quasi-elastic sine-Gordon like scattering of the kinks with a flipping of the central vacuum from true to false is observed. In other words, the radiative losses are almost negligible: the collision of the solitons produces a bubble of false vacuum, with the outgoing kinks bounding the false vacuum domain within the true vacuum. For small values of $\epsilon$ the difference between the true and false vacuum energies is small and the dynamics of the outgoing kinks can be treated using a thin wall approximation as in the problem of the decay of false vacuum, see Kobzarev:1974cp ; Coleman:1977py ; Voloshin:1985id . After the initial collision the new kinks bounding the false vacuum region move apart and decelerate Kiselev:1998gg ; Gomes:2018heu , stopping at the turning points $x=\pm L$ where the volume energy $16\pi^{2}\epsilon L$ of the bubble together with the bubble wall surface energy $E_{\sigma}$ becomes equal to the total initial energy of the colliding kinks, i.e.
$$E_{\sigma}+16\pi^{2}\epsilon L=\frac{2M(\epsilon)}{\sqrt{1-v_{in}^{2}}}\,.$$ (29) Taking into account that in the thin wall approximation the mass of the kink is not very different from the mass of the sG kink $M=8$, and $E_{\sigma}\approx 2M$, we find the maximal distance of each kink from the collision centre to be $$L=\frac{1}{\pi^{2}\epsilon}\left(\frac{1}{\sqrt{1-v_{in}^{2}}}-1\right).$$ (30) This is consistent with deceleration from the velocity $v$ to $0$ with a constant rate of $a_{0}=\pi^{2}\epsilon$. From Kiselev:1998gg and Manton:2004tk we know that topological defects experience a force equal to the difference between the field potential levels of the true and false vacua. In our case $$F=U(4\pi)-U(2\pi)=U(0)-U(-2\pi)=8\pi^{2}\epsilon=Ma_{0}\,.$$ (31) The lifetime of the false vacuum bubble can also be calculated using the relativistic deceleration, giving $$t_{bubble}=2\sqrt{\frac{L}{a_{0}}(2+a_{0}L)}=\frac{2v}{\pi^{2}\epsilon\sqrt{1-v^{2}}}\,.$$ (32) The kinks then collide for a second time, and the resurrected $K\bar{K}$ pair finally separates in the true vacuum. The radiative energy loss for $\epsilon\ll 1$ is small, and thus the final escape velocity $v_{out}\approx v_{in}$. In the sine-Gordon limit $\epsilon\to 0$ the vacua become degenerate, $t_{bubble}$ diverges, and the kinks collide perfectly elastically with a flip of the vacuum $2\pi\to-2\pi$ in the final state so that sine-Gordon scattering is restored. 4.2 Effective model Figure 8 shows that as $\epsilon\to 0$ at fixed $v$ the individual collisions become almost elastic, so we can neglect all radiation effects as well as the coupling to the bound mode.
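As an aside, formulas (30) and (32) can be cross-checked by integrating the motion of one bubble wall as a relativistic point particle of mass $M=8$ under the constant force (31) (illustrative Python; the parameter values are arbitrary):

```python
import math

M, eps, v0 = 8.0, 0.01, 0.3
F = 8 * math.pi**2 * eps           # constant force on each wall, eq. (31)
a0 = F / M                         # = pi^2 * eps

# Semi-implicit Euler for dx/dt = p/sqrt(M^2 + p^2), dp/dt = -F
dt = 1e-4
p = M * v0 / math.sqrt(1 - v0**2)  # initial relativistic momentum
x = t = xmax = 0.0
while True:
    x += p / math.sqrt(M * M + p * p) * dt
    p -= F * dt
    t += dt
    xmax = max(xmax, x)
    if x <= 0.0:                   # wall has returned: second collision
        break

L_pred = (1 / math.sqrt(1 - v0**2) - 1) / a0       # eq. (30)
t_pred = 2 * v0 / (a0 * math.sqrt(1 - v0**2))      # eq. (32)
```

The integrated turning point and recollision time agree with the closed forms to the accuracy of the integrator.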
On the other hand, the false vacuum continues to be important: as just described, for all non-zero $\epsilon$ it leads to a recollision a time $t_{bubble}$ after the initial collision, returning the vacuum at the origin to its original value of $2\pi$, while for $\epsilon=0$ there is no second collision, and the vacuum at the origin retains its flipped value of $-2\pi$ in the final state. It is straightforward to construct and study an effective model in this limit of $\epsilon$ small but nonzero. For $\epsilon=0$, analytic solutions describing $K\bar{K}$ collisions are known. For non-relativistic initial velocities we can adopt a moduli space approximation Manton:2020onl $$\phi(x,t)=4\arctan\left(\frac{\sinh a(t)}{\cosh x}\right).$$ (33) For $a$ and $x$ large and positive, $\phi(x,t)\approx 4\arctan\left(e^{a-x}\right)$ and approximates a single antikink located at $x=a$; similar considerations in the other quadrants confirm that $a$ is a modulus which asymptotically corresponds to the position of the antikink in the solution, with $-a$ being the position of the kink. In the sine-Gordon model, nonrelativistic scattering with velocity $v\ll 1$ is reproduced by setting $a(t)=a_{sg}(t)$ with $$\sinh a_{sg}(t)=\frac{\sinh vt}{v}.$$ (34) For small values of $\epsilon$ we can assume that the solution (33) approximates well the instantaneous shape of the field in the deformed model, replacing the above formula for $a_{sg}(t)$ with a general function $a(t)$. 
Putting this into the Lagrangian density and integrating over $x$ we obtain a Lagrangian describing a single particle $$L=\frac{1}{2}g(a)\dot{a}^{2}-(1-\epsilon)W_{sG}-\epsilon W_{4}$$ (35) where $$g(a)=\int_{-\infty}^{\infty}\left(\frac{\partial\phi}{\partial a}\right)^{2}dx=16\left(1+\frac{2a}{\sinh(2a)}\right)$$ (36) is a metric on the moduli space which plays the role of an effective mass, and $$W_{sG,4}=\int_{-\infty}^{\infty}\left[\frac{1}{2}\left(\frac{\partial\phi}{\partial x}\right)^{2}+U_{sG,4}(\phi)\right]dx$$ (37) is an effective potential where $U_{sG}(\phi)=1-\cos\phi$ and $U_{4}(\phi)=\frac{1}{8\pi^{2}}\phi^{2}(\phi{-}2\pi)^{2}$ are the sine-Gordon and $\phi^{4}$ parts of the self-interaction potential $U(\phi)$. The sine-Gordon integral can be calculated explicitly with the result $$W_{sG}=16\left[1-\frac{1}{2\cosh^{2}(2a)}\left(1+\frac{2a}{\sinh(2a)}\right)\right].$$ (38) Unfortunately we were unable to find a closed form for the effective potential coming from the $\phi^{4}$ part, which we denote as $W_{4}$. For large positive values of $a\gg 1$, corresponding to a kink - antikink pair, the effective potential tends to a constant, $W_{4}\to 13.426$. Large negative values of $a\ll-1$ correspond to a ‘wrongly-ordered’ antikink-kink pair separated by a region of the false vacuum. In this case $W_{4}$ ultimately grows linearly with $|a|$, with gradient $16\pi^{2}$, since the false vacuum section of length $2|a|$ has the constant energy density $U_{4}(-2\pi)=8\pi^{2}$. Numerical integration for $a\ll-1$ gives an asymptotic form $W_{4}(a)\approx 157.91|a|-188.52$, where the slope is equal to $16\pi^{2}$ as expected. The three functions determining the effective dynamics are shown in figure 9, and trajectories obtained from this model are included as dashed red lines in figure 8. The discrepancy is greatest down the right-hand column, as expected since the effective model should be most accurate for low velocities.
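The closed form (36) for the moduli-space metric can be confirmed by numerical integration (illustrative Python; helper names are ours):

```python
import math

def dphi_da(x, a):
    """d/da of the profile (33): 4 cosh(a) cosh(x) / (cosh(x)^2 + sinh(a)^2)."""
    return 4 * math.cosh(a) * math.cosh(x) / (math.cosh(x)**2 + math.sinh(a)**2)

def metric(a, X=30.0, n=60000):
    """g(a) = integral of (dphi/da)^2 dx, trapezoidal rule on [-X, X];
       the integrand decays exponentially, so the truncation error is negligible."""
    h = 2 * X / n
    s = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        s += w * dphi_da(-X + i * h, a) ** 2
    return s * h
```

Evaluating at a few values of $a$ reproduces $16(1+2a/\sinh 2a)$ to high accuracy.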
Indeed, the effective model trajectories and those obtained from full simulations are identical within the resolution of the figure for $v=0.2$. Figure 10 zooms in to the region of the first collision for the plots along the top row of figure 8, showing that the effective model manages to capture the dynamics even when the kinks collide. 4.3 Perturbations from $\phi^{4}$ As $\epsilon$ increases beyond $0.05$ the false vacuum at $\phi\approx-2\pi$ is lost, and after a transitional region the pattern of $\bar{K}K$ collisions comes to resemble the well-known resonant behaviour found in the $\phi^{4}$ model Anninos:1991un ; Campbell:1983xu ; Goodman:2005 ; Makhankov:1978rg ; Moshir:1981ja , which is recovered for $\epsilon=1$. Provided the initial velocity $v_{i}$ of the colliding kinks is larger than an upper critical velocity $v_{c}$, they rebound and, while losing some of their velocity, escape back to infinity. Correspondingly the field at the origin reverts to its initial value of $0$, coloured yellow on figure 7. For $v_{i}<v_{c}$, the rebounding kinks from the initial collision no longer have escape velocity and recollide at the centre, after which they generally form an oscillating bound state, a so-called bion (sometimes also called an oscillon in the literature; some authors make a distinction between the two terms, using the word oscillon for objects with well defined frequencies, but we will use the word bion for any form of slowly-decaying oscillating bound state throughout this paper), which slowly radiates the energy away, annihilating into the vacuum $\phi=2\pi$ as $t\to\infty$. This vacuum is coloured blue on figure 7. However it turns out that there are ‘windows’ of velocities well below $v_{c}$, within which the recolliding kinks recover their ability to escape. For initial velocities in these ranges, the kinks collide, separate to a finite distance and turn around to collide a second time, and then escape to infinity.
In fact there is a tower of these ‘two-bounce’ windows of decreasing width, labelled by an integer ‘window number’ $n$ related to the number of oscillations of the kink’s internal mode between the two collisions Anninos:1991un ; Campbell:1983xu ; Goodman:2005 . This sequence of windows accumulates at the upper critical velocity $v_{c}$ as $n\to\infty$. These resonance windows are associated with the reversible exchange of energy between the internal and translational modes of the kinks, the condition for resonance leading to a prediction for the relationship between the window number $n$, the time interval between the two collisions $T$, and the frequency $\omega_{1}$ of the internal modes of the colliding kinks Campbell:1983xu $$\omega_{1}T=2\pi n+\delta$$ (39) where $n$ counts the number of oscillations of the internal mode between the two collisions and $\delta$ is a phase shift which we fix, as in Campbell:1983xu , to lie in the range $0\leq\delta<2\pi$. For some values of $n$ the energy imparted to the kinks after the second collision, while still high, might not be enough to allow their escape. These correspond to so-called false windows. The picture of $\phi^{4}$-like resonant scattering just described is preserved in our model provided the deformation parameter $\epsilon$ is not too far from one. Figure 11 shows some typical patterns of two-bounce windows, plotting the final velocity of the kinks $v_{out}$ as a function of the initial velocity $v_{in}$ inside these windows for various values of $\epsilon$, and showing that the deformed model (1) still supports sequences of resonance windows in this regime. Further three and higher bounce windows exist at the edges of each two-bounce window, but are not shown in the plot; we will not explore this aspect of the models in this paper.
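To illustrate how (39) is used in practice, here is a toy example with synthetic inter-collision times built from the $\epsilon=1$ internal-mode frequency $\omega_{1}=\sqrt{3}/2$ and an arbitrary phase shift (the data, the value $\delta=4.1$ and all names are invented for illustration): an ordinary least-squares fit of $T$ against $n$ recovers $\omega_{1}$ as $2\pi/\mathrm{slope}$ and $\delta$ from the intercept.

```python
import math

omega1, delta = math.sqrt(3) / 2, 4.1        # assumed mode frequency and phase
ns = list(range(3, 12))
Ts = [(2 * math.pi * n + delta) / omega1 for n in ns]   # synthetic data obeying (39)

# Ordinary least-squares fit T = slope * n + intercept
N = len(ns)
nbar = sum(ns) / N
Tbar = sum(Ts) / N
slope = (sum((n - nbar) * (T - Tbar) for n, T in zip(ns, Ts))
         / sum((n - nbar) ** 2 for n in ns))
intercept = Tbar - slope * nbar
omega_fit = 2 * math.pi / slope              # recovers omega1
delta_fit = intercept * omega_fit            # recovers delta
```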
Figure 12 plots the inter-collision times for these two-bounce windows against the window number $n$, showing that they satisfy the expected linear relationship (39), as for the $\phi^{4}$ model. Fitted slopes and intercepts of the lines on the figure are given in table 1. The second and fifth columns show reasonable agreement, as predicted by (39). We conclude that the pattern of these windows can be explained by the same give-and-take energy transfer mechanism as in the original $\phi^{4}$ theory. The working of the resonance mechanism is clearly seen in figure 13, which shows the field evolution at the origin inside the first four two-bounce windows for $\epsilon=0.6$. As the deformation parameter $\epsilon$ decreases from $1$, the upper critical velocity $v_{c}$ initially increases almost linearly, as can be seen in figures 7 and 11. At the same time the associated windows, and the gaps between them, become narrower. The maximal value of the upper critical velocity $v_{c}=0.5803$ occurs for $\epsilon=0.13$; the corresponding eigenvalue of the internal mode is $\omega\approx 0.975$. Evidently, as the frequency of the internal mode approaches the continuum threshold, its excitation can also feed the continuum modes, leading to energy loss through radiation. This effect can destroy the fine mechanism of the reversible energy exchange in kink-antikink collisions. Indeed, we observe that the structure of resonance windows in $K\bar{K}$ collisions is damaged as $\epsilon$ decreases. Surprisingly, the first two-bounce window that becomes false, at about $\epsilon=0.35$, is the one which was initially the widest, corresponding to the lowest number $n$ of oscillations of the internal mode. Figure 14 shows the loss of true windows in action. In the top plot, with $\epsilon=0.6$, all the windows seen at $\epsilon=1$ are still open.
As $\epsilon$ decreases through $0.35$, the smallest-velocity window, for which $n=3$, becomes false, as seen in the second plot, taken at $\epsilon=0.3$. By the time $\epsilon$ has decreased to $0.2$, shown in the third plot, three more windows have become false. Finally the bottom plot shows that all resonance windows have closed at $\epsilon=0.1$. To show the pattern in more detail we located the two-bounce windows for $0.17<\epsilon<0.295$ and plotted them in figure 15. Because the upper critical velocity, as well as the whole window structure, is a steep function of $\epsilon$ in this range ($\Delta v/\Delta\epsilon\approx 0.33064$) we used an affine transformation to reduce the shear effect, keeping $\epsilon$ unchanged. The figure shows clearly that the windows with smaller values of $n$ close first as $\epsilon$ decreases. 5 The second transition: the emergence of the double kink 5.1 Multiple bound modes While the deformation parameter $\epsilon$ remains below 1, there is just one internal mode of the kink. As $\epsilon$ increases past this value, a second internal mode, with frequency $\omega_{2}$, emerges from the continuum, as shown in figure 5. The presence of this mode can be seen numerically in the power spectra of the field at the origin after a kink-antikink collision, shown in figure 16. The extra internal mode complicates the pattern of reversible energy exchange between the translational mode of the kinks and the internal modes, but while $\epsilon$ remains below about $1.4$, the frequency of the second mode $\omega_{2}$ is not very far below the mass threshold, and the leading role in the resonance scattering mechanism still belongs to the lowest frequency internal mode $\omega_{1}$. The structure of $\phi^{4}$-like escape windows labelled by the oscillation number $n$ of this mode survives, as seen in figure 17 (a) and the blue curves in figures 11 and 12.
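Mode frequencies such as those in figure 16 are extracted from power spectra of the field at the origin. A toy Python version on a synthetic two-mode signal (the frequencies, amplitudes and names here are purely illustrative) using a naive discrete Fourier transform:

```python
import math

w1, w2 = math.sqrt(3) / 2, 1.25      # assumed mode frequencies (illustrative)
dt, N = 0.1, 2048
f = [math.cos(w1 * i * dt) + 0.4 * math.cos(w2 * i * dt) for i in range(N)]

def power(k):
    """|DFT_k|^2 of the sampled signal (naive, O(N) per frequency bin)."""
    re = sum(f[i] * math.cos(2 * math.pi * k * i / N) for i in range(N))
    im = sum(f[i] * math.sin(2 * math.pi * k * i / N) for i in range(N))
    return re * re + im * im

ks = list(range(1, 60))
k_peak = max(ks, key=power)
w_peak = 2 * math.pi * k_peak / (N * dt)   # dominant frequency, resolution 2*pi/(N*dt)
```

The dominant spectral peak lands within one frequency bin of the stronger mode; in practice an FFT would replace the naive transform.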
As $\epsilon$ increases through $1.25$ these windows start to close, again starting from the smallest values of $n$, and by $\epsilon\approx 1.5$ they have all gone. The resonance condition is no longer given by the simple relation (39), and the interplay between the two internal modes becomes more important as $\epsilon$ increases further and the second mode becomes well separated from the continuum. The upper critical velocity $v_{c}$ also begins to increase as $\epsilon$ becomes larger than 1.5, as seen in figure 7. Increasing $\epsilon$ further yields more complicated chaotic dynamics, and the sporadic reappearance of scattering windows below the upper critical velocity. A third internal mode appears in the spectrum of linearized perturbations, while the frequencies of the first and second modes continue to decrease. Figures 17 (b), (c) show the window structure at $\epsilon=2.17$, while figures 18 and 19 show some typical collision processes at the same value of $\epsilon$. The windows again accumulate as the critical velocity $v_{c}$ is approached, but the pattern is much less regular than in the $\phi^{4}$-like resonant $\bar{K}K$ collisions considered thus far. The escape windows (indicated by the yellow colour) become less symmetric, as if one side of the window has merged with a false window. In addition, a set of light blue regions that we will call ‘pseudowindows’ appears. Their blue colour indicates that the centre field after collision is flipped into the $\phi=2\pi$ vacuum. Topologically, this corresponds to an annihilation: the value of the field at the origin in the final state is equal to its value at infinity.
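The colour coding of these scans — yellow for escape, light blue for a flip into the $2\pi$ vacuum, stripes for a bion left at the origin — amounts to a simple classification of the late-time field at $x=0$. A crude sketch of such a classifier; the tolerance and the labels are our own illustrative choices, not the paper's actual post-processing:

```python
import numpy as np

# Hedged sketch: classify a collision outcome from the late-time field at
# the origin, mimicking the colour coding of the scans (yellow: escape,
# light blue: flip to the 2*pi vacuum, stripes: bion at the origin).
def classify(phi_origin, tol=1.0):
    """Crude outcome label from the late-time field value at x = 0."""
    if abs(phi_origin - 2 * np.pi) < tol:
        return "flipped"    # annihilation into the 2*pi vacuum
    if abs(phi_origin) < tol:
        return "escaped"    # defects separated, original vacuum restored
    return "bion"           # large-amplitude oscillation persists

print(classify(6.1), classify(0.2), classify(3.0))
```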
However, instead of there being a slowly-decaying bion at the centre (signalled by white-grey-black stripes in figure 17), as formed in a $\phi^{4}$-like $\bar{K}K$ capture, two bions are ejected in opposite directions, rapidly transporting most of the energy away from the origin and leaving the centre field oscillating around the $2\pi$ vacuum with a relatively small amplitude. The collisions shown in the second and third plots of figures 18 and 19 show this distinction clearly: both are examples of antikink-kink capture, but in the second plot a pair of bions is emitted and the central field relaxes quickly to $2\pi$, while in the third plot a bion remains at the origin and the relaxation is much slower. Note that the concept of a pseudowindow relies on field measurements being taken at intermediate time scales, longer than the collision time but shorter than the time required for the radiative decay of a bion. At larger time scales the field at any fixed location, including the origin, will relax to one of the vacuum values, and so regions exhibiting white-grey-black stripes in our plots will ultimately revert to the same blue colour as the pseudowindows. Nevertheless, the extremely long lifetime of the bion means that the relevant time scales are well separated, making the pseudowindows clearly visible. As discussed in the next section, more examples of this phenomenon can be observed for higher values of $\epsilon$, closer to $\epsilon_{cr}$. In fact, for very small values of $\epsilon$ some narrow pseudowindows also appear, as can be seen in figure 14. The full scan shown in figure 7 and the closer zoom for $\epsilon$ near to $\epsilon_{cr}$ plotted in figure 20 show that the critical velocity oscillates as $\epsilon$ increases beyond $1.5$. A possible explanation for this effect is that the local maxima of the escape velocity correspond to maximal loss of energy due to radiation.
This can happen when two conditions are fulfilled: (i) a large amount of energy is transferred to one of the internal modes; (ii) this mode quickly radiates the energy away via the coupling to the continuum. Indeed, the modes closest to the mass threshold radiate the most. One of the modes lies exactly at the threshold for $\epsilon=0$; however, this value corresponds to the integrable sine-Gordon model, so the threshold mode is decoupled from the solitonic degrees of freedom and cannot be excited during collisions. In fact, the first local maximum of the critical velocity occurs at $\epsilon=0.131$, for which the internal mode reaches $\omega=0.9951$. Similar frequencies of the odd modes are reached at two more local maxima (table 2). Another intriguing observation is that the local minima of the critical velocity correspond to the values of $\epsilon$ for which one of the frequencies reaches the value $3/4$ (to within 1%). This might indicate that some $3:4$ resonance with the threshold frequency plays an important role in the collision process. Note that this resonance can happen for both even and odd modes, whereas the maxima occur only for the odd modes. This may mean that some higher nonlinear term (presumably fourth order) is responsible for the minima of the critical velocity. 5.2 Double kink collisions near the critical value of $\epsilon$ For $\epsilon>2$, a false vacuum appears at $\phi=\pi$ between the true vacua at $\phi=0$ and $\phi=2\pi$. The presence of this false vacuum deforms the kink, splitting it into two smaller subkinks, which become more and more separated as $\epsilon\to\epsilon_{cr}$, as illustrated in figure 2. The eigenvalue of the first internal mode $\omega_{1}$ rapidly approaches zero as this mode smoothly transforms into an antisymmetric linear combination of the translational modes of the subkinks, as explained in section 2.4.
Similarly, a symmetric combination of these modes corresponds to the translational mode $\omega_{0}$. The oscillations of the critical velocity become more extreme as $\epsilon_{cr}$ is approached, their ‘waves’ ultimately overhanging in a series of spines as $\epsilon$ approaches $\epsilon_{cr}$, as seen in figure 21, where for better resolution we used $\beta=\tanh^{-1}(\epsilon/\epsilon_{cr})$ instead of $\epsilon$ for the horizontal scale. In preparing this figure we measured the central field, as for our other scans, at a time $T=300$ after the initial collision. Especially for larger values of $\beta$, the final value of the field at the origin may not have settled down by this time, and so we made some lower-resolution scans with longer evolution times. These showed that the yellow spaces between the bases of the spines have a tendency to close with time, but the further away from the base, or the smaller the value of $\beta$, the closer the spines shown in figure 21 were to their asymptotic forms. The white-grey-black regions of these spines indicate points where the incident $\bar{K}K$ pair annihilate to a centrally-located bion. Within the spines there are also (yellow) windows and (light blue) pseudowindows forming meandering stripes, where instead of the centrally located bion either a $\bar{K}K$ pair or a pair of escaping bions is produced. A cross-section of the right-hand edge of one of these spines, exhibiting both windows and pseudowindows, is shown in figure 22. This surprisingly intricate structure appears to be associated with the dissociation of the kinks into pairs of subkinks in this regime. Related issues were recently discussed in Zhong:2019fub , but in our model we can tune the distance between the half-kinks by varying the deformation parameter $\epsilon$, and make the double kinks infinitely wide as $\epsilon\to\epsilon_{cr}$. 
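The horizontal rescaling used in figure 21, $\beta=\tanh^{-1}(\epsilon/\epsilon_{cr})$, stretches the neighbourhood of $\epsilon_{cr}$ out to $\beta\to\infty$, so that structure crowded against the critical value becomes resolvable. A small numerical illustration; the value of $\epsilon_{cr}$ below is a placeholder, not the model's actual critical value:

```python
import math

# Hedged sketch: the change of horizontal variable used in figure 21.
# eps_cr is a placeholder for illustration only.
eps_cr = 2.5
eps_values = (2.0, 2.4, 2.49, 2.499)
betas = [math.atanh(eps / eps_cr) for eps in eps_values]
print(betas)  # ever smaller steps toward eps_cr map to growing steps in beta
```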
The initial $\bar{K}K$ collision no longer happens as a single event, but rather splits into four subcollisions between the weakly bound constituent subkinks, with the fourth collision being key in determining the ultimate fate of the process. Some examples are shown in figure 23, while figure 24 shows the corresponding energy densities, highlighting the fragmented nature of the kinks in this regime. The first of the four subcollisions results in a bounce of the inner subkinks combined with a vacuum flip at the centre. The inner subkinks, moving slightly slower than before, then collide with the outer subkinks which arrive slightly later due to the extended structure of the full defect, and bounce back again. At least for the processes shown in figure 26, these returning subkinks are moving faster than were the outgoing subkinks that resulted from the first bounce. This might be surprising, but note that the outer subkinks arrive at the second subcollision travelling at full speed, while the outgoing inner subkinks are moving more slowly, so the rest frame for the second subcollision is moving towards the centre. From this point the scenarios differ depending on the velocity and $\beta$. The ingoing subkinks can annihilate at the centre to form a bion surrounded by a region of false vacuum, as in the processes plotted in the left-hand columns of figures 23 and 24. These processes correspond to points inside the first four spines shown in figure 21. Alternatively, the inner subkinks can escape from this fourth subcollision to pair off again with the outer subkinks, as in the processes shown in the right-hand columns of figures 23 and 24, with the vacuum at the centre flipping back to its original value. These correspond to points in the yellow-coloured regions between the spines. 
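The kinematics of the second subcollision can be caricatured by a Galilean, equal-mass, elastic collision, in which the two velocities are simply exchanged: the inner subkink then returns at the outer subkink's full incoming speed, faster than it left the first bounce. This idealisation ignores radiation losses and the subkinks' internal structure, and the speeds below are illustrative:

```python
# Hedged sketch: equal-mass, elastic caricature of the second subcollision.
# u and v are illustrative speeds, not measured values from the simulations.
u = 0.05        # outgoing speed of the inner subkink after the first bounce
v = 0.20        # incoming speed of the outer subkink

v_frame = (u - v) / 2.0           # zero-momentum frame moves toward the centre
inner_after, outer_after = -v, u  # 1D equal-mass elastic collision: swap
print(v_frame, inner_after)       # frame velocity < 0; |inner_after| > u
```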
This subkink capture/escape scenario is itself subject to a resonance mechanism, signs of which can be seen by counting the oscillations in the false vacuum visible between the first and second subcollisions in the processes shown in the right-hand column of figure 23. Figure 25 illustrates aspects of this scenario in more detail. The final velocity of the inner kink after the second central subcollision reveals an oscillatory dependence on the initial velocity. This is very different from the monotonic growth above the upper critical velocity observed in the case of collisions of unexcited defects. However, exactly this feature was recently observed in Alonso-Izquierdo:2020ldi in the case of scattering of wobbling kinks. In that paper it was shown that if the excitation of the kinks is large enough it can lead to the appearance of new windows above the upper critical velocity. This is exactly the same effect that we observe for larger values of $\beta$, except that the excitation is not bound to the subkinks but is a superposition of radiation and inner oscillations of the double kink. Scanning through figure 25 as $\beta$ varies gives a good intuition of how the spines in figure 21 are formed. More complicated evolutions after the fourth subcollision are also possible, leading to the appearance of windows and pseudowindows within the spines, provided suitable resonance conditions are met. Figure 26 shows three examples corresponding to three of the pseudowindows visible in figure 22, and one more showing the field evolution just outside the spine shown in that figure. The presence of the outer subkinks can also disturb the long-term evolution of the system. When a central bion is formed, the outer subkinks initially escape from the centre of collision, but because of the presence of the false vacuum they will always return and recollide with the bion at the centre.
The time scale for this recollision is large, comparable with the period of the lowest oscillations of the double kink, which is of order $e^{\beta}$ (see section 2.4). The collision between the returning outer subkinks and the bion is a chaotic process depending on many factors, including the velocity of the incoming subkinks, the amplitude and the phase of the bion at the moment of the collision, and omnipresent radiation. Some examples of such collisions are shown in figure 27. One striking feature is that near $\epsilon_{cr}$ large bubbles of the false vacuum can be created (white regions). Such bubbles tend to shrink, but the presence of the central bion can impede their vanishing, as in plots (d), (f) and (i) of figure 27. In many cases the final result is that two bions are ejected from the collision centre (plots (a), (b), (c) and (h) of figure 27). These events lie in the pseudowindows introduced above, signalled by regions of light blue colour in figures 17 and 20. Figure 28 shows a further example. After the initial collision a false vacuum bubble with an unstable lump (described in section 2.5) at its centre is created. The field profile matches the profile of the lump out to $x\approx\pm 15$, beyond which the outer subkinks connect with the true vacuum, as shown in figure 28 (b). At $t\approx 160$ the unstable lump decays, two subkinks being ejected only to bounce off the outer subkinks and recollide at the centre, forming a bion. This bion survives the first collision with the outer subkinks around $t\approx 350$. Its profile is encapsulated in the profile of the unstable lump (figure 28 (c)). In a sense the lump can be thought of as a critical profile of a bion with zero frequency. During the second collision at $t\approx 640$ the bion is highly perturbed and much of its energy is transferred to the outer subkinks, which clearly move with higher velocity after the bounce.
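All of the evolutions described in this section come from direct numerical integration of the field equation $\phi_{tt}=\phi_{xx}-V^{\prime}(\phi)$. A minimal leapfrog sketch of such an integration, using for illustration the undeformed sine-Gordon potential $V(\phi)=1-\cos\phi$ rather than the paper's deformed potential, and checking that the static kink $\phi_{K}(x)=4\arctan e^{x}$ stays put:

```python
import numpy as np

# Hedged sketch: leapfrog integrator for phi_tt = phi_xx - V'(phi).
# The paper's deformed potential is not reproduced here; we use the plain
# sine-Gordon case, whose static kink is phi_K(x) = 4*arctan(exp(x)).
L, N, dt = 40.0, 2001, 0.01
x = np.linspace(-L, L, N)
dx = x[1] - x[0]                      # 0.04, so dt/dx < 1 (CFL-stable)

phi = 4 * np.arctan(np.exp(x))        # static kink initial condition
phi_old = phi.copy()                  # zero initial velocity

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
    return lap

for _ in range(1000):                 # evolve to t = 10
    phi_new = 2 * phi - phi_old + dt**2 * (laplacian(phi) - np.sin(phi))
    phi_old, phi = phi, phi_new

# A static kink should remain (numerically) static: phi(0) stays near pi
print(phi[N // 2])
```

In the actual simulations one would replace $\sin\phi$ with the deformed model's $V^{\prime}(\phi)$ and boost two such kink profiles toward each other to set up a collision.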
For $\epsilon$ very close to $\epsilon_{cr}$, the energy of the false vacuum is only very slightly lifted above that of the true vacuum, and the vacuum pressure (which acts to shrink the region of false vacuum) is small. This means that other effects such as radiation pressure can be equally important. In this regime of our model the vacua on either side of the defect have significantly different masses associated with their small fluctuations, with the larger mass belonging to the true vacuum. As a result, radiation pressure on this defect acts in the opposite direction to vacuum pressure, expanding regions of false vacuum. (Similar radiation pressure effects, albeit in the absence of vacuum pressure, were previously discussed in Romanczukiewicz:2017hdu .) Signs of this effect can be seen in the plots in the left-hand column of figure 23, and in figure 26, in the slight acceleration of the outermost subkinks after the initial four-way collision. To illustrate the phenomenon and its relevance more clearly, the first two plots of figure 29 show the time-evolution of an initially-static configuration corresponding to a very closely-placed kink and antikink pair. The innermost subkinks, one from the kink and one from the antikink, attract each other and annihilate to create a bion a finite time after the start of the simulation. This bion then acts as a source of radiation embedded in a bubble of false vacuum, causing it to expand. The final plot shows that in the absence of radiation, the bubble instead contracts. 5.3 Small kinks at critical $\epsilon$ To finish, we discuss the situation when $\epsilon=\epsilon_{cr}$ and our model has three exactly-degenerate vacua $\{0,\pi,2\pi\}$. The full kink ceases to exist as a static configuration, and is replaced by two small kinks, $(0,\pi)$ and $(\pi,2\pi)$, which are independent topological solitons, along with the corresponding small antikinks. 
While the potential is not quite the same, the scattering of these small kinks and/or small antikinks is very similar to that of kinks and antikinks in the triply-degenerate $\phi^{6}$ model. Two small kinks repel each other and their scattering shows no resonant phenomena. Two types of collisions between a small kink and a small antikink are possible, depending on which vacuum sits between them. The mass of the small perturbations around the $\phi=\pi$ vacuum is lower than that around $\phi=0$ and $\phi=2\pi$. When a $(\pi,0)$ antikink collides with a $(0,\pi)$ kink, a potential well for the spectrum of small fluctuations is generated, within which many modes can exist. The width of the well depends on the distance between the defects. In this alignment the collisions do show a resonant structure, of the sort first discovered for the $\phi^{6}$ model in Dorey:2011yw . The resonant structure for this case is shown in figure 30 (a). By contrast, the collision of a $(0,\pi)$ kink to the left and a $(\pi,0)$ antikink to the right creates a potential bump between the defects where no bound modes can exist, and as a result collisions of this type show no resonant structure, as seen in figure 30 (b). The pattern of true and false windows visible in figure 30 (a) is similar to that seen previously, for the triply-degenerate $\phi^{6}$ model, in Dorey:2011yw . However the colour scheme used in figure 30 – chosen for consistency with the one used elsewhere in this paper – makes the precise structure hard to discern. Figure 31, which should be compared with figure 1 (e) of Dorey:2011yw , shows a more detailed picture. Similar to the situation for the $\phi^{6}$ model, but different from the sequences of closing windows seen earlier in this paper, we see that true and false windows are interleaved, something which might be associated with the fact that multiple resonant frequencies play a role in these cases. 
However the patterns are slightly different – for the $\phi^{6}$ model there is a single isolated false window directly following the first true window, while here two true windows are followed by two false ones. An interesting challenge to any effective model of kink-antikink collisions in models of this type, where the resonant modes live in the gap between the colliding solitons, would be to reproduce this distinction. 6 Conclusions In this paper we have studied the dynamical properties of a scalar field theory interpolating between the completely integrable sine-Gordon model, the usual $\phi^{4}$ theory, and a $\phi^{6}$-like theory with three degenerate vacua. Our numerical simulations of the collisions between kinks and antikinks in the model revealed a rich diversity of behaviours and suggest a number of avenues for further investigation. For $\epsilon\ll 1$ the key feature of the model is the breaking of integrability. We identified the role of the false vacuum in the smooth transition from the kink-antikink reflection that occurs for $\epsilon>0$ to the transmission found at $\epsilon=0$. Associated with this is a sharp decrease in the critical velocity $v_{c}(\epsilon)$, down to zero at $\epsilon=0$ where integrability is restored. This is clear from our numerical results, but an analytic understanding of the behaviour of $v_{c}(\epsilon)$ for $\epsilon\ll 1$ would be very valuable. This regime would also be a good starting-point for an investigation of the quantum theory of this model. In the neighbourhood of $\epsilon=1$, we saw a smooth deformation of the $\phi^{4}$ picture; again we would like to understand the dependence of $v_{c}(\epsilon)$ (and indeed the window locations) on $\epsilon$ from an analytic point of view. At both ends of this regime, resonance windows transition from true to false, starting with those at the lowest velocities. As things stand this is a numerical observation.
Beyond $\epsilon\approx 1.5$ the structure of the resonance windows becomes richer and more chaotic, with the emergence of novel structures that we called pseudowindows. In spite of the increased complexity of the behaviour of the model in this regime, we did see some signs of regularity, as reported in table 2. The region $\epsilon\to\epsilon_{cr}$ is particularly intriguing. Our zoomed-in scans, figures 20 and 21, revealed a novel and unexpected structure of ‘spines’, which we associated with the emergence of an internal structure to our kinks in this limit. We identified some of the relevant mechanisms, in particular in the decomposition of individual kink-antikink collisions into four subcollisions, but more work is needed to pin down the details. Preliminary studies show that similar structures arise in the somewhat simpler model studied by Christ and Lee in Christ-Lee . Numerical work in this regime is particularly delicate as the presence of the false vacuum means that some relevant processes take place over extremely long timescales, and whether greater regularity will emerge in the extreme limit $\epsilon\to\epsilon_{cr}$, $\beta\to\infty$ remains an open question; but these same effects make the model in this regime an excellent arena for the study of phenomena such as radiation pressure. In this region we also observed the formation of bions/oscillons submerged in a bubble of false vacuum. Their shapes were bounded by the profile of an unstable lump, a static solution consisting of two subkinks placed in unstable equilibrium, balancing false vacuum pressure against the mutual attraction between subkinks. Finally, exactly at the limiting value $\epsilon=\epsilon_{cr}$ a triply-degenerate vacuum structure similar to the $\phi^{6}$ model was restored. The subkinks become fully independent defects and their collisions, characteristic of non-symmetric kinks, happen with very similar mechanisms to those found in the $\phi^{6}$ model.
In spite of these similarities, the structure of false and true windows was different. To conclude, kink scattering in the deformed sine-Gordon model exhibits a remarkable variety of phenomena. Some of these we have analysed in detail, while others we have merely pointed out, and they merit further study in their own right. We also expect that many of these newly found effects will be observed in other models, but we will leave the investigation of this question for future work. Acknowledgements. YS gratefully acknowledges partial support of the Ministry of Education of Russian Federation, project FEWF-2020-0003. TR wishes to thank the National Science Centre, grant number 2019/35/B/ST2/00059. PED is grateful for support from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 764850, SAGEX, and from the STFC under consolidated grant ST/T000708/1 “Particles, Fields and Spacetime”. References (1) N. S. Manton and P. M. Sutcliffe, Topological solitons. Cambridge Monographs on Mathematical Physics. Cambridge University Press, Cambridge, July, 2004. (2) T. Vachaspati, Kinks and Domain Walls: An Introduction to Classical and Quantum Solitons. Cambridge University Press, 2006. (3) Y. M. Shnir, Topological and Non-Topological Solitons in Scalar Field Theories. Cambridge University Press, July 2018. (4) O. Babelon, D. Bernard and M. Talon, Introduction to Classical Integrable Systems. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 2003, 10.1017/CBO9780511535024. (5) L. Ferreira, G. Luchini and W. J. Zakrzewski, The concept of quasi-integrability for modified non-linear Schrödinger models, JHEP 09 (2012) 103, [1206.5808]. (6) L. Ferreira and W. J. Zakrzewski, The concept of quasi-integrability: a concrete example, JHEP 05 (2011) 130, [1011.2176]. (7) D. Ullmo, M. Grinberg and S. Tomsovic, Near-integrable systems: Resonances and semiclassical trace formulas, Phys. Rev.
E 54 (Jul, 1996) 136–152. (8) M. Remoissenet and M. Peyrard, A new simple model of a kink bearing Hamiltonian, Journal of Physics C: Solid State Physics 14 (Jun, 1981) L481–L485. (9) M. Peyrard and D. Campbell, Kink antikink interactions in a modified sine-Gordon model, Physica D 9 (1983) 33–51. (10) B. A. Malomed, Inelastic interactions of solitons in nearly integrable systems. II, Physica D: Nonlinear Phenomena 15 (1985) 385–401. (11) F. Zhang, Y. S. Kivshar, B. A. Malomed and L. Vázquez, Kink capture by a local impurity in the sine-Gordon model, Physics Letters A 159 (1991) 318–322. (12) R. Arthur, P. Dorey and R. Parini, Breaking integrability at the boundary: the sine-Gordon model with Robin boundary conditions, J. Phys. A 49 (2016) 165205, [1509.08448]. (13) P. Anninos, S. Oliveira and R. Matzner, Fractal structure in the scalar $\lambda(\phi^{2}-1)^{2}$ theory, Phys. Rev. D 44 (1991) 1147–1160. (14) D. K. Campbell, J. F. Schonfeld and C. A. Wingate, Resonance Structure in Kink-Antikink Interactions in $\phi^{4}$ Theory, Physica D 9 (1983) 1. (15) R. H. Goodman and R. Haberman, Kink-antikink collisions in the $\phi^{4}$ equation: The n-bounce resonance and the separatrix map, SIAM J. Applied Dynamical Systems 4 (2005) 1195–1228. (16) V. Makhankov, Dynamics of Classical Solitons in Nonintegrable Systems, Phys. Rept. 35 (1978) 1–128. (17) M. Moshir, Soliton-Antisoliton Scattering and Capture in $\lambda\phi^{4}$ Theory, Nucl. Phys. B 185 (1981) 318–332. (18) P. Dorey, K. Mersh, T. Romanczukiewicz and Y. Shnir, Kink-antikink collisions in the $\phi^{6}$ model, Phys. Rev. Lett. 107 (2011) 091602, [1101.5951]. (19) I. Takyi and H. Weigel, Collective Coordinates in One-Dimensional Soliton Models Revisited, Phys. Rev. D 94 (2016) 085008, [1609.06833]. (20) H. Weigel, Kink-Antikink Scattering in $\varphi^{4}$ and $\phi^{6}$ Models, J. Phys. Conf. Ser. 482 (2014) 012045, [1309.6607]. (21) H.
Weigel, Collective Coordinate Methods and Their Applicability to $\varphi^{4}$ Models, 1809.03772. (22) C. Adam, T. Romanczukiewicz and A. Wereszczynski, The $\phi^{4}$ model with the BPS preserving defect, JHEP 03 (2019) 131, [1812.04007]. (23) N. S. Manton, K. Oleś, T. Romańczukiewicz and A. Wereszczyński, Kink moduli spaces: Collective coordinates reconsidered, Phys. Rev. D 103 (2021) 025024, [2008.01026]. (24) N. S. Manton, K. Oles, T. Romanczukiewicz and A. Wereszczynski, Collective coordinate model of kink-antikink collisions in $\phi^{4}$ theory, 2106.05153. (25) N. H. Christ and T. D. Lee, Quantum expansion of soliton solutions, Phys. Rev. D 12 (Sep, 1975) 1606–1627. (26) F. C. Simas, A. R. Gomes, K. Z. Nobrega and J. C. R. E. Oliveira, Suppression of two-bounce windows in kink-antikink collisions, JHEP 09 (2016) 104, [1605.05344]. (27) A. Demirkaya, R. Decker, P. G. Kevrekidis, I. C. Christov and A. Saxena, Kink dynamics in a parametric $\phi^{6}$ system: a model with controllably many internal modes, JHEP 12 (2017) 071, [1706.01193]. (28) P. Dorey, T. Romanczukiewicz and Y. Shnir, Staccato radiation from the decay of large amplitude oscillons, Phys. Lett. B 806 (2020) 135497, [1910.04128]. (29) A. Alonso Izquierdo, J. Queiroga-Nunes and L. M. Nieto, Scattering between wobbling kinks, Phys. Rev. D 103 (Feb, 2021) 045003. (30) J. Ashcroft, M. Eto, M. Haberichter, M. Nitta and M. Paranjape, Head butting sheep: Kink Collisions in the Presence of False Vacua, J. Phys. A 49 (2016) 365203, [1604.08413]. (31) A. R. Gomes, F. Simas, K. Nobrega and P. Avelino, False vacuum decay in kink scattering, JHEP 10 (2018) 192, [1805.00991]. (32) I. Kobzarev, L. Okun and M. Voloshin, Bubbles in Metastable Vacuum, Sov. J. Nucl. Phys. 20 (1975) 644–646. (33) S. R. Coleman, The Fate of the False Vacuum. 1. Semiclassical Theory, Phys. Rev. D 15 (1977) 2929–2936. (34) M. Voloshin, Decay of false vacuum in (1+1)-Dimensions, Yad. Fiz. 42 (1985) 1017–1026. (35) V. Kiselev and Y. 
Shnir, Forced topological nontrivial field configurations, Phys. Rev. D 57 (1998) 5174–5183, [hep-th/9801001]. (36) Y. Zhong, X.-L. Du, Z.-C. Jiang, Y.-X. Liu and Y.-Q. Wang, Collision of two kinks with inner structure, JHEP 02 (2020) 153, [1906.02920]. (37) T. Romańczukiewicz, Could the primordial radiation be responsible for vanishing of topological defects?, Phys. Lett. B 773 (2017) 295–299, [1706.05192].
Analysis of equilibrium and turbulent fluxes across the separatrix in a gyrokinetic simulation I. Keramidas Charidakos University of Colorado Boulder, Boulder    J. R. Myra Lodestar Research Corporation, 2400 Central Avenue, Boulder, Colorado 80301, USA    S. Parker University of Colorado Boulder, Boulder    S. Ku Princeton Plasma Physics Laboratory    R.M. Churchill Princeton Plasma Physics Laboratory    R. Hager Princeton Plasma Physics Laboratory    C.S. Chang Princeton Plasma Physics Laboratory Abstract The SOL width is a parameter of paramount importance in modern tokamaks as it controls the power density deposited at the divertor plates, critical for plasma-facing material survivability. An understanding of the parameters controlling it has consequently long been sought (Connor et al. 1999 NF 39 2). Prior to Chang et al. (2017 NF 57 11), studies of the tokamak edge have been mostly confined to reduced fluid models and simplified geometries, leaving out important pieces of physics. Here, we analyze the results of a DIII-D simulation performed with the full-f gyrokinetic code XGC1 which includes both turbulence and neoclassical effects in realistic divertor geometry. More specifically, we calculate the particle and heat $E\times B$ fluxes along the separatrix, discriminating between equilibrium and turbulent contributions. We find that the density SOL width is impacted almost exclusively by the turbulent electron flux. In this simulation, the level of edge turbulence is regulated by a mechanism we are only beginning to understand: $\nabla B$-drifts and ion X-point losses at the top and bottom of the machine, along with ion banana orbits at the low field side (LFS), result in a complex poloidal potential structure at the separatrix which is the cause of the $E\times B$ drift pattern that we observe. Turbulence is being suppressed by the shear flows that this potential generates. 
At the same time, turbulence, along with increased edge collisionality and electron inertia, can influence the shape of the potential structure by making the electrons non-adiabatic. Moreover, being the only means through which the electrons can lose confinement, it needs to be in balance with the original direct ion orbit losses to maintain charge neutrality. I INTRODUCTION Studies of the plasma edge are indispensable for a variety of tokamak operation and performance issues such as the density and power scrape-off-layer (SOL) widths chang2017gyrokinetic ; meier2017drifts ; meier2016analysis ; myra2011reduced ; russell2015modeling ; pankin2015kinetic ; rozhansky2018structure ; reiser2017drift , intrinsic rotation and momentum transport stoltzfus2012tokamak ; stoltzfus2012transport ; muller2011experimental ; loizu2014intrinsic ; groebner2009intrinsic ; chang2008spontaneous ; seo2014intrinsic , edge flows chankin2007possible ; hoshino2007numerical ; kirnev2005edge2d ; labombard2004transport ; pigarov2008simulation ; pitts2005edge and the L-H transition aydemir2012pfirsch ; shaing1989bifurcation ; chang2017fast . Of all these important and inter-related facets of edge physics, the SOL width is crucial in tokamak operation as it governs the power density delivered at the divertor plates. Indeed, a viable fusion reactor would need to have plasma-facing materials that can withstand large heat loads without the need of regular replacement. However, an ITER exhaust power of $100\,\mathrm{MW}$, concentrated on a speculated ${\sim}1\,\mathrm{mm}$ SOL width, would produce a parallel power density of $5.5\,\mathrm{GW/m^{2}}$, placing stringent constraints on the lifetime of the divertor materials goldston2015theoretical .
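The scale of the quoted $5.5\,\mathrm{GW/m^{2}}$ can be recovered to order of magnitude from $q_{\parallel}\approx P_{\mathrm{SOL}}\,(B/B_{p})/(N_{\mathrm{div}}\cdot 2\pi R\,\lambda_{q})$. A hedged numerical sketch: every number below is an illustrative ITER-like assumption, and the geometric factors are only approximate, not the reference's exact formula:

```python
import math

# Hedged sketch: order-of-magnitude parallel power density for ITER-like
# parameters. All numbers are illustrative assumptions.
P_sol = 100e6        # exhaust power crossing the separatrix [W]
R = 6.2              # major radius [m]
lam_q = 1e-3         # SOL width [m]
B_over_Bp = 4.4      # field-line pitch at the outer midplane (assumed)
n_div = 2            # power shared between two divertor legs (assumed)

# Midplane SOL ring area 2*pi*R*lam_q, amplified by the field-line pitch:
q_par = P_sol * B_over_Bp / (n_div * 2 * math.pi * R * lam_q)
print(q_par / 1e9)   # a few GW/m^2, the scale quoted in the text
```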
It is therefore of utmost importance to be able to predict the SOL width for a given discharge based on its parameters, but also to gain an understanding of the physical mechanisms that control its size. To this end, Eich eich2013scaling constructed scaling relations for the power SOL width, $\lambda_{q}$, based on multiple regressions of relevant parameters from various experimental discharges. He arrived at a scaling law of the form $\lambda_{q}\sim B^{-1.19}_{p}$, where $B_{p}$ is the poloidal magnetic field, based on which the SOL width of ITER is inferred to be $1\,\mathrm{mm}$. Goldston goldston2011heuristic provided theoretical justification for the above scaling by devising a heuristic model that predicts the SOL width. In summary, this model rests on the assumption that the perpendicular $\nabla B$ and curvature drift ion flows are balanced exactly by parallel flows, with one half of the flux being directed to the divertors at a speed of $\frac{C_{s}}{2}$ and the other half returning upstream via the Pfirsch-Schlüter current. This mechanism is responsible for setting the density SOL width, $\lambda_{n}$. The next assumption is that this density channel fills with turbulent electrons and empties through Spitzer-Härm conductivity, establishing the power SOL width. The heuristic model correctly captures the inverse relation between $\lambda_{q}$ and $B_{p}$ proposed by Eich and can thus, with reasonable accuracy, reproduce the experimental results. Until recently, simulations of the edge have relied upon reduced fluid models in simplified geometries, omitting essential physics. This changed with the introduction of XGC1 chang2009compressed ; ku2009full ; ku2016new ; churchill2017pedestal ; chang2017fast , which is a full-f, Particle-In-Cell, 5D gyrokinetic code using realistic X-point geometry, optimized to simulate the tokamak edge. The code has gyrokinetic ions and offers an option for drift-kinetic, fluid or adiabatic electrons.
Employing it in a study of the ITER SOL width, Chang et al. chang2017gyrokinetic discovered that the simulated width is six times larger than the one predicted by the Eich fit or the Goldston heuristic model, casting doubt on whether the heretofore assumed scaling relations remain valid in the ITER regime. The explanation that they provide for this disparity has to do with the importance of edge electron turbulence d2011convective ; myra2015turbulent ; myra2016theory in the new regime. More specifically, they claim that the electrostatic, ‘blobby’ edge turbulence sets the scale of the electron channel. The size of a blob or a streamer (or any other coherent turbulent structure) is, at least partially, related to the width of the edge shearing layer furno2008experimental ; bisai2005formation , wherein the blobs are generated. In present-day tokamaks, this shearing-layer width is relatively small and the density channel is dominated by ion neoclassical flows inside the SOL. On ITER however, due to the much larger size of the shearing layer, the radial extent of blobs and streamers is bigger and turbulence dominates the density width. From the above discussion, it is clear that important figures of merit such as the SOL width are the result of a non-linear interaction of particle orbit effects (e.g., neoclassical effects and ion X-point losses chang2002x ; ku2004property ) with the electrostatic, ‘blobby’ edge turbulence. The interaction occurs through the radial and poloidal edge electric field, established by the ion losses, and the sheared radial flows that it generates, which can influence both the turbulence and the ion flows. Such interplay can often yield unexpected results and thus merits further study with state-of-the-art codes, which might illuminate certain aspects of this complex process.
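The power-law form of the Eich scaling discussed above is simple to evaluate. The sketch below uses an illustrative normalization chosen to reproduce the $\sim 1\,\mathrm{mm}$ ITER estimate at an assumed ITER outboard-midplane poloidal field of $1.2\,\mathrm{T}$; neither constant is taken from Eich's actual regression.

```python
# Illustrative evaluation of an Eich-type power SOL width scaling,
# lambda_q ~ Bp^(-1.19). The coefficient and the ITER poloidal field
# value are assumptions for illustration, not Eich's fitted constants.

def lambda_q_mm(b_pol_T, coeff=1.24):
    """Power SOL width in mm for a poloidal field in tesla (assumed coeff)."""
    return coeff * b_pol_T ** -1.19

# Assumed ITER outboard-midplane poloidal field ~1.2 T -> ~1 mm
print(f"ITER-like:   {lambda_q_mm(1.2):.2f} mm")
# DIII-D poloidal field quoted later in this paper: Bp = 0.42 T
print(f"DIII-D-like: {lambda_q_mm(0.42):.2f} mm")
```

The essential point, independent of the normalization, is the inverse dependence: a lower poloidal field gives a wider SOL channel.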
Here, we analyze the results of an XGC1 simulation of DIII-D from the viewpoint of equilibrium and turbulent $E\times B$ particle and heat fluxes across the last closed flux surface (LCFS). Within this framework, we attempt to understand the relationship between the direct ion orbit losses and the electrostatic potential that drives these fluxes, as well as the effect of the generated shear on the turbulence. Furthermore, we try to measure the impact of turbulence on the electron transport and thereby test the second assumption of the heuristic model, which states that only turbulent electrons are responsible for filling the density SOL channel. The organization of the paper is as follows: In Section II we give a brief description of the simulation, its geometry and some relevant parameters. In Section III we provide the definitions of the measured fluxes and the methodology of extracting them from the XGC1 data. We present the equilibrium and turbulent $E\times B$ particle and heat fluxes across the separatrix and interpret the observed patterns using the notions of ion X-losses, ion banana drifts, non-adiabaticity, quasineutrality and shear flows. In Section IV we perform numerical integrations to assess the size of the different fluxes and disentangle the contribution of each to the cross-separatrix transport and hence to the SOL width. In Section V we summarize and draw conclusions. II Simulation In this section we provide a description of the simulation domain and parameters. The simulation is of the DIII-D luxon2002design discharge #144981, which is an H-mode heated by neutral beam injection. It is initialized with experimental profiles taken at time $3175\,\mathrm{ms}$. The simulation inputs include experimental kinetic profiles of electron density and temperature ($n_{e}$ and $T_{e}$), ion temperature ($T_{i}$), and magnetic equilibrium, from a kinetic EFIT magnetic reconstruction. In Fig.
1 we see a poloidal cross-section of the simulation domain along with the directions of the unit vectors of the cylindrical coordinate system $(R,\zeta,Z)$ used in this paper. The origin of the coordinate system in the poloidal cross-section will be taken to be the point $(R,Z)=(R_{o},Z_{o})=(1.67\,\mathrm{m},0)$. The geometry is lower single null but with a secondary “virtual” upper X-point (outside of the simulation domain). The magnetic field is in the negative $\hat{\zeta}$ direction, making the ions $\nabla B$-drift towards the X-point and the electrons towards the top. The black dashed line denotes the separatrix. The yellow lines are closed and open flux surfaces and the black and red continuous lines show the poloidal projection of the banana orbits (black for ascending, red for descending parts of the orbit). These have been plotted using contours of toroidal angular momentum. The closed dashed black line close to the center is a potato orbit. The orientation of the coordinate system and the toroidal magnetic field direction are also shown. In the rest of the paper, when we refer to the poloidal angle $\theta$ of a separatrix point, we will mean the angle between the line passing through that point and the line passing through the low-field-side midplane, with both lines passing through the origin. Positive angles are in the counter-clockwise direction. The total simulation time is $0.16\,\mathrm{ms}$ with a time-step of $2.3\times 10^{-4}\,\mathrm{ms}$. Although this time is not enough for the core turbulence to saturate, it is adequate for the saturation of the edge turbulence. The poloidal magnetic field is $B_{pol}=0.42\,\mathrm{T}$ (measured at the outboard midplane, in the positive $\hat{e}_{\theta}$ direction) and the toroidal current is $I_{p}=1.5\,\mathrm{MA}$.
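As a quick cross-check of the machine parameters just quoted, the Greenwald density follows directly from the plasma current; the minor radius used below is an assumed typical DIII-D value, not stated in the text.

```python
import math

# Greenwald density n_G = I_p / (pi a^2), in units of 10^20 m^-3 when I_p
# is in MA and the minor radius a in meters. a ~ 0.56 m is an assumed
# typical DIII-D minor radius (not given in the paper).
I_p = 1.5   # MA (from the text)
a = 0.56    # m  (assumption)

n_G = I_p / (math.pi * a ** 2) * 1e20  # m^-3
print(f"n_G ~ {n_G:.3g} m^-3")
```

With these numbers the result lands within a few percent of the edge Greenwald density quoted in the next paragraph.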
Equilibrium plasma density at the edge was measured to be $n\approx 3.5\times 10^{19}\,\mathrm{m}^{-3}$ with a Greenwald density of $n_{G}\approx 15.6\times 10^{19}\,\mathrm{m}^{-3}$. Sound speed and poloidal flow were estimated from the data to be $C_{s}\approx 15\times 10^{4}\,\mathrm{m/s}$ and $V_{\theta}\approx 4\times 10^{4}\,\mathrm{m/s}$ respectively. The safety factor close to the edge was found to be $q_{95}=3.7$, giving a connection length of about $qR\approx 8.6\,\mathrm{m}$. This connection length, along with an electron thermal velocity of $u_{te}\approx 5.1\times 10^{6}\,\mathrm{m/s}$, results in a transit frequency of $593\,\mathrm{kHz}$. The collision times were calculated to be $\tau_{ei}=3.5\,\mathrm{\mu s}$ and $\tau_{ii}=0.5\,\mathrm{ms}$. The ions and electrons are weakly collisional, thus the adiabatic invariant $\mu$ is conserved over many bounce times. Given that the trapped-particle orbits will play a fundamental role in our interpretation of the flux patterns in the next section, here we give a few relevant numerical estimates regarding them. The fraction of trapped particles at the edge was estimated from the relation $\sqrt{\epsilon}(1.46-0.46\epsilon)\approx 0.767$, with $\epsilon$ being the inverse aspect ratio. The bounce times of deeply trapped particles were calculated to be $\tau_{b,i}\approx 0.16\,\mathrm{ms}$ and $\tau_{b,e}=6\,\mathrm{\mu s}$. Technically, the bounce time of a deeply trapped particle whose banana orbit turning point is located at the high field side (HFS) midplane is infinite.
The above bounce times are simple estimates, found by dividing the length of a poloidal circuit by the magnetic drift speed of each particle; they are presented here to make the point that almost all of the ions have enough time, within the simulation, to complete a banana orbit. With $\epsilon^{3/2}\approx 0.2$, $\nu_{\star i}=\frac{\nu_{ii}q_{95}R}{u_{ti}}\approx 0.12$ and $\nu_{\star e}=\frac{\nu_{ei}q_{95}R}{u_{te}}\approx 0.48$, we can deduce that at the edge the electrons are in the plateau regime ($\epsilon^{3/2}<\nu_{\star e}<1$) whereas the ions are in the banana transport regime ($\nu_{\star i}<\epsilon^{3/2}$). III The Fluxes III.1 Flux definitions One of the goals of the present paper is to explore the relationship between equilibrium and turbulent processes and the associated fluxes across the separatrix. For dynamical quantities such as the plasma density $n$, or velocity $v$ we define: $$\displaystyle n$$ $$\displaystyle=\left<n\right>_{t,\zeta}+\delta n\,,$$ $$\displaystyle v$$ $$\displaystyle=\left<v\right>_{t,\zeta}+\delta v\,,$$ (1) where $\left<\cdots\right>$ denotes an average in either time $(t)$ or toroidal planes $(\zeta)$, or both. The time averages considered here are taken over a time interval late in the simulation where a quasi-steady turbulent state has been achieved. Employing Eq. (1), the fundamental relationship for the fluxes is: $$\left<nv\right>_{t,\zeta}=\left<n\right>_{t,\zeta}\left<v\right>_{t,\zeta}+\left<\delta n\delta v\right>_{t,\zeta}\,,$$ (2) where the cross terms $\left<n\delta v\right>_{t,\zeta}$ and $\left<v\delta n\right>_{t,\zeta}$ vanish due to the vanishing of $\left<\delta n\right>_{t,\zeta}$ and $\left<\delta v\right>_{t,\zeta}$. These “fluxes” are not technically transport fluxes but rather local density-weighted flows.
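The neoclassical regime classification above can be reproduced from the quoted parameters. In the sketch below, $\epsilon\approx 0.35$ is inferred from $\epsilon^{3/2}\approx 0.2$, and the ion thermal speed is taken to be of order $C_{s}$; both are assumptions on top of the stated numbers.

```python
import math

# Reproducing the edge neoclassical estimates from the stated parameters.
# eps ~ 0.35 is inferred from eps^(3/2) ~ 0.2; u_ti ~ C_s is assumed.
eps = 0.35
qR = 8.6          # m, connection length q95*R (from the text)
u_te = 5.1e6      # m/s, electron thermal speed
u_ti = 1.5e5      # m/s, assumed ~ C_s
tau_ei = 3.5e-6   # s, electron-ion collision time
tau_ii = 0.5e-3   # s, ion-ion collision time

f_trapped = math.sqrt(eps) * (1.46 - 0.46 * eps)  # trapped-particle fraction
nu_star_e = qR / (u_te * tau_ei)                  # electron collisionality
nu_star_i = qR / (u_ti * tau_ii)                  # ion collisionality

print(f"f_t   ~ {f_trapped:.3f}")   # ~0.77, as quoted
print(f"nu*_e ~ {nu_star_e:.2f}")   # plateau: eps^1.5 < nu*_e < 1
print(f"nu*_i ~ {nu_star_i:.2f}")   # banana:  nu*_i < eps^1.5
```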
Although collisions are fully accounted for in the simulation, the rapid electron transit time implies that the radial excursions of electrons due to either magnetic or $E\times B$ drifts are small and can cancel without causing net transport, as is well known from neoclassical theory. In this paper, we will continue to refer to them as fluxes for brevity. As will be seen, they provide a useful diagnostic for understanding basic properties of both the background plasma and the resulting turbulence. Often in the literature, the first piece of the rhs of Eq. (2) is called the equilibrium flux whereas the second piece is known as the turbulent flux. In this work, we employ the same definition for the turbulent flux; however, for convenience in analysis, the definition of the equilibrium flux employed in this paper is slightly modified. Our equilibrium flux is given by $\Gamma_{eq}=\left<\left<n\right>_{\zeta}\left<v\right>_{\zeta}\right>_{t}$. These two definitions of the equilibrium flux are not exactly equivalent. They are connected by the relation: $\left<\left<n\right>_{\zeta}\left<v\right>_{\zeta}\right>_{t}=\left<n\right>_{t,\zeta}\left<v\right>_{t,\zeta}-\left<\left<\delta n\right>_{\zeta}\left<\delta v\right>_{\zeta}\right>_{t}$. However, independent calculation of the difference $\left<\left<\delta n\right>_{\zeta}\left<\delta v\right>_{\zeta}\right>_{t}$ has shown that this “ringing” term, associated with temporal fluctuations of the toroidally averaged fields, is three orders of magnitude smaller than the two equilibrium fluxes, indicating that, for the level of numerical accuracy assumed in this work, we can take the two definitions to be equivalent. The definitions for the heat fluxes are similar, with the replacement of the density by the pressure of each species. Because the simulation is quasineutral, we cannot distinguish between the densities or the net particle fluxes of the two species.
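The exact splitting in Eq. (2), including the vanishing of the cross terms, is easy to illustrate with synthetic sampled fields standing in for the $(t,\zeta)$ ensemble:

```python
import random

# Numerical illustration of the flux decomposition, Eq. (2): for any
# sampled fields n and v, the mean of the product splits exactly into the
# product of means plus the fluctuation correlation, because the cross
# terms <n><dv> and <v><dn> vanish by construction of dn and dv.
random.seed(0)
N = 10_000  # samples standing in for the (t, zeta) average
n = [3.5 + random.gauss(0, 0.4) for _ in range(N)]
v = [1.0 + random.gauss(0, 0.2) for _ in range(N)]

mean = lambda x: sum(x) / len(x)
n0, v0 = mean(n), mean(v)
dn = [x - n0 for x in n]
dv = [x - v0 for x in v]

lhs = mean([x * y for x, y in zip(n, v)])              # <n v>
rhs = n0 * v0 + mean([x * y for x, y in zip(dn, dv)])  # <n><v> + <dn dv>
print(abs(lhs - rhs))  # zero up to floating-point round-off
```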
In general, both magnetic and $E\times B$ drifts contribute to the local particle fluxes crossing the separatrix at a particular poloidal position. In principle, the electron and ion $E\times B$ drift fluxes can be different due to the larger ion gyroradius. This orbit-averaging effect results in the suppression of the ion $E\times B$ fluxes by a factor $\Gamma_{o}(b=k^{2}_{\perp}\rho^{2}_{i})=e^{-b}I_{o}(b)$. Close to the separatrix, $\rho_{i}\approx 2\,\mathrm{mm}$ and, from power spectra of the equilibrium and turbulent fields, $k^{eq}_{\perp}\approx 61\,\mathrm{m}^{-1}$ and $k^{tur}_{\perp}\approx 123\,\mathrm{m}^{-1}$, which gives a suppression of about $1\%-2\%$ for the equilibrium and about $5\%-6\%$ for the turbulent $E\times B$ fluxes. In the case of the heat fluxes though, a clear distinction between ions and electrons can be made, based on their different temperatures. Finally, we should note that, given the available diagnostics, we lack the capability of calculating the turbulent polarization current and its flux. Nevertheless, in later sections, we estimate its magnitude relative to the rest of the fluxes. III.2 Flux calculation In this section, we describe our method for calculating the various fluxes using the XGC1 data, which illustrates the limitations of the particular data set and can be useful to practitioners in the field. The calculation of radial $E\times B$ drift velocities from the density and potential fields, whose values we know on the computational nodes, involves derivatives along the $R$ and $Z$ directions. However, a plot of time- and toroidally-averaged $n_{e}$ and $\Phi$ on the separatrix, such as the one in Fig. 3, shows that, although a characteristic trend can be immediately identified, the values are noisy, especially at the low field side (LFS).
Therefore, the strategy of interpolating the computational node values along horizontal and vertical lines and then taking derivatives along those lines to construct the velocity vector can give highly inaccurate results, particularly for a sub-dominant component such as the radial component of the $E\times B$ velocity. For that reason, we resorted to expressing the required derivatives as derivatives with respect to the poloidal angle $\theta$. This can be accomplished by writing the magnetic field in terms of the coordinate system $(\Psi,\zeta,\theta)$. In this coordinate system, the magnetic field is $B=RB_{\zeta}\nabla\zeta+\nabla\Psi\times\nabla\zeta$. Using the formula $\mathcal{J}^{-1}=\nabla\Psi\cdot\nabla\theta\times\nabla\zeta$ for the inverse Jacobian, we can turn the expression for the radial component of the $E\times B$ velocity, $V_{E_{\Psi}}=\hat{e}_{\Psi}\cdot\frac{\hat{b}\times\nabla\Phi}{B}$, into $$V_{E_{\Psi}}=-\left(\frac{1}{B^{2}RB_{p}}\right)\left(\frac{RB_{\zeta}}{\mathcal{J}}\frac{\partial\Phi}{\partial\theta}-B^{2}_{p}\frac{\partial\Phi}{\partial\zeta}\right)\,.$$ For all equilibrium quantities, $\frac{\partial\Phi_{eq}}{\partial\zeta}=0\,,$ since the system is axisymmetric. For turbulent quantities, we can compute such derivatives using the assumption that turbulent structures are field-aligned. Demanding $B\cdot\nabla\delta\Phi\approx 0\,,$ we arrive at the equation $$\frac{\partial\delta\Phi}{\partial\zeta}=\frac{R}{B_{\zeta}\mathcal{J}}\frac{\partial\delta\Phi}{\partial\theta}\,,$$ which can be readily used. The above expressions allow us to work with derivatives in the poloidal direction for the calculation of the $E\times B$ velocity. This procedure has the advantage that we only need to do a single interpolation and smoothing of $n_{e}$ and $\Phi$ around the LCFS and then take the derivative. The results thus obtained are much less noisy.
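As an aside, the orbit-averaging suppression factor $\Gamma_{o}(b)=e^{-b}I_{o}(b)$ quoted in Sec. III.1 can be evaluated with a short series for the modified Bessel function $I_{o}$, using only the standard library:

```python
import math

# Finite-gyroradius suppression of the ion E x B fluxes,
# Gamma_0(b) = exp(-b) * I_0(b), with b = (k_perp * rho_i)^2.
def bessel_i0(b, terms=20):
    """Series for the modified Bessel function I_0(b), adequate for small b."""
    return sum((b / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def suppression(k_perp, rho_i=2e-3):
    """Fractional reduction 1 - Gamma_0 for a given k_perp (1/m), rho_i (m)."""
    b = (k_perp * rho_i) ** 2
    return 1.0 - math.exp(-b) * bessel_i0(b)

print(f"equilibrium (k_perp = 61 /m):  {suppression(61):.1%}")
print(f"turbulent   (k_perp = 123 /m): {suppression(123):.1%}")
```

The two results fall in the $1\%-2\%$ and $5\%-6\%$ ranges quoted in the text.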
III.3 The Particle Fluxes First, we will describe the fluxes qualitatively and attempt to give an intuitive explanation for their shapes. Afterwards, we will seek to arrive at relevant quantitative conclusions based on their relative sizes. III.3.1 Equilibrium Flux We start with the spatial distribution of the equilibrium $E\times B$ particle flux normal to, and hence crossing, the separatrix, which is common between the two species, as discussed above. In Fig. 2 we observe that the magnitude of the particle flux (arrow length and color indicate flux intensity) is not uniform and exhibits a strong poloidal variation, not all of which can be explained by the difference in field strength between the inboard and outboard sides. Certainly, the flux is suppressed at the HFS compared to the LFS. However, at the LFS, the flux seems to be concentrated in two lobes, one outward, starting near the X-point and extending up to a little below the midplane, and one inward, which starts above the midplane and extends to near the top of the machine. At the top, near the top X-point, the flux again becomes slightly outward, before turning slightly inward along the HFS all the way to the bottom X-point. To understand the poloidal pattern we just described, we turn to the distributions of density and potential on the LCFS, which we show in Fig. 3. There, we plot the two terms of the adiabatic relation so that we get a measure of the electron departure from adiabaticity at various locations along the LCFS. Moreover, the $\frac{\phi}{T_{e}}$ term has the same structure as the potential itself, given that the electrons are almost isothermal on the flux surfaces. From this plot, we immediately recognize the points where the global minimum ($\theta\approx 120^{\circ}$) and global maximum ($\theta\approx 270^{\circ}$) of the potential are located, which are the top and bottom X-points respectively.
Although the exact location of these two extremal points is a result of a complex interplay between Pfirsch-Schlüter flows and collisional effects aydemir2012pfirsch ; aydemir2009intrinsic , it follows from the direction of the magnetic drifts: ions drift downwards and electrons upwards; therefore, the potential arranges itself so that it attracts ions from the core at the top and electrons at the bottom. In both cases, the extremization of the potential follows an analogous increase or decrease of the density. In the case of the bottom X-point, an important factor that would lead to a density build-up is the X-point ion loss. The X-point loss chang2002x ; ku2004property of hot ions leads to a lower ion temperature at the LCFS. It should be noted that, in our simulation, the $E_{r}$ well from the X-loss has already been established. Therefore, the loss cone has moved to higher energies, resulting in hot ion loss. Although the scalar ion pressure is not constant along a field line churchill2017total , approximate pressure balance is consistent with the lower temperature being offset by a density increase. These two extremal points of the potential establish an electric field in the HFS region. It is this electric field which in turn creates the $E\times B$ flow responsible for the inward flux pinch we see in Fig. 2 on the HFS as well as the dominant outward flux on the LFS. The reason why the HFS flux is not as high as the ones we see on the LFS, even though the electric field is stronger, is the mitigating effect of the strong magnetic field. In addition to the global minimum and the global maximum of $\phi$, we also have two local ones. The positions of those correlate with the two bulges of opposite direction that we see on the LFS in Fig. 2. Indeed, the directions of the fluxes are completely in line with the electric field directions established by the alternating pairs of minima and maxima of $\phi$.
However, in contrast to the two global extremal points of the potential, it is not immediately clear what is responsible for the two local ones on the LFS. Here, we propose that this potential structure is produced by the orbits of trapped ions which, as we saw in the previous section, constitute roughly $76\%$ of the total ion population near the edge. A trapped ion moving on its banana orbit exits the separatrix at some point below the LFS midplane, leaving behind it a negative charge. If the electrons were completely adiabatic, this negative charge would be instantly neutralized by electrons moving along the field lines, preventing the potential build-up. But, as we can readily verify from Fig. 3, there is a significant departure from adiabaticity at the LFS. Therefore, a trapped ion leaving the separatrix will create a negative potential in order to attract more ions. Conversely, the region above the LFS midplane is the region where the trapped ions re-enter the LCFS (at least those that did not get entrained in the parallel SOL flow), generating a positive charge. The non-adiabaticity of the electrons forces the potential to increase in order to attract them at that location. Thus, neoclassical ion orbits combined with X-point losses are the dominant mechanism responsible for the structure of the poloidal electric field, and hence the equilibrium radial particle flux, in this simulation. The non-adiabaticity of the electrons at the separatrix seems to be crucial for the exact shape of the edge potential. This non-adiabaticity at the edge can be understood if we compare the time rates of all terms in the electron momentum equation. More specifically, the inertial term, i.e., the transit frequency, was given in Section II as $593\,\mathrm{kHz}$. Although large, it is comparable to the electron collision frequency (also given in Section II) and the turbulent frequency, which is discussed in the next section and is found to be $640\,\mathrm{kHz}$.
Therefore, both collisions and the inertial term are important in the electron momentum equation, leading to the observed departure from adiabaticity. III.3.2 Banana Tip distribution The subject of neoclassical direct ion orbit losses and their contribution to the edge flows and electric fields has been treated extensively, both computationally and theoretically degrassie2015thermal ; battaglia2014kinetic ; miyamoto1996direct ; stacey2015distribution ; stacey2011effect ; stacey2016recent ; stacey2013interpretation ; wilks2017calculation ; wilks2016improvements ; hahn2005wall . Here, we offer a new argument to support our claim that the potential structure at the LFS is due to the excursions of trapped ions on their banana orbits. Assuming that the trapped particles, with orbits within a banana width from the edge, will create either a charge excess or a charge hole at the point of inflection (“banana tip”), we give an estimate for the angular distribution of those banana tips at the LFS. To get the fraction of trapped particles with banana tips in the range $(0,\theta)$, where $\theta$ is the angle at which the magnetic field equals $B$ and $\theta=0$ corresponds to $B_{min}$ (the LFS midplane), we perform the following integral: $$\displaystyle f_{t}(B)$$ $$\displaystyle=\frac{1}{\sqrt{2\pi}v^{3}_{t}}\Sigma_{\sigma}\int d\!E\,e^{-\frac{E}{v^{2}_{t}}}\int^{\mu=E/B_{min}}_{\mu=E/B}d\!\mu\,\frac{B_{min}}{\sqrt{2}\sqrt{E-\mu B_{min}}}\,,$$ $$\displaystyle f_{t}(B)$$ $$\displaystyle=\frac{1}{\sqrt{2\pi}v^{3}_{t}}\Sigma_{\sigma}\int d\!E\,e^{-\frac{E}{v^{2}_{t}}}\sqrt{2E}\left(1-\frac{B_{min}}{B}\right)^{\frac{1}{2}}\,,$$ $$\displaystyle f_{t}(B)$$ $$\displaystyle=\frac{1}{2}\Sigma_{\sigma}\left(1-\frac{B_{min}}{B}\right)^{\frac{1}{2}}\,,$$ (3) where we have changed coordinates from velocity space to $(E,\mu)$ space, with $E$ being the energy and $\mu$ the adiabatic invariant. Here, $\sigma$ denotes the sign of the particle's parallel velocity.
Plugging in the angular formula for $B$, $B=\frac{B_{o}}{1+\epsilon\cos\theta}$, valid for circular flux surfaces, we turn this into the cumulative angular distribution for banana tips: $$X(\theta)=\frac{1}{2}\Sigma_{\sigma}\sqrt{\frac{\epsilon}{1+\epsilon}}\sqrt{1-\cos\theta}\,.$$ (4) Interpreting $X(\theta)$ as $X(\theta)=\int^{\theta}_{0}d\theta^{\prime}\,g(\theta^{\prime})\,,$ we get $g(\theta)=\frac{dX(\theta)}{d\theta}\,.$ Therefore, $$g(\theta)=\frac{1}{2}\sqrt{\frac{\epsilon}{1+\epsilon}}\frac{\sin\theta}{\sqrt{1-\cos\theta}}\,.$$ (5) The poloidal distribution of the flux caused by these particle excursions is the product of the tip distribution with the actual magnetic flux. In Fig. 4 we can see this flux compared to the $E\times B$ fluxes. We point out that, indeed, the “trapped” flux qualitatively tracks the equilibrium $E\times B$ flux. Two things should be noted: the modelling of the tip distribution is rather simple, as it assumes circular flux surfaces and completely ignores the effect of the X-point and the ion losses there. Also, a qualitative similarity between the fluxes is all we can hope for. Indeed, if the equilibrium $E\times B$ flux pattern at the LFS is the result of the potential structure due to the trapped particle excursions, as we claim, then the relationship between their relative sizes is far from obvious. III.3.3 Turbulent Flux We proceed with the description of the turbulent $E\times B$ flux, which we show in Figs. 4 and 5. There, we observe that the particle turbulence is entirely confined to the LFS and has a ballooning-like shape which peaks near the midplane, is interrupted above it up to roughly $50^{\circ}$, picks up again after that and dies away at the top of the machine. A probable explanation for such a shape is provided by Fig. 6. There, we overlay the plots of the turbulent flux and the $E\times B$ shearing rate on the LCFS.
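Before turning to the shear, the tip distribution of Eqs. (4)-(5) admits a quick numerical check: integrating $g(\theta)$ over $(0,\pi)$ should recover $X(\pi)=\sqrt{2\epsilon/(1+\epsilon)}$, the circular-surface trapped fraction, with $\epsilon\approx 0.35$ as estimated in Section II.

```python
import math

# Check that the banana-tip density g(theta), Eq. (5), integrates to the
# cumulative distribution X(pi) of Eq. (4) (both signs of v_parallel
# summed). eps ~ 0.35 is the edge inverse aspect ratio from Sec. II.
eps = 0.35

def g(theta):
    """Tip-angle density, Eq. (5); finite as theta -> 0."""
    return 0.5 * math.sqrt(eps / (1 + eps)) * math.sin(theta) / math.sqrt(1 - math.cos(theta))

# midpoint-rule integral of g over (0, pi)
N = 100_000
h = math.pi / N
integral = sum(g((i + 0.5) * h) for i in range(N)) * h

X_pi = math.sqrt(2 * eps / (1 + eps))  # closed form of X(pi)
print(integral, X_pi)  # the two agree
```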
We observe that when the shearing rate exceeds a certain value (roughly $640\,\mathrm{kHz}$) the turbulent flux drops sharply. When $\Omega_{E\times B}$ drops below this value, the turbulence rises again. Indeed, we have measured the main frequency of the turbulence in this discharge to be very close to $640\,\mathrm{kHz}$. We can imagine, therefore, that in the absence of shear, the turbulence would have created a ballooning structure on the LFS, peaking at the midplane and tapering off at the top, where the curvature drive is minimal. However, the presence of shear at exactly the right level suppresses it locally, creating the resulting pattern burrell1997effects ; hahm1995flow ; terry2000suppression ; biglari1990influence . Recall from the previous discussion that the electrostatic potential pattern, and hence the shear, appears to be controlled by neoclassical drift-orbit effects. It is notable that the system has arranged itself so that this equilibrium shear is of sufficient magnitude to influence the turbulence. We have verified that turbulence-generated Reynolds stresses play a negligible role in establishing these sheared flows, which is not unexpected for this H-mode phase of the discharge muller2011experimental . In addition to shear, it is possible that the observed turbulence pattern may also be related to the existence of curvature-driven modes which peak at the top of the torus abdoul2017generalised ; halpern2013ideal . Poloidal flux surface expansion, i.e., the distance between adjacent flux surfaces, which increases towards the top of the machine, may also play a role. First, it increases the outward velocity and hence the $\Gamma$ of filamentary turbulent structures in order to maintain their field-aligned character galassi2017drive .
Second, flux surface expansion reduces the $E\times B$ shear more quickly than the density or temperature gradients, since the former is proportional to a second radial derivative of the potential and therefore scales more strongly than the gradients, which are proportional to a single radial derivative of the profiles. Lastly, given the turbulent frequency, we can evaluate the relative size of the ion polarization flux compared to the turbulent $E\times B$ flux. Their ratio is approximately $\frac{\omega_{tur}}{\omega_{ci}}\approx 0.04$, which means that our inability to calculate the polarization current does not threaten the credibility of our results at the present level of precision. III.4 The Heat Fluxes In this section we present the qualitative features of the heat fluxes. Contrary to the $E\times B$ particle fluxes, where we could not distinguish between the two species (apart from the almost negligible gyroaveraging effect of the ion orbits), for the heat fluxes we have a clear separation due to the different temperatures, as shown in Fig. 7. Therefore, in what follows, the calculation of the fluxes still assumes quasineutrality but each of the species has its own temperature. In calculating the heat fluxes, we employ Eq. (2), where in place of the density we substitute the pressure $P_{s}=nT_{s}$, with $s$ denoting the species. Therefore, the heat flux is given by the formula $$\left<Pv\right>_{t,\zeta}=\left<P\right>_{t,\zeta}\left<v\right>_{t,\zeta}+\left<\delta P\delta v\right>_{t,\zeta}\,,$$ (6) where again, we call the first term of the rhs the “equilibrium” and the second the “turbulent” heat flux. The velocity here is the $E\times B$ one. In Figs. 8-9 we present the equilibrium and turbulent $E\times B$ fluxes of electrons and ions respectively, along the LCFS. The shapes of both species' equilibrium fluxes are similar except for the scaling effect of the temperature difference. This can be easily understood from Fig.
7, where we plot the temperatures as a function of the poloidal angle. There, we see that the electrons are very close to being isothermal. Therefore, the qualitative features of the particle equilibrium and the electron equilibrium $E\times B$ heat fluxes are shared. The ion temperature, on the other hand, does not remain constant along the separatrix. Specifically, it drops near the two X-points, indicating that, close to these areas, hotter ions have exited the plasma. High-energy ions with the right pitch angle will be in the ion loss cone chang2002x ; ku2004property . An interesting feature of the ion temperature is that the average temperatures below and above the outboard midplane are not the same: the region below the midplane has an average temperature which is $4.5\,\mathrm{eV}$ higher than the average temperature above. We can understand this difference in the context of the trapped-ion banana orbits discussed in the particle-flux section. The ions exiting the flux surface below the midplane are on the inward side of their banana orbit. They have traveled through a region of higher temperature, and therefore the temperature locally increases. Equivalently, when they re-enter above the midplane, they are coming from outside the LCFS, where the temperature has dropped. Indicatively, a typical value for a banana orbit width near the edge is $6\,\mathrm{mm}$ and the temperature drop across this distance is about $2\,\mathrm{eV}$, accounting in part for the observed average temperature difference. Moreover, the re-entering ions could have originated from further away in the SOL than a typical banana orbit width, making the above estimate just a lower bound for the real temperature difference. Despite the above observations, the qualitative features of the equilibrium $E\times B$ ion heat flux are very similar to those of the equilibrium particle and electron heat $E\times B$ fluxes.
The real difference between the two species' heat flux behavior comes from the comparison of the relative sizes of the equilibrium and turbulent heat fluxes. In the case of the electrons, turbulence dominates; neoclassical electron transport is negligible because of the small electron banana width. Thus, the local equilibrium $E\times B$ electron flux shown in Fig. 8 will be further reduced by cancellation from bounce-orbit averaging. Turbulence is the sole mechanism for net electron particle loss across the separatrix. It must compensate for the net loss of ions in order to maintain charge balance. Turbulent electron energy loss follows. In contrast, for the ions, it seems that the equilibrium process, set up by the neoclassical mechanism of trapped particle orbits, is indeed the dominant mode of ion loss in this simulation, both for heat and particles. Both species' LFS turbulent ballooning shapes are interrupted in the high-shear region. Although the ion turbulent heat flux never recovers and drops to zero, the electron turbulent heat flux picks up again after the initial dip. Moreover, in the region above the midplane, the electron turbulent heat flux is almost twice the size of the equivalent ion one, despite the large difference in the absolute temperature of the two species, with the ions being significantly hotter. This is a clear indication that, in the case of the electrons, turbulence is absolutely necessary for the plasma to retain charge balance, whereas in the case of the ions, the neoclassical processes are more effective. In Figs. 10-11 we present the effective diffusivities, which are defined as $D=-\frac{\Gamma}{\partial n/\partial x}$, $\chi_{s}=-\frac{Q_{s}}{\partial P_{s}/\partial x}$, and we separate them again into equilibrium and turbulent contributions.
The equilibrium diffusivities are presumably driven by neoclassical ion physics, as discussed previously, but because of ambipolarity the equilibrium and turbulent diffusivities are fundamentally coupled. A first observation is that both are of a similar order of magnitude, consistent with this coupling. In Ref. [60], methods developed for pedestal turbulence provide information about the modes present, based on the relative sizes of the transport channels; it is interesting to apply those concepts to our situation for qualitative insight. Here we find comparable turbulent diffusivities near the midplane in the particle, electron-heat and ion-heat channels. This is suggestive of an underlying mode with an MHD character, possibly resistive ballooning or Kelvin-Helmholtz in this electrostatic simulation. Near the top of the torus the electron diffusivity is dominant, which may suggest a mode driven primarily by the electron temperature gradient, or a mode with an electron drift character; indeed, we find that our turbulence has a frequency very close to the electron diamagnetic drift frequency. A final observation is that, on the scale of the above diffusivities, the time scale for further profile evolution is much larger than the total time of the simulation. We have indeed verified that the profiles remain constant throughout, a clear indication that a quasi-steady turbulent state has been reached. IV Quantitative Results IV.1 Separatrix-crossing Fluxes Here, we calculate the integrated currents and flux-surface averages of the above fluxes and reach some conclusions regarding their relative importance in establishing the SOL width. First, we start with some definitions. The fluxes of the previous section are the normal-to-the-separatrix components of the total flux, i.e., $\Gamma_{N}=-\frac{B_{R}}{B_{p}}\left<n\cdot V^{E\times B}_{Z}\right>+\frac{B_{Z}}{B_{p}}\left<n\cdot V^{E\times B}_{R}\right>$. 
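The normal projection above is a simple rotation of the $(R,Z)$ flux components by the local poloidal-field direction. A sketch with invented local values (all numbers illustrative, not taken from the simulation):

```python
import numpy as np

# Local poloidal field components on the separatrix (illustrative values, T)
B_R, B_Z = 0.03, -0.25
B_p = np.hypot(B_R, B_Z)            # poloidal field magnitude

n = 8e18                             # density (m^-3)
V_R, V_Z = 120.0, -340.0             # local E x B drift components (m/s)

# Gamma_N = (-B_R/B_p) * n*V_Z + (B_Z/B_p) * n*V_R : the unit normal to the
# flux surface in the (R,Z) plane is (-B_Z, B_R)/B_p rotated by 90 degrees,
# i.e. the component of n*V perpendicular to B_p.
Gamma_N = (-B_R / B_p) * (n * V_Z) + (B_Z / B_p) * (n * V_R)
print(Gamma_N)   # negative here: locally inward flux for these values
```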
For a quantity $A$, the flux-surface average is defined as: $\langle A\rangle=\frac{\int d\zeta\, d\theta\,\mathcal{J}A}{\int d\zeta\, d\theta\,\mathcal{J}}\,.$ For the calculation of such integrals, we must know the Jacobian factor, the inverse of which is given by the well-known formula: $$\mathcal{J}^{-1}=\nabla\Psi\cdot\nabla\theta\times\nabla\zeta=RB_{p}\hat{e}_{\Psi}\cdot\frac{1}{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}\hat{e}_{\theta}\times\frac{1}{R}\hat{e}_{\zeta}=\frac{B_{p}}{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}\,,$$ (7) with $r$ being the distance between $(R_{o},Z_{o})$ and the point on the flux surface. In the second step of the above calculation we have replaced the usual formula $\nabla\theta=\frac{1}{r}\hat{e}_{\theta}$, which holds in circular geometry, with $\nabla\theta=\frac{1}{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}\hat{e}_{\theta}$, an approximation for a flux surface of general shape, valid provided the integrand is non-singular and single-valued, that minimizes the integration error. Using this formula, the flux-surface average (FSA) of a scalar $A$ is given by $$\langle A\rangle=\frac{\int d\zeta\, d\theta\,\frac{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}{B_{p}}A}{\int d\zeta\, d\theta\,\frac{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}{B_{p}}}\,.$$ (8) If we perform a flux-surface average of the continuity equation, we arrive at the expression: $$\frac{\partial\langle n\rangle}{\partial t}+\frac{1}{V^{\prime}}\frac{\partial}{\partial\Psi}V^{\prime}\langle\Gamma\cdot\nabla\Psi\rangle=0\,,$$ (9) where $V^{\prime}=\int d\zeta\, d\theta\,\mathcal{J}$ and $\nabla\Psi=RB_{p}\hat{e}_{\Psi}\,$. 
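The FSA of Eq. (8) can be implemented directly on a discretized, axisymmetric flux surface (the $\zeta$ integral cancels). The shaping and field profiles below are invented for illustration; the weight uses the arc-length factor $r\sqrt{1+(r'/r)^{2}}/B_{p}$ with the first derivative $dr/d\theta$, as dimensional consistency requires.

```python
import numpy as np

# Discretized flux surface: distance r(theta) from (R_o, Z_o) and poloidal
# field Bp(theta) on the surface. Both profiles are illustrative only.
theta = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
r = 0.6 * (1.0 + 0.2 * np.cos(theta))          # mildly shaped surface (m)
Bp = 0.3 * (1.0 + 0.1 * np.sin(theta))         # poloidal field (T)

drdth = np.gradient(r, theta)
w = r * np.sqrt(1.0 + (drdth / r) ** 2) / Bp   # FSA weight, Eq. (8)

def fsa(A):
    """Flux-surface average <A> = sum(w*A) / sum(w) on the discrete grid."""
    return np.sum(w * A) / np.sum(w)

# Sanity checks: a constant is its own average, and a pure cos(theta)
# variation is strongly suppressed by the average.
print(fsa(np.ones_like(theta)))      # exactly 1.0
print(abs(fsa(np.cos(theta))))       # small, but nonzero for a shaped surface
```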
Therefore, the physically relevant quantity is $\langle\Gamma\cdot\nabla\Psi\rangle$, which is calculated using the explicit formula: $$\langle\Gamma\cdot\nabla\Psi\rangle=\frac{\int d\zeta\, d\theta\, rR\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}\,\Gamma}{\int d\zeta\, d\theta\,\frac{r\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}}{B_{p}}}\,.$$ (10) We can also define the surface integral of the flux across the LCFS. This should be interpreted as the particle or heat current, i.e., the number of particles per second that cross the separatrix due to the various fluxes. This will be denoted as $I_{\Gamma}$ and calculated using $$I_{\Gamma}=\int\Gamma\, R\, d\zeta\,\sqrt{1+\frac{1}{r^{2}}\left(\frac{dr}{d\theta}\right)^{2}}\, r\, d\theta\,,$$ (11) with similar expressions for the heat currents. In Table 1, we present the integrals over the whole separatrix; we have also kept for reference the numbers for the flux from the magnetic drifts. In Tables 2-3 we give the results for the integrated heat fluxes. Here, we also define an averaging factor, $AF=\langle\Gamma\rangle/\max\left(\Gamma\right)$, which measures the degree of cancellation of each particular flux. If the density and potential were constant on a flux surface, the FSAs of both the equilibrium $E\times B$ and the magnetic fluxes could be shown to be identically zero. Close to the edge of the plasma, as we saw in the previous section, the constancy of those quantities on the separatrix is violated; hence, we do not expect the $AF$ to be exactly zero. Indeed, as we see in Table 4, the averaging factors for both the magnetic drift flux and the equilibrium $E\times B$ flux are very small but nonzero, indicating that the bulk of these fluxes returns into the main plasma, whereas the $AF$ of the turbulent $E\times B$ flux is larger. The $AF$s of the heat fluxes are similar. 
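A toy illustration of the averaging factor, with invented poloidal flux shapes (uniform FSA weight for simplicity): a flux that exits in one place and re-enters in another averages to nearly zero, while a strictly outward ballooning flux does not.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 1000, endpoint=False)

# Illustrative poloidal flux patterns (not simulation data):
# an "equilibrium-like" flux that mostly returns to the plasma (near-zero mean)
# and a "turbulent-like" flux ballooning at the outboard midplane (theta ~ 0).
gamma_eq = np.sin(theta) + 0.05                  # small net outflow
gamma_turb = np.exp(-(theta / 0.5) ** 2)         # strictly outward

def averaging_factor(gamma):
    """AF = <Gamma>/max(Gamma): near 1 means no cancellation, near 0 means
    the bulk of the flux re-enters elsewhere on the surface."""
    return np.mean(gamma) / np.max(gamma)

print(averaging_factor(gamma_eq))    # ~0.05: strong cancellation
print(averaging_factor(gamma_turb))  # ~0.14: genuinely outward
```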
A few comments about the relative sizes of these integrals are in order. For the particle fluxes and the ion heat flux, the turbulent current is roughly three times larger than the equilibrium one. For the electron heat fluxes, though, the turbulent current is almost ten times larger than the equilibrium one, indicating the importance of turbulence for the electrons and supporting the main argument of the previous section. IV.2 Particle Balance in the SOL In this sub-section, we explore particle balance in the SOL by comparing the flux of electrons exiting across the separatrix with the exhaust flux in the SOL, which is dominated by the parallel flows. The competition between these cross-field and parallel transport processes is what sets the density SOL width. Regarding the separate contribution of each flux to setting the SOL width, the ratio of the currents does not tell the full story: while the turbulent flux is fully localized at the outboard midplane, contributing solely to the LFS SOL, the equilibrium fluxes have an important inward component at the HFS. Moreover, as mentioned in the previous sections, the equilibrium $E\times B$ particle flux cannot take electrons outside the separatrix. Therefore, we start with the assumption that the current from the turbulent $E\times B$ particle flux is the total electron current exiting the separatrix. Now, we take a Gaussian box one side of which is the part of the LCFS between the points where the turbulent $E\times B$ particle flux begins and ends. The top and bottom of this box are horizontal lines, starting at these two locations and extending outwards, deep into the SOL; their extent is left to be determined by the behavior of the normal fluxes on them. The right-hand side of the box is unspecified, but we assume that no flux leaks from this side of the box to the wall. In Figs. 
12-13 we have plotted the normal component of the particle fluxes at the bottom and top caps of the Gaussian box, respectively. The normal component of the particle flux includes contributions from the $E\times B$ flux, the magnetic drift flux and, importantly, the parallel flows. We are not concerned with distinguishing the equilibrium and turbulent $E\times B$ contributions, so we present only the total. An important note here is that, since we do not distinguish between equilibrium and turbulence, in the calculation of the $E\times B$ velocity we take $\frac{\partial\phi}{\partial\zeta}=0$. We believe that the error introduced by this approximation is minimal since the $E\times B$ contribution to the normal fluxes is only important in a very narrow region close to the separatrix, where the poloidal magnetic field is almost zero and hence $\frac{\partial\phi}{\partial\zeta}\approx B\cdot\nabla\phi=0$. Generally, the parallel flux contribution is dominant everywhere, and the only place where the other fluxes are relevant is in a very narrow channel close to the LCFS. In both figures, the negative $Z$ direction is downwards, which means that for the top cap negative fluxes are entering the box, whereas for the bottom cap negative fluxes are leaving. For the top cap, there is a well-defined point, around $\Psi_{N}\approx 1.16$, where all fluxes drop to zero; this is a reasonable place to truncate the integration of the fluxes in order to find the total current crossing the top of the box. The total current thus found is $8.61\times 10^{21}\,\mathrm{s}^{-1}=1.38\,\mathrm{kA}$, entering the box. For the bottom cap, things are not so clear-cut: although the fluxes coming from the $E\times B$ and magnetic drifts do indeed drop to zero, the parallel flux remains constant. This is to be expected, since a significant degree of ionization happens in the SOL. 
It is very hard to precisely locate a point at which to truncate the integration, that is, to find where the SOL ends and what remains beyond it is simply ionization. We believe a sensible point to stop the integration is $\Psi_{N}\approx 1.025$ (shown in Fig. 12). The total current found from this choice is $1.17\times 10^{22}\,\mathrm{s}^{-1}=1.87\,\mathrm{kA}$, exiting the box; extending the integration further out would increase this number. As stated above, the total electron current crossing into the box from the separatrix has to be the one coming from the turbulent $E\times B$ flux, given in Table 1 as $3.02\times 10^{21}\,\mathrm{s}^{-1}=0.48\,\mathrm{kA}$. Comparing this number to the net current found from the top and bottom caps of the box, we see that it accounts for $98\%$. A direct conclusion is that, as presumed in the Goldston model, the density channel is filled almost entirely by the turbulent electron flux. Of course, had we taken the limit of integration of the bottom cap further out, this percentage would have dropped; however, it is reasonable to assume that such a result would be severely contaminated by atomic processes that take place outside the LCFS and have nothing to do with the turbulence and flows that cross the separatrix. We note that even though we can safely claim that the electron contribution to the SOL channel is almost entirely turbulent, a similar statement cannot be made for the ions: from the available data we have no way of isolating the turbulent and neoclassical contributions, since in the ions' case turbulence exists on top of a neoclassical background. If we repeat the previous calculation for the electron heat fluxes, we find that both the top and bottom caps of the box have well-defined limits where all fluxes go to zero. 
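The bookkeeping behind the $98\%$ figure can be checked directly from the quoted currents (the elementary charge converts particles per second to amperes):

```python
# Particle-balance bookkeeping for the Gaussian box, using the currents
# quoted in the text and Table 1.
e = 1.602e-19                  # elementary charge (C)

I_top = 8.61e21                # particles/s entering through the top cap
I_bottom = 1.17e22             # particles/s leaving through the bottom cap
I_sep_turb = 3.02e21           # turbulent E x B current across the separatrix

# The net outflow through the caps must be supplied across the separatrix.
net_caps = I_bottom - I_top
print(net_caps)                      # 3.09e21 s^-1
print(I_sep_turb / net_caps)         # ~0.98: turbulence fills the density SOL
print(I_sep_turb * e / 1e3)          # ~0.48 kA, matching the quoted value
```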
However, comparing the total electron heat current exiting the Gaussian box through the caps with the electron heat current entering from the separatrix shows that the latter is more than twice the former. A plausible explanation is that heat is dissipated in this volume by various atomic processes, predominantly ionization; a detailed account of the relative strengths of such processes is beyond the scope of this paper. V Summary and Conclusions In this paper we have analyzed the results of an XGC1 DIII-D simulation, focusing on the fluxes exiting the separatrix. We reconstructed the $E\times B$ flux, distinguishing between the equilibrium and the turbulent part, and presented the poloidal patterns of the two particle fluxes. For the equilibrium flux, we traced its poloidal shape to the LCFS potential. We interpreted this potential structure as the result of ion drifts: specifically, we could clearly distinguish the impact of the $\nabla B$-drifts and X-point losses on the potential's global maximum and minimum at the bottom X-point and the top of the machine, respectively. For the poloidal potential variation at the LFS, we argued that it is due to the trapped-ion orbits close to the edge that exit and re-enter the confinement. To support this argument, we constructed a simplified model for the angular distribution function of the banana orbits' inflection points and showed that the resulting “trapped-particle” flux follows the shape of the equilibrium flux. Moreover, we noted that LFS edge electrons exhibit a significant departure from adiabaticity, which we attributed to the turbulent frequency and the collision rate being comparable to the transit frequency. The turbulent flux pattern was also given; it appears to be strongly ballooning but with a significant region of suppression. 
We contended that this suppression comes from the neoclassically generated ion $E\times B$ shear, which appears to be at just the right level to affect the turbulence in exactly that region. We presented the $E\times B$ heat fluxes for ions and electrons at the separatrix. From the difference in size between the turbulent heat fluxes of the two species it is clear that turbulence is more important for the electrons, since it is their only way of losing confinement: the ions exit the separatrix through neoclassical mechanisms, so the turbulent electron heat flux has to increase in order to maintain quasineutrality. The electrons were found to be isothermal, whereas the ion temperature exhibits significant drops near the X-points, in accordance with the predictions of X-loss theory. The small temperature difference between the upper and lower halves of the LFS can be explained, in part, by ion banana orbits exiting from a hotter and re-entering from a cooler part of the machine. The equilibrium and turbulent diffusivities of those fluxes were calculated, providing insight into the modes present, and numerical integrations were performed to obtain the flux-surface-averaged fluxes and the outgoing particle and heat currents from the separatrix. Lastly, we confirmed that anomalous electron diffusion accounts for almost all of the electrons filling the SOL particle channel. Given the importance of predicting the SOL width for modern tokamaks, more state-of-the-art gyrokinetic simulations need to be analyzed to elucidate the complex physical processes that give rise to it. The interaction between the ion drifts and the turbulence seems to be critical for setting the size of the channel, and the study of the fluxes and flows at the edge provides illuminating insights into the nature of the system dynamics. 
In future publications we will examine simulations from different tokamaks and look into the characteristics of the turbulent “blobs” from this and other simulations. These efforts will be directed towards obtaining a better understanding of the interaction of neoclassical and turbulent physics, and the extent to which the present results are applicable to other plasma and device regimes. Acknowledgements I. Keramidas Charidakos would like to acknowledge useful help with the data from Jugal Chowdhury at the early phase of this project. We acknowledge computing resources on Titan at OLCF through the 2015 INCITE and the 2016 ALCC awards. Work supported by the U.S. Department of Energy Office of Science, Office of Fusion Energy Sciences under Award Number DE-FG02-97ER54392 and by subcontract SO15882-C with PPPL under the U.S. Department of Energy HBPS SciDAC project. References [1] JW Connor, GF Counsell, SK Erents, SJ Fielding, B LaBombard, and K Morel. Comparison of theoretical models for scrape-off layer widths with data from compass-d, jet and alcator c-mod. Nuclear fusion, 39(2):169, 1999. [2] RJ Goldston. Heuristic drift-based model of the power scrape-off width in low-gas-puff h-mode tokamaks. Nuclear Fusion, 52(1):013009, 2011. [3] Choong Seock Chang, S Ku, Alberto Loarte, Vassili Parail, Florian Koechl, Michele Romanelli, Rajesh Maingi, J-W Ahn, T Gray, J Hughes, et al. Gyrokinetic projection of the divertor heat-flux width from present tokamaks to iter. Nuclear Fusion, 57(11):116023, 2017. [4] ET Meier, RJ Goldston, EG Kaveeva, MA Makowski, S Mordijck, VA Rozhansky, I Yu Senichenkov, and SP Voskoboynikov. Drifts, currents, and power scrape-off width in solps-iter modeling of diii-d. Nuclear Materials and Energy, 12:973–977, 2017. [5] ET Meier, RJ Goldston, EG Kaveeva, MA Makowski, S Mordijck, VA Rozhansky, I Yu Senichenkov, and SP Voskoboynikov. Analysis of drift effects on the tokamak power scrape-off width using solps-iter. 
Plasma Physics and Controlled Fusion, 58(12):125012, 2016. [6] JR Myra, DA Russell, DA D’Ippolito, J-W Ahn, Rajesh Maingi, RJ Maqueda, DP Lundberg, DP Stotler, SJ Zweben, J Boedo, et al. Reduced model simulations of the scrape-off-layer heat-flux width and comparison with experiment. Physics of Plasmas, 18(1):012305, 2011. [7] David A Russell, Daniel A D’Ippolito, James R Myra, John M Canik, Travis K Gray, and Stewart J Zweben. Modeling the effect of lithium-induced pedestal profiles on scrape-off-layer turbulence and the heat flux width. Physics of Plasmas, 22(9):092311, 2015. [8] AY Pankin, T Rafiq, AH Kritz, GY Park, CS Chang, D Brunner, RJ Groebner, JW Hughes, B LaBombard, JL Terry, et al. Kinetic modeling of divertor heat load fluxes in the alcator c-mod and diii-d tokamaks. Physics of Plasmas, 22(9):092511, 2015. [9] V Rozhansky, E Kaveeva, I Senichenkov, and E Vekshina. Structure of the classical scrape-off layer of a tokamak. Plasma Physics and Controlled Fusion, 60(3):035001, 2018. [10] D Reiser and T Eich. Drift-based scrape-off particle width in x-point geometry. Nuclear Fusion, 57(4):046011, 2017. [11] T Stoltzfus-Dueck. Tokamak-edge toroidal rotation due to inhomogeneous transport and geodesic curvature. Physics of Plasmas, 19(5):055908, 2012. [12] T Stoltzfus-Dueck. Transport-driven toroidal rotation in the tokamak edge. Physical review letters, 108(6):065002, 2012. [13] SH Müller, JA Boedo, KH Burrell, RA Moyer, DL Rudakov, WM Solomon, et al. Experimental investigation of the role of fluid turbulent stresses and edge plasma flows for intrinsic rotation generation in diii-d h-mode plasmas. Physical review letters, 106(11):115001, 2011. [14] J Loizu, P Ricci, FD Halpern, S Jolliet, and A Mosetto. Intrinsic toroidal rotation in the scrape-off layer of tokamaks. Physics of Plasmas, 21(6):062309, 2014. [15] RJ Groebner, KH Burrell, WM Solomon, et al. Intrinsic toroidal velocity near the edge of diii-d h-mode plasmas. Nuclear Fusion, 49(8):085020, 2009. 
[16] CS Chang and S Ku. Spontaneous rotation sources in a quiescent tokamak edge plasma. Physics of Plasmas, 15(6):062510, 2008. [17] Janghoon Seo, CS Chang, S Ku, JM Kwon, W Choe, and Stefan H Müller. Intrinsic momentum generation by a combined neoclassical and turbulence mechanism in diverted diii-d plasma edge. Physics of Plasmas, 21(9):092501, 2014. [18] AV Chankin, DP Coster, N Asakura, G Corrigan, SK Erents, W Fundamenski, HW Müller, RA Pitts, PC Stangeby, and M Wischmeier. A possible role of radial electric field in driving parallel ion flow in scrape-off layer of divertor tokamaks. Nuclear Fusion, 47(8):762, 2007. [19] K Hoshino, A Hatayama, N Asakura, H Kawashima, R Schneider, and D Coster. Numerical analysis of the sol/divertor plasma flow with the effect of drifts. Journal of nuclear materials, 363:539–543, 2007. [20] GS Kirnev, G Corrigan, D Coster, SK Erents, W Fundamenski, GF Matthews, and RA Pitts. Edge2d code simulations of sol flows and in–out divertor asymmetries in jet. Journal of nuclear materials, 337:271–275, 2005. [21] B LaBombard, JE Rice, AE Hubbard, JW Hughes, M Greenwald, J Irby, Y Lin, B Lipschultz, ES Marmar, CS Pitcher, et al. Transport-driven scrape-off-layer flows and the boundary conditions imposed at the magnetic separatrix in a tokamak plasma. Nuclear fusion, 44(10):1047, 2004. [22] A Yu Pigarov, SI Krasheninnikov, B LaBombard, and TD Rognlien. Simulation of parallel sol flows with uedge. Contributions to Plasma Physics, 48(1-3):82–88, 2008. [23] RA Pitts, P Andrew, X Bonnin, AV Chankin, Y Corre, G Corrigan, D Coster, I Duran, T Eich, SK Erents, et al. Edge and divertor physics with reversed toroidal field in jet. Journal of Nuclear Materials, 337:146–153, 2005. [24] AY Aydemir. Pfirsch–schlüter current-driven edge electric fields and their effect on the l–h transition power threshold. Nuclear Fusion, 52(6):063026, 2012. [25] Ker-Chung Shaing and EC Crume Jr. 
Bifurcation theory of poloidal rotation in tokamaks: A model for l-h transition. Physical Review Letters, 63(21):2369, 1989. [26] CS Chang, S Ku, GR Tynan, R Hager, RM Churchill, I Cziegler, M Greenwald, AE Hubbard, and JW Hughes. Fast low-to-high confinement mode bifurcation dynamics in a tokamak edge plasma gyrokinetic simulation. Physical review letters, 118(17):175001, 2017. [27] RJ Goldston. Theoretical aspects and practical implications of the heuristic drift sol model. Journal of Nuclear Materials, 463:397–400, 2015. [28] Thomas Eich, AW Leonard, RA Pitts, W Fundamenski, RJ Goldston, TK Gray, A Herrmann, A Kirk, A Kallenbach, O Kardaun, et al. Scaling of the tokamak near the scrape-off layer h-mode power width and implications for iter. Nuclear fusion, 53(9):093031, 2013. [29] CS Chang, S Ku, PH Diamond, Z Lin, S Parker, TS Hahm, and N Samatova. Compressed ion temperature gradient turbulence in diverted tokamak edge. Physics of Plasmas, 16(5):056108, 2009. [30] S Ku, CS Chang, and PH Diamond. Full-f gyrokinetic particle simulation of centrally heated global itg turbulence from magnetic axis to edge pedestal top in a realistic tokamak geometry. Nuclear Fusion, 49(11):115021, 2009. [31] S Ku, Robert Hager, Choong-Seock Chang, JM Kwon, and Scott E Parker. A new hybrid-lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma. Journal of Computational Physics, 315:467–475, 2016. [32] RM Churchill, CS Chang, S Ku, and J Dominski. Pedestal and edge electrostatic turbulence characteristics from an xgc1 gyrokinetic simulation. Plasma Physics and Controlled Fusion, 59(10):105014, 2017. [33] DA D’Ippolito, JR Myra, and SJ Zweben. Convective transport by intermittent blob-filaments: Comparison of theory and experiment. Physics of Plasmas, 18(6):060501, 2011. [34] JR Myra, DA D’Ippolito, and DA Russell. Turbulent transport regimes and the scrape-off layer heat flux width. Physics of Plasmas, 22(4):042516, 2015. 
[35] JR Myra, DA Russell, and SJ Zweben. Theory based scaling of edge turbulence and implications for the scrape-off layer width. Physics of Plasmas, 23(11):112502, 2016. [36] I Furno, B Labit, M Podestà, A Fasoli, SH Müller, FM Poli, P Ricci, C Theiler, S Brunner, A Diallo, et al. Experimental observation of the blob-generation mechanism from interchange waves in a plasma. Physical review letters, 100(5):055004, 2008. [37] N Bisai, A Das, S Deshpande, R Jha, P Kaw, A Sen, and R Singh. Formation of a density blob and its dynamics in the edge and the scrape-off layer of a tokamak plasma. Physics of plasmas, 12(10):102515, 2005. [38] CS Chang, Seunghoe Kue, and H Weitzner. X-transport: A baseline nonambipolar transport in a diverted tokamak plasma edge. Physics of Plasmas, 9(9):3884–3892, 2002. [39] Seunghoe Ku, Hoyul Baek, and CS Chang. Property of an x-point generated velocity-space hole in a diverted tokamak plasma edge. Physics of plasmas, 11(12):5626–5633, 2004. [40] James L Luxon. A design retrospective of the diii-d tokamak. Nuclear Fusion, 42(5):614, 2002. [41] AY Aydemir. An intrinsic source of radial electric field and edge flows in tokamaks. Nuclear Fusion, 49(6):065001, 2009. [42] Randy M Churchill, John M Canik, CS Chang, R Hager, Anthony W Leonard, Rajesh Maingi, R Nazikian, and Daren P Stotler. Total fluid pressure imbalance in the scrape-off layer of tokamak plasmas. Nuclear Fusion, 57(4):046029, 2017. [43] John S deGrassie, Jose A Boedo, and Brian A Grierson. Thermal ion orbit loss and radial electric field in diii-d. Physics of Plasmas, 22(8):080701, 2015. [44] DJ Battaglia, KH Burrell, CS Chang, S Ku, JS Degrassie, and BA Grierson. Kinetic neoclassical transport in the h-mode pedestal. Physics of Plasmas, 21(7):072508, 2014. [45] Kenro Miyamoto. Direct ion orbit loss near the plasma edge of a divertor tokamak in the presence of a radial electric field. Nuclear fusion, 36(7):927, 1996. [46] Weston M Stacey and Matthew T Schumann. 
The distribution of ion orbit loss fluxes of ions and energy from the plasma edge across the last closed flux surface into the scrape-off layer. Physics of Plasmas, 22(4):042504, 2015. [47] Weston M Stacey. The effect of ion orbit loss and x-loss on the interpretation of ion energy and particle transport in the diii-d edge plasma. Physics of Plasmas, 18(10):102504, 2011. [48] WM Stacey. Recent developments in plasma edge theory. Contributions to Plasma Physics, 56(6-8):495–503, 2016. [49] WM Stacey, M-H Sayer, J-P Floyd, and RJ Groebner. Interpretation of changes in diffusive and non-diffusive transport in the edge plasma during pedestal buildup following a low-high transition in diii-d. Physics of Plasmas, 20(1):012509, 2013. [50] TM Wilks, WM Stacey, and TE Evans. Calculation of the radial electric field from a modified ohm’s law. Physics of Plasmas, 24(1):012505, 2017. [51] TM Wilks and WM Stacey. Improvements to an ion orbit loss calculation in the tokamak edge. Physics of Plasmas, 23(12):122505, 2016. [52] SH Hahn, Seunghoe Ku, and CS Chang. Wall intersection of ion orbits induced by fast transport of pedestal plasma over an electrostatic potential hill in a tokamak plasma edge. Physics of plasmas, 12(10):102501, 2005. [53] KH Burrell. Effects of e$\times$ b velocity shear and magnetic shear on turbulence and transport in magnetic confinement devices. Physics of Plasmas, 4(5):1499–1518, 1997. [54] TS Hahm and KH Burrell. Flow shear induced fluctuation suppression in finite aspect ratio shaped tokamak plasma. Physics of Plasmas, 2(5):1648–1651, 1995. [55] PW Terry. Suppression of turbulence and transport by sheared flow. Reviews of Modern Physics, 72(1):109, 2000. [56] H Biglari, PH Diamond, and PW Terry. Influence of sheared poloidal rotation on edge turbulence. Physics of Fluids B: Plasma Physics, 2(1):1–4, 1990. [57] PA Abdoul, David Dickinson, CM Roach, and Howard Read Wilson. Generalised ballooning theory of two-dimensional tokamak modes. 
Plasma Physics and Controlled Fusion, 60(2):025011, 2017. [58] Federico D Halpern, Sebastien Jolliet, Joaquim Loizu, Annamaria Mosetto, and Paolo Ricci. Ideal ballooning modes in the tokamak scrape-off layer. Physics of Plasmas, 20(5):052306, 2013. [59] Davide Galassi, Patrick Tamain, Hugo Bufferand, Guido Ciraolo, Ph Ghendrih, C Baudoin, Clothilde Colin, Nicolas Fedorczak, N Nace, and Eric Serre. Drive of parallel flows by turbulence and large-scale e$\times$ b transverse transport in divertor geometry. Nuclear Fusion, 57(3):036029, 2017. [60] M. Kotschenreuther, X. Liu, D. Hatch, L. Zheng, S. Mahajan, A. Diallo, R. Groebner, A. Hubbard, J. Hughes, C. Maggi, S. Saarelma, and JET Contributors. Gyrokinetic analysis of pedestal transport. In APS Meeting Abstracts, page TP11.079, October 2017.
The almost-sure population growth rate in branching Brownian motion with a quadratic breeding potential J. Berestycki, É. Brunet, J. W. Harris and S. C. Harris. J. Berestycki: Laboratoire de Probabilités et Modèles Aléatoires, Université Pierre et Marie Curie (UPMC Paris VI), CNRS UMR 7599, 175 rue du Chevaleret, 75013 Paris, France; email: julien.berestycki@upmc.fr. É. Brunet: Laboratoire de Physique Statistique, École Normale Supérieure, UPMC Université Paris 6, Université Paris Diderot, CNRS, 24 rue Lhomond, 75005 Paris, France; email: eric.brunet@lps.ens.fr. J. W. Harris: Department of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW, U.K.; email: john.harris@bristol.ac.uk; supported by the Heilbronn Institute for Mathematical Research. S. C. Harris: Department of Mathematical Sciences, University of Bath, Claverton Down, Bath, BA2 7AY, U.K.; email: s.c.harris@bath.ac.uk. Abstract In this note we consider a branching Brownian motion (BBM) on $\mathbb{R}$ in which a particle at spatial position $y$ splits into two at rate $\beta y^{2}$, where $\beta>0$ is a constant. This is a critical breeding rate for BBM in the sense that the expected population size blows up in finite time while the population size remains finite, almost surely, for all time. We find an asymptotic for the almost-sure rate of growth of the population. AMS 2000 subject classification: 60J80. Keywords: Branching Brownian motion. 1 Introduction We consider a branching Brownian motion with a quadratic breeding potential. Each particle diffuses as a driftless Brownian motion on $\mathbb{R}$, and splits into two particles at rate $\beta y^{2}$, where $\beta>0$ and $y$ is the spatial position of the particle. We let $N_{t}$ be the set of particles alive at time $t$, and then, for each $u\in N_{t}$, $Y_{u}(t)$ is the spatial position of particle $u$ at time $t$ (and, for $0\leq s<t$, $Y_{u}(s)$ is the spatial position of the unique ancestor of $u$ alive at time $s$). We will call this process the $(\beta y^{2};\mathbb{R})$-BBM. 
It is known that quadratic breeding is a critical rate for population explosions. If the breeding rate were instead $\beta|y|^{p}$ for $p>2$, the population size would almost surely explode in a finite time. However, for the $(\beta y^{2};\mathbb{R})$-BBM the expected number of particles blows up in a finite time while the total number of particles alive remains finite almost surely, for all time. For $p\in[0,2)$ the expected population size remains finite for all time. See Itô and McKean [3, pp 200–211] for a proof of these facts using solutions to related differential equations. In this note we prove the following result on the almost sure rate of growth of $|N_{t}|$. Theorem 1. Suppose that the initial configuration consists of a finite number of particles at arbitrary positions in $\mathbb{R}$. Then, almost surely, $$\lim_{t\to\infty}\frac{\ln\ln|N_{t}|}{t}=2\sqrt{2\beta}.$$ We define $R_{t}:=\max_{u\in N_{t}}Y_{u}(t)$ to be the right-most particle in the $(\beta y^{2};\mathbb{R})$-BBM. In Harris and Harris [2] it was shown that $$\lim_{t\to\infty}\frac{\ln R_{t}}{t}=\sqrt{2\beta}$$ (1) almost surely, and this result is crucial in our proof of Theorem 1 because it allows us both some control over the maximum breeding rate in the BBM, and also to show that the growth rate of $|N_{t}|$ is dominated by particles with spatial positions near $R_{t}$. Indeed, a BBM in which every particle has branching rate $\beta R_{t}^{2}$ would see its population grow as in Theorem 1, and this point will provide our upper bound. Furthermore we shall see that, for any $\delta>0$, at all sufficiently large times $t$, a single particle located near $R_{t}$ will single-handedly build during time interval $[t,t+\delta]$ a progeny so large that it is again of the magnitude given by Theorem 1, which will prove the lower bound. 
2 Proof of the growth rate To prove Theorem 1, it is sufficient to consider an initial configuration consisting of a single particle at position $x\in\mathbb{R}$. Upper bound. Since the breeding potential is symmetric about the origin, we have from equation (1) that for fixed $\varepsilon>0$ there exists, almost surely, a random time $\tau<\infty$ such that for all $u\in N_{t}$, $$\ln|Y_{u}(t)|<(\sqrt{2\beta}+\varepsilon)t,\quad\text{for all }t>\tau.$$ As a consequence, the breeding rate of any particle in the population is bounded by $\bar{\beta}_{t}:=\beta e^{2(\sqrt{2\beta}+\varepsilon)t}$ for all $t>\tau$. We now introduce a coupled branching process $(\bar{N}_{t},t\geq 0)$ as follows. For each $t\geq 0$ it consists of a population of particles $\{u:u\in\bar{N}_{t}\}$ whose positions are denoted by $\{\bar{Y}_{u}(t):u\in\bar{N}_{t}\}.$ Until time $\tau$ the two processes $N_{t}$ and $\bar{N}_{t}$ coincide, and for all $t\geq\tau$ we will have $N_{t}\subseteq\bar{N}_{t}.$ Furthermore, we want all particles in $\bar{N}_{t}$ to branch at rate $\bar{\beta}_{t}$ after time $\tau$. In order for this to make sense we must construct $(\bar{N}_{t},t\geq 0)$ conditionally on $(N_{t},t\geq 0).$ Given $(N_{t},t\geq 0),$ we let $(\bar{N}_{t},t\leq\tau)=(N_{t},t\leq\tau)$. For $t\geq\tau$, each particle $u\in N_{t}$ gives birth to an extra particle $v\in\bar{N}_{t}\backslash N_{t}$ at rate $\bar{\beta}_{t}-\beta Y_{u}(t)^{2}$ (note that if $u\in N_{t}$ then $Y_{u}(t)=\bar{Y}_{u}(t)$). Each particle thus created starts an independent BBM in $(\bar{N}_{t},t\geq 0)$ with time-dependent branching rate $\bar{\beta}_{t}$ at time $t$. Thus it is clear that all particles in $\bar{N}_{t}$ branch at rate $\bar{\beta}_{t}$ for $t\geq\tau$, and furthermore that $$|N_{t}|\leq|\bar{N}_{t}|$$ for all $t\geq 0.$ The upshot is that after time $\tau$, $\bar{N}_{t}$ is a pure birth process, and as such is very well studied. 
Consider $(Z_{t},t\geq\tau)$ a pure birth process starting with a single particle at time $\tau$ with inhomogeneous rate $(\bar{\beta}_{t},t\geq\tau)$, and define $$V_{t}:=\inf\Big\{s\geq\tau:\int_{\tau}^{s}\bar{\beta}_{u}\,\mathrm{d}u\geq t\Big\}.$$ It is easily seen that $(Z_{V_{t}},t\geq 0)$ is simply a Yule process (i.e. a pure birth process in which particles split into two at rate 1) and so we know that $$Z_{V_{t}}e^{-t}\to W_{\infty}\;\text{ as }t\to\infty$$ almost surely, where $W_{\infty}$ has an exponential distribution with mean 1. However, since $\int_{\tau}^{V_{t}}\bar{\beta}_{u}\,\mathrm{d}u=t$, this implies $$Z_{t}\exp\Big(-\int_{\tau}^{t}\bar{\beta}_{s}\,\mathrm{d}s\Big)\to W_{\infty}\;\text{ as }t\to\infty.$$ As $\bar{N}_{t}$ for $t\geq\tau$ is composed of the union of the offspring of the $|N_{\tau}|$ particles present at time $\tau$, we can write $$M_{t}:=|\bar{N}_{t}|\exp\Big(-\int_{\tau}^{t}\bar{\beta}_{s}\,\mathrm{d}s\Big)=\sum_{i=1}^{|N_{\tau}|}Z^{(i)}_{t}\exp\Big(-\int_{\tau}^{t}\bar{\beta}_{s}\,\mathrm{d}s\Big)$$ where the $Z^{(i)}$ are independent, identically distributed copies of a pure birth process with birth rate $\bar{\beta}_{t}.$ By conditioning on the value of $|N_{\tau}|$ (which is almost surely finite), we see that $$M_{t}\to\bar{M}_{\infty}\in(0,\infty)\;\text{ as }t\to\infty$$ where, more precisely, $\bar{M}_{\infty}$ is distributed as the sum of $|N_{\tau}|$ independent exponential variables with mean 1. Thus $$\left(\ln|\bar{N}_{t}|-\int_{\tau}^{t}\bar{\beta}_{s}\,\mathrm{d}s\right)\to\ln\bar{M}_{\infty}\;\text{ as }t\to\infty$$ and since $\int_{\tau}^{t}\bar{\beta}_{s}\,\mathrm{d}s=\frac{\beta}{c}[\exp(ct)-\exp(c\tau)]$ with $c=2(\sqrt{2\beta}+\varepsilon)$, it follows that $$\lim_{t\to\infty}\frac{\ln\ln|\bar{N}_{t}|}{t}=2(\sqrt{2\beta}+\varepsilon),$$ almost surely. 
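After the time change everything rests on the classical Yule process, which can be simulated exactly: starting from one particle, while there are $k$ particles the next split arrives after an independent $\mathrm{Exp}(k)$ waiting time. A quick Monte Carlo check of the convergence $Z_{t}e^{-t}\to W_{\infty}$ with $E[W_{\infty}]=1$ (illustrative, not part of the proof):

```python
import math
import random

def yule_population(t, rng):
    """Exact sample of a Yule process at time t, started from one particle:
    while there are n particles the next split arrives after an Exp(n) time."""
    n, s = 1, 0.0
    while True:
        s += rng.expovariate(n)
        if s > t:
            return n
        n += 1

rng = random.Random(0)
t = 6.0
samples = [yule_population(t, rng) for _ in range(1500)]
mean_scaled = sum(samples) * math.exp(-t) / len(samples)
print(mean_scaled)   # close to E[W_infinity] = 1
```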
Finally, since $\varepsilon$ may be arbitrarily small, the coupling constructed above gives $$\limsup_{t\to\infty}\frac{\ln\ln|N_{t}|}{t}\leq 2\sqrt{2\beta},$$ almost surely, as required. Lower bound. Fix $\varepsilon>0$. From (1), we know that there exists, almost surely, a random time $\tau^{\prime}<\infty$ such that for all $t>\tau^{\prime}$, $$R_{t}>e^{(\sqrt{2\beta}-\varepsilon)t}.$$ (2) Fix $\delta>0$ and let $t_{n}:=n\delta,n\in\{0,1,2,\ldots\}.$ We will show that it suffices to consider only the offspring of the right-most particles at the times $t_{n}$ to prove the lower bound for the global population growth rate. There are two steps to the argument. First, we couple the sub-population descended from the right-most particle at time $t_{n}$ during the time interval $[t_{n},t_{n+1}]$ to a BBM in which the expected population size remains finite. Second, we show that, as $n$ tends to infinity, the population sizes of the coupled processes grow sufficiently quickly. For this, we use equation (2) to give a lower bound on the breeding rate in the coupled processes. With a slight abuse of notation, let $N^{(n)}_{s}$ be the set of descendants of $R_{t_{n}}$ (that is, of the particle occupying the right-most position at time $t_{n}$) at time $s\in[t_{n},t_{n+1}]$. 
As for the upper bound we introduce a coupled process $(\widetilde{N}^{(n)}_{s},s\in[t_{n},t_{n+1}]).$ This time we have that for all $s\in[t_{n},t_{n+1}]$, $\widetilde{N}^{(n)}_{s}\subseteq N^{(n)}_{s}\subseteq N_{s}.$ More precisely, $(\widetilde{N}^{(n)}_{s},s\in[t_{n},t_{n+1}])$ is obtained from $(N^{(n)}_{s},s\in[t_{n},t_{n+1}])$ by cancelling some of the split events in $N^{(n)}_{s},$ in the following way: if at time $s$ a particle $u\in\widetilde{N}^{(n)}_{s}\subseteq N^{(n)}_{s}$ splits in the original process $N^{(n)}_{s},$ it also splits in $\widetilde{N}^{(n)}_{s}$ with probability $$\frac{\widetilde{\beta}_{n}(Y_{u}(s))}{\beta Y_{u}(s)^{2}}\in[0,1],$$ where $$\widetilde{\beta}_{n}(x):=\begin{cases}\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}&\text{if }x\geq e^{(\sqrt{2\beta}-2\varepsilon)t_{n}},\\ 0&\text{otherwise.}\end{cases}$$ If the split event is rejected, one of the two offspring in $N^{(n)}_{s}$ is chosen at random to be the one which we keep in $\widetilde{N}^{(n)}_{s}.$ The process $(\widetilde{N}^{(n)}_{s},t_{n}\leq s\leq t_{n+1})$ is thus a BBM started at time $t_{n}$ from a single particle at position $R_{t_{n}}$ with space-dependent branching rate $\widetilde{\beta}_{n}$. Observe that we trivially have $\widetilde{N}^{(n)}_{s}\subseteq N^{(n)}_{s}$ for all $s\in[t_{n},t_{n+1}],$ as announced, since we obtain $\widetilde{N}^{(n)}_{s}$ from $N^{(n)}_{s}$ merely by erasing some particles from $N^{(n)}.$ We also require another process on the same probability space, coupled to $(\widetilde{N}_{s}^{(n)},s\in[t_{n},t_{n+1}])$. This process is denoted $(\widehat{N}^{(n)}_{s},s\in[t_{n},t_{n+1}])$, and is defined by adding particles to $(\widetilde{N}^{(n)}_{s},s\in[t_{n},t_{n+1}])$ in such a way that every particle in $\widehat{N}^{(n)}$ breeds at constant rate $\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}$, irrespective of its spatial position. 
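The rejection of split events used to pass from $N^{(n)}$ to $\widetilde{N}^{(n)}$ is a standard thinning construction: keeping each event of a rate-$r$ process independently with probability $p$ produces a rate-$pr$ process. A generic illustration with Poisson processes (the rates and sample sizes are illustrative assumptions, not tied to the proof):

```python
import random

def poisson_times(rate, T, rng):
    """Event times of a rate-`rate` Poisson process on [0, T]."""
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > T:
            return out
        out.append(t)

def thin(times, p, rng):
    """Keep each event independently with probability p: the result is a
    Poisson process of rate p * (original rate)."""
    return [t for t in times if rng.random() < p]

rng = random.Random(5)
full = [len(poisson_times(10.0, 1.0, rng)) for _ in range(2000)]
kept = [len(thin(poisson_times(10.0, 1.0, rng), 0.3, rng)) for _ in range(2000)]
print(sum(full) / 2000, sum(kept) / 2000)   # close to 10 and 3
```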
More specifically, for $s\in[t_{n},t_{n+1}]$, every particle $u\in\widetilde{N}_{s}^{(n)}$ gives birth to an extra particle $v\in\widehat{N}_{s}^{(n)}\backslash\widetilde{N}_{s}^{(n)}$ at rate $$\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}\mathbf{1}_{\{Y_{u}(s)<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}\}},$$ and each particle thus created initiates an independent BBM in $(\widehat{N}^{(n)}_{s},s\in[t_{n},t_{n+1}])$ with constant breeding rate $\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}$. Note that we have $\widetilde{N}^{(n)}_{s}\subseteq\widehat{N}_{s}^{(n)}$ for all $s\in[t_{n},t_{n+1}]$. Now for $v\in N^{(n)}_{t_{n+1}}$ define the event $$A_{n}(v):=\Big\{\min_{s\in[t_{n},t_{n+1}]}Y_{v}(s)<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}\Big\},$$ and set $A_{n}=\bigcup_{v\in\widetilde{N}^{(n)}_{t_{n+1}}}A_{n}(v)$, i.e., the event that there exists a descendant $v$ of $R_{t_{n}}$ (in the modified process $\widetilde{N}^{(n)}$) such that $Y_{v}(s)<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}$ for some $s\in[t_{n},t_{n+1}]$. Observe that $$\Big\{\widetilde{N}_{t_{n+1}}^{(n)}\neq\widehat{N}_{t_{n+1}}^{(n)}\Big\}\subseteq A_{n}.$$ (3) We also define the event $B_{n}$ as $$B_{n}:=\Big\{R_{t_{n}}>e^{(\sqrt{2\beta}-\varepsilon)t_{n}}\Big\},$$ and let $\{{\mathcal{F}}_{t}\}_{t\geq 0}$ be the natural filtration for the $(\beta y^{2};\mathbb{R})$-BBM. Our aim is to show that, almost surely, only finitely many of the events $A_{n}$ occur, from which it follows that the populations of the coupled subprocesses $\widetilde{N}^{(n)}$ are equal to the populations of the $\widehat{N}^{(n)}$ processes for all sufficiently large $n$, almost surely. Then the final step is to show that the populations $\widehat{N}^{(n)}$ grow sufficiently quickly to imply the desired lower bound on the size of the original population. 
We start by writing the event of interest as $$A_{n}=(A_{n}\cap B_{n})\cup(A_{n}\cap B_{n}^{c}),$$ and we recall from equation (1) that $P(\limsup B_{n}^{c})=0$, and hence only finitely many of the events $A_{n}\cap B_{n}^{c}$ occur. We now use a standard ‘many-to-one’ argument (see, for example, Hardy and Harris [1]) to bound the probabilities $P(A_{n}\cap B_{n}|{\mathcal{F}}_{t_{n}})$. Let $\mathbb{P}^{x}$ be the law of a driftless Brownian motion $Y$ started at the point $x\in\mathbb{R}$ (and $\mathbb{E}^{x}$ the expectation with respect to this law). Observing that $B_{n}\in{\mathcal{F}}_{t_{n}}$, we have $$\mathbf{1}_{B_{n}}P(A_{n}|{\mathcal{F}}_{t_{n}})\leq\mathbf{1}_{B_{n}}E\bigg[\sum_{v\in\widetilde{N}^{(n)}_{t_{n+1}}}\mathbf{1}_{A_{n}(v)}\bigg|{\mathcal{F}}_{t_{n}}\bigg]=\mathbf{1}_{B_{n}}\mathbb{E}^{R_{t_{n}}}\bigg(\exp\Big(\int_{0}^{\delta}\widetilde{\beta}_{n}(Y_{s})\,\mathrm{d}s\Big);\min_{s\in[0,\delta]}Y_{s}<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}\bigg)\leq\exp\Big(\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}\delta\Big)\,\mathbb{P}^{e^{(\sqrt{2\beta}-\varepsilon)t_{n}}}\left(\min_{s\in[0,\delta]}Y_{s}<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}\right).$$ Using the reflection principle, we obtain that there exists $C>0$ such that $$\mathbb{P}^{e^{(\sqrt{2\beta}-\varepsilon)t_{n}}}\left(\min_{s\in[0,\delta]}Y_{s}<e^{(\sqrt{2\beta}-2\varepsilon)t_{n}}\right)=2\mathbb{P}^{0}\Big(Y_{\delta}<e^{(\sqrt{2\beta}-\varepsilon)t_{n}}(e^{-\varepsilon t_{n}}-1)\Big)\leq C\exp\Big(-\frac{1}{2\delta}\big(e^{(\sqrt{2\beta}-\varepsilon)t_{n}}(e^{-\varepsilon t_{n}}-1)\big)^{2}\Big).$$ Combining this series of inequalities gives, almost surely, a faster than exponentially decaying upper bound on $P(A_{n}\cap B_{n}|{\mathcal{F}}_{t_{n}})$, and so we have shown that $$\sum_{n\geq 0}P(A_{n}\cap B_{n}|{\mathcal{F}}_{t_{n}})<\infty$$ almost surely. Since $A_{n}\cap B_{n}\in{\mathcal{F}}_{t_{n+1}}$, Lévy’s extension of the Borel-Cantelli lemmas (see Williams [4, Theorem 12.15]) lets us conclude that, almost surely, only finitely many of the events $A_{n}\cap B_{n}$ occur. Thus only finitely many of the events $A_{n}$ occur, almost surely. Recalling equation (3) we see that, almost surely, there exists a random integer $n_{1}<\infty$ such that for all $n>n_{1}$, we have $\widetilde{N}^{(n)}_{s}=\widehat{N}_{s}^{(n)}$ for all $s\in[t_{n},t_{n+1}]$. The population size $|\widehat{N}_{s}^{(n)}|$ is a Yule process with constant breeding rate $\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}$, for $s\in[t_{n},t_{n+1}]$. Hence $|\widehat{N}^{(n)}_{t_{n+1}}|$ has a geometric distribution with parameter $$p_{n}:=\exp\Big(-\delta\beta e^{2(\sqrt{2\beta}-2\varepsilon)t_{n}}\Big).$$ If we define $$q_{n}:=\Big\lceil\exp\Big(e^{2(\sqrt{2\beta}-3\varepsilon)t_{n}}\Big)\Big\rceil,$$ then the probability that the number of particles in $\widehat{N}^{(n)}_{t_{n+1}}$ is smaller than $q_{n}$ is $$P(|\widehat{N}^{(n)}_{t_{n+1}}|\leq q_{n})=1-(1-p_{n})^{q_{n}}=p_{n}q_{n}+o(p_{n}q_{n}).$$ Since $p_{n}q_{n}$ decays super-exponentially in $n$ it is summable, so the Borel-Cantelli lemmas again give that there exists, almost surely, a random integer $n_{2}<\infty$ such that $|\widehat{N}^{(n)}_{t_{n+1}}|>q_{n}$ for all $n>n_{2}$. Finally, for all $n>\max\{n_{1},n_{2}\}$ and all $t\in[t_{n+1},t_{n+2}]$, there are at least as many particles in $N_{t}$ as there are descendants of $R_{t_{n}}$ at time $t_{n+1}$, which is to say that $|N_{t}|\geq|\widetilde{N}^{(n)}_{t_{n+1}}|$. Hence, for $n$ sufficiently large, $$\frac{\ln\ln|N_{t}|}{t}\geq\frac{\ln\ln|\widetilde{N}^{(n)}_{t_{n+1}}|}{t_{n}}\frac{t_{n}}{t}\geq 2\sqrt{2\beta}-7\varepsilon,$$ (4) almost surely. 
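The Gaussian bound above follows from the reflection principle, $\mathbb{P}^{x}(\min_{s\in[0,\delta]}Y_{s}<a)=2\,\mathbb{P}^{0}(Y_{\delta}<a-x)$ for $a<x$, together with a standard Gaussian tail estimate. A Monte Carlo sanity check of the reflection identity (the parameters are illustrative; the discretised minimum slightly undershoots the true hitting probability):

```python
import math
import random

def dips_below(x, a, delta, n_steps, rng):
    """One discretised Brownian path from x: does it go below a by time delta?"""
    dt = delta / n_steps
    y = x
    for _ in range(n_steps):
        y += rng.gauss(0.0, math.sqrt(dt))
        if y < a:
            return True
    return False

rng = random.Random(2)
x, a, delta = 1.0, 0.0, 1.0
trials = 3000
emp = sum(dips_below(x, a, delta, 400, rng) for _ in range(trials)) / trials
# Reflection principle: P^x(min < a) = 2 P^0(Y_delta < a - x) = erfc((x-a)/sqrt(2*delta))
exact = math.erfc((x - a) / math.sqrt(2 * delta))
print(emp, exact)
```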
(Certainly we must have $n>\max\{n_{1},n_{2}\}$, but we may require that $n$ be larger still in order that the multiplicative factor $t_{n}/t$ is close enough to 1 for equation (4) to hold.) We can take $\varepsilon$ to be arbitrarily small, and so obtain $$\liminf_{t\to\infty}\frac{\ln\ln|N_{t}|}{t}\geq 2\sqrt{2\beta},$$ almost surely, which completes the proof. References [1] Hardy, R., and Harris, S. C. A spine approach to branching diffusions with applications to ${L}^{p}$-convergence of martingales. In Séminaire de Probabilités XLII, vol. 1979 of Lecture Notes in Math. Springer, Berlin, 2009, pp. 281–330. [2] Harris, J. W., and Harris, S. C. Branching Brownian motion with an inhomogeneous breeding potential. Ann. Inst. Henri Poincaré Probab. Stat. 45, 3 (2009), 793–801. [3] Itô, K., and McKean, H. P. Diffusion processes and their sample paths. Die Grundlehren der Mathematischen Wissenschaften, Band 125. Academic Press Inc., Publishers, New York, 1965. [4] Williams, D. Probability with martingales. Cambridge University Press, 1991.
Area expanding ${\mathscr{C}}^{1+\alpha}$ Suspension Semiflows Oliver Butterley oliver.butterley@univie.ac.at Faculty of Mathematics, Universität Wien, Nordbergstraße 15, 1090 Wien, Austria (Date: January 11, 2021) Abstract. We study a large class of suspension semiflows which contains the Lorenz semiflows. This is a class with low regularity (merely ${\mathscr{C}}^{1+\alpha}$) and where the return map is discontinuous and the return time is unbounded. We establish the functional analytic framework which is typically employed to study rates of mixing. The Laplace transform of the correlation function is shown to admit a meromorphic extension to a strip about the imaginary axis. As part of this argument we give a new result concerning the quasi-compactness of weighted transfer operators for piecewise ${\mathscr{C}}^{1+\alpha}$ expanding interval maps. It is a pleasure to thank Carlangelo Liverani for many helpful discussions and comments. Research partially supported by the ERC Advanced Grant MALADY (246953). 1. Introduction Some dynamical systems exhibit very good statistical properties in the sense of, for example, exponential decay of correlation and the stability of the invariant measure under deterministic or random perturbations. Such properties have been shown for many discrete-time dynamical systems and more recently for some flows. Very strong results now exist for smooth contact Anosov flows [11, 19, 9, 30, 31, 13]. Good results also exist for suspension flows over uniformly-expanding Markov maps when the system is ${\mathscr{C}}^{2}$ or smoother [27, 7, 5]. The above are all rather smooth and regular systems, and arguably not representative of many physically relevant systems. Two important examples come to mind: dispersing billiards [10] and the Lorenz flow [21]. The fine statistical properties of both these systems remain, to some extent, open problems. We therefore direct our interest to systems with rather low regularity. 
Some recent progress includes the proof of exponential mixing for piecewise-cone-hyperbolic contact flows [6] and also for a class of three-dimensional singular flows [4]. This is our theme: to make progress in understanding the fine statistical properties of systems with low regularity. The primary motivation for this study is the Lorenz flow mentioned above. This is a smooth three-dimensional singular hyperbolic flow. The work of Araújo and Varandas [4] proved exponential decay of correlation for a class of volume-expanding flows with singularities, a class which is inspired by the Lorenz flow. However, their method required the existence of a ${\mathscr{C}}^{2}$ stable foliation. Unfortunately the stable foliation of the Lorenz flow is merely ${\mathscr{C}}^{1+\alpha}$ and so there seems to be no hope of extending their strategy to the original problem. The problem of the stable foliations being merely ${\mathscr{C}}^{1+\alpha}$ for Lorenz-like flows has been partially tackled by Galatolo and Pacifico [12], followed by Araújo, Galatolo and Pacifico [2], but the results on decay of correlations are limited to the return map and do not cover the flow. In this paper we make progress in a complementary direction. There exist two major strategies for approaching this problem. The first is to construct an anisotropic Banach space in order to study the flow directly, as was done for contact Anosov flows [19] and piecewise-cone-hyperbolic contact flows [6]. The second is to study the Lorenz semiflow (details given in Section 3) which is obtained by quotienting along the stable manifolds. At this stage it is unclear how to construct the space required for the first possibility and we therefore consider the second. This, however, requires one to work with a system which is merely ${\mathscr{C}}^{1+\alpha}$. In this paper we focus on a particular class of semiflows which are suspensions over expanding interval maps. 
This class includes the Lorenz semiflows. They have low regularity in the following four ways: (1) The expansion of the return map may be unbounded, i.e. the derivative of the return map blows up close to certain points of discontinuity. This issue is seen in both billiard systems and the Lorenz flow. (2) The inverse of the derivative of the return map is merely Hölder continuous. (3) The return time function is unbounded. This is a direct result of the zeros of the vector field associated to the flow. However, in the case of certain suspension semiflows this has already been shown not to be a barrier to good statistical properties [7]. (4) The semiflow is merely area-expanding and not uniformly-expanding, in the sense that it is not possible to define an invariant conefield which is uniformly transversal to the flow direction. This puts us in the category of singular hyperbolicity [23, 25]. In order to study the class of flows considered in this paper, and other systems which are the object of current research, it is crucial to understand whether the above issues are real barriers to good statistical properties or merely technical difficulties. In the present work we make some progress on this question, showing that the listed issues are not real barriers to the statistical properties, at least in this setting. For proving exponential decay of correlation for flows there is one established approach, which involves studying the Laplace transform of the correlation function. We apply this strategy to our present setting and show that the Laplace transform of the correlation function admits a meromorphic extension into the left half plane. In Section 2 we define precisely the class of semiflows we are interested in and state the results. In Section 3 we discuss Lorenz flows and demonstrate the connection with the class of semiflows we consider. 
In Section 4 we give a generalisation of the result of Keller on “generalised bounded variation” [18] such that it is possible to apply it in our present setting. This is a new result for the essential spectral radius of the transfer operators associated to these piecewise expanding interval maps, and the section is independent of the others. Section 5 contains the proof of the main result, reducing the problem to the study of certain weighted transfer operators and then using the results of Section 4. 2. Results For our purposes we define a suspension semiflow to be a triple $(\Omega,f,\tau)$: where $\Omega$ is an open interval and $\{\omega_{i}\}_{i\in{\mathcal{I}}}$ is a finite or countable set of disjoint open sub-intervals which exhaust $\Omega$ modulo a set of zero Lebesgue measure; where $f\in{\mathscr{C}}^{1}(\tilde{\Omega},\Omega)$ (for convenience let $\tilde{\Omega}=\bigsqcup_{i\in{\mathcal{I}}}\omega_{i}$) is a bijection when restricted to each $\omega_{i}$; and where $\tau\in{\mathscr{C}}^{0}(\Omega,{\mathbb{R}}_{+})$ is such that $\int_{\Omega}\tau(x)\ dx<\infty$. In a moment we will add some stronger assumptions on the regularity of $f$ and $\tau$. We call $f$ the return map and $\tau$ the return time function. Suppose that $(\Omega,f,\tau)$ is given. Let $\Omega_{\tau}:=\{(x,s):x\in\tilde{\Omega},0\leq s<\tau(x)\}$, which we call the state space. For all $(x,s)\in\Omega_{\tau}$ and $t\in[0,\tau(x)-s]$ let $$\Phi^{t}(x,s):=\begin{cases}(x,s+t)&\text{if $t<\tau(x)-s$}\\ (f(x),0)&\text{if $t=\tau(x)-s$}.\end{cases}$$ (2.1) Note that $\Phi^{u+t}(x,s)=\Phi^{u}\circ\Phi^{t}(x,s)$ for all $u,t$ such that each term is defined. The flow is then defined for all $t\geq 0$ by requiring that this relationship continues to hold. Now we define the class of suspension semiflows which we will study. Firstly we require that the return map is expanding, i.e. 
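Definition (2.1) extends to all $t\geq 0$ by applying the return map each time the trajectory reaches the roof $\{s=\tau(x)\}$. A minimal sketch of this iteration (the doubling map and the return time $\tau(x)=1+x$ are illustrative choices only, not members of the class studied below):

```python
def f(x):
    """Illustrative return map: the doubling map on (0, 1)."""
    return (2.0 * x) % 1.0

def tau(x):
    """Illustrative return time function."""
    return 1.0 + x

def flow(x, s, t):
    """Phi^t(x, s): move up the fibre; on hitting the roof, apply the return
    map and restart at height 0, exactly as in (2.1)."""
    while t >= tau(x) - s:        # may cross the roof several times
        t -= tau(x) - s
        x, s = f(x), 0.0
    return x, s + t

# The semigroup property Phi^{u+t} = Phi^u o Phi^t of the construction:
print(flow(0.2, 0.0, 5.0), flow(*flow(0.2, 0.0, 2.0), 3.0))
```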
that ${\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<1$. (In general it is sufficient to suppose that there exists $n\in{\mathbb{N}}$ such that ${\lVert{1/(f^{n})^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<1$; in that case one simply considers the $n$th iterate of the suspension flow and proceeds as before, although care must be taken with assumption (2.2).) We suppose that there exist some $\alpha\in(0,1)$ and $\sigma>0$ such that the following three conditions hold. Firstly we must have some, albeit weak, control on the regularity. Here we say that some $\xi:\Omega\to{\mathbb{C}}$ is “$\alpha$-Hölder on $\Omega$” if there exists $H_{\xi}<\infty$ such that $\left\lvert{\xi(x)-\xi(y)}\right\rvert\leq H_{\xi}\left\lvert{x-y}\right\rvert^{\alpha}$ for all $x,y\in\Omega$, with the understanding that this inequality is trivially satisfied if $x\in\omega_{i}$, $y\in\omega_{i^{\prime}}$, $i\neq i^{\prime}$, since in this case $\left\lvert{x-y}\right\rvert$ is regarded as infinite; note that $H_{\xi}$ does not depend on $i$. We assume that $$x\mapsto\frac{e^{z\tau(x)}}{f^{\prime}(x)}\quad\quad\text{is $\alpha$-H\"{o}lder on $\tilde{\Omega}$ for each $\Re(z)\in[-\sigma,0]$}.$$ (2.2) Furthermore we must require sufficient expansion in proportion to the return time. We assume that $$\sup_{i\in{\mathcal{I}}}\left({{\lVert{\tfrac{1}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}}\right)^{\alpha}e^{{\sigma}{\lVert{\tau}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}}<1.$$ (2.3) Finally, to deal with the possibility of a countable and not finite number of disconnected components of $\Omega$, we assume that $$\sum_{i\in{\mathcal{I}}}{\lVert{\tfrac{1}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}e^{\sigma{\lVert{\tau}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}}<\infty.$$ (2.4) Note that we never require any lower bound on $\tau$. Let $\nu$ denote some $f$-invariant probability measure which is absolutely continuous with respect to Lebesgue on $\Omega$. 
The existence of such a probability measure is already known but is also implied by the results of Section 4. For simplicity we assume that this absolutely continuous invariant probability measure is unique. It holds that $\mu:=\nu\otimes\operatorname{Leb}/\nu(\tau)$ is a $\Phi^{t}$-invariant probability measure which is absolutely continuous with respect to Lebesgue on $\Omega_{\tau}$. Given $u,v:\Omega_{\tau}\to{\mathbb{C}}$ which are $\alpha$-Hölder we define for all $t\geq 0$ the correlation $$\operatorname{\xi}(t):=\mu(u\cdot v\circ\Phi^{t})-\mu(u)\cdot\mu(v).$$ Main Theorem. Suppose the suspension semiflow is as described above, in particular satisfying the assumptions (2.2), (2.3), and (2.4). Then the Laplace transform of the correlation $\widehat{\operatorname{\xi}}(z):=\int_{0}^{\infty}e^{-zt}\operatorname{\xi}(t)\ dt$ admits a meromorphic extension to the set $\{z\in{\mathbb{C}}:\Re(z)\geq-\sigma\}$. The proof of this theorem is given in Section 5 and is based on the results of Section 4. The argument involves the usual method of “twisted transfer operators”, but for this setting we require a generalisation of Keller’s previous work [18] on ${\mathscr{C}}^{1+\alpha}$ expanding interval maps, which is the content of Section 4. Let us recall in detail some closely related results which were mentioned in the introduction. Baladi and Vallée [7] (the argument was later extended to higher dimensions by Avila, Gouëzel and Yoccoz [5]) studied suspension semiflows whose return maps were Markov and also ${\mathscr{C}}^{2}$. They allowed the return time to be unbounded, but only in a mild way, as they required $\tau^{\prime}/f^{\prime}$ to be bounded. As part of the study of Lorenz-like flows Araújo and Varandas [4] studied suspension semiflows very similar to the present setting but had to additionally require that the return map was ${\mathscr{C}}^{2}$ rather than our weaker assumption of ${\mathscr{C}}^{1+\alpha}$. 
We therefore see that our setting is more general, and sufficiently general to be used for the study of Lorenz flows (see Section 3). However, in each of the above mentioned cases exponential decay of correlation is proven, a significantly stronger result than the one proven in the present work. To obtain results on exponential decay of correlation it would additionally be necessary to have an oscillatory cancellation argument as pioneered by Dolgopyat [11]. But since we already have the Lasota-Yorke inequality which we prove in Section 4, it suffices to prove this stronger estimate in the $\mathbf{L^{1}}$ norm. All indications therefore suggest that, although far from trivial, there is hope of proving such an estimate. 3. Lorenz Semiflows Introduced in 1963 as a simple model for weather, the Lorenz flow [21] is a smooth three dimensional flow which, from numerical simulation, appeared to exhibit a robust chaotic attractor. In the late 1970s Afraĭmovič, Bykov and Silnikov [1] and Guckenheimer and Williams [14, 33] introduced a geometric model of the Lorenz flow and some years later, in 2002, Tucker [32] showed that the geometric Lorenz flow really is a representative model for the original Lorenz flow, and hence that the Lorenz attractor really does exist. This flow has long proved elusive to thorough study. It is not uniformly hyperbolic. The class of singular hyperbolic flows was introduced and studied in the late 1990s by Morales, Pacifico and Pujals [23, 25, 24]. This class of flows contains the uniformly hyperbolic flows and also contains the Lorenz attractor. Whereas the uniformly hyperbolic flows are the flows which are structurally stable, as shown by Hayashi [15, 16], the singular hyperbolic flows are the flows which are stably transitive. It is known that singular hyperbolic flows are chaotic in the sense that they are expansive and admit an SRB measure [3]. Some further results are known, limited to the particular case of the Lorenz attractor. 
It is known to be mixing [22] and the Central Limit Theorem and Invariance Principle hold [17]. As mentioned earlier, a class of Lorenz-like flows has been shown to mix exponentially [4], although this result is limited to such flows which have ${\mathscr{C}}^{2}$ stable foliations, a property which cannot be expected to hold in general or for the original Lorenz flow. To show that the Lorenz flow reduces to a suspension semiflow of the class introduced in Section 2 it is necessary to collect together some known facts. This is a procedure similar to the one described in [22]. Firstly, it is known that stable manifolds exist for the Lorenz attractor and that they are ${\mathscr{C}}^{1+\gamma}$ for some $\gamma\in(0,1)$. Quotienting along these stable manifolds, the three dimensional Lorenz flow may be reduced to a ${\mathscr{C}}^{1+\gamma}$ area expanding semiflow on a two dimensional branched manifold where the equilibrium point (or indeed, each equilibrium point), after a ${\mathscr{C}}^{1+\gamma}$ change of coordinates which linearises close to the singularity, is of the form $$\phi_{t}:(x,y)\mapsto(xe^{\lambda t},ye^{-\beta\lambda t}),$$ where $\lambda>0$ and $\beta\in(0,1)$. The equilibrium point of the Lorenz flow is of saddle type with two negative eigenvalues and one positive eigenvalue. We write the eigenvalues as $\zeta_{ss}<\zeta_{s}<0<\zeta_{u}$. The dominated splitting implies that $\zeta_{u}>\left\lvert{\zeta_{s}}\right\rvert$. This is the origin of the above two coefficients: we have $\lambda=\zeta_{u}$ and $\beta=\left\lvert{\zeta_{s}}\right\rvert/\zeta_{u}$. We now choose a suitable Poincaré section (made of, perhaps, many disconnected components) and thereby reduce the semiflow to a suspension described by a return map and return time function. Away from the regions containing an equilibrium point this is simple and yields a ${\mathscr{C}}^{1+\gamma}$ return map and return time function where the return time function is bounded. 
Closer to the singularities the construction is slightly more delicate. For $x\in(0,1)$ let $$\tau(x):=-\tfrac{1}{\lambda}\ln(x),\quad\quad f(x):=x^{\beta}.$$ These definitions have the useful consequence that $\phi_{\tau(x)}:(x,1)\mapsto(1,f(x))$ for all $x\in(0,1)$ and that $\phi_{\tau(x)}:\{(x,1):x\in(0,1)\}\to\{(1,y):y\in(0,1)\}$ is a bijection. It is convenient to further subdivide these components of the Poincaré section and so for each $i\in{\mathbb{N}}$ let $\omega_{i}:=(e^{-(i+1)},e^{-i})$. We must verify that the conditions (2.2), (2.3), and (2.4) are satisfied for this suspension semiflow. We choose $\alpha:=\min\{\gamma,(1-\beta)/(2-\beta)\}$ and $\sigma>0$ (the larger the better) such that $$\sigma<\alpha\lambda(1-\beta).$$ (3.1) Note this implies that $\alpha\in(0,\frac{1}{2})$ and that $$\sigma\leq\lambda(1-\beta-\alpha).$$ (3.2) Let $\Re(z)\in[-\sigma,0]$. First note that, since $f^{\prime}(x)=\beta x^{\beta-1}$, $$\frac{e^{z\tau(x)}}{f^{\prime}(x)}=\tfrac{1}{\beta}\,x^{-\frac{z}{\lambda}+1-\beta}$$ and that $\Re(-z/\lambda+1-\beta)\geq-\sigma/\lambda+1-\beta\geq\alpha$ by (3.2). Note that $y^{\zeta}-x^{\zeta}=\zeta\int_{x}^{y}s^{\zeta-1}\ ds$ for all $x,y>0$ and so for $y\geq x>0$ we have $$\left\lvert{y^{\zeta}-x^{\zeta}}\right\rvert\leq\left\lvert{\zeta}\right\rvert\int_{x}^{y}s^{\Re(\zeta)-1}\ ds=\frac{\left\lvert{\zeta}\right\rvert}{\Re(\zeta)}\big({y^{\Re(\zeta)}-x^{\Re(\zeta)}}\big).$$ Consequently $x\mapsto{e^{z\tau(x)}}/{f^{\prime}(x)}$ is $\alpha$-Hölder on $(0,1)$. We must now show that the other two estimates hold. 
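For the local model $\tau(x)=-\tfrac{1}{\lambda}\ln x$, $f(x)=x^{\beta}$ one has $f^{\prime}(x)=\beta x^{\beta-1}$ and $e^{z\tau(x)}=x^{-z/\lambda}$, so the weight appearing in (2.2) equals $\beta^{-1}x^{-z/\lambda+1-\beta}$. A numerical check of this small computation (the values of $\lambda$, $\beta$ and $z$ are illustrative assumptions):

```python
import cmath
import math

lam, beta = 1.0, 0.4        # illustrative stand-ins for zeta_u and |zeta_s|/zeta_u
z = complex(-0.1, 2.0)      # some z with Re(z) in [-sigma, 0]

tau = lambda x: -math.log(x) / lam        # return time of the local model
fp = lambda x: beta * x ** (beta - 1.0)   # f'(x) for f(x) = x**beta

x = 0.37
lhs = cmath.exp(z * tau(x)) / fp(x)
rhs = x ** (-z / lam + 1.0 - beta) / beta   # beta^{-1} x^{-z/lambda + 1 - beta}
print(abs(lhs - rhs))   # agreement up to rounding
```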
We have the simple estimates $${\lVert{\tfrac{1}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}=\tfrac{1}{\beta}e^{-i(1-\beta)},\quad\quad{\lVert{\tau}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}=\tfrac{1}{\lambda}(i+1).$$ This means that $${\lVert{\tfrac{1}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}^{\alpha}e^{{\sigma}{\lVert{\tau}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}}=\beta^{-\alpha}e^{-i[\alpha(1-\beta)-\sigma/\lambda]}e^{\sigma/\lambda}$$ and by (3.1) we know that $\alpha(1-\beta)-\sigma/\lambda>0$. This means that it is simple to arrange that (2.3) and (2.4) are satisfied. Unfortunately we are not quite done, since we have not shown that ${\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<1$. We do know, however, that there exists $n\in{\mathbb{N}}$ such that ${\lVert{1/(f^{n})^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<1$. Consequently we instead consider the $n$th iterate suspension semiflow. Conditions (2.3) and (2.4) are still satisfied by the iterate. However, care must be taken with the Hölder continuity assumption (2.2). It may happen that this is now only satisfied for some $\tilde{\alpha}\in(0,\alpha)$, in which case this smaller value of $\alpha$ must be used from the start of the construction. Actually the return map is bounded in ${\mathscr{C}}^{1}$ norm away from a neighbourhood of the equilibrium point and therefore, with some knowledge of returns to the region of the equilibrium point, a better $\alpha$ could be chosen. The above estimates mean that the results of the previous section apply to the Lorenz semiflows. We make a few more comments about this particular suspension semiflow. It presents the difficulty that the return time is not bounded, and moreover that $\tau^{\prime}/f^{\prime}$ is not bounded. (Such a condition is crucially required in [7, 5].) 
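The suprema in these estimates are attained at the endpoints of $\omega_{i}=(e^{-(i+1)},e^{-i})$, since $1/f^{\prime}(x)=x^{1-\beta}/\beta$ is increasing while $\tau$ is decreasing; here the constant factor $1/\beta$ coming from $f^{\prime}(x)=\beta x^{\beta-1}$ is kept explicit (it is harmless for the summability conditions). A numerical confirmation with illustrative parameter values:

```python
import math

lam, beta, i = 1.0, 0.5, 4                       # illustrative parameter values
left, right = math.exp(-(i + 1)), math.exp(-i)   # omega_i = (e^{-(i+1)}, e^{-i})

inv_fprime = lambda x: x ** (1.0 - beta) / beta  # 1/f'(x), increasing on (0,1)
tau = lambda x: -math.log(x) / lam               # decreasing on (0,1)

sup_inv_fprime = inv_fprime(right)  # sup over omega_i, attained at the right endpoint
sup_tau = tau(left)                 # sup over omega_i, attained at the left endpoint
print(sup_inv_fprime, sup_tau)      # e^{-i(1-beta)}/beta and (i+1)/lambda
```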
This means that the flow is not uniformly expanding in the sense of there existing an invariant conefield, uniformly bounded away from the flow direction, inside which there is uniform expansion. Lorenz semiflows as discussed above are our main application, although we study a more general class of suspension semiflows. 4. Generalised Bounded Variation We must consider the weighted transfer operators associated to expanding maps of the interval which have countably many discontinuities and for which the inverse of the derivative and the weighting are merely Hölder continuous. This means that we cannot study the transfer operator acting on any relatively standard spaces. One possibility is the generalised bounded variation introduced by Keller [18] and used for expanding interval maps. However, he considers neither the case of countably many discontinuities nor the case of general weights. Saussol [28] used the same spaces for multi-dimensional expanding maps and showed that countably many discontinuities are allowable, but again did not study a general class of weights and furthermore required the derivative of the map to be bounded. These are the spaces we will use in this section. Although not proven in the references, these spaces are, as we will show in this section with delicate estimates, useful for our application. Other possibly useful Banach spaces are available [29, 8, 20] but each suffers from some limitation which prevents its use in the present setting without imposing undesirable further conditions on the semiflow we wish to study. One particular problem is that we cannot guarantee that the weighting is bounded (see Section 5) and consequently we cannot guarantee that the weighted transfer operator is bounded on $\mathbf{L^{1}}$. Keller’s Banach space of generalised bounded variation [18] is contained within $\mathbf{L^{\infty}}$, a distinct difference from the available alternatives [8, 20]. 
This suggests the possibility that the transfer operator is bounded on this space even when it is not bounded on $\mathbf{L^{1}}$. In the remainder of this section we show that this speculation is indeed correct. 4.1. The Banach Space The following definitions are identical to [18] with minor changes of notation. For any interval ${\mathcal{S}}$ and $h:{\mathcal{S}}\to{\mathbb{C}}$ let $$\operatorname{osc}\left[h,{\mathcal{S}}\right]:=\operatorname{ess\,sup}\left\{\left\lvert{h(x_{1})-h(x_{2})}\right\rvert:x_{1},x_{2}\in{\mathcal{S}}\right\}$$ where the essential supremum is taken with respect to Lebesgue measure on ${\mathcal{S}}^{2}$. Let ${B}_{\epsilon}(x):=\{y\in{\mathbb{R}}:\left\lvert{x-y}\right\rvert\leq\epsilon\}$. If $\alpha\in(0,1)$ and $\Omega$ is some finite or countable union of open intervals let $$\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}:=\sup_{\epsilon\in(0,\epsilon_{0})}\epsilon^{-\alpha}\int_{\Omega}\operatorname{osc}\left[h,{B}_{\epsilon}(x)\cap\Omega\right]\ dx,$$ (4.1) where $\epsilon_{0}>0$ is some fixed parameter. Hence let $$\mathfrak{B}_{\alpha}:=\left\{h\in\mathbf{L^{1}}(\Omega):\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}<\infty\right\}.$$ The seminorm defined above depends on $\epsilon_{0}>0$ although the sets $\mathfrak{B}_{\alpha}$ do not. It is known [18, Theorem 1.13] that this set is a Banach space when equipped with the norm $$\left\lVert{h}\right\rVert_{\mathfrak{B}_{\alpha}}:=\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)},$$ that $\mathfrak{B}_{\alpha}\subset\mathbf{L^{\infty}}(\Omega)$, and that the embedding $$\mathfrak{B}_{\alpha}\hookrightarrow\mathbf{L^{1}}(\Omega)\quad\quad\text{is compact}.$$ (4.2) 4.2.
Piecewise Expanding Transformations As before, we suppose that $\Omega$ is an open interval and $\{\omega_{i}\}_{i\in{\mathcal{I}}}$ is a finite or countable set of disjoint open sub-intervals which exhaust $\Omega$ modulo a set of zero Lebesgue measure (for convenience let $\tilde{\Omega}=\bigsqcup_{i\in{\mathcal{I}}}\omega_{i}$) and that we are given $$f\in{\mathscr{C}}^{1}(\tilde{\Omega},\Omega)$$ which is bijective when restricted to each $\omega_{i}$. We further suppose that we are given $\xi:\Omega\to{\mathbb{C}}$ which we call the weighting. We require that $${\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}\in(0,1),$$ (4.3) furthermore that $$\sum_{i\in{\mathcal{I}}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}<\infty,$$ (4.4) and finally that $\frac{\xi}{f^{\prime}}:\Omega\to{\mathbb{C}}$ is $\alpha$-Hölder. That is, there exist $H_{\xi}<\infty$ and $\alpha\in(0,1)$ such that $$\left\lvert{\tfrac{\xi}{f^{\prime}}(x)-\tfrac{\xi}{f^{\prime}}(y)}\right\rvert\leq H_{\xi}\left\lvert{x-y}\right\rvert^{\alpha}\quad\quad\text{for all $x,y\in\omega_{i}$ for each $i\in{\mathcal{I}}$}.$$ (4.5) For convenience let $f_{i}:\omega_{i}\to\Omega$ denote the restriction of $f$ to $\omega_{i}$. As usual the weighted transfer operator is given, for each $h:\Omega\to{\mathbb{C}}$, by $${\mathscr{L}}_{\xi}h(x):=\sum_{i\in{\mathcal{I}}}\left(\frac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}(x)\cdot\mathbf{1}_{f\omega_{i}}(x),$$ (4.6) where, for any set $A$, we let $\mathbf{1}_{A}$ denote the indicator function of that set. By (4.4) we know that ${\mathscr{L}}_{\xi}:\mathbf{L^{\infty}}(\Omega)\to\mathbf{L^{\infty}}(\Omega)$ is well defined even though, since we do not require ${\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<\infty$, we cannot guarantee that the operator is well defined on $\mathbf{L^{1}}(\Omega)$.
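As a toy illustration of definition (4.6) (the doubling map with two full branches and an arbitrary bounded weight, not the Lorenz return map), the following sketch implements ${\mathscr{L}}_{\xi}$ and checks numerically the change-of-variables duality $\int({\mathscr{L}}_{\xi}h)\cdot g\ dx=\int\xi\cdot h\cdot(g\circ f)\ dx$:

```python
import numpy as np

# Toy example (not from the paper): the weighted transfer operator (4.6) for the
# doubling map f(x) = 2x mod 1 on Omega = (0,1), with branches omega_1 = (0,1/2),
# omega_2 = (1/2,1) and an arbitrary bounded weight xi.
xi = lambda y: np.exp(-y)
h  = lambda y: np.cos(2 * np.pi * y)
g  = lambda x: x * (1 - x)

def L_xi(h, x):
    # the two inverse branches of the doubling map; f'(y) = 2 on each branch
    y1, y2 = x / 2, (x + 1) / 2
    return (xi(y1) * h(y1) + xi(y2) * h(y2)) / 2

# midpoint-rule quadrature of both sides of the duality
N = 200_000
x = (np.arange(N) + 0.5) / N
lhs = np.mean(L_xi(h, x) * g(x))
rhs = np.mean(xi(x) * h(x) * g((2 * x) % 1))
print(lhs, rhs)  # agree up to quadrature error
```

The duality is exactly the change of variables $x=f(y)$ on each branch, which is also the computation behind (4.12)-(4.14) below.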
The purpose of this section is to prove the following new result, which is a generalisation of the work of Keller [18] to the case of countably many discontinuities and unbounded weightings. Theorem 4.1. Suppose the transformation $f:\Omega\to\Omega$ and the weighting $\xi:\Omega\to{\mathbb{C}}$ are as above and satisfy (4.3), (4.4) and (4.5). Then ${\mathscr{L}}_{\xi}:\mathfrak{B}_{\alpha}\to\mathfrak{B}_{\alpha}$ is a bounded operator with essential spectral radius not greater than $$\lambda:=\sup_{i\in{\mathcal{I}}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}^{\alpha}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}.$$ By a standard argument (see for example [19, p.1281]) the essential spectral radius estimate of the above theorem follows from the compact embedding (4.2) and the Lasota-Yorke type estimate contained in the following theorem. In the case where ${\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\Omega)}<\infty$ an elementary estimate shows that ${\lVert{{\mathscr{L}}_{\xi}}\rVert}_{\mathbf{L^{1}}(\Omega)}\leq{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\Omega)}$ and so, once the essential spectral radius estimate has been shown, this implies that the spectral radius is not greater than ${\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\Omega)}$. Theorem 4.2. Suppose that $f$ and $\xi$ are as per the assumptions of Theorem 4.1. Then for all $\delta>0$ there exists $C_{\delta}<\infty$ such that $$\left\lVert{{\mathscr{L}}_{\xi}h}\right\rVert_{\mathfrak{B}_{\alpha}}\leq(2+\delta)\lambda\left\lVert{h}\right\rVert_{\mathfrak{B}_{\alpha}}+C_{\delta}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}\quad\quad\text{for all $h\in\mathfrak{B}_{\alpha}$}.$$ The remainder of this section is devoted to the proof of this theorem, which is an extension of the result of Keller [18] to our setting.
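Before turning to the proof, here is a small numerical illustration (not from the paper) of the seminorm (4.1): for the indicator $h=\mathbf{1}_{(0,1/2)}$ on $\Omega=(0,1)$ one computes $\int\operatorname{osc}[h,B_{\epsilon}(x)]\,dx=2\epsilon$, so $\sup_{\epsilon\in(0,\epsilon_{0})}\epsilon^{-\alpha}\cdot 2\epsilon=2\epsilon_{0}^{1-\alpha}$, and a grid approximation reproduces this value:

```python
import numpy as np

# Grid approximation (illustrative) of the seminorm (4.1) for h = 1_{(0,1/2)}
# on Omega = (0,1).  Analytically the seminorm equals 2 * eps0^(1-alpha).
alpha, eps0 = 0.5, 0.1
N = 2000
x = (np.arange(N) + 0.5) / N           # midpoint grid on (0,1)
h = (x < 0.5).astype(float)

def osc_integral(eps):
    # integral over Omega of osc[h, B_eps(x)]: max minus min of h on each ball
    r = int(eps * N)
    osc = np.empty(N)
    for i in range(N):
        seg = h[max(0, i - r):min(N, i + r + 1)]
        osc[i] = seg.max() - seg.min()
    return osc.mean()                  # |Omega| = 1

eps_grid = [eps0 * k / 10 for k in range(1, 11)]
seminorm = max(eps ** (-alpha) * osc_integral(eps) for eps in eps_grid)
print(seminorm, 2 * eps0 ** (1 - alpha))  # both close to 2*sqrt(0.1)
```

The oscillation is non-zero only on the $2\epsilon$-neighbourhood of the jump, which is why BV-like functions (and, similarly, $\alpha$-Hölder functions) belong to $\mathfrak{B}_{\alpha}$.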
The proof follows a similar argument to Keller’s original, with various additional complications, in particular because of the weighting $\xi$ and the possibility that ${\mathcal{I}}$ is merely countable. As such we are forced to redo the proof, but where possible we refer to the relevant theorems and lemmas which we can reuse. 4.3. Proof of Theorem 4.2 We may assume that $\delta\leq 1$. First $\epsilon_{0}>0$ must be carefully chosen and it is convenient to divide the index set as ${\mathcal{I}}={\mathcal{I}}_{1}\cup{\mathcal{I}}_{2}$. By (4.4) we may choose a finite set ${\mathcal{I}}_{1}\subset{\mathcal{I}}$ such that $$\sum_{i\in{\mathcal{I}}_{2}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\leq\frac{\lambda\delta}{16},$$ (4.7) where ${\mathcal{I}}_{2}:={\mathcal{I}}\setminus{\mathcal{I}}_{1}$. Let $\Gamma:=32\delta^{-1}+2$. Choosing $\epsilon_{0}$ sufficiently small we ensure that $$\left\lvert{f\omega_{i}}\right\rvert\geq\epsilon_{0}\Gamma\quad\quad\quad\text{for all $i\in{\mathcal{I}}_{1}$}$$ (4.8) and that $$\epsilon_{0}^{\alpha}\leq\frac{\delta\lambda}{8(8+{\delta})H_{\xi}\Gamma}.$$ (4.9) (The reason for this particular choice will become clear subsequently; see (4.18).) If $\left\lvert{f\omega_{i}}\right\rvert>2\epsilon_{0}\Gamma$ for some $i\in{\mathcal{I}}_{1}$ we chop $\omega_{i}$ into pieces such that $\epsilon_{0}\Gamma\leq\left\lvert{f\omega_{j}}\right\rvert\leq 2\epsilon_{0}\Gamma$ for all the resulting pieces $\omega_{j}$. If $\left\lvert{f\omega_{i}}\right\rvert>2\epsilon_{0}\Gamma$ for some $i\in{\mathcal{I}}_{2}$ we chop $\omega_{i}$ into pieces as before, but in this case we move the resulting pieces into the set ${\mathcal{I}}_{1}$. This means that the estimate (4.7) remains unaltered. Note that ${\mathcal{I}}_{1}$ may no longer be a finite set.
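The chopping step above can be sketched as follows (a hypothetical helper, not code from the paper): an image interval of length $L>2\epsilon_{0}\Gamma$ is divided into $k=\lceil L/(2\epsilon_{0}\Gamma)\rceil$ equal pieces, which then automatically have length in $[\epsilon_{0}\Gamma,2\epsilon_{0}\Gamma]$:

```python
import math

def chop(L, eps0, Gamma):
    # Subdivide an image interval of length L into pieces of length in
    # [eps0*Gamma, 2*eps0*Gamma]; equal pieces suffice since L > 2*eps0*Gamma
    # implies L/k >= eps0*Gamma when k = ceil(L / (2*eps0*Gamma)).
    if L <= 2 * eps0 * Gamma:
        return [L]
    k = math.ceil(L / (2 * eps0 * Gamma))
    return [L / k] * k

# Gamma = 32/delta + 2 with delta = 1 gives Gamma = 34 (illustrative values)
pieces = chop(1.7, 0.01, 34.0)
print(pieces)  # every piece lies in [0.34, 0.68] and the lengths sum to 1.7
```

This is why (4.10) below can be assumed for all indices while (4.7) and (4.8) are preserved.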
To conclude, we have arranged that (4.7), (4.8), and (4.9) hold and furthermore that $$\left\lvert{f\omega_{i}}\right\rvert\leq 2\epsilon_{0}\Gamma\quad\quad\quad\text{for all $i\in{\mathcal{I}}$}.$$ (4.10) Fix $h\in\mathfrak{B}_{\alpha}$. We start by noting that by the definition (4.1) of the seminorm and the definition (4.6) of the transfer operator $$\begin{split}\displaystyle\left\lvert{{\mathscr{L}}_{\xi}h}\right\rvert_{\mathfrak{B}_{\alpha}}&\displaystyle=\sup_{\epsilon\in(0,\epsilon_{0})}\epsilon^{-\alpha}\int_{\Omega}\operatorname{osc}\left[{\mathscr{L}}_{\xi}h,{B}_{\epsilon}(x)\cap\Omega\right]\ dx\\ &\displaystyle\leq\sup_{\epsilon\in(0,\epsilon_{0})}\sum_{i}\epsilon^{-\alpha}\int_{\Omega}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}\cdot\mathbf{1}_{f\omega_{i}},{B}_{\epsilon}(x)\cap\Omega\right]\ dx.\end{split}$$ (4.11) To proceed we take advantage of several estimates which have already been proved elsewhere. Firstly, by [18, Theorem 2.1], for each $i\in{\mathcal{I}}_{1}$, since $\left\lvert{f\omega_{i}}\right\rvert\geq(32\delta^{-1}+2)\epsilon_{0}$ by (4.8) and so $\left\lvert{f\omega_{i}}\right\rvert\geq 4\epsilon_{0}$, we have that $$\begin{split}&\displaystyle\int_{\Omega}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}\cdot\mathbf{1}_{f\omega_{i}},{B}_{\epsilon}(x)\cap\Omega\right]\ dx\\ &\displaystyle\quad\quad\quad\leq(2+\tfrac{\delta}{4})\int_{f\omega_{i}}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\\ &\displaystyle\quad\quad\quad+\frac{\epsilon}{\epsilon_{0}}\int_{f\omega_{i}}\left\lvert{\tfrac{\xi\cdot h}{f^{\prime}}}\right\rvert\circ f_{i}^{-1}(x)\ dx.\end{split}$$ (4.12) For $i\in{\mathcal{I}}_{2}$ (where $\left\lvert{f\omega_{i}}\right\rvert$ may be small) we use the following more basic estimate.
By [28, Proposition 3.2 (ii)] for each $i$ $$\begin{split}\displaystyle\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}\cdot\mathbf{1}_{f\omega_{i}},{B}_{\epsilon}(x)\cap\Omega\right]&\displaystyle\leq\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\cdot\mathbf{1}_{f\omega_{i}}\\ &\displaystyle+2{\lVert{\tfrac{\xi\cdot h}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\mathbf{1}_{F_{i,\epsilon}}(x)\end{split}$$ where $F_{i,\epsilon}$ denotes the set of all points $x\in\Omega$ which are within a distance of $\epsilon$ of the end points of the interval $f\omega_{i}$. Since $\left\lvert{\smash{\int_{f\omega_{i}}\mathbf{1}_{F_{i,\epsilon}}(x)\ dx}}\right\rvert\leq 2\epsilon$ the above implies that $$\begin{split}&\displaystyle\int_{\Omega}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}\cdot\mathbf{1}_{f\omega_{i}},{B}_{\epsilon}(x)\cap\Omega\right]\ dx\\ &\displaystyle\quad\quad\quad\leq\int_{f\omega_{i}}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\\ &\displaystyle\quad\quad\quad+4\epsilon{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\Omega)}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}.\end{split}$$ (4.13) Note that the integral term in the middle line of the above equation is identical to the integral term of the middle line of (4.12). We also require the following basic estimate for the $\operatorname{osc}\left[\cdot,\cdot\right]$ of a product. Lemma 4.3. Suppose ${\mathcal{S}}\subset\Omega$ is an interval, $g_{1}:{\mathcal{S}}\to{\mathbb{C}}$, $g_{2}:{\mathcal{S}}\to{\mathbb{C}}$ and $y\in{\mathcal{S}}$.
Then $$\operatorname{osc}\left[g_{1}\cdot g_{2},{\mathcal{S}}\right]\leq\left\lvert{g_{1}(y)}\right\rvert\cdot\operatorname{osc}\left[g_{2},{\mathcal{S}}\right]+2{\lVert{g_{2}}\rVert}_{\mathbf{L^{\infty}}({\mathcal{S}})}\cdot\operatorname{osc}\left[g_{1},{\mathcal{S}}\right].$$ Proof. Suppose $x_{1},x_{2},y\in{\mathcal{S}}$. It suffices to observe that $$\begin{split}\displaystyle(g_{1}\cdot g_{2})(x_{1})-(g_{1}\cdot g_{2})(x_{2})&\displaystyle=g_{1}(y)\left(g_{2}(x_{1})-g_{2}(x_{2})\right)\\ &\displaystyle+g_{2}(x_{1})\left(g_{1}(x_{1})-g_{1}(y)\right)+g_{2}(x_{2})\left(g_{1}(y)-g_{1}(x_{2})\right).\qed\end{split}$$ This means in particular that (this is the term which appears in the middle lines of (4.12) and (4.13)) $$\begin{split}&\displaystyle\int_{f\omega_{i}}\operatorname{osc}\left[\left(\tfrac{\xi\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\\ &\displaystyle\quad\quad\quad\quad\leq\int_{f\omega_{i}}\left\lvert{\tfrac{\xi}{f^{\prime}}}\right\rvert\circ f_{i}^{-1}(x)\cdot\operatorname{osc}\left[h\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\\ &\displaystyle\quad\quad\quad\quad+2{\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\int_{f\omega_{i}}\operatorname{osc}\left[\tfrac{\xi}{f^{\prime}}\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx.\end{split}$$ (4.14) Recalling (4.11) and applying the estimates of (4.12), (4.13) and (4.14) we have $$\left\lvert{{\mathscr{L}}_{\xi}h}\right\rvert_{\mathfrak{B}_{\alpha}}\leq\sup_{\epsilon\in(0,\epsilon_{0})}\left(A_{1,\xi,h}(\epsilon)+A_{2,\xi,h}(\epsilon)+A_{3,\xi,h}(\epsilon)+A_{4,\xi,h}(\epsilon)\right),$$ (4.15) where we have defined for convenience $$\begin{split}\displaystyle A_{1,\xi,h}(\epsilon)&\displaystyle:=\epsilon^{-\alpha}(2+\tfrac{\delta}{4})\sum_{i\in{\mathcal{I}}}\int_{f\omega_{i}}\left\lvert{\tfrac{\xi}{f^{\prime}}}\right\rvert\circ f_{i}^{-1}(x)\cdot\operatorname{osc}\left[h\circ
f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}% \right]\ dx\\ \displaystyle A_{2,\xi,h}(\epsilon)&\displaystyle:=2\epsilon^{-\alpha}(2+% \tfrac{\delta}{4})\sum_{i\in{\mathcal{I}}}{\lVert{h}\rVert}_{\mathbf{L^{\infty% }}(\omega_{i})}\int_{f\omega_{i}}\operatorname{osc}\left[\tfrac{\xi}{f^{\prime% }}\circ f_{i}^{-1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\\ \displaystyle A_{3,\xi,h}(\epsilon)&\displaystyle:=4\epsilon^{1-\alpha}{\lVert% {h}\rVert}_{\mathbf{L^{\infty}}(\Omega)}\sum_{i\in{\mathcal{I}}_{2}}{\lVert{1/% f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{% \mathbf{L^{\infty}}(\omega_{i})}\\ \displaystyle A_{4,\xi,h}(\epsilon)&\displaystyle:=\frac{\epsilon^{1-\alpha}}{% \epsilon_{0}}\sum_{i\in{\mathcal{I}}_{1}}\int_{f\omega_{i}}\left\lvert{\tfrac{% \xi\cdot h}{f^{\prime}}}\right\rvert\circ f_{i}^{-1}(x)\ dx.\end{split}$$ The remainder of the proof involves independently estimating each of these four terms. We start by estimating $A_{1,\xi,h}(\epsilon)$. Let $\sigma_{i}:={\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\in(% 0,1)$ by assumption (4.3). Since $f_{i}^{-1}{B}_{\epsilon}(x)\subseteq{B}_{\sigma_{i}\epsilon}(f_{i}^{-1}x)$ we have that $$\begin{split}\displaystyle\operatorname{osc}\left[h\circ f_{i}^{-1},{B}_{% \epsilon}(x)\cap f\omega_{i}\right]&\displaystyle=\operatorname{osc}\left[h,f_% {i}^{-1}{B}_{\epsilon}(x)\cap\omega_{i}\right]\\ &\displaystyle\leq\operatorname{osc}\left[h,{B}_{\sigma_{i}\epsilon}(y_{i})% \cap\omega_{i}\right]\end{split}$$ (4.16) where $y_{i}:=f_{i}^{-1}x$. 
We change variables in the integral and so $$\begin{split}\displaystyle A_{1,\xi,h}(\epsilon)&\displaystyle\leq\epsilon^{-% \alpha}(2+\tfrac{\delta}{4})\sum_{i\in{\mathcal{I}}}\int_{\omega_{i}}\left% \lvert{{\xi}}\right\rvert(y_{i})\cdot\operatorname{osc}\left[h,{B}_{\sigma_{i}% \epsilon}(y_{i})\cap\omega_{i}\right]\ dy_{i}\\ &\displaystyle\leq\epsilon^{-\alpha}(2+\tfrac{\delta}{4}){\lVert{\xi}\rVert}_{% \mathbf{L^{\infty}}(\omega_{i})}\int_{\Omega}\operatorname{osc}\left[h,{B}_{% \sigma_{i}\epsilon}(y)\cap\Omega\right]\ dy\\ &\displaystyle\leq\sigma_{i}^{\alpha}(2+\tfrac{\delta}{4}){\lVert{\xi}\rVert}_% {\mathbf{L^{\infty}}(\omega_{i})}\left\lvert{h}\right\rvert_{\mathfrak{B}_{% \alpha}}\leq(2+\tfrac{\delta}{4})\lambda\left\lvert{h}\right\rvert_{\mathfrak{% B}_{\alpha}}.\end{split}$$ (4.17) Now we estimate $A_{2,\xi,h}(\epsilon)$. By [18, Lemma 2.2] we have the estimate $${\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\leq\epsilon_{0}^{-1}\int_{% \omega_{i}}\operatorname{osc}\left[h,{B}_{\epsilon_{0}}(x)\right]\ dx+\left% \lvert{\omega_{i}}\right\rvert^{-1}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\omega_{i% })}.$$ By assumption (4.5) we know that $\operatorname{osc}\left[\smash{\tfrac{\xi}{f^{\prime}}},{B}_{\sigma_{i}% \epsilon}(y_{i})\cap\omega_{i}\right]\leq 2H_{\xi}\sigma_{i}^{\alpha}\epsilon^% {\alpha}$ and so, changing variables as per (4.16), we have $$\int_{f\omega_{i}}\operatorname{osc}\left[\tfrac{\xi}{f^{\prime}}\circ f_{i}^{% -1},{B}_{\epsilon}(x)\cap f\omega_{i}\right]\ dx\leq\int_{\omega_{i}}% \operatorname{osc}\left[\tfrac{\xi}{f^{\prime}},{B}_{\sigma_{i}\epsilon}(y_{i}% )\cap\omega_{i}\right]\ dy_{i}\leq 2\left\lvert{\omega_{i}}\right\rvert H_{\xi% }\sigma_{i}^{\alpha}\epsilon^{\alpha}.$$ Combining the above two estimates means that $$A_{2,\xi,h}(\epsilon)\leq 4(2+\tfrac{\delta}{4})H_{\xi}\sum_{i\in{\mathcal{I}}% }\sigma_{i}^{\alpha}\left(\epsilon_{0}^{-(1-\alpha)}\left\lvert{\omega_{i}}% 
\right\rvert\epsilon_{0}^{-\alpha}\int_{\omega_{i}}\operatorname{osc}\left[h,{B}_{\epsilon_{0}}(x)\right]\ dx+{\lVert{h}\rVert}_{\mathbf{L^{1}}(\omega_{i})}\right).$$ By the expanding assumption (4.3) and by (4.10) we know that $\left\lvert{\omega_{i}}\right\rvert\leq\sigma_{i}\left\lvert{f\omega_{i}}\right\rvert\leq 2\sigma_{i}\epsilon_{0}\Gamma$. Using also (4.9) this means that for all $i\in{\mathcal{I}}$ $$\begin{split}\displaystyle 4(2+\tfrac{\delta}{4})H_{\xi}\sigma_{i}^{\alpha}\epsilon_{0}^{-(1-\alpha)}\left\lvert{\omega_{i}}\right\rvert&\displaystyle\leq 8(2+\tfrac{\delta}{4})H_{\xi}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}^{1+\alpha}\epsilon_{0}^{\alpha}\Gamma\\ &\displaystyle\leq\tfrac{\delta}{4}\lambda.\end{split}$$ Consequently we have shown that $$\begin{split}\displaystyle A_{2,\xi,h}(\epsilon)&\displaystyle\leq\tfrac{\delta}{4}\lambda\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+4(2+\tfrac{\delta}{4})H_{\xi}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}^{\alpha}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}.\end{split}$$ (4.18) Now we estimate $A_{3,\xi,h}(\epsilon)$.
Using again [18, Lemma 2.2] we have the estimate $${\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\Omega)}\leq\epsilon_{0}^{-(1-\alpha)}\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+\left\lvert{\Omega}\right\rvert^{-1}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}.$$ (4.19) This means that $$A_{3,\xi,h}(\epsilon)\leq 4\left(\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+\frac{\epsilon_{0}^{1-\alpha}}{\left\lvert{\Omega}\right\rvert}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}\right)\sum_{i\in{\mathcal{I}}_{2}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}.$$ By (4.7) we know that $\sum_{i\in{\mathcal{I}}_{2}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\leq\frac{\lambda\delta}{16}$ and so $$A_{3,\xi,h}(\epsilon)\leq\tfrac{\delta}{4}\lambda\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+\left(\frac{\epsilon_{0}^{1-\alpha}\delta}{4\left\lvert{\Omega}\right\rvert}\lambda\right){\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}.$$ (4.20) Now we estimate $A_{4,\xi,h}(\epsilon)$. Using again the assumption (4.4) we may choose a finite set ${\mathcal{I}}_{3}\subset{\mathcal{I}}_{1}$ such that $$\sum_{i\in{\mathcal{I}}_{4}}{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\leq\frac{\delta\epsilon_{0}}{4}\lambda,$$ where ${\mathcal{I}}_{4}:={\mathcal{I}}_{1}\setminus{\mathcal{I}}_{3}$.
We therefore estimate, using also the change of variables $y_{i}:=f_{i}^{-1}x$, $$\begin{split}\displaystyle A_{4,\xi,h}(\epsilon)&\displaystyle\leq\epsilon_{0}^{-\alpha}\sum_{i\in{\mathcal{I}}_{1}}\int_{f\omega_{i}}\left\lvert{\tfrac{\xi\cdot h}{f^{\prime}}}\right\rvert\circ f_{i}^{-1}(x)\ dx\\ &\displaystyle\leq\epsilon_{0}^{-\alpha}\sum_{i\in{\mathcal{I}}_{3}}\int_{\omega_{i}}\left\lvert{{\xi\cdot h}}\right\rvert(y_{i})\ dy_{i}+\epsilon_{0}^{-\alpha}\sum_{i\in{\mathcal{I}}_{4}}{\lVert{\frac{\xi}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}{\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\Omega)}.\end{split}$$ Using (4.19) to estimate ${\lVert{h}\rVert}_{\mathbf{L^{\infty}}(\Omega)}$, this means that for all $\epsilon\in(0,\epsilon_{0})$ we have $$A_{4,\xi,h}(\epsilon)\leq\frac{\delta}{4}\lambda\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+\left(\epsilon_{0}^{-\alpha}\left\lvert{\Omega}\right\rvert^{-1}\sup_{i\in{\mathcal{I}}_{3}}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\right){\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}.$$ (4.21) Summing the estimates of (4.17), (4.18), (4.20), and (4.21) we have shown that $$\left\lvert{{\mathscr{L}}_{\xi}h}\right\rvert_{\mathfrak{B}_{\alpha}}\leq(2+\delta)\lambda\left\lvert{h}\right\rvert_{\mathfrak{B}_{\alpha}}+C_{\delta}{\lVert{h}\rVert}_{\mathbf{L^{1}}(\Omega)}$$ for all $h\in\mathfrak{B}_{\alpha}$, where $$C_{\delta}:=4(2+\tfrac{\delta}{4})H_{\xi}{\lVert{\tfrac{1}{f^{\prime}}}\rVert}_{\mathbf{L^{\infty}}(\Omega)}^{\alpha}+\frac{1}{\epsilon_{0}^{\alpha}\left\lvert{\Omega}\right\rvert}\sup_{i\in{\mathcal{I}}_{3}}{\lVert{\xi}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}+\frac{\lambda\epsilon_{0}^{1-\alpha}\delta}{4\left\lvert{\Omega}\right\rvert}.$$ This completes the proof of Theorem 4.2. 5. Twisted Transfer Operators In this section we follow the standard “twisted transfer operator” approach to studying flows.
We will take steps to allow the transfer operator results of the previous section to be applied to the original problem of the meromorphic extension of the correlation function. Throughout this section we suppose that we are given a suspension semiflow $(\Omega,f,\tau)$ which satisfies the assumptions of the Main Theorem, in particular the assumptions (2.2), (2.3), and (2.4). First we show that a condition named exponential tails in [5] holds also in this setting. Lemma 5.1. $\int_{\Omega}e^{\sigma\tau(x)}\ dx<\infty$. Proof. We estimate $\int_{\Omega}e^{\sigma\tau(x)}\ dx\leq\sum_{i\in{\mathcal{I}}}\left\lvert{\omega_{i}}\right\rvert e^{\sigma{\lVert{\tau}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}}$. Since $\left\lvert{\omega_{i}}\right\rvert\leq{\lVert{1/f^{\prime}}\rVert}_{\mathbf{L^{\infty}}(\omega_{i})}\left\lvert{\Omega}\right\rvert$, the supposition (2.4) implies the lemma. ∎ For all $t\geq 0$ let $A_{t}:=\{(x,s)\in\Omega_{\tau}:s+t\geq\tau(x)\}$ and $B_{t}:=\Omega_{\tau}\setminus A_{t}$. Hence we may write $$\mu(u\cdot v\circ\Phi^{t})=\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{A_{t}})+\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{B_{t}}).$$ (5.1) Exponential decay for the second term is simple to establish. Lemma 5.2. There exists $C<\infty$ such that $\left\lvert{\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{B_{t}})}\right\rvert\leq C\left\lvert{u}\right\rvert_{\infty}\left\lvert{v}\right\rvert_{\infty}e^{-\sigma t}$ for all bounded $u,v:\Omega_{\tau}\to{\mathbb{C}}$ and all $t\geq 0$. Proof. Since $\mu$ is given by a formula in terms of the measure $\nu$, which is absolutely continuous with respect to Lebesgue, there exists $C<\infty$ such that, letting $D_{t}:=\{x\in\Omega:\tau(x)-t>0\}$, we have $$\left\lvert{\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{B_{t}})}\right\rvert\leq C\left\lvert{u}\right\rvert_{\infty}\left\lvert{v}\right\rvert_{\infty}\int_{\Omega}(\tau(x)-t)\cdot\mathbf{1}_{D_{t}}(x)\ dx$$ (5.2) for all $t\geq 0$.
For $y\in{\mathbb{R}}$ we define $k(y)$ equal to $y$ if $y\geq 0$ and equal to $0$ otherwise. This definition means that $(\tau(x)-t)\cdot\mathbf{1}_{D_{t}}(x)\leq k(\tau(x)-t)$. Since $\ln y\leq y$ for all $y>0$ it follows that $\ln(\sigma y)=\ln\sigma+\ln y\leq\sigma y$ and so $y\leq\sigma^{-1}e^{\sigma y}$ for all $y>0$. The case $y\leq 0$ is simple and so we have shown that $k(y)\leq\sigma^{-1}e^{\sigma y}$ for all $y\in{\mathbb{R}}$. This means that $$(\tau(x)-t)\cdot\mathbf{1}_{D_{t}}(x)\leq\sigma^{-1}e^{\sigma(\tau(x)-t)},\quad\text{ for all $x\in\Omega$}.$$ We conclude using the above with (5.2), since $\int e^{\sigma\tau(x)}\ dx<\infty$ by Lemma 5.1. ∎ In order to proceed we must estimate the other term in (5.1) and so it is convenient to define $$\rho(t):=\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{A_{t}}).$$ (5.3) Note that $\left\lvert{\mu(u\cdot v\circ\Phi^{t}\cdot\mathbf{1}_{A_{t}})}\right\rvert\leq\left\lvert{u}\right\rvert_{\infty}\left\lvert{v}\right\rvert_{\infty}$ for all $t\geq 0$. For all $z\in{\mathbb{C}}$ such that $\Re(z)>0$ we consider the Laplace transform of the above function, $$\hat{\rho}(z):=\int_{0}^{\infty}e^{-zt}\rho(t)\ dt.$$ (5.4) Additionally, for any $u:\Omega_{\tau}\to{\mathbb{C}}$ and $z\in{\mathbb{C}}$ let $$\hat{u}_{z}(x):=\int_{0}^{\infty}e^{-zs}u(x,s)\ ds$$ (5.5) for all $x\in\Omega$. Furthermore, for all $n\in{\mathbb{N}}$ let $\tau_{n}:=\sum_{k=0}^{n-1}\tau\circ f^{k}$. Since the invariant measure $\nu$ is absolutely continuous with respect to Lebesgue there exists a density $h_{0}\in\mathbf{L^{1}}(\Omega)$ such that $\mu(\eta)=\int_{\Omega}\int_{0}^{\tau(x)}\eta(x,s)\ ds\ h_{0}(x)\ dx$ for all bounded $\eta:\Omega_{\tau}\to{\mathbb{C}}$. As in [26, 27, 7, 5] we have the following representation of the Laplace transform in terms of an infinite sum. Lemma 5.3.
For all $z\in{\mathbb{C}}$ such that $\Re(z)>0$ and all $u,v$ with $\left\lvert{u}\right\rvert_{\infty}<\infty$, $\left\lvert{v}\right\rvert_{\infty}<\infty$, $$\hat{\rho}(z)=\sum_{n=1}^{\infty}\int_{\Omega}(h_{0}\cdot\hat{u}_{-z}\cdot e^{-z\tau_{n}}\cdot\hat{v}_{z}\circ f^{n})(x)\ dx.$$ Proof. Recall that $h_{0}\in\mathbf{L^{1}}(\Omega)$ is the density of the $f$-invariant measure $\nu$. For all $\Re(z)>0$ $$\begin{split}\displaystyle\hat{\rho}(z)&\displaystyle=\int_{0}^{\infty}\int_{\Omega}\int_{0}^{\tau(x)}e^{-zt}u(x,s)v\circ\Phi^{t}(x,s)\mathbf{1}_{A_{t}}(x,s)h_{0}(x)\ ds\ dx\ dt\\ &\displaystyle=\sum_{n=1}^{\infty}\int_{\Omega}\int_{0}^{\tau(x)}\int_{\tau_{n}(x)-s}^{\tau_{n+1}(x)-s}e^{-zt}u(x,s)v\circ\Phi^{t}(x,s)h_{0}(x)\ dt\ ds\ dx.\end{split}$$ We change variables letting $t^{\prime}=t-\tau_{n}(x)+s$ and note that when $t\in[\tau_{n}(x)-s,\tau_{n+1}(x)-s]$ then $\Phi^{t}(x,s)=(f^{n}x,t-\tau_{n}(x)+s)$. This means that $$\displaystyle\hat{\rho}(z)=\sum_{n=1}^{\infty}\int_{\Omega}e^{-z\tau_{n}(x)}\left(\int_{0}^{\tau(x)}e^{zs}u(x,s)\ ds\right)\\ \displaystyle\times\left(\int_{0}^{\tau(f^{n}x)}e^{-zt^{\prime}}v(f^{n}x,t^{\prime})\ dt^{\prime}\right)h_{0}(x)\ dx.$$ Recalling the definition (5.5) of $\hat{u}_{-z}$ and $\hat{v}_{z}$ we conclude. ∎ We now relate the sum given by Lemma 5.3 to the twisted transfer operators. For all $z\in{\mathbb{C}}$ such that $\Re(z)\in[-\sigma,0]$ let $\xi_{z}:\Omega\to{\mathbb{C}}$ be defined as $$\xi_{z}:={e^{-z\tau}}.$$ (5.6) We consider the map $f:\Omega\to\Omega$ with the weighting $\xi_{z}$. It is immediate that the assumptions imposed on the semiflow imply that the pair $f$ and $\xi_{z}$ satisfy the assumptions of Theorem 4.1.
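The duality that connects Lemma 5.3 to the twisted transfer operator can be checked numerically on a toy model (the doubling map with the bounded roof $\tau(y)=1+y$ and arbitrary test functions — unlike the Lorenz case, where $\tau$ is unbounded): with $\xi_{z}=e^{-z\tau}$ one has $\int({\mathscr{L}}_{z}h_{1})\cdot h_{2}\ dx=\int h_{1}\cdot e^{-z\tau}\cdot h_{2}\circ f\ dx$.

```python
import numpy as np

# Toy illustration (not the Lorenz case) of the twisted weighting xi_z = e^{-z*tau}
# and the duality between L_z and composition with f, for the doubling map.
tau = lambda y: 1.0 + y
z = -0.05 + 0.3j                      # Re(z) in [-sigma, 0]
f = lambda y: (2 * y) % 1
h1 = lambda y: np.sin(2 * np.pi * y)
h2 = lambda x: np.exp(-x)

def L_z(h, x):
    y1, y2 = x / 2, (x + 1) / 2       # the two inverse branches, f' = 2
    return (np.exp(-z * tau(y1)) * h(y1) + np.exp(-z * tau(y2)) * h(y2)) / 2

N = 200_000
y = (np.arange(N) + 0.5) / N          # midpoint quadrature grid
lhs = np.mean(L_z(h1, y) * h2(y))
rhs = np.mean(h1(y) * np.exp(-z * tau(y)) * h2(f(y)))
print(abs(lhs - rhs))                 # small (quadrature error only)
```

Iterating this duality $n$ times produces exactly the factor $e^{-z\tau_{n}}$ appearing in Lemma 5.3.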
Consequently the transfer operator ${\mathscr{L}}_{z}:\mathfrak{B}_{\alpha}\to\mathfrak{B}_{\alpha}$ (for convenience we now write ${\mathscr{L}}_{z}$ for ${\mathscr{L}}_{\xi_{z}}$), which is given by the formula $${\mathscr{L}}_{z}h(x):=\sum_{i\in{\mathcal{I}}}\left(\frac{e^{-z\tau}\cdot h}{f^{\prime}}\right)\circ f_{i}^{-1}(x)\cdot\mathbf{1}_{f\omega_{i}}(x),$$ has essential spectral radius strictly less than $1$. Let ${\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ denote the space of bounded linear operators mapping $\mathfrak{B}_{\alpha}$ to $\mathfrak{B}_{\alpha}$. Lemma 5.4. The operator valued function $z\mapsto(\mathbf{id}-{\mathscr{L}}_{z})^{-1}\in{\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ is meromorphic on the set $\{z\in{\mathbb{C}}:\Re(z)\in[-\sigma,0]\}$. Proof. We know that ${\mathscr{L}}_{z}\in{\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ has essential spectral radius less than $1$ for all $\Re(z)\in[-\sigma,0]$ and so is of the form ${\mathscr{L}}_{z}={\mathscr{K}}_{z}+{\mathscr{A}}_{z}$ where ${\mathscr{K}}_{z}$ is compact, the spectral radius of ${\mathscr{A}}_{z}$ is strictly less than $1$ and ${\mathscr{K}}_{z}{\mathscr{A}}_{z}=0$. Furthermore both $z\mapsto{\mathscr{K}}_{z}\in{\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ and $z\mapsto{\mathscr{A}}_{z}\in{\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ are holomorphic operator-valued functions. Note that $$(\mathbf{id}-{\mathscr{L}}_{z})=(\mathbf{id}-{\mathscr{K}}_{z})(\mathbf{id}-{\mathscr{A}}_{z})$$ and that $(\mathbf{id}-{\mathscr{A}}_{z})$ is invertible. By the analytic Fredholm theorem $z\mapsto(\mathbf{id}-{\mathscr{K}}_{z})^{-1}$ is meromorphic on the set $\{z\in{\mathbb{C}}:\Re(z)\in[-\sigma,0]\}$, and consequently so is $z\mapsto(\mathbf{id}-{\mathscr{L}}_{z})^{-1}=(\mathbf{id}-{\mathscr{A}}_{z})^{-1}(\mathbf{id}-{\mathscr{K}}_{z})^{-1}$. ∎ Lemma 5.5.
The operator valued function $z\mapsto\sum_{n=1}^{\infty}{\mathscr{L}}_{z}^{n}\in{\mathscr{B}}(\mathfrak{B}_{\alpha},\mathfrak{B}_{\alpha})$ is meromorphic on the set $\{z\in{\mathbb{C}}:\Re(z)\in[-\sigma,0]\}$. Proof. We note that $\sum_{n=1}^{\infty}{\mathscr{L}}_{z}^{n}=(\mathbf{id}-{\mathscr{L}}_{z})^{-1}{\mathscr{L}}_{z}$ and apply Lemma 5.4. ∎ Proof of the Main Theorem. By Lemma 5.2 it suffices to show that $\hat{\rho}$ admits the relevant meromorphic extension. Since, as usual for transfer operators, we have that $$\int_{\Omega}{\mathscr{L}}_{z}^{n}h_{1}(x)\cdot h_{2}(x)\ dx=\int_{\Omega}h_{1}(x)\cdot e^{-z\tau_{n}(x)}\cdot h_{2}\circ f^{n}(x)\ dx,$$ the formula for $\hat{\rho}(z)$ given by Lemma 5.3 means that $$\hat{\rho}(z)=\sum_{n=1}^{\infty}\int_{\Omega}{\mathscr{L}}_{z}^{n}(h_{0}\hat{u}_{-z})(x)\cdot\hat{v}_{z}(x)\ dx.$$ This equality was shown to hold for all $\Re(z)>0$. But since the right hand side is meromorphic on the set $\{z\in{\mathbb{C}}:\Re(z)\in[-\sigma,0]\}$ we have shown that the left hand side admits such an extension. ∎ References [1] V. S. Afraĭmovič, V. V. Bykov, and L. P. Silnikov. The origin and structure of the Lorenz attractor. Dokl. Akad. Nauk SSSR, 234(2):336–339, 1977. [2] V. Araújo, S. Galatolo, and M. J. Pacifico. Decay of correlations for maps with uniformly contracting fibers and logarithm law for singular hyperbolic attractors. Preprint, arXiv:1204.0703. [3] V. Araújo, M. Pacifico, E. Pujals, and M. Viana. Singular-hyperbolic attractors are chaotic. Trans. Amer. Math. Soc., 361(5):2431–2485, 2009. [4] V. Araújo and P. Varandas. Robust exponential decay of correlations for singular-flows. Commun. Math. Phys., 311:215–246, 2012. [5] A. Avila, S. Gouëzel, and J. Yoccoz. Exponential mixing for the Teichmüller flow. Publications Mathématiques de L’IHÉS, 104(1):143–211, 2005. [6] V. Baladi and C. Liverani. Exponential decay of correlations for piecewise cone hyperbolic contact flows. Commun. Math. Phys., 314:689–773, 2012.
[7] V. Baladi and B. Vallée. Exponential decay of correlations for surface semi-flows without finite Markov partitions. Proc. Amer. Math. Soc., 133(3):865–874, 2005. [8] O. Butterley. An alternative approach to generalised BV and the application to expanding interval maps. To appear in Discrete Contin. Dyn. Syst. [9] O. Butterley and C. Liverani. Smooth Anosov flows: Correlation spectra and stability. Journal of Modern Dynamics, 1(2):301–322, 2007. [10] N. Chernov and R. Markarian. Chaotic billiards, volume 127 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2006. [11] D. Dolgopyat. On decay of correlations in Anosov flows. Ann. of Math., 147:357–390, 1998. [12] S. Galatolo and M. J. Pacifico. Lorenz-like flows: exponential decay of correlations for the Poincaré map, logarithm law, quantitative recurrence. Ergodic Theory Dynam. Systems, 30(6):1703–1737, 2010. [13] P. Giulietti, C. Liverani, and M. Pollicott. Anosov flows and dynamical zeta functions. To appear in Ann. of Math. [14] J. Guckenheimer and R. F. Williams. Structural stability of Lorenz attractors. Inst. Hautes Études Sci. Publ. Math., (50):59–72, 1979. [15] S. Hayashi. Connecting invariant manifolds and the solution of the $C^{1}$ stability and $\Omega$-stability conjectures for flows. Ann. of Math. (2), 145(1):81–137, 1997. [16] S. Hayashi. Correction to: “Connecting invariant manifolds and the solution of the $C^{1}$ stability and $\Omega$-stability conjectures for flows” [Ann. of Math. (2) 145 (1997), no. 1, 81–137; MR1432037 (98b:58096)]. Ann. of Math. (2), 150(1):353–356, 1999. [17] M. Holland and I. Melbourne. Central limit theorems and invariance principles for Lorenz attractors. Journal of the London Mathematical Society, 76(2):345, 2007. [18] G. Keller. Generalized bounded variation and applications to piecewise monotonic transformations. Probability Theory and Related Fields, 69(3):461–478, 1985. [19] C. Liverani. On contact Anosov flows. Ann.
of Math., 159:1275–1312, 2004. [20] C. Liverani. A footnote on expanding maps. preprint (arXiv 1207.3982), 2012. [21] E. D. Lorenz. Deterministic non-periodic flow. J. Atmosph. Sci., 20:130–141, 1963. [22] S. Luzzatto, I. Melbourne, and F. Paccaut. The Lorenz attractor is mixing. Comm. Math. Phys., 260(2):393–401, 2005. [23] C. Morales, M. Pacifico, and E. Pujals. Singular hyperbolic systems. Proc. Amer. Math. Soc, 127:3393–3402, 1999. [24] C. A. Morales, M. J. Pacífico, and E. R. Pujals. On $C^{1}$ robust singular transitive sets for three-dimensional flows. C. R. Acad. Sci. Paris Sér. I Math., 326(1):81–86, 1998. [25] C. A. Morales, M. J. Pacifico, and E. R. Pujals. Robust transitive singular sets for 3-flows are partially hyperbolic attractors or repellers. Ann. of Math. (2), 160(2):375–432, 2004. [26] M. Pollicott. On the rate of mixing of Axiom A flows. Invent. Math., 81:413–426, 1985. [27] M. Pollicott. On the mixing of Axiom A attracting flows and a conjecture of Ruelle. Ergod. Th. and Dynam. Sys., 19:535–548, 1999. [28] B. Saussol. Absolutely continuous invariant measures for multidimensional expanding maps. Israel J. Math., 116:223–248, 2000. [29] D. Thomine. A spectral gap for transfer operators of piecewise expanding maps. Discrete Contin. Dyn. Syst., 30(3):917–944, 2011. [30] M. Tsujii. Quasi-compactness of transfer operators for contact Anosov flows. Nonlinearity, 23(7):1495–1545, 2010. [31] M. Tsujii. Contact Anosov flows and the Fourier–Bros–Iagolnitzer transform. Ergodic Theory and Dynamical Systems, 2011. [32] W. Tucker. A rigorous ODE solver and Smale’s 14th problem. Found. Comput. Math., 2:53–117, 2002. [33] R. F. Williams. The structure of Lorenz attractors. Inst. Hautes Études Sci. Publ. Math., (50):73–99, 1979.
Covering Uncertain Points in a Tree

Haitao Wang and Jingru Zhang
Department of Computer Science, Utah State University, Logan, UT 84322, USA
haitao.wang@usu.edu, jingruzhang@aggiemail.usu.edu

A preliminary version of this paper will appear in the Proceedings of the 15th Algorithms and Data Structures Symposium (WADS 2017). This research was supported in part by NSF under Grant CCF-1317143.

Abstract In this paper, we consider a coverage problem for uncertain points in a tree. Let $T$ be a tree containing a set $\mathcal{P}$ of $n$ (weighted) demand points, where the location of each demand point $P_{i}\in\mathcal{P}$ is uncertain but is known to be at one of $m_{i}$ points on $T$, each associated with a probability. Given a covering range $\lambda$, the problem is to find a minimum number of points (called centers) on $T$ to build facilities for serving (or covering) these demand points, in the sense that for each uncertain point $P_{i}\in\mathcal{P}$, the expected distance from $P_{i}$ to at least one center is no more than $\lambda$. The problem has not been studied before. We present an $O(|T|+M\log^{2}M)$ time algorithm for the problem, where $|T|$ is the number of vertices of $T$ and $M$ is the total number of locations of all uncertain points of $\mathcal{P}$, i.e., $M=\sum_{P_{i}\in\mathcal{P}}m_{i}$. In addition, by using this algorithm, we solve a $k$-center problem on $T$ for the uncertain points of $\mathcal{P}$. 1 Introduction Data uncertainty is very common in many applications, such as sensor databases, image resolution, and facility location services, and it mainly arises from measurement inaccuracy, sampling discrepancy, outdated data sources, resource limitations, etc. Problems on uncertain data have attracted considerable attention, e.g., [1, 2, 3, 13, 14, 17, 25, 26, 35, 36, 37]. 
In this paper, we study a problem of covering uncertain points on a tree. The problem is formally defined as follows. Let $T$ be a tree. We consider each edge $e$ of $T$ as a line segment of a positive length so that we can talk about “points” on $e$. Formally, we specify a point $x$ of $T$ by an edge $e$ of $T$ that contains $x$ and the distance between $x$ and an incident vertex of $e$. The distance of any two points $p$ and $q$ on $T$, denoted by $d(p,q)$, is defined as the sum of the lengths of all edges on the simple path from $p$ to $q$ in $T$. Let $\mathcal{P}=\{P_{1},\ldots,P_{n}\}$ be a set of $n$ uncertain (demand) points on $T$. Each $P_{i}\in\mathcal{P}$ has $m_{i}$ possible locations on $T$, denoted by $\{p_{i1},p_{i2},\cdots,p_{im_{i}}\}$, and each location $p_{ij}$ of $P_{i}$ is associated with a probability $f_{ij}\geq 0$ for $P_{i}$ appearing at $p_{ij}$ (which is independent of the other locations), with $\sum_{j=1}^{m_{i}}f_{ij}=1$; e.g., see Fig. 1. In addition, each $P_{i}\in\mathcal{P}$ has a weight $w_{i}\geq 0$. For any point $x$ on $T$, the (weighted) expected distance from $x$ to $P_{i}$, denoted by $\mathsf{E}\mathrm{d}(x,P_{i})$, is defined as $$\mathsf{E}\mathrm{d}(x,P_{i})=w_{i}\cdot\sum_{j=1}^{m_{i}}f_{ij}\cdot d(x,p_{ij}).$$ Given a value $\lambda\geq 0$, called the covering range, we say that a point $x$ on $T$ covers an uncertain point $P_{i}$ if $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$. The center-coverage problem is to compute a minimum number of points on $T$, called centers, such that every uncertain point of $\mathcal{P}$ is covered by at least one center (hence we can build facilities on these centers to “serve” all demand points). To the best of our knowledge, the problem has not been studied before. Let $M$ denote the total number of locations of all uncertain points, i.e., $M=\sum_{i=1}^{n}m_{i}$. Let $|T|$ be the number of vertices of $T$. 
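To make the definitions concrete, the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$ can be evaluated directly from its formula. Below is a minimal brute-force sketch, not the efficient data structure developed later in the paper; the example tree, edge lengths, weight, and probabilities are hypothetical.

```python
from collections import defaultdict

def tree_distances(adj, src):
    """Distances from src to every vertex; a plain DFS suffices since
    paths in a tree are unique."""
    dist = {src: 0.0}
    stack = [src]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + w
                stack.append(v)
    return dist

def expected_distance(adj, x, weight, locations):
    """Ed(x, P_i) = w_i * sum_j f_ij * d(x, p_ij)."""
    dist = tree_distances(adj, x)
    return weight * sum(f * dist[p] for p, f in locations)

# Hypothetical example: the path 0 - 1 - 2 with unit-length edges.
adj = defaultdict(list)
for u, v, w in [(0, 1, 1.0), (1, 2, 1.0)]:
    adj[u].append((v, w)); adj[v].append((u, w))

# P_i appears at vertex 0 or at vertex 2, each with probability 0.5.
P = [(0, 0.5), (2, 0.5)]
print(expected_distance(adj, 1, 1.0, P))  # 1.0: vertex 1 is at distance 1 from both
```

With covering range $\lambda=1$, a single center at vertex 1 would cover this $P_{i}$, since $\mathsf{E}\mathrm{d}(1,P_{i})=1\leq\lambda$.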
In this paper, we present an algorithm that solves the problem in $O(|T|+M\log^{2}M)$ time, which is nearly linear since the input size of the problem is $\Theta(|T|+M)$. As an application of our algorithm, we also solve a dual problem, called the $k$-center problem, which is to compute $k$ centers on $T$ such that the covering range is minimized. Our algorithm solves the $k$-center problem in $O(|T|+n^{2}\log n\log M+M\log^{2}M\log n)$ time. 1.1 Related Work Two models on uncertain data have been commonly considered: the existential model [3, 25, 26, 35, 36, 41] and the locational model [1, 2, 14, 37]. In the existential model, an uncertain point has a specific location but its existence is uncertain, while in the locational model an uncertain point always exists but its location is uncertain and follows a probability distribution function. Our problems belong to the locational model. In fact, the same problems under the existential model are essentially the weighted case for “deterministic” points (i.e., each $P_{i}\in\mathcal{P}$ has a single “certain” location), for which the center-coverage problem is solvable in linear time [28] and the $k$-center problem is solvable in $O(n\log^{2}n)$ time [15, 32]. If $T$ is a path, both the center-coverage problem and the $k$-center problem on uncertain points have been studied [39], but under a somewhat special problem setting where $m_{i}$ is the same for all $1\leq i\leq n$. The two problems were solved in $O(M+n\log k)$ and $O(M\log M+n\log k\log n)$ time, respectively. If $T$ is a tree, an $O(|T|+M)$ time algorithm was given in [40] for the one-center problem under the above special problem setting. As mentioned above, the “deterministic” version of the center-coverage problem is solvable in linear time [28], where all demand points are on the vertices. 
For the $k$-center problem, Megiddo and Tamir [32] presented an $O(n\log^{2}n\log\log n)$ time algorithm ($n$ is the size of the tree), which was improved to $O(n\log^{2}n)$ time by Cole [15]. The unweighted case was solved in linear time by Frederickson [18]. Very recently, Li and Huang [23] considered the same $k$-center problem under the same uncertain model as ours but in the Euclidean space, and they gave an approximation algorithm. Facility location problems in other uncertain models have also been considered. For example, Löffler and van Kreveld [30] gave algorithms for computing the smallest enclosing circle for imprecise points each of which is contained in a planar region (e.g., a circle or a square). Jørgenson et al. [24] studied the problem of computing the distribution of the radius of the smallest enclosing circle for uncertain points each of which has multiple locations in the plane. de Berg et al. [16] proposed algorithms for dynamically maintaining Euclidean $2$-centers for a set of moving points in the plane (the moving points are considered uncertain). See also the problems for minimizing the maximum regret, e.g., [5, 6, 38]. Some coverage problems in various geometric settings have also been studied. For example, the unit disk coverage problem is to compute a minimum number of unit disks to cover a given set of points in the plane. The problem is NP-hard and a polynomial-time approximation scheme was known [22]. The discrete case where the disks must be selected from a given set was also studied [34]. See [9, 11, 20, 29] and the references therein for various problems of covering points using squares. Refer to a survey [4] for more geometric coverage problems. 1.2 Our Techniques We first discuss our techniques for solving the center-coverage problem. 
For each uncertain point $P_{i}\in\mathcal{P}$, we find a point $p^{*}_{i}$ on $T$ that minimizes the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$ over all points $x$ on $T$; $p^{*}_{i}$ is actually the weighted median of all locations of $P_{i}$. We observe that if we move a point $x$ on $T$ away from $p_{i}^{*}$, the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$ is monotonically increasing. We compute the medians $p_{i}^{*}$ for all uncertain points in $O(M\log M)$ time. Then we show that there exists an optimal solution in which all centers are in $T_{m}$, where $T_{m}$ is the minimum subtree of $T$ that connects all medians $p_{i}^{*}$ (so every leaf of $T_{m}$ is a median $p_{i}^{*}$). Next we find centers on $T_{m}$. To this end, we propose a simple greedy algorithm, but the challenge lies in developing efficient data structures to perform certain operations. We briefly discuss it below. We pick an arbitrary vertex $r$ of $T_{m}$ as the root. Starting from the leaves, we consider the vertices of $T_{m}$ in a bottom-up manner and place centers whenever we “have to”. For example, consider a leaf $v$ holding a median $p_{i}^{*}$ and let $u$ be the parent of $v$. If $\mathsf{E}\mathrm{d}(u,P_{i})>\lambda$, then we have to place a center $c$ on the edge $e(u,v)$ in order to cover $P_{i}$. The location of $c$ is chosen to be at a point of $e(u,v)$ with $\mathsf{E}\mathrm{d}(c,P_{i})=\lambda$ (i.e., on the one hand, $c$ covers $P_{i}$, and on the other hand, $c$ is as close to $u$ as possible in the hope of covering as many other uncertain points as possible). After $c$ is placed, we find and remove all uncertain points that are covered by $c$. Performing this operation efficiently is a key difficulty for our approach. We solve the problem in an output-sensitive manner by proposing a dynamic data structure that also supports the remove operations. We also develop data structures for other operations needed in the algorithm. 
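The greedy rule sketched above, place a center only when forced and then as close to the root as possible, is easiest to see in the one-dimensional special case where the tree is a path and every demand point is deterministic (a single location with probability 1 and unit weight, so that the expected distance reduces to ordinary distance). The following is a minimal sketch of that simplified setting only, not the paper's tree algorithm; the positions and covering range are made up.

```python
def greedy_cover_on_path(positions, lam):
    """Cover points on a line with a minimum number of centers, where a
    center at c covers every position p with |p - c| <= lam.
    Greedy rule: sweep left to right and, whenever a point is not yet
    covered, place a center as far right as possible while still
    covering that point (the 1-D analogue of 'as close to the root
    as possible')."""
    centers = []
    for p in sorted(positions):
        if not centers or p - centers[-1] > lam:
            centers.append(p + lam)  # farthest placement that still covers p
    return centers

print(greedy_cover_on_path([0.0, 1.0, 2.5, 7.0], 1.0))  # [1.0, 3.5, 8.0]
```

The exchange argument for this 1-D greedy (any optimal solution can be modified so its leftmost center is at least as far right) is the same intuition behind placing the candidate center $c$ as close to $u$ as possible on the edge $e(u,v)$.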
For example, we build a data structure in $O(M\log M)$ time that can compute the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$ in $O(\log M)$ time for any point $x$ on $T$ and any $P_{i}\in\mathcal{P}$. These data structures may be of independent interest. We should point out that our algorithm is essentially different from the one in our previous work [40]. Indeed, our algorithm here is a greedy algorithm while the one in [40] uses the prune-and-search technique. Also, our algorithm relies heavily on the data structures mentioned above, while the algorithm in [40] does not need any of these data structures. For solving the $k$-center problem, based on several observations, we first identify a set of $O(n^{2})$ “candidate” values such that the covering range in the optimal solution must be in the set. Subsequently, we use our algorithm for the center-coverage problem as a decision procedure to find the optimal covering range in the set. Note that although we have assumed $\sum_{j=1}^{m_{i}}f_{ij}=1$ for each $P_{i}\in\mathcal{P}$, it is quite straightforward to adapt our algorithm to the general case where this assumption does not hold. The rest of the paper is organized as follows. We introduce some notation in Section 2. In Section 3, we describe our algorithmic scheme for the center-coverage problem, deferring the implementation details to the subsequent two sections. Specifically, the algorithm for computing all medians $p_{i}^{*}$ is given in Section 4, and in the same section we also propose a connector-bounded centroid decomposition of $T$, which is repeatedly used in the paper and may be interesting in its own right. The data structures used in our algorithmic scheme are given in Section 5. We finally solve the $k$-center problem in Section 6. 2 Preliminaries Note that the locations of the uncertain points of $\mathcal{P}$ may be in the interior of the edges of $T$. 
We call the problem vertex-constrained if all locations of $\mathcal{P}$ are at vertices of $T$ and each vertex of $T$ holds at least one location of $\mathcal{P}$ (but the centers we seek can still be in the interior of edges). As in [40], we will show later in Section 5.4 that the general problem can be reduced to the vertex-constrained case in $O(|T|+M)$ time. In the following, unless otherwise stated, we focus our discussion on the vertex-constrained case and assume that our problem on $\mathcal{P}$ and $T$ is vertex-constrained. For ease of exposition, we further make a general position assumption that every vertex of $T$ has only one location of $\mathcal{P}$ (we explain in Section 5.4 that our algorithm easily extends to the degenerate case). Under this assumption, it holds that $|T|=M\geq n$. Let $e(u,v)$ denote the edge of $T$ incident to two vertices $u$ and $v$. For any two points $p$ and $q$ on $T$, denote by $\pi(p,q)$ the simple path from $p$ to $q$ on $T$. Let $\pi$ be any simple path on $T$ and $x$ be any point on $\pi$. For any location $p_{ij}$ of an uncertain point $P_{i}$, the distance $d(x,p_{ij})$ is a convex (and piecewise linear) function as $x$ changes on $\pi$ [31]. As a sum of multiple convex functions, $\mathsf{E}\mathrm{d}(x,P_{i})$ is also convex (and piecewise linear) on $\pi$; that is, in general, as $x$ moves on $\pi$, $\mathsf{E}\mathrm{d}(x,P_{i})$ first monotonically decreases and then monotonically increases. In particular, for each edge $e$ of $T$, $\mathsf{E}\mathrm{d}(x,P_{i})$ is a linear function for $x\in e$. For any subtree $T^{\prime}$ of $T$ and any $P_{i}\in\mathcal{P}$, we call the sum of the probabilities of the locations of $P_{i}$ in $T^{\prime}$ the probability sum of $P_{i}$ in $T^{\prime}$. For each uncertain point $P_{i}$, let $p_{i}^{*}$ be a point $x\in T$ that minimizes $\mathsf{E}\mathrm{d}(x,P_{i})$. 
If we consider $w_{i}\cdot f_{ij}$ as the weight of $p_{ij}$, $p_{i}^{*}$ is actually the weighted median of all points $p_{ij}\in P_{i}$. We call $p_{i}^{*}$ the median of $P_{i}$. Although $p_{i}^{*}$ may not be unique (e.g., when there is an edge $e$ dividing $T$ into two subtrees such that the probability sum of $P_{i}$ in either subtree is exactly $0.5$), $P_{i}$ always has a median located at a vertex $v$ of $T$, and we let $p_{i}^{*}$ refer to such a vertex. Recall that $\lambda$ is the given covering range for the center-coverage problem. If $\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})>\lambda$ for some $i\in[1,n]$, then there is no solution for the problem since no point of $T$ can cover $P_{i}$. Henceforth, we assume $\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})\leq\lambda$ for each $i\in[1,n]$. 3 The Algorithmic Scheme In this section, we describe our algorithmic scheme for the center-coverage problem, and the implementation details will be presented in the subsequent two sections. We start with computing the medians $p_{i}^{*}$ of all uncertain points of $\mathcal{P}$. We have the following lemma, whose proof is deferred to Section 4.2. Lemma 1 The medians $p_{i}^{*}$ of all uncertain points $P_{i}$ of $\mathcal{P}$ can be computed in $O(M\log M)$ time. 3.1 The Medians-Spanning Tree $T_{m}$ Denote by $P^{*}$ the set of all medians $p_{i}^{*}$. Let $T_{m}$ be the minimum connected subtree of $T$ that spans/connects all medians. Note that each leaf of $T_{m}$ must hold a median. We pick an arbitrary median as the root of $T$, denoted by $r$. The subtree $T_{m}$ can be easily computed in $O(M)$ time by a post-order traversal on $T$ (with respect to the root $r$), and we omit the details. The following lemma is based on the fact that $\mathsf{E}\mathrm{d}(x,P_{i})$ is convex for $x$ on any simple path of $T$ and $\mathsf{E}\mathrm{d}(x,P_{i})$ minimizes at $x=p_{i}^{*}$. 
Lemma 2 There exists an optimal solution for the center-coverage problem in which every center is on $T_{m}$. Proof Consider an optimal solution and let $C$ be the set of all centers in it. Assume there is a center $c\in C$ that is not on $T_{m}$. Let $v$ be the vertex of $T_{m}$ that holds a median and is closest to $c$. Then $v$ decomposes $T$ into two subtrees $T_{1}$ and $T_{2}$ with the only common vertex $v$ such that $c$ is in one subtree, say $T_{1}$, and all medians are in $T_{2}$. If we move a point $x$ from $c$ to $v$ along $\pi(c,v)$, then $\mathsf{E}\mathrm{d}(x,P_{i})$ is non-increasing for each $i\in[1,n]$. This implies that if we move the center $c$ to $v$, we can obtain an optimal solution in which $c$ is in $T_{m}$. If $C$ has other centers that are not on $T_{m}$, we do the same as above to obtain an optimal solution in which all centers are on $T_{m}$. The lemma thus follows. ∎ Due to Lemma 2, we will focus on finding centers on $T_{m}$. We also consider $r$ as the root of $T_{m}$. With respect to $r$, we can talk about ancestors and descendants of the vertices in $T_{m}$. Note that for any two vertices $u$ and $v$ of $T_{m}$, $\pi(u,v)$ is in $T_{m}$. We reindex all medians and the corresponding uncertain points so that the new indices will facilitate our algorithm, as follows. Starting from an arbitrary child of $r$ in $T_{m}$, we traverse down the tree $T_{m}$ by always following the leftmost child of the current node until we encounter a leaf, denoted by $v^{*}$. Starting from $v^{*}$ (i.e., $v^{*}$ is the first visited leaf), we perform a post-order traversal on $T_{m}$ and reindex all medians of $P^{*}$ such that $p_{1}^{*},p_{2}^{*},\ldots,p^{*}_{n}$ is the list of points of $P^{*}$ visited in order in the above traversal. Recall that the root $r$ contains a median, which is $p^{*}_{n}$ after the reindexing. 
Accordingly, we also reindex all uncertain points of $\mathcal{P}$ and their corresponding locations on $T$, which can be done in $O(M)$ time. In the rest of the paper, we will always use the new indices. For each vertex $v$ of $T_{m}$, we use $T_{m}(v)$ to represent the subtree of $T_{m}$ rooted at $v$. The reason we do the above reindexing is that for any vertex $v$ of $T_{m}$, the new indices of all medians in $T_{m}(v)$ form a contiguous range $[i,j]$ for some $1\leq i\leq j\leq n$, and we use $R(v)$ to denote this range. It will be clear later that this property facilitates our algorithm. 3.2 The Algorithm Our algorithm for the center-coverage problem works as follows. Initially, all uncertain points are “active”. During the algorithm, we will place centers on $T_{m}$, and once an uncertain point $P_{i}$ is covered by a center, we will “deactivate” it (it then becomes “inactive”). The algorithm visits all vertices of $T_{m}$ following the above post-order traversal of $T_{m}$ starting from leaf $v^{*}$. Suppose $v$ is currently being visited. Unless $v$ is the root $r$, let $u$ be the parent of $v$. Below we describe our algorithm for processing $v$. There are two cases depending on whether $v$ is a leaf or an internal node, although the algorithm for them is essentially the same. 3.2.1 The Leaf Case If $v$ is a leaf, then it holds a median $p_{i}^{*}$. If $P_{i}$ is inactive, we do nothing; otherwise, we proceed as follows. We compute a point $c$ (called a candidate center) on the path $\pi(v,r)$ closest to $r$ such that $\mathsf{E}\mathrm{d}(c,P_{i})\leq\lambda$. Note that if we move a point $x$ from $v$ to $r$ along $\pi(v,r)$, $\mathsf{E}\mathrm{d}(x,P_{i})$ is monotonically increasing. By the definition of $c$, if $\mathsf{E}\mathrm{d}(r,P_{i})\leq\lambda$, then $c=r$; otherwise, $\mathsf{E}\mathrm{d}(c,P_{i})=\lambda$. If $c$ is in $\pi(u,r)$, then we do nothing and finish processing $v$. 
Below we assume that $c$ is not in $\pi(u,r)$ and thus is in $e(u,v)\setminus\{u\}$ (i.e., $c\in e(u,v)$ but $c\neq u$). In order to cover $P_{i}$, by the definition of $c$, we must place a center in $e(u,v)\setminus\{u\}$. Our strategy is to place a center at $c$. Indeed, this is the best location for placing a center since it is the location that can cover $P_{i}$ and is closest to $u$ (and thus is closest to every other active uncertain point). We use a candidate-center-query to compute $c$ in $O(\log n)$ time, whose details will be discussed later. Next, we report all active uncertain points that can be covered by $c$, and this is done by a coverage-report-query in output-sensitive $O(\log M\log n+k\log n)$ amortized time, where $k$ is the number of uncertain points covered by $c$. The details for the operation will be discussed later. Further, we deactivate all these uncertain points. We will show that deactivating each uncertain point $P_{j}$ can be done in $O(m_{j}\log M\log n)$ amortized time. This finishes processing $v$. 3.2.2 The Internal Node Case If $v$ is an internal node, since we process the vertices of $T_{m}$ following a post-order traversal, all descendants of $v$ have already been processed. Our algorithm maintains an invariant that if the subtree $T_{m}(v)$ contains any active median $p_{i}^{*}$ (i.e., $P_{i}$ is active), then $\mathsf{E}\mathrm{d}(v,P_{i})\leq\lambda$. When $v$ is a leaf, this invariant trivially holds. Our way of processing a leaf discussed above also maintains this invariant. To process $v$, we first check whether $T_{m}(v)$ has any active medians. This is done by a range-status-query in $O(\log n)$ time, whose details will be given later. If $T_{m}(v)$ does not have any active median, then we are done with processing $v$. Otherwise, by the algorithm invariant, for each active median $p_{i}^{*}$ in $T_{m}(v)$, it holds that $\mathsf{E}\mathrm{d}(v,P_{i})\leq\lambda$. 
If $v=r$, we place a center at $v$ and finish the entire algorithm. Below, we assume $v$ is not $r$ and thus $u$ is the parent of $v$. We compute a point $c$ on $\pi(v,r)$ closest to $r$ such that $\mathsf{E}\mathrm{d}(c,P_{i})\leq\lambda$ for all active medians $p_{i}^{*}\in T_{m}(v)$, and we call $c$ the candidate center. By the definition of $c$, if $\mathsf{E}\mathrm{d}(r,P_{i})\leq\lambda$ for all active medians $p_{i}^{*}\in T_{m}(v)$, then $c=r$; otherwise, $\mathsf{E}\mathrm{d}(c,P_{i})=\lambda$ for some active median $p_{i}^{*}\in T_{m}(v)$. As in the leaf case, finding $c$ is done in $O(\log n)$ time by a candidate-center-query. If $c$ is on $\pi(u,r)$, then we finish processing $v$. Note that this implies $\mathsf{E}\mathrm{d}(u,P_{i})\leq\lambda$ for each active median $p_{i}^{*}\in T_{m}(v)$, which maintains the algorithm invariant for $u$. If $c\not\in\pi(u,r)$, then $c\in e(u,v)\setminus\{u\}$. In this case, by the definition of $c$, we must place a center in $e(u,v)\setminus\{u\}$ to cover $P_{i}$. As discussed in the leaf case, the best location for placing a center is $c$ and thus we place a center at $c$. Then, by using a coverage-report-query, we find all active uncertain points covered by $c$ and deactivate them. Note that by the definition of $c$, $c$ covers $P_{j}$ for all active medians $p_{j}^{*}\in T_{m}(v)$. This finishes processing $v$. Once the root $r$ is processed, the algorithm finishes. 3.3 The Time Complexity To analyze the running time of the algorithm, it remains to discuss the three operations: range-status-queries, coverage-report-queries, and candidate-center-queries. Answering a range-status-query is straightforward, as shown in Lemma 3. Lemma 3 We can build a data structure in $O(M)$ time that can answer each range-status-query in $O(\log n)$ time. Further, once an uncertain point is deactivated, we can remove it from the data structure in $O(\log n)$ time. 
Proof Initially we build a balanced binary search tree $\Phi$ to maintain all indices $1,2,\ldots,n$. If an uncertain point $P_{i}$ is deactivated, then we simply remove $i$ from the tree in $O(\log n)$ time. For each range-status-query, we are given a vertex $v$ of $T_{m}$, and the goal is to decide whether $T_{m}(v)$ has any active medians. Recall that all medians in $T_{m}(v)$ form a range $R(v)=[i,j]$. As preprocessing, we compute $R(v)$ for all vertices $v$ of $T_{m}$, which can be done in $O(|T_{m}|)$ time by the post-order traversal of $T_{m}$ starting from leaf $v^{*}$. Note that $|T_{m}|=O(M)$. During the query, we simply check whether $\Phi$ still contains any index in the range $R(v)=[i,j]$, which can be done in $O(\log n)$ time by standard approaches (e.g., by finding the successor of $i$ in $\Phi$). ∎ For answering the coverage-report-queries and the candidate-center-queries, we have the following two lemmas. Their proofs are deferred to Section 5. Lemma 4 We can build a data structure $\mathcal{A}_{1}$ in $O(M\log^{2}M)$ time that can answer in $O(\log M\log n+k\log n)$ amortized time each coverage-report-query, i.e., given any point $x\in T$, report all active uncertain points covered by $x$, where $k$ is the output size. Further, if an uncertain point $P_{i}$ is deactivated, we can remove $P_{i}$ from $\mathcal{A}_{1}$ in $O(m_{i}\cdot\log M\cdot\log n)$ amortized time. Lemma 5 We can build a data structure $\mathcal{A}_{2}$ in $O(M\log M+n\log^{2}M)$ time that can answer in $O(\log n)$ time each candidate-center-query, i.e., given any vertex $v\in T_{m}$, find the candidate center $c$ for the active medians of $T_{m}(v)$. Further, if an uncertain point $P_{i}$ is deactivated, we can remove $P_{i}$ from $\mathcal{A}_{2}$ in $O(\log n)$ time. Using these results, we obtain the following. Theorem 3.1 We can find a minimum number of centers on $T$ to cover all uncertain points of $\mathcal{P}$ in $O(M\log^{2}M)$ time. 
Proof First of all, the total preprocessing time of Lemmas 3, 4, and 5 is $O(M\log^{2}M)$. Computing all medians takes $O(M\log M)$ time by Lemma 1. Below we analyze the total time for computing centers on $T_{m}$. The algorithm processes each vertex of $T_{m}$ exactly once. The processing of each vertex calls each of the following three operations at most once: coverage-report-queries, range-status-queries, and candidate-center-queries. Since each of the last two operations runs in $O(\log n)$ time, the total time of these two operations in the entire algorithm is $O(M\log n)$. For the coverage-report-queries, each operation runs in $O(\log M\log n+k\log n)$ amortized time. Once an uncertain point $P_{i}$ is reported by it, $P_{i}$ will be deactivated by removing it from all three data structures (i.e., those in Lemmas 3, 4, and 5) and $P_{i}$ will not become active again. Therefore, each uncertain point will be reported by the coverage-report-query operations at most once. Hence, the total sum of the values $k$ over the entire algorithm is at most $n$. Further, notice that the algorithm places at most $n$ centers. Hence, there are at most $n$ coverage-report-query operations in the algorithm. Therefore, the total time of the coverage-report-queries in the entire algorithm is $O(n\log M\log n)$. In addition, since each uncertain point $P_{i}$ will be deactivated at most once, the total time of the remove operations for all three data structures in the entire algorithm is $O(M\log M\log n)$. As $n\leq M$, the theorem follows. ∎ In addition, Lemma 6 will be used to build the data structure $\mathcal{A}_{2}$ in Lemma 5, and it will also help to solve the $k$-center problem in Section 6. Its proof is given in Section 5. Lemma 6 We can build a data structure $\mathcal{A}_{3}$ in $O(M\log M)$ time that can compute the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$ in $O(\log M)$ time for any point $x\in T$ and any uncertain point $P_{i}\in\mathcal{P}$. 
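When $T$ is a path, a query in the spirit of Lemma 6 has a particularly simple realization: sort the locations along the path, keep prefix sums of the probabilities $f_{ij}$ and of the weighted positions $f_{ij}\cdot p_{ij}$, and answer each query with one binary search. The sketch below illustrates only this path special case (the paper handles general trees via the decomposition of Section 4); all concrete numbers are hypothetical.

```python
import bisect

class PathExpectedDistance:
    """O(log m) queries of Ed(x, P_i) for locations on a line (a path).
    With F_le and S_le the sums of f_j and f_j*p_j over locations p_j <= x:
    Ed(x) = w * ( x*F_le - S_le + (S_tot - S_le) - x*(F_tot - F_le) )."""

    def __init__(self, weight, locations):
        locations = sorted(locations)        # (position, probability) pairs
        self.w = weight
        self.pos = [p for p, _ in locations]
        self.F, self.S = [0.0], [0.0]        # prefix sums of f and f*p
        for p, f in locations:
            self.F.append(self.F[-1] + f)
            self.S.append(self.S[-1] + f * p)

    def ed(self, x):
        k = bisect.bisect_right(self.pos, x)  # locations at or to the left of x
        F_le, S_le = self.F[k], self.S[k]
        F_tot, S_tot = self.F[-1], self.S[-1]
        return self.w * (x * F_le - S_le + (S_tot - S_le) - x * (F_tot - F_le))

# Hypothetical instance: P_i at positions 0 and 4, each with probability 0.5.
q = PathExpectedDistance(1.0, [(0.0, 0.5), (4.0, 0.5)])
print(q.ed(1.0))  # 0.5*1 + 0.5*3 = 2.0
```

The two prefix sums split $\mathsf{E}\mathrm{d}(x,P_{i})$ into the contribution of locations left of $x$ and right of $x$, each of which is linear in $x$; this is the same piecewise-linear convexity noted in Section 2.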
4 A Tree Decomposition and Computing the Medians In this section, we first introduce a decomposition of $T$, which will be repeatedly used in our algorithms (e.g., for Lemmas 1, 4, and 6). Later, in Section 4.2, we will compute the medians with the help of the decomposition. 4.1 A Connector-Bounded Centroid Decomposition We propose a tree decomposition of $T$, called a connector-bounded centroid decomposition, which differs from the centroid decompositions used before, e.g., [19, 28, 32, 33], and has certain properties that facilitate our algorithms. A vertex $v$ of $T$ is called a centroid if $T$ can be represented as a union of two subtrees with $v$ as their only common vertex such that each subtree has at most $\frac{2}{3}$ of the vertices of $T$ [28, 33], and we say the two subtrees are decomposed by $v$. Such a centroid always exists and can be found in linear time [28, 33]. For convenience of discussion, we consider $v$ to be contained in only one subtree but treated as an “open vertex” in the other subtree (thus, the location of $\mathcal{P}$ at $v$ belongs to only one subtree). Our decomposition of $T$ corresponds to a decomposition tree, denoted by $\Upsilon$ and defined recursively as follows. Each internal node of $\Upsilon$ has two, three, or four children. The root of $\Upsilon$ corresponds to the entire tree $T$. Let $v$ be a centroid of $T$, and let $T_{1}$ and $T_{2}$ be the subtrees of $T$ decomposed by $v$. Note that $T_{1}$ and $T_{2}$ are disjoint since we consider $v$ to be contained in only one of them. Further, we call $v$ a connector in both $T_{1}$ and $T_{2}$. Correspondingly, in $\Upsilon$, the root has two children corresponding to $T_{1}$ and $T_{2}$, respectively. In general, consider a node $\mu$ of $\Upsilon$. Let $T(\mu)$ represent the subtree of $T$ corresponding to $\mu$. We assume $T(\mu)$ has at most two connectors (initially this is true when $\mu$ is the root). 
We further decompose $T(\mu)$ into subtrees that correspond to the children of $\mu$ in $\Upsilon$, as follows. Let $v$ be the centroid of $T(\mu)$ and let $T_{1}(\mu)$ and $T_{2}(\mu)$ respectively be the two subtrees of $T(\mu)$ decomposed by $v$. We consider $v$ as a connector in both $T_{1}(\mu)$ and $T_{2}(\mu)$. If $T(\mu)$ has at most one connector, then each of $T_{1}(\mu)$ and $T_{2}(\mu)$ has at most two connectors. In this case, $\mu$ has two children corresponding to $T_{1}(\mu)$ and $T_{2}(\mu)$, respectively. If $T(\mu)$ has two connectors but each of $T_{1}(\mu)$ and $T_{2}(\mu)$ still has at most two connectors (with $v$ as a new connector), then $\mu$ has two children corresponding to $T_{1}(\mu)$ and $T_{2}(\mu)$, respectively. Otherwise, one of them, say, $T_{2}(\mu)$, has three connectors and the other, $T_{1}(\mu)$, has only one connector (e.g., see Fig. 2). In this case, $\mu$ has a child in $\Upsilon$ corresponding to $T_{1}(\mu)$, and we further perform a connector-reducing decomposition on $T_{2}(\mu)$, as follows (this is the main difference between our decomposition and the traditional centroid decomposition used before [19, 28, 32, 33]). Depending on whether the three connectors of $T_{2}(\mu)$ lie on a simple path, there are two cases. 1. If they lie on a simple path, without loss of generality, we assume $v$ is the one between the other two connectors on the path. We decompose $T_{2}(\mu)$ into two subtrees at $v$ such that they contain the two connectors respectively. In this way, each subtree contains at most two connectors. Correspondingly, $\mu$ has another two children corresponding to the two subtrees of $T_{2}(\mu)$, and thus $\mu$ has three children in total. 2. Otherwise, there is a unique vertex $v^{\prime}$ in $T_{2}(\mu)$ that decomposes $T_{2}(\mu)$ into three subtrees that contain the three connectors respectively (e.g., see Fig. 2). 
Note that $v^{\prime}$ and the three subtrees can be easily found in linear time by traversing $T_{2}(\mu)$. Correspondingly, $\mu$ has another three children corresponding to the above three subtrees of $T_{2}(\mu)$, respectively, and thus $\mu$ has four children in total. Note that we consider $v^{\prime}$ as a connector in each of the above three subtrees. Thus, each subtree contains at most two connectors. We continue the decomposition until each subtree $T(\mu)$ of $\mu\in\Upsilon$ becomes an edge $e(v_{1},v_{2})$ of $T$. According to our decomposition, both $v_{1}$ and $v_{2}$ are connectors of $T(\mu)$, but they may be only open vertices of $T(\mu)$. If both $v_{1}$ and $v_{2}$ are open vertices of $T(\mu)$, then we will not further decompose $T(\mu)$, so $\mu$ is a leaf of $\Upsilon$. Otherwise, for each $i=1,2$ such that $v_{i}$ is contained in $T(\mu)$, we further decompose $T(\mu)$ into an open edge and the closed vertex $v_{i}$. Correspondingly, $\mu$ has either two or three children that are leaves of $\Upsilon$. In this way, for each leaf $\mu$ of $\Upsilon$, $T(\mu)$ is either an open edge or a closed vertex of $T$. In the former case, $T(\mu)$ has two connectors that are its incident vertices, and in the latter case, $T(\mu)$ has one connector that is itself. This finishes the decomposition of $T$. A major difference between our decomposition and the traditional centroid decomposition [19, 28, 32, 33] is that each subtree in our decomposition has at most two connectors. As will be clear later, this property is crucial for guaranteeing the runtime of our algorithms. Lemma 7 The height of $\Upsilon$ is $O(\log M)$ and $\Upsilon$ has $O(M)$ nodes. The connector-bounded centroid decomposition of $T$ can be computed in $O(M\log M)$ time. Proof Consider any node $\mu$ of $\Upsilon$. Let $T(\mu)$ be the subtree of $T$ corresponding to $\mu$. According to our decomposition, $|T(\mu)|=O(M\cdot(\frac{2}{3})^{t})$, where $t$ is the depth of $\mu$ in $\Upsilon$.
This implies that the height of $\Upsilon$ is $O(\log M)$. Since each leaf of $\Upsilon$ corresponds to either a vertex or an open edge of $T$, the number of leaves of $\Upsilon$ is $O(M)$. Since each internal node of $\Upsilon$ has at least two children, the number of internal nodes is no more than the number of leaves. Hence, $\Upsilon$ has $O(M)$ nodes. According to our decomposition, all subtrees of $T$ corresponding to all nodes in the same level of $\Upsilon$ (i.e., all nodes with the same depth) are pairwise disjoint, and thus the total size of all these subtrees is $O(M)$. Decomposing each subtree can be done in linear time (e.g., finding a centroid takes linear time). Therefore, decomposing all subtrees in each level of $\Upsilon$ takes $O(M)$ time. As the height of $\Upsilon$ is $O(\log M)$, the total time for computing the decomposition of $T$ is $O(M\log M)$. ∎ In the following, we assume our decomposition of $T$ and the decomposition tree $\Upsilon$ have been computed. In addition, we introduce some notation that will be used later. For each node $\mu$ of $\Upsilon$, we use $T(\mu)$ to represent the subtree of $T$ corresponding to $\mu$. If $y$ is a connector of $T(\mu)$, then we use $T(y,\mu)$ to represent the subtree of $T$ consisting of all points $q$ of $T\setminus T(\mu)$ such that $\pi(q,p)$ contains $y$ for any point $p\in T(\mu)$ (i.e., $T(y,\mu)$ is the “outside world” connecting to $T(\mu)$ through $y$; e.g., see Fig. 3). By this definition, if $y$ is the only connector of $T(\mu)$, then $T=T(\mu)\cup T(y,\mu)$; if $T(\mu)$ has two connectors $y_{1}$ and $y_{2}$, then $T=T(\mu)\cup T(y_{1},\mu)\cup T(y_{2},\mu)$. 4.2 Computing the Medians In this section, we compute all medians. It is easy to compute the median $p_{i}^{*}$ for a single uncertain point $P_{i}$ in $O(M)$ time by traversing the tree $T$. Hence, a straightforward algorithm can compute all $n$ medians in $O(nM)$ time. 
Instead, we present an $O(M\log M)$ time algorithm, which will prove Lemma 1. For any vertex $v$ (e.g., the centroid) of $T$, let $T_{1}$ and $T_{2}$ be two subtrees of $T$ decomposed by $v$ (i.e., $v$ is their only common vertex and $T=T_{1}\cup T_{2}$), such that $v$ is contained in only one subtree and is an open vertex in the other. The following lemma can be readily obtained from Kariv and Hakimi [27], and similar results were also given in [40]. Lemma 8 For any uncertain point $P_{i}$ of $\mathcal{P}$, we have the following. 1. If the probability sum of $P_{i}$ in $T_{j}$ is greater than $0.5$ for some $j\in\{1,2\}$, then the median $p_{i}^{*}$ must be in $T_{j}$. 2. The vertex $v$ is $p_{i}^{*}$ if the probability sum of $P_{i}$ in $T_{j}$ is equal to $0.5$ for some $j\in\{1,2\}$. Consider the connector-bounded centroid decomposition $\Upsilon$ of $T$. Starting from the root of $\Upsilon$, our algorithm will process the nodes of $\Upsilon$ in a top-down manner. Suppose we are processing a node $\mu$. Then, we maintain a sorted list of indices for $\mu$, called the index list of $\mu$ and denoted by $L(\mu)$, which consists of all indices $i\in[1,n]$ such that $p_{i}^{*}$ is not found yet but is known to be in the subtree $T(\mu)$. Since each index $i$ of $L(\mu)$ essentially refers to $P_{i}$, for convenience, we also say that $L(\mu)$ is a set of uncertain points. Let $F[1\cdots n]$ be an array, which will help to compute the probability sums in our algorithm. 4.2.1 The Root Case Initially, $\mu$ is the root and we process it as follows. We present our algorithm in a way that is consistent with that for the general case. Since $\mu$ is the root, we have $T(\mu)=T$ and $L(\mu)=\{1,2,\ldots,n\}$. Let $\mu_{1}$ and $\mu_{2}$ be the two children of $\mu$. Let $v$ be the centroid of $T$ that is used to decompose $T(\mu)$ into $T(\mu_{1})$ and $T(\mu_{2})$ (e.g., see Fig. 3). 
We compute in $O(|T(\mu)|)$ time the probability sums of all uncertain points of $L(\mu)$ in $T(\mu_{1})$ by using the array $F$ and traversing $T(\mu_{1})$. Specifically, we first perform a reset procedure on $F$ to reset $F[i]$ to $0$ for each $i\in L(\mu)$, by scanning the list $L(\mu)$. Then, we traverse $T(\mu_{1})$, and for each visited vertex, which holds some uncertain point location $p_{ij}$, we update $F[i]=F[i]+f_{ij}$. After the traversal, for each $i\in L(\mu)$, $F[i]$ is equal to the probability sum of $P_{i}$ in $T(\mu_{1})$. By Lemma 8, if $F[i]=0.5$, then $p_{i}^{*}$ is $v$ and we report $p_{i}^{*}=v$; if $F[i]>0.5$, then $p_{i}^{*}$ is in $T(\mu_{1})$ and we add $i$ to the end of the index list $L(\mu_{1})$ for $\mu_{1}$ (initially $L(\mu_{1})=\emptyset$); if $F[i]<0.5$, then $p_{i}^{*}$ is in $T(\mu_{2})$ and we add $i$ to the end of $L(\mu_{2})$ for $\mu_{2}$. The above has correctly computed the index lists for $\mu_{1}$ and $\mu_{2}$. Recall that $v$ is a connector in both $T(\mu_{1})$ and $T(\mu_{2})$. In order to efficiently compute medians in $T(\mu_{1})$ and $T(\mu_{2})$ recursively, we compute a probability list $L(v,\mu_{j})$ at $v$ for $\mu_{j}$ for each $j=1,2$. We discuss $L(v,\mu_{1})$ first. The list $L(v,\mu_{1})$ is the same as $L(\mu_{1})$ except that each index $i\in L(v,\mu_{1})$ is also associated with a value, denoted by $F(i,v,\mu_{1})$, which is the probability sum of $P_{i}$ in $T(v,\mu_{1})$ (recall the definition of $T(v,\mu_{1})$ at the end of Section 4.1; note that $T(v,\mu_{1})=T(\mu_{2})$ in this case). The list $L(v,\mu_{1})$ can be built in $O(|T(\mu)|)$ time by traversing $T(\mu_{2})$ and using the array $F$. Specifically, we scan the list $L(\mu_{1})$, and for each index $i\in L(\mu_{1})$, we reset $F[i]=0$.
Then, we traverse the subtree $T(\mu_{2})$, and for each location $p_{ij}$ in $T(\mu_{2})$, we update $F[i]=F[i]+f_{ij}$ (if $i$ is not in $L(\mu_{1})$, this step is actually redundant but does not affect anything). After the traversal, for each index $i\in L(\mu_{1})$, we copy it to $L(v,\mu_{1})$ and set $F(i,v,\mu_{1})=F[i]$. Similarly, we compute the probability list $L(v,\mu_{2})$ at $v$ for $\mu_{2}$ in $O(|T(\mu)|)$ time by traversing $T(\mu_{1})$. This finishes the processing of the root $\mu$. The total time is $O(|T(\mu)|)$ since $|L(\mu)|\leq|T(\mu)|$. Note that our algorithm guarantees that for each $i\in L(\mu_{1})$, $P_{i}$ must have at least one location in $T(\mu_{1})$, and thus $|L(\mu_{1})|\leq|T(\mu_{1})|$. Similarly, for each $i\in L(\mu_{2})$, $P_{i}$ must have at least one location in $T(\mu_{2})$, and thus $|L(\mu_{2})|\leq|T(\mu_{2})|$. 4.2.2 The General Case Let $\mu$ be an internal node of $\Upsilon$ such that the ancestors of $\mu$ have all been processed. Hence, we have a sorted index list $L(\mu)$. If $L(\mu)=\emptyset$, then we do not need to process $\mu$ or any of its descendants. We assume $L(\mu)\neq\emptyset$. Thus, for each $i\in L(\mu)$, $p_{i}^{*}$ is in $T(\mu)$ and $P_{i}$ has at least one location in $T(\mu)$ (and thus $|L(\mu)|\leq|T(\mu)|$). Further, for each connector $y$ of $T(\mu)$, the algorithm maintains a probability list $L(y,\mu)$ that is the same as $L(\mu)$ except that each index $i\in L(y,\mu)$ is associated with a value $F(i,y,\mu)$, which is the probability sum of $P_{i}$ in the subtree $T(y,\mu)$. Our algorithm for processing $\mu$, whose total time is $O(|T(\mu)|)$, works as follows. According to our decomposition, $T(\mu)$ has at most two connectors and $\mu$ may have two, three, or four children. We first discuss the case where $\mu$ has two children, and the other cases can be handled similarly. Let $\mu_{1}$ and $\mu_{2}$ be the two children of $\mu$.
Let $v$ be the centroid of $T(\mu)$ that is used to decompose it. We discuss the subtree $T(\mu_{1})$ first, and $T(\mu_{2})$ is similar. Since $v$ is a connector of $T(\mu_{1})$ and $T(\mu_{1})$ has at most two connectors, $T(\mu_{1})$ has at most one connector $y$ other than $v$. We consider the general situation where $T(\mu_{1})$ has such a connector $y$ (the case where such a connector does not exist can be handled similarly but in a simpler way). Note that $y$ must be a connector of $T(\mu)$. We first compute the probability sums of $P_{i}$’s for all $i\in L(\mu)$ in the subtree $T(\mu_{1})\cup T(y,\mu)$ (e.g., see Fig. 3), which can be done in $O(|T(\mu)|)$ time by traversing $T(\mu_{1})$ and using the array $F$ and the probability list $L(y,\mu)$ at $y$, as follows. We scan the list $L(\mu)$ and for each index $i\in L(\mu)$, we reset $F[i]=0$. Then, we traverse $T(\mu_{1})$ and for each location $p_{ij}$, we update $F[i]=F[i]+f_{ij}$ (it does not matter if $i\not\in L(\mu)$). When the traversal visits $y$, we scan the list $L(y,\mu)$ and for each index $i\in L(y,\mu)$, we update $F[i]=F[i]+F(i,y,\mu)$. After the traversal, for each $i\in L(\mu)$, $F[i]$ is the probability sum of $P_{i}$ in $T(\mu_{1})\cup T(y,\mu)$. For each $i\in L(\mu)$, if $F[i]=0.5$, we report $p_{i}^{*}=v$; if $F[i]>0.5$, we add $i$ to $L(\mu_{1})$; if $F[i]<0.5$, we add $i$ to $L(\mu_{2})$. This builds the two lists $L(\mu_{1})$ and $L(\mu_{2})$, which are initially $\emptyset$. Note that since for each $i\in L(\mu)$, $P_{i}$ has at least one location in $T(\mu)$, the above way of computing $L(\mu_{1})$ (resp., $L(\mu_{2})$) guarantees that for each $i$ in $L(\mu_{1})$ (resp., $L(\mu_{2})$), $P_{i}$ has at least one location in $T(\mu_{1})$ (resp., $T(\mu_{2})$), which implies $|L(\mu_{1})|\leq|T(\mu_{1})|$ (resp., $|L(\mu_{2})|\leq|T(\mu_{2})|$). Next we compute the probability lists for the connectors of $T(\mu_{1})$. Note that $T(\mu_{1})$ has two connectors $v$ and $y$. 
For $v$, we compute the probability list $L(v,\mu_{1})$ that is the same as $L(\mu_{1})$ except that each $i\in L(v,\mu_{1})$ is associated with a value $F(i,v,\mu_{1})$, which is the probability sum of $P_{i}$ in the subtree $T(v,\mu_{1})$. To compute $L(v,\mu_{1})$, we first reset $F[i]=0$ for each $i\in L(\mu_{1})$. Then we traverse $T(\mu_{2})$ and for each location $p_{ij}\in T(\mu_{2})$, we update $F[i]=F[i]+f_{ij}$. If $T(\mu_{2})$ has a connector $y^{\prime}$ other than $v$, then $y^{\prime}$ is also a connector of $T(\mu)$ (note that there is at most one such connector); we scan the probability list $L(y^{\prime},\mu)$ and for each $i\in L(y^{\prime},\mu)$, we update $F[i]=F[i]+F(i,y^{\prime},\mu)$. Finally, we scan $L(\mu_{1})$ and for each $i\in L(\mu_{1})$, we copy it to $L(v,\mu_{1})$ and set $F(i,v,\mu_{1})=F[i]$. This computes the probability list $L(v,\mu_{1})$. Further, we also need to compute the probability list $L(y,\mu_{1})$ at $y$ for $T(\mu_{1})$. The list $L(y,\mu_{1})$ is the same as $L(\mu_{1})$ except that each $i\in L(y,\mu_{1})$ also has a value $F(i,y,\mu_{1})$, which is the probability sum of $P_{i}$ in $T(y,\mu_{1})$. To compute $L(y,\mu_{1})$, we first copy all indices of $L(\mu_{1})$ to $L(y,\mu_{1})$, and then compute the values $F(i,y,\mu_{1})$, as follows. Note that $T(y,\mu_{1})$ is exactly $T(y,\mu)$ (e.g., see Fig. 3). Recall that as a connector of $T(\mu)$, $y$ has a probability list $L(y,\mu)$ in which each $i\in L(y,\mu)$ has a value $F(i,y,\mu)$. Notice that $L(y,\mu_{1})\subseteq L(y,\mu)$. Due to $T(y,\mu_{1})=T(y,\mu)$, for each $i\in L(y,\mu_{1})$, $F(i,y,\mu_{1})$ is equal to $F(i,y,\mu)$. Since indices in each of $L(y,\mu_{1})$ and $L(y,\mu)$ are sorted, we scan $L(y,\mu_{1})$ and $L(y,\mu)$ simultaneously (like merging two sorted lists) and for each $i\in L(y,\mu_{1})$, if we encounter $i$ in $L(y,\mu)$, then we set $F(i,y,\mu_{1})=F(i,y,\mu)$. 
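The simultaneous scan just described is a one-sided merge of two sorted lists; a small self-contained sketch (the function name and the pair representation of $L(y,\mu)$ entries are illustrative):

```python
def copy_values_by_merge(sub, full):
    """sub: sorted list of indices, assumed to be a subset of the indices
    in `full`; full: sorted list of (index, value) pairs, e.g. the entries
    (i, F(i, y, mu)) of L(y, mu). Returns sub paired with its values,
    found in one linear simultaneous scan of the two sorted lists."""
    out, j = [], 0
    for i in sub:
        while full[j][0] != i:   # advance in `full` until we meet i
            j += 1
        out.append((i, full[j][1]))
    return out
```

Since both lists are sorted and `sub` is a subset of the indices of `full`, the pointer `j` moves only forward, so the scan is linear in the length of `full`.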
This computes the probability list $L(y,\mu_{1})$ at $y$ for $T(\mu_{1})$. The above has processed the subtree $T(\mu_{1})$. Using a similar approach, we can process $T(\mu_{2})$; we omit the details. This finishes the processing of $\mu$ for the case where $\mu$ has two children. The total time is $O(|T(\mu)|)$. To see this, the algorithm traverses $T(\mu)$ a constant number of times. The algorithm also visits the list $L(\mu)$ and the probability list of each connector of $T(\mu)$ a constant number of times. Recall that $|L(\mu)|\leq|T(\mu)|$ and $|L(\mu)|=|L(y,\mu)|$ for each connector $y$ of $T(\mu)$. Also recall that $T(\mu)$ has at most two connectors. Thus, the total time for processing $\mu$ is $O(|T(\mu)|)$. Remark. If the number of connectors of $T(\mu)$ were not bounded by a constant, then we could not bound the processing time for $\mu$ as above. This is one reason our decomposition on $T$ requires each subtree $T(\mu)$ to have at most two connectors. If $\mu$ has three children, $\mu_{1},\mu_{2},\mu_{3}$, then $T(\mu)$ is decomposed into three subtrees $T(\mu_{j})$ for $j=1,2,3$. In this case, $T(\mu)$ has two connectors. To process $\mu$, we apply the above algorithm for the two-children case twice. Specifically, we consider the procedure of decomposing $T(\mu)$ into three subtrees as consisting of two “intermediate decomposition steps”. According to our decomposition, $T(\mu)$ was first decomposed into two subtrees by its centroid such that one subtree $T_{1}(\mu)$ contains at most two connectors while the other one, $T_{2}(\mu)$, contains three connectors, and we consider this as the first intermediate step. The second intermediate step is to further decompose $T_{2}(\mu)$ into two subtrees each of which contains at most two connectors. To process $\mu$, we apply our two-children case algorithm to the first intermediate step and then to the second intermediate step. The total time is still $O(|T(\mu)|)$. We omit the details.
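The scan at the heart of each step above (accumulate each $P_{i}$'s probability sum over $T(\mu_{1})\cup T(y,\mu)$, then route the index by Lemma 8) can be sketched as follows. For self-containment the locations of $T(\mu_{1})$ are flattened into $(i,f_{ij})$ pairs, the connector's list into $(i,F(i,y,\mu))$ pairs, and the array $F$ is a dict; all names are illustrative, not the paper's.

```python
def process_two_children(L, T1_locations, connector_sums):
    """One two-children step: F[i] accumulates P_i's probability sum in
    T(mu_1) union T(y, mu), folding in the precomputed outside-world sums
    stored at the connector y; each index is then routed by Lemma 8."""
    F = {i: 0.0 for i in L}                  # reset step, by scanning L
    for i, f_ij in T1_locations:             # traversal of T(mu_1)
        if i in F:
            F[i] += f_ij
    for i, F_iy in connector_sums:           # scan of L(y, mu) at y
        if i in F:
            F[i] += F_iy
    at_v, L1, L2 = [], [], []
    for i in L:                              # preserves the sorted order
        if abs(F[i] - 0.5) < 1e-12:
            at_v.append(i)                   # median is the centroid v
        elif F[i] > 0.5:
            L1.append(i)                     # median lies in T(mu_1)
        else:
            L2.append(i)                     # median lies in T(mu_2)
    return at_v, L1, L2
```

With an empty connector list this specializes to the root-case scan of Section 4.2.1.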
Similarly, if $\mu$ has four children, then the decomposition can be considered as consisting of three intermediate steps (e.g., in Fig. 3, the first step is to decompose $T(\mu)$ into $T_{1}(\mu)$ and $T_{2}(\mu)$, and then decomposing $T_{2}(\mu)$ into three subtrees can be considered as consisting of two steps each of which decomposes a subtree into two subtrees), and we apply our two-children case algorithm three times. The total processing time for $\mu$ is also $O(|T(\mu)|)$. The above describes the algorithm for processing $\mu$ when $\mu$ is an internal node of $\Upsilon$. If $\mu$ is a leaf, then $T(\mu)$ is either a vertex or an open edge of $T$. If $T(\mu)$ is an open edge, the index list $L(\mu)$ must be empty since our algorithm only finds medians on vertices. Otherwise, $T(\mu)$ is a vertex $v$ of $T$. If $L(\mu)$ is not empty, then for each $i\in L(\mu)$, we simply report $p_{i}^{*}=v$. The running time of the entire algorithm is $O(M\log M)$. To see this, processing each node $\mu$ of $\Upsilon$ takes $O(|T(\mu)|)$ time. For each level of $\Upsilon$, the total sum of $|T(\mu)|$ of all nodes $\mu$ in the level is $O(|T|)$. Since the height of $\Upsilon$ is $O(\log M)$, the total time of the algorithm is $O(M\log M)$. This proves Lemma 1. 5 The Data Structures $\mathcal{A}_{1}$, $\mathcal{A}_{2}$, and $\mathcal{A}_{3}$ In this section, we present the three data structures $\mathcal{A}_{1}$, $\mathcal{A}_{2}$, and $\mathcal{A}_{3}$, for Lemmas 4, 5, and 6, respectively. In particular, $\mathcal{A}_{3}$ will be used to build $\mathcal{A}_{2}$ and it will also be needed for solving the $k$-center problem in Section 6. Our connector-bounded centroid decomposition $\Upsilon$ will play an important role in constructing both $\mathcal{A}_{1}$ and $\mathcal{A}_{3}$. In the following, we present them in the order of $\mathcal{A}_{1},\mathcal{A}_{3}$, and $\mathcal{A}_{2}$. 
5.1 The Data Structure $\mathcal{A}_{1}$ The data structure $\mathcal{A}_{1}$ is for answering the coverage-report-queries, i.e., given any point $x\in T$, find all active uncertain points that are covered by $x$. Further, it also supports the operation of removing an uncertain point once it is deactivated. Consider any node $\mu\in\Upsilon$. If $\mu$ is the root, let $L(\mu)=\emptyset$; otherwise, define $L(\mu)$ to be the sorted list of all indices $i\in[1,n]$ such that $P_{i}$ does not have any location in the subtree $T(\mu)$ but has at least one location in $T(\mu^{\prime})$, where $\mu^{\prime}$ is the parent of $\mu$. Let $y$ be any connector of $T(\mu)$. Let $L(y,\mu)$ be an index list that is the same as $L(\mu)$ except that each index $i\in L(y,\mu)$ is associated with two values: $F(i,y,\mu)$, which is the probability sum of $P_{i}$ in the subtree $T(y,\mu)$, and $D(i,y,\mu)$, which is the expected distance from $y$ to the locations of $P_{i}$ in $T(y,\mu)$, i.e., $D(i,y,\mu)=w_{i}\cdot\sum_{p_{ij}\in T(y,\mu)}f_{ij}\cdot d(p_{ij},y)$. We refer to $L(\mu)$ and $L(y,\mu)$ for each connector $y\in T(\mu)$ as the information lists of $\mu$. Lemma 9 Suppose $L(\mu)\neq\emptyset$ and the information lists of $\mu$ are available. Let $t_{\mu}$ be the number of indices in $L(\mu)$. Then, we can build a data structure of $O(t_{\mu})$ size in $O(|T(\mu)|+t_{\mu}\log t_{\mu})$ time on $T(\mu)$, such that given any point $x\in T(\mu)$, we can report all indices $i$ of $L(\mu)$ such that $P_{i}$ is covered by $x$ in $O(\log n+k\log n)$ amortized time, where $k$ is the output size; further, if $P_{i}$ is deactivated with $i\in L(\mu)$, then we can remove $i$ from the data structure and all information lists of $\mu$ in $O(\log n)$ amortized time. Proof As $L(\mu)\neq\emptyset$, $\mu$ is not the root. Thus, $T(\mu)$ has one or two connectors. We only discuss the most general case where $T(\mu)$ has two connectors since the other case is similar but easier.
Let $y_{1}$ and $y_{2}$ denote the two connectors of $T(\mu)$. So the lists $L(y_{1},\mu)$ and $L(y_{2},\mu)$ are available. Note that for any two points $p$ and $q$ in $T(\mu)$, $\pi(p,q)$ is also in $T(\mu)$ since $T(\mu)$ is connected. Consider any point $x\in T(\mu)$. Suppose we traverse $T(\mu)$ from $x$ to $y_{1}$, and let $q_{x}$ be the first point on $\pi(y_{1},y_{2})$ we encounter (e.g., see Fig. 4; so $q_{x}$ is $x$ if $x\in\pi(y_{1},y_{2})$). Let $a_{x}=d(x,q_{x})$ and $b_{x}=d(q_{x},y_{1})$. Thus, $d(y_{1},x)=a_{x}+b_{x}$ and $d(y_{2},x)=a_{x}+d(y_{1},y_{2})-b_{x}$. For any $i\in L(\mu)$, since $P_{i}$ does not have any location in $T(\mu)$, we have $F(i,y_{1},\mu)+F(i,y_{2},\mu)=1$, and thus the following holds for $\mathsf{E}\mathrm{d}(x,P_{i})$: $$\begin{split}\mathsf{E}\mathrm{d}(x,P_{i})&=w_{i}\cdot\sum_{p_{ij}\in T}f_{ij}\cdot d(x,p_{ij})\\ &=w_{i}\cdot\sum_{p_{ij}\in T(y_{1},\mu)}f_{ij}\cdot d(x,p_{ij})+w_{i}\cdot\sum_{p_{ij}\in T(y_{2},\mu)}f_{ij}\cdot d(x,p_{ij})\\ &=w_{i}\cdot[F(i,y_{1},\mu)\cdot(a_{x}+b_{x})+D(i,y_{1},\mu)]+w_{i}\cdot[F(i,y_{2},\mu)\cdot(a_{x}+d(y_{1},y_{2})-b_{x})+D(i,y_{2},\mu)]\\ &=w_{i}\cdot[a_{x}+(F(i,y_{1},\mu)-F(i,y_{2},\mu))\cdot b_{x}+D(i,y_{1},\mu)+D(i,y_{2},\mu)+F(i,y_{2},\mu)\cdot d(y_{1},y_{2})].\end{split}$$ Notice that for any $x\in T(\mu)$, all above values are constant except $a_{x}$ and $b_{x}$. Therefore, if we consider $a_{x}$ and $b_{x}$ as two variables of $x$, $\mathsf{E}\mathrm{d}(x,P_{i})$ is a linear function of them. In other words, $\mathsf{E}\mathrm{d}(x,P_{i})$ defines a plane in $\mathbb{R}^{3}$, where the $z$-coordinates correspond to the values of $\mathsf{E}\mathrm{d}(x,P_{i})$ and the $x$- and $y$-coordinates correspond to $a_{x}$ and $b_{x}$, respectively. In the following, we also use $\mathsf{E}\mathrm{d}(x,P_{i})$ to refer to the plane defined by it in $\mathbb{R}^{3}$. Remark.
This nice property for calculating $\mathsf{E}\mathrm{d}(x,P_{i})$ is due to the fact that $T(\mu)$ has at most two connectors. This is another reason our decomposition requires every subtree $T(\mu)$ to have at most two connectors. Recall that $x$ covers $P_{i}$ if $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$. Consider the plane $H_{\lambda}:z=\lambda$ in $\mathbb{R}^{3}$. In general, the two planes $\mathsf{E}\mathrm{d}(x,P_{i})$ and $H_{\lambda}$ intersect at a line $l_{i}$, and we let $h_{i}$ represent the closed half-plane of $H_{\lambda}$ that is bounded by $l_{i}$ and lies above the plane $\mathsf{E}\mathrm{d}(x,P_{i})$. Let $x_{\lambda}$ be the point $(a_{x},b_{x})$ in the plane $H_{\lambda}$. An easy observation is that $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$ if and only if $x_{\lambda}\in h_{i}$. Further, we say that $l_{i}$ is an upper bounding line of $h_{i}$ if $h_{i}$ is below $l_{i}$ and a lower bounding line otherwise. Observe that if $l_{i}$ is an upper bounding line, then $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$ if and only if $x_{\lambda}$ is below $l_{i}$; if $l_{i}$ is a lower bounding line, then $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$ if and only if $x_{\lambda}$ is above $l_{i}$. Given any query point $x\in T(\mu)$, our goal for answering the query is to find all indices $i\in L(\mu)$ such that $P_{i}$ is covered by $x$. Based on the above discussions, we do the following preprocessing. After $d(y_{1},y_{2})$ is computed, by using the information lists of $y_{1}$ and $y_{2}$, we compute all functions $\mathsf{E}\mathrm{d}(x,P_{i})$ for all $i\in L(\mu)$ in $O(t_{\mu})$ time. Then, we obtain the set $U$ of all upper bounding lines and the set of all lower bounding lines on the plane $H_{\lambda}$ defined by the planes $\mathsf{E}\mathrm{d}(x,P_{i})$ for all $i\in L(\mu)$. In the following, we first discuss the upper bounding lines. Let $S_{U}$ denote the set of indices $i\in L(\mu)$ such that $P_{i}$ defines an upper bounding line in $U$.
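The linearity of $\mathsf{E}\mathrm{d}(x,P_{i})$ in $(a_{x},b_{x})$ can be checked numerically. In the sketch below, $D_{1}$ and $D_{2}$ denote the unweighted sums $\sum f_{ij}\,d(p_{ij},y_{j})$, so that the weight $w$ multiplies exactly once; this convention, and all names, are assumptions of the sketch only.

```python
def ed_plane(w, F1, F2, D1, D2, d12):
    """Coefficients (A, B, C) with Ed(x, P_i) = A*a_x + B*b_x + C,
    following the displayed derivation. Assumes F1 + F2 = 1 (P_i has
    no location inside T(mu)); D1, D2 are unweighted expected
    distances from the locations behind y_1, y_2 to those connectors;
    d12 = d(y_1, y_2)."""
    return w, w * (F1 - F2), w * (D1 + D2 + F2 * d12)
```

The coverage test $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$ then amounts to checking whether $A\,a_{x}+B\,b_{x}+C\leq\lambda$, i.e., whether the point $x_{\lambda}=(a_{x},b_{x})$ lies in the half-plane $h_{i}$.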
Given any point $x\in T(\mu)$, we first compute $a_{x}$ and $b_{x}$. This can be done in constant time after $O(|T(\mu)|)$ time preprocessing, as follows. In the preprocessing, for each vertex $v$ of $T(\mu)$, we compute the vertex $q_{v}$ (defined in the same way as $q_{x}$ but with $v$ in place of $x$) as well as the two values $a_{v}$ and $b_{v}$ (defined analogously to $a_{x}$ and $b_{x}$, respectively). This can be easily done in $O(|T(\mu)|)$ time by traversing $T(\mu)$ and we omit the details. Given the point $x$, which is specified by an edge $e$ containing $x$, let $v$ be the incident vertex of $e$ closer to $y_{1}$ and let $\delta$ be the length of the portion of $e$ between $v$ and $x$. Then, if $e$ is on $\pi(y_{1},y_{2})$, we have $a_{x}=0$ and $b_{x}=b_{v}+\delta$. Otherwise, $a_{x}=a_{v}+\delta$ and $b_{x}=b_{v}$. After $a_{x}$ and $b_{x}$ are computed, the point $x_{\lambda}=(a_{x},b_{x})$ on the plane $H_{\lambda}$ is also obtained. Then, according to our discussion, all uncertain points of $S_{U}$ that are covered by $x$ correspond to exactly those lines of $U$ above $x_{\lambda}$. Finding the lines of $U$ above $x_{\lambda}$ is actually the dual problem of half-plane range reporting in $\mathbb{R}^{2}$. By using the dynamic convex hull maintenance data structure of Brodal and Jacob [10], with $O(|U|\log|U|)$ time and $O(|U|)$ space preprocessing, for any point $x_{\lambda}$, we can easily report all lines of $U$ above $x_{\lambda}$ in $O(\log|U|+k\log|U|)$ amortized time (i.e., by repeating $k$ deletions), where $k$ is the output size, and deleting a line from $U$ can be done in $O(\log|U|)$ amortized time. Clearly, $|U|\leq t_{\mu}$. On the set of all lower bounding lines, we perform similar preprocessing, and the query algorithm is symmetric. Hence, the total preprocessing time is $O(|T(\mu)|+t_{\mu}\log t_{\mu})$.
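The reporting pattern on the upper bounding lines can be sketched with a naive stand-in for the convex-hull structure: repeatedly find the line that is highest at $x_{\lambda}$'s abscissa, report it while it lies on or above $x_{\lambda}$, and delete it. In the sketch the highest-line query is a linear-time `max`; Brodal and Jacob's structure replaces it by an $O(\log|U|)$ amortized upper-envelope query, giving the bounds stated above. Names are illustrative.

```python
def report_lines_above(U, px, py):
    """Report all lines (slope, intercept) of U on or above the point
    (px, py), by repeated highest-line queries plus deletions; stops at
    the first line that falls below the point."""
    U = list(U)            # work on a copy; the real structure deletes in place
    reported = []
    while U:
        j = max(range(len(U)), key=lambda t: U[t][0] * px + U[t][1])
        m, c = U[j]
        if m * px + c < py:      # highest remaining line is below: done
            break
        reported.append(U.pop(j))
    return reported
```

Correctness rests on the same observation the data structure uses: once the highest remaining line is below $x_{\lambda}$, so is every other remaining line.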
Each query takes $O(\log t_{\mu}+k\log t_{\mu})$ amortized time and each remove operation can be performed in $O(\log t_{\mu})$ amortized time. Note that $t_{\mu}\leq n$. The lemma thus follows. ∎ The preprocessing algorithm for our data structure $\mathcal{A}_{1}$ consists of the following four steps. First, we compute the information lists for all nodes $\mu$ of $\Upsilon$. Second, for each node $\mu\in\Upsilon$, we compute the data structure of Lemma 9. Third, for each $i\in[1,n]$, we compute a node list $L_{\mu}(i)$ containing all nodes $\mu\in\Upsilon$ such that $i\in L(\mu)$. Fourth, for each leaf $\mu$ of $\Upsilon$, if $T(\mu)$ is a vertex $v$ of $T$ holding a location $p_{ij}$, then we maintain at $\mu$ the value $\mathsf{E}\mathrm{d}(v,P_{i})$. Before giving the details of the above preprocessing algorithm, we first assume the preprocessing work has been done and discuss the algorithm for answering the coverage-report-queries. Given any point $x\in T$, we answer the coverage-report-query as follows. Note that $x$ is in $T(\mu_{x})$ for some leaf $\mu_{x}$ of $\Upsilon$. For each node $\mu$ in the path of $\Upsilon$ from the root to $\mu_{x}$, we apply the query algorithm in Lemma 9 to report all indices $i\in L(\mu)$ such that $x$ covers $P_{i}$. In addition, if $T(\mu_{x})$ is a vertex $v$ of $T$ holding a location $p_{ij}$ such that $P_{i}$ is active, then we report $i$ if $\mathsf{E}\mathrm{d}(v,P_{i})$, which is maintained at $\mu_{x}$, is at most $\lambda$. The following lemma establishes the correctness and the performance of our query algorithm. Lemma 10 Our query algorithm correctly finds all active uncertain points that are covered by $x$ in $O(\log M\log n+k\log n)$ amortized time, where $k$ is the output size. Proof Let $\pi_{x}$ represent the path of $\Upsilon$ from the root to the leaf $\mu_{x}$.
To show the correctness of the algorithm, we argue that for each active uncertain point $P_{i}$ that is covered by $x$, $i$ will be reported by our query algorithm. Indeed, if $T(\mu_{x})$ is a vertex $v$ of $T$ holding a location $p_{ij}$ of $P_{i}$, then the leaf $\mu_{x}$ maintains the value $\mathsf{E}\mathrm{d}(v,P_{i})$, which is equal to $\mathsf{E}\mathrm{d}(x,P_{i})$ as $x=v$. Hence, our algorithm will report $i$ when it processes $\mu_{x}$. Otherwise, no location of $P_{i}$ is in $T(\mu_{x})$. Since $P_{i}$ has locations in $T$, if we go from the root to $\mu_{x}$ along $\pi_{x}$, we will eventually meet a node $\mu$ such that $T(\mu)$ does not have any location of $P_{i}$ while $T(\mu^{\prime})$ has at least one location of $P_{i}$, where $\mu^{\prime}$ is the parent of $\mu$. This implies that $i$ is in $L(\mu)$, and consequently, our query algorithm will report $i$ when it processes $\mu$. This establishes the correctness of our query algorithm. For the runtime, as the height of $\Upsilon$ is $O(\log M)$, we make $O(\log M)$ calls to the query algorithm of Lemma 9. Further, notice that each $i$ will be reported at most once. This is because if $i$ is in $L(\mu)$ for some node $\mu$, then $i$ cannot be in $L(\mu^{\prime})$ for any ancestor $\mu^{\prime}$ of $\mu$. Therefore, the total runtime is $O(\log M\log n+k\log n)$. ∎ If an uncertain point $P_{i}$ is deactivated, then we scan the node list $L_{\mu}(i)$ and for each node $\mu\in L_{\mu}(i)$, we remove $i$ from the data structure of Lemma 9. The following lemma implies that the total time is $O(m_{i}\log M\log n)$. Lemma 11 For each $i\in[1,n]$, the number of nodes in $L_{\mu}(i)$ is $O(m_{i}\log M)$. Proof Let $\alpha$ denote the number of nodes of $L_{\mu}(i)$. Our goal is to argue that $i$ appears in $L(\mu)$ for $O(m_{i}\log M)$ nodes $\mu$ of $\Upsilon$.
Recall that if $i$ is in $L(\mu)$ for a node $\mu\in\Upsilon$, then $P_{i}$ has at least one location in $T(\mu^{\prime})$, where $\mu^{\prime}$ is the parent of $\mu$. Since each node of $\Upsilon$ has at most four children, if $N$ is the total number of nodes $\mu^{\prime}$ such that $P_{i}$ has at least one location in $T(\mu^{\prime})$, then it holds that $\alpha\leq 4N$. Below we show that $N=O(m_{i}\log M)$, which will prove the lemma. Consider any location $p_{ij}$ of $P_{i}$. According to our decomposition, the subtrees $T(\mu)$ for all nodes $\mu$ in the same level of $\Upsilon$ are pairwise disjoint. Let $v$ be the vertex of $T$ that holds $p_{ij}$, and let $\mu_{v}$ be the leaf of $\Upsilon$ with $T(\mu_{v})=v$. Observe that for any node $\mu\in\Upsilon$, $p_{ij}$ appears in $T(\mu)$ if and only if $\mu$ is in the path of $\Upsilon$ from $\mu_{v}$ to the root. Hence, there are $O(\log M)$ nodes $\mu\in\Upsilon$ such that $p_{ij}$ appears in $T(\mu)$. As $P_{i}$ has $m_{i}$ locations, we obtain $N=O(m_{i}\log M)$. ∎ The following lemma gives our preprocessing algorithm for building $\mathcal{A}_{1}$. Lemma 12 $\sum_{\mu\in\Upsilon}t_{\mu}=O(M\log M)$, and the preprocessing time for constructing the data structure $\mathcal{A}_{1}$ excluding the second step is $O(M\log M)$. Proof We begin with the first step of the preprocessing algorithm for $\mathcal{A}_{1}$, i.e., computing the information lists for all nodes $\mu$ of $\Upsilon$. 
In order to do so, for each node $\mu\in\Upsilon$, we will also compute a sorted list $L^{\prime}(\mu)$ of all indices $i\in[1,n]$ such that $P_{i}$ has at least one location in $T(\mu)$, and further, for each connector $y$ of $T(\mu)$, we will compute a list $L^{\prime}(y,\mu)$ that is the same as $L^{\prime}(\mu)$ except that each $i\in L^{\prime}(y,\mu)$ is associated with two values: $F(i,y,\mu)$, which is equal to the probability sum of $P_{i}$ in the subtree $T(y,\mu)$, and $D(i,y,\mu)$, which is equal to the expected distance from $y$ to the locations of $P_{i}$ in $T(y,\mu)$, i.e., $D(i,y,\mu)=w_{i}\cdot\sum_{p_{ij}\in T(y,\mu)}f_{ij}\cdot d(y,p_{ij})$. With a slight abuse of notation, we refer to all of the above as the information lists of $\mu$ (including its original information lists). In the following, we describe our algorithm for computing the information lists of all nodes $\mu$ of $\Upsilon$. Let $F[1\cdots n]$ and $D[1\cdots n]$ be two arrays that we are going to use in our algorithm (they will mostly be used to compute the $F$ values and $D$ values of the information lists of connectors). Initially, if $\mu$ is the root of $\Upsilon$, we have $L^{\prime}(\mu)=\{1,2,\ldots,n\}$ and $L(\mu)=\emptyset$. Since $T(\mu)$ does not have any connectors, we do not need to compute the information lists for connectors. Consider any internal node $\mu$. We assume all information lists for $\mu$ have been computed (i.e., $L(\mu)$, $L^{\prime}(\mu)$, and $L^{\prime}(y,\mu)$, $L(y,\mu)$ for each connector $y$ of $T(\mu)$). In the following we present our algorithm for processing $\mu$, which will compute the information lists of all children of $\mu$ in $O(|T(\mu)|)$ time. We first discuss the case where $\mu$ has two children, denoted by $\mu_{1}$ and $\mu_{2}$, respectively. Let $v$ be the centroid of $T(\mu)$ that is used to decompose $T(\mu)$ into $T(\mu_{1})$ and $T(\mu_{2})$ (e.g., see Fig. 5). We first compute the information lists of $\mu_{1}$, as follows.
We begin with computing the two lists $L(\mu_{1})$ and $L^{\prime}(\mu_{1})$. Initially, we set both of them to $\emptyset$. We scan the list $L^{\prime}(\mu)$ and for each $i\in L^{\prime}(\mu)$, we reset $F[i]=0$. Then, we scan the subtree $T(\mu_{1})$, and for each location $p_{ij}$, we set $F[i]=1$ as a flag showing that $P_{i}$ has locations in $T(\mu_{1})$. Afterwards, we scan the list $L^{\prime}(\mu)$ again, and for each $i\in L^{\prime}(\mu)$, if $F[i]=1$, then we add $i$ to $L^{\prime}(\mu_{1})$; otherwise, we add $i$ to $L(\mu_{1})$. This computes the two index lists $L(\mu_{1})$ and $L^{\prime}(\mu_{1})$ for $\mu_{1}$. The running time is $O(|T(\mu)|)$ since the size of $L^{\prime}(\mu)$ is no more than $|T(\mu)|$. We proceed to compute the information lists for the connectors of $T(\mu_{1})$. Recall that $v$ is a connector of $T(\mu_{1})$. So we need to compute the two lists $L(v,\mu_{1})$ and $L^{\prime}(v,\mu_{1})$, such that each index $i$ in either list is associated with the two values $F(i,v,\mu_{1})$ and $D(i,v,\mu_{1})$. We first copy all indices of $L(\mu_{1})$ to $L(v,\mu_{1})$ and copy all indices of $L^{\prime}(\mu_{1})$ to $L^{\prime}(v,\mu_{1})$. Next, we compute their $F$ and $D$ values as follows. We first scan $L^{\prime}(\mu)$ and for each $i\in L^{\prime}(\mu)$, we reset $F[i]=0$ and $D[i]=0$. Next, we traverse $T(\mu_{2})$ and for each location $p_{ij}$, we update $F[i]=F[i]+f_{ij}$ and $D[i]=D[i]+w_{i}\cdot f_{ij}\cdot d(v,p_{ij})$ ($d(v,p_{ij})$ can be computed in constant time after $O(|T(\mu)|)$-time preprocessing that computes $d(v,v^{\prime})$ for every vertex $v^{\prime}\in T(\mu)$ by traversing $T(\mu)$). Further, if $T(\mu_{2})$ has a connector $y$ other than $v$, then $y$ must be a connector of $T(\mu)$ (e.g., see Fig.
5; there exists at most one such connector $y$); we scan the list $L^{\prime}(y,\mu_{2})$, and for each $i\in L^{\prime}(y,\mu_{2})$, we update $F[i]=F[i]+F(i,y,\mu)$ and $D[i]=D[i]+D(i,y,\mu)+w_{i}\cdot d(v,y)\cdot F(i,y,\mu)$ ($d(v,y)$ is already computed in the preprocessing discussed above). Finally, we scan $L(v,\mu_{1})$ (resp., $L^{\prime}(v,\mu_{1})$) and for each index $i$ in $L(v,\mu_{1})$ (resp., $L^{\prime}(v,\mu_{1})$), we set $F(i,v,\mu_{1})=F[i]$ and $D(i,v,\mu_{1})=D[i]$. This computes the two information lists $L(v,\mu_{1})$ and $L^{\prime}(v,\mu_{1})$. The total time is $O(|T(\mu)|)$. In addition, if $T(\mu_{1})$ has a connector $y$ other than $v$, then $y$ must be a connector of $T(\mu)$ (e.g., see Fig. 5; there is only one such connector), and we further compute the two information lists $L(y,\mu_{1})$ and $L^{\prime}(y,\mu_{1})$. To do so, we first copy all indices of $L(\mu_{1})$ to $L(y,\mu_{1})$ and copy all indices of $L^{\prime}(\mu_{1})$ to $L^{\prime}(y,\mu_{1})$. Observe that $L(y,\mu_{1})$ and $L^{\prime}(y,\mu_{1})$ form a partition of the indices of $L^{\prime}(y,\mu)$. For each index $i$ in $L(y,\mu_{1})$ (resp., $L^{\prime}(y,\mu_{1})$), we have $F(i,y,\mu_{1})=F(i,y,\mu)$ and $D(i,y,\mu_{1})=D(i,y,\mu)$. Therefore, the $F$ and $D$ values for $L^{\prime}(y,\mu_{1})$ and $L(y,\mu_{1})$ can be obtained from $L^{\prime}(y,\mu)$ by scanning the three lists $L^{\prime}(y,\mu_{1})$, $L(y,\mu_{1})$, and $L^{\prime}(y,\mu)$ simultaneously, as they are all sorted lists. The above has computed the information lists for $\mu_{1}$ and the total time is $O(|T(\mu)|)$. Using a similar approach, we can compute the information lists for $\mu_{2}$; we omit the details. This finishes the algorithm for processing $\mu$ where $\mu$ has two children. If $\mu$ has three children, then $T(\mu)$ is decomposed into three subtrees in our decomposition.
As discussed in Section 4.2 on the algorithm for Lemma 1, we can consider the decomposition of $T(\mu)$ as consisting of two intermediate decomposition steps, each of which decomposes a subtree into two subtrees. For each intermediate step, we apply the above processing algorithm for the two-children case. In this way, we can compute the information lists for all three children of $\mu$ in $O(|T(\mu)|)$ time. If $\mu$ has four children, then similarly there are three intermediate decomposition steps and we apply the two-children-case algorithm three times. The total processing time for $\mu$ is still $O(|T(\mu)|)$. Once all internal nodes of $\Upsilon$ are processed, the information lists of all nodes are computed. Since processing each node $\mu$ of $\Upsilon$ takes $O(|T(\mu)|)$ time, the total time of the algorithm is $O(M\log M)$. This also implies that the total size of the information lists of all nodes of $\Upsilon$ is $O(M\log M)$, i.e., $\sum_{\mu\in\Upsilon}t_{\mu}=O(M\log M)$. The above describes the first step of our preprocessing algorithm for $\mathcal{A}_{1}$. For the third step, the node lists $L_{\mu}(i)$ can be built during the course of the above algorithm. Specifically, whenever an index $i$ is added to $L(\mu)$ for some node $\mu$ of $\Upsilon$, we add $\mu$ to the list $L_{\mu}(i)$. Each such addition takes only constant extra time. Therefore, the overall algorithm has the same runtime asymptotically as before. For the fourth step, for each leaf $\mu$ of $\Upsilon$ such that $T(\mu)$ is a vertex $v$ of $T$, we do the following. Let $p_{ij}$ be the uncertain point location at $v$. Based on our above algorithm, we have $L^{\prime}(\mu)=\{i\}$. Since $v$ is a connector, we have a list $L^{\prime}(v,\mu)$ consisting of $i$ itself and two values $F(i,v,\mu)$ and $D(i,v,\mu)$. Notice that $\mathsf{E}\mathrm{d}(v,P_{i})=D(i,v,\mu)$. Hence, once the above algorithm finishes, the value $\mathsf{E}\mathrm{d}(v,P_{i})$ is available.
In summary, the preprocessing algorithm for $\mathcal{A}_{1}$, excluding the second step, runs in $O(M\log M)$ time. The lemma thus follows. ∎ For the second step of the preprocessing of $\mathcal{A}_{1}$, since $\sum_{\mu\in\Upsilon}t_{\mu}=O(M\log M)$ by Lemma 12, applying the preprocessing algorithm of Lemma 9 on all nodes of $\Upsilon$ takes $O(M\log^{2}M)$ time and $O(M\log M)$ space in total. Hence, the total preprocessing time of $\mathcal{A}_{1}$ is $O(M\log^{2}M)$ and the space is $O(M\log M)$. This proves Lemma 4. 5.2 The Data Structure $\mathcal{A}_{3}$ In this section, we present the data structure $\mathcal{A}_{3}$. Given any point $x$ and any uncertain point $P_{i}$, $\mathcal{A}_{3}$ is used to compute the expected distance $\mathsf{E}\mathrm{d}(x,P_{i})$. Note that we do not need to consider the remove operations for $\mathcal{A}_{3}$. We follow the notation defined in Section 5.1. As preprocessing, for each node $\mu\in\Upsilon$, we compute the information lists $L(\mu)$ and $L(y,\mu)$ for each connector $y$ of $T(\mu)$. This is actually the first step of the preprocessing algorithm of $\mathcal{A}_{1}$ in Section 5.1. Further, we also perform the fourth step of the preprocessing algorithm for $\mathcal{A}_{1}$. The above can be done in $O(M\log M)$ time by Lemma 12. Consider any node $\mu\in\Upsilon$ with $L(\mu)\neq\emptyset$. Given any point $x\in T(\mu)$, we have shown in the proof of Lemma 9 that $\mathsf{E}\mathrm{d}(x,P_{i})$ is a function of two variables $a_{x}$ and $b_{x}$. As preprocessing, we compute these functions for all $i\in L(\mu)$, which takes $O(t_{\mu})$ time as shown in the proof of Lemma 9. For each $i\in L(\mu)$, we store the function $\mathsf{E}\mathrm{d}(x,P_{i})$ at $\mu$. The total preprocessing time for $\mathcal{A}_{3}$ is $O(M\log M)$. Consider any query on a point $x\in T$ and $P_{i}\in\mathcal{P}$. Note that $x$ is specified by an edge $e$ and its distance to a vertex of $e$.
Let $\mu_{x}$ be the leaf of $\Upsilon$ with $x\in T(\mu_{x})$. If $x$ is in the interior of $e$, then $T(\mu_{x})$ is the open edge $e$; otherwise, $T(\mu_{x})$ is a single vertex $v=x$. We first consider the case where $x$ is in the interior of $e$. In this case, $P_{i}$ does not have any location in $T(\mu_{x})$ since $T(\mu_{x})$ is an open edge. Hence, if we go along the path of $\Upsilon$ from the root to $\mu_{x}$, we will encounter a first node $\mu^{\prime}$ with $i\in L(\mu^{\prime})$. After finding $\mu^{\prime}$, we compute $a_{x}$ and $b_{x}$ in $T(\mu^{\prime})$, which can be done in constant time after $O(|T(\mu^{\prime})|)$ time preprocessing on $T(\mu^{\prime})$, as discussed in the proof of Lemma 9 (so the total preprocessing time for all nodes of $\Upsilon$ is $O(M\log M)$). After $a_{x}$ and $b_{x}$ are computed, we can obtain the value $\mathsf{E}\mathrm{d}(x,P_{i})$. Remark. One can verify (from the proof of Lemma 9) that as $x$ changes on $e$, $\mathsf{E}\mathrm{d}(x,P_{i})$ is a linear function of $x$ because one of $a_{x}$ and $b_{x}$ is constant and the other linearly changes as $x$ changes in $e$. Hence, the above also computes the linear function $\mathsf{E}\mathrm{d}(x,P_{i})$ for $x\in e$. To find the above node $\mu^{\prime}$, for each node $\mu$ in the path of $\Upsilon$ from the root to $\mu_{x}$, we need to determine whether $i\in L(\mu)$. If we represented the sorted index list $L(\mu)$ by a binary search tree, then we could spend $O(\log n)$ time on each node $\mu$ and thus the total query time would be $O(\log n\log M)$. To remove the $O(\log n)$ factor, we further enhance our preprocessing work by building a fractional cascading structure [12] on the sorted index lists $L(\mu)$ for all nodes $\mu$ of $\Upsilon$. The total preprocessing time for building the structure is linear in the total number of nodes of all lists, which is $O(M\log M)$ by Lemma 12. 
For each node $\mu$, the fractional cascading structure will create a new list $L^{*}(\mu)$ such that $L(\mu)\subseteq L^{*}(\mu)$. Further, for each index $i\in L^{*}(\mu)$, if it is also in $L(\mu)$, then we set a flag as an indicator. Setting the flags for all nodes of $\Upsilon$ can be done in $O(M\log M)$ time as well. Using the fractional cascading structure, we only need to do binary search on the list at the root and then spend constant time on each subsequent node [12], and thus the total query time is $O(\log M)$. If $x$ is a vertex $v$ of $T$, then depending on whether the location at $v$ is $P_{i}$’s or not, there are two subcases. If it is not, then we apply the same query algorithm as above. Otherwise, let $p_{ij}$ be the location at $v$. Recall that in our preprocessing, the value $\mathsf{E}\mathrm{d}(v,P_{i})$ has already been computed and stored at $\mu_{x}$ as $T(\mu_{x})=v$. Since $v=x$, we obtain $\mathsf{E}\mathrm{d}(x,P_{i})=\mathsf{E}\mathrm{d}(v,P_{i})$. Hence, in either case, the query algorithm runs in $O(\log M)$ time. This proves Lemma 6. 5.3 The Data Structure $\mathcal{A}_{2}$ The data structure $\mathcal{A}_{2}$ is for answering candidate-center-queries: Given any vertex $v\in T_{m}$, the query asks for the candidate center $c$ for the active medians in $T_{m}(v)$, which is the subtree of $T_{m}$ rooted at $v$. Once an uncertain point is deactivated, $\mathcal{A}_{2}$ can also support the operation of removing it. Consider any vertex $v\in T_{m}$. Recall that due to our reindexing, the indices of all medians in $T_{m}(v)$ exactly form the range $R(v)$. Recall that the candidate center $c$ is the point on the path $\pi(v,r)$ closest to $r$ with $\mathsf{E}\mathrm{d}(c,P_{i})\leq\lambda$ for each active uncertain point $P_{i}$ with $i\in R(v)$.
Also recall that our algorithm invariant guarantees that whenever a candidate-center-query is called at a vertex $v$, it holds that $\mathsf{E}\mathrm{d}(v,P_{i})\leq\lambda$ for each active uncertain point $P_{i}$ with $i\in R(v)$. However, we actually give a result that can answer a more general query. Specifically, given a range $[k,j]$ with $1\leq k\leq j\leq n$, let $v_{kj}$ be the lowest common ancestor of all medians $p_{i}^{*}$ with $i\in[k,j]$ in $T_{m}$; if $\mathsf{E}\mathrm{d}(v_{kj},P_{i})>\lambda$ for some active $P_{i}$ with $i\in[k,j]$, then our query algorithm will return $\emptyset$; otherwise, our algorithm will compute a point $c$ on $\pi(v_{kj},r)$ closest to $r$ with $\mathsf{E}\mathrm{d}(c,P_{i})\leq\lambda$ for each active $P_{i}$ with $i\in[k,j]$. We refer to it as the generalized candidate-center-query. In the preprocessing, we build a complete binary search tree $\mathcal{T}$ whose leaves from left to right correspond to indices $1,2,\ldots,n$. For each node $u$ of $\mathcal{T}$, let $R(u)$ denote the set of indices corresponding to the leaves in the subtree of $\mathcal{T}$ rooted at $u$. For each median $p_{i}^{*}$, define $q_{i}$ to be the point $x$ on the path $\pi(p_{i}^{*},r)$ of $T_{m}$ closest to $r$ with $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$. For each node $u$ of $\mathcal{T}$, we define a point $q(u)$ as follows. If $u$ is a leaf, define $q(u)$ to be $q_{i}$, where $i$ is the index corresponding to leaf $u$. If $u$ is an internal node, let $v_{u}$ denote the vertex of $T_{m}$ that is the lowest common ancestor of the medians $p_{i}^{*}$ for all $i\in R(u)$. If $\mathsf{E}\mathrm{d}(v_{u},P_{i})\leq\lambda$ for all $i\in R(u)$ (or equivalently, $q_{i}$ is in $\pi(v_{u},r)$ for all $i\in R(u)$), then define $q(u)$ to be the point $x$ on the path $\pi(v_{u},r)$ of $T_{m}$ closest to $r$ with $\mathsf{E}\mathrm{d}(x,P_{i})\leq\lambda$ for all $i\in R(u)$; otherwise, $q(u)=\emptyset$.
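The role of $\mathcal{T}$ can be illustrated with a small runnable toy. Purely for illustration, assume all medians lie on a single root path of $T_{m}$, so each point $q_{i}$ reduces to its depth (its distance from $r$); then the child point closer to $v_{u}$ is simply the deeper one, and deactivating $P_{i}$ amounts to dropping its constraint. The paper's actual combine step instead compares genuine tree points via lowest common ancestor queries and may return $\emptyset$; the class and names below are ours, not the paper's.

```python
# Toy model of the index tree storing q(u) (cf. Lemmas 13-14), under the
# single-root-path assumption stated above: q_i is just a depth, the combined
# candidate for a range is the deepest q_i in it, and a removed point
# contributes no constraint (modeled as -infinity).
NEG = float("-inf")

class QTree:
    def __init__(self, depths):              # depths[i] = depth of q_{i+1}
        self.n = n = len(depths)
        self.t = [NEG] * (2 * n)
        self.t[n:] = depths
        for u in range(n - 1, 0, -1):        # build bottom-up
            self.t[u] = max(self.t[2 * u], self.t[2 * u + 1])

    def remove(self, i):                     # deactivate P_{i+1} in O(log n)
        u = self.n + i
        self.t[u] = NEG
        u //= 2
        while u:                             # refresh q(u) along the leaf-to-root path
            self.t[u] = max(self.t[2 * u], self.t[2 * u + 1])
            u //= 2

    def query(self, k, j):                   # deepest q_i over i in [k, j]
        res, lo, hi = NEG, self.n + k, self.n + j + 1
        while lo < hi:                       # visits O(log n) canonical nodes
            if lo & 1:
                res = max(res, self.t[lo]); lo += 1
            if hi & 1:
                hi -= 1; res = max(res, self.t[hi])
            lo //= 2; hi //= 2
        return res
```

Both operations touch $O(\log n)$ nodes, mirroring the bounds of Lemma 14; only the combine rule differs from the paper's LCA-based one.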
Lemma 13 The points $q(u)$ for all nodes $u\in\mathcal{T}$ can be computed in $O(M\log M+n\log^{2}M)$ time. Proof Assume the data structure $\mathcal{A}_{3}$ for Lemma 6 has been computed in $O(M\log M)$ time. In the following, by using $\mathcal{A}_{3}$ we compute $q(u)$ for all nodes $u\in\mathcal{T}$ in $O(M+n\log^{2}M)$ time. We first compute $q_{i}$ for all medians $p_{i}^{*}$. Consider a depth-first search on $T_{m}$ starting from the root $r$. During the traversal, we use a stack $S$ to maintain all vertices in order along the path $\pi(r,v)$ whenever a vertex $v$ is visited. Such a stack can be easily maintained by standard techniques (i.e., push new vertices into $S$ when we go “deeper” and pop vertices out of $S$ when backtracking), without affecting the linear-time performance of the traversal asymptotically. Suppose the traversal visits a median $p_{i}^{*}$. Then, the vertices of $S$ essentially form the path $\pi(r,p_{i}^{*})$. To compute $q_{i}$, we do binary search on the vertices of $S$, as follows. We implement $S$ by using an array of size $M$. Since the order of the vertices of $S$ is the same as their order along $\pi(r,p_{i}^{*})$, the expected distances $\mathsf{E}\mathrm{d}(v,P_{i})$ of the vertices $v\in S$ along their order in $S$ are monotonically changing. Consider the middle vertex $v$ of $S$. The vertex $v$ partitions $S$ into two subarrays such that one subarray contains all vertices of $\pi(r,v)$ and the other contains vertices of $\pi(v,p_{i}^{*})$. We compute $\mathsf{E}\mathrm{d}(v,P_{i})$ by using the data structure $\mathcal{A}_{3}$. Depending on whether $\mathsf{E}\mathrm{d}(v,P_{i})\leq\lambda$, we can proceed on only one subarray of $S$. The binary search will eventually locate an edge $e=(v,v^{\prime})$ such that $\mathsf{E}\mathrm{d}(v,P_{i})\leq\lambda$ and $\mathsf{E}\mathrm{d}(v^{\prime},P_{i})>\lambda$. Then, we know that $q_{i}$ is located on $e\setminus\{v^{\prime}\}$.
We further pick any point $x$ in the interior of $e$ and the data structure $\mathcal{A}_{3}$ can also compute the function $\mathsf{E}\mathrm{d}(x,P_{i})$ for $x\in e$ as remarked in Section 5.2. With the function $\mathsf{E}\mathrm{d}(x,P_{i})$ for $x\in e$, we can compute $q_{i}$ in constant time. Since the binary search calls $\mathcal{A}_{3}$ $O(\log M)$ times, the total time of the binary search is $O(\log^{2}M)$. In this way, we can compute $q_{i}$ for all medians $p_{i}^{*}$ with $i\in[1,n]$ in $O(M+n\log^{2}M)$ time, where the $O(n\log^{2}M)$ time is for the binary search procedures in the entire algorithm and the $O(M)$ time is for traversing the tree $T_{m}$. Note that this also computes $q(u)$ for all leaves $u$ of $\mathcal{T}$. We proceed to compute the points $q(u)$ for all internal nodes $u$ of $\mathcal{T}$ in a bottom-up manner. Consider an internal node $u$ such that $q(u_{1})$ and $q(u_{2})$ have been computed, where $u_{1}$ and $u_{2}$ are the children of $u$, respectively. We compute $q(u)$ as follows. If either one of $q(u_{1})$ and $q(u_{2})$ is $\emptyset$, then we set $q(u)=\emptyset$. Otherwise, we do the following. Let $i$ (resp., $j$) be the index corresponding to the leftmost (resp., rightmost) leaf in the subtree $\mathcal{T}(u)$ of $\mathcal{T}$ rooted at $u$. We first find the lowest common ancestor of $p_{i}^{*}$ and $p_{j}^{*}$ in the tree $T_{m}$, denoted by $v_{ij}$. Due to our particular way of defining indices of all medians, $v_{ij}$ is the lowest common ancestor of the medians $p_{k}^{*}$ for all $k\in[i,j]$. We determine whether $q(u_{1})$ and $q(u_{2})$ are both on $\pi(r,v_{ij})$. If either one is not on $\pi(r,v_{ij})$, then we set $q(u)=\emptyset$; otherwise, we set $q(u)$ to the one of $q(u_{1})$ and $q(u_{2})$ closer to $v_{ij}$. The above computation of $q(u)$ can be implemented in $O(1)$ time, after $O(M)$ time preprocessing on $T_{m}$.
Specifically, with $O(M)$ time preprocessing on $T_{m}$, given any two vertices of $T_{m}$, we can compute their lowest common ancestor in $O(1)$ time [7, 21]. Hence, we can compute $v_{ij}$ in constant time. To determine whether $q(u_{1})$ is on $\pi(r,v_{ij})$, we use the following approach. As a point on $T_{m}$, $q(u_{1})$ is specified by an edge $e_{1}$ and its distance to one incident vertex of $e_{1}$. Let $v_{1}$ be the incident vertex of $e_{1}$ that is farther from the root $r$. Observe that $q(u_{1})$ is on $\pi(r,v_{ij})$ if and only if the lowest common ancestor of $v_{1}$ and $v_{ij}$ is $v_{1}$. Hence, we can determine whether $q(u_{1})$ is on $\pi(r,v_{ij})$ in constant time by a lowest common ancestor query. Similarly, we can determine whether $q(u_{2})$ is on $\pi(r,v_{ij})$ in constant time. Assume both $q(u_{1})$ and $q(u_{2})$ are on $\pi(r,v_{ij})$. To determine which one of $q(u_{1})$ and $q(u_{2})$ is closer to $v_{ij}$, if they are on the same edge $e$ of $T_{m}$, then this can be done in constant time since both points are specified by their distances to an incident vertex of $e$. Otherwise, let $e_{1}$ be the edge of $T_{m}$ containing $q(u_{1})$ and let $v_{1}$ be the incident vertex of $e_{1}$ farther from $r$; similarly, let $e_{2}$ be the edge of $T_{m}$ containing $q(u_{2})$ and let $v_{2}$ be the incident vertex of $e_{2}$ farther from $r$. Observe that $q(u_{1})$ is closer to $v_{ij}$ if and only if the lowest common ancestor of $v_{1}$ and $v_{2}$ is $v_{2}$, which can be determined in constant time by a lowest common ancestor query. The above shows that we can compute $q(u)$ in constant time based on $q(u_{1})$ and $q(u_{2})$. Thus, we can compute $q(u)$ for all internal nodes $u$ of $\mathcal{T}$ in $O(n)$ time. The lemma thus follows.
∎ In addition to constructing the tree $\mathcal{T}$ as above, our preprocessing for $\mathcal{A}_{2}$ also includes building a lowest common ancestor query data structure on $T_{m}$ in $O(M)$ time, such that given any two vertices of $T_{m}$, we can compute their lowest common ancestor in $O(1)$ time [7, 21]. This finishes the preprocessing for $\mathcal{A}_{2}$. The total time is $O(M\log M+n\log^{2}M)$. The following lemma gives our algorithm for performing operations on $\mathcal{T}$. Lemma 14 Given any range $[k,j]$, we can answer each generalized candidate-center-query in $O(\log n)$ time, and each remove operation (i.e., deactivating an uncertain point) can be performed in $O(\log n)$ time. Proof We first describe how to perform the remove operations. Suppose an uncertain point $P_{i}$ is deactivated. Let $u_{i}$ be the leaf of $\mathcal{T}$ corresponding to the index $i$. We first set $q(u_{i})=\emptyset$. Then, we consider the path of $\mathcal{T}$ from $u_{i}$ to the root in a bottom-up manner, and for each node $u$, we update $q(u)$ based on $q(u_{1})$ and $q(u_{2})$ in constant time in exactly the same way as in Lemma 13, where $u_{1}$ and $u_{2}$ are the two children of $u$, respectively. In this way, each remove operation can be performed in $O(\log n)$ time. Next we discuss the generalized candidate-center-query on a range $[k,j]$. By standard techniques, we can locate a set $S$ of $O(\log n)$ nodes of $\mathcal{T}$ such that the descendant leaves of these nodes exactly correspond to indices in the range $[k,j]$. We find the lowest common ancestor $v_{kj}$ of $p_{k}^{*}$ and $p_{j}^{*}$ in $T_{m}$ in constant time. Then, for each node $u\in S$, we check whether $q(u)$ is on $\pi(r,v_{kj})$, which can be done in constant time by using the lowest common ancestor query in the same way as in the proof of Lemma 13. If $q(u)$ is not on $\pi(r,v_{kj})$ for some $u\in S$, then we simply return $\emptyset$. 
Otherwise, $q(u)$ is on $\pi(r,v_{kj})$ for every $u\in S$. We further find the point $q(u)$ that is closest to $v_{kj}$ among all $u\in S$, and return it as the answer to the candidate-center-query on $[k,j]$. Such a $q(u)$ can be found by comparing the nodes of $S$ in $O(\log n)$ time. Specifically, for each pair $u$ and $u^{\prime}$ in a comparison, we find among $q(u)$ and $q(u^{\prime})$ the one closer to $v_{kj}$, which can be done in constant time by using the lowest common ancestor query in the same way as in the proof of Lemma 13, and then we keep comparing the above closer one to the rest of the nodes in $S$. In this way, the candidate-center-query can be handled in $O(\log n)$ time. ∎ This proves Lemma 5. 5.4 Handling the Degenerate Case and Reducing the General Case to the Vertex-Constrained Case We have solved the vertex-constrained case problem, i.e., all locations of $\mathcal{P}$ are at vertices of $T$ and each vertex of $T$ contains at least one location of $\mathcal{P}$. Recall that we have made a general position assumption that every vertex of $T$ has only one location of $\mathcal{P}$. For the degenerate case, our algorithm still works in the same way as before with the following slight change. Consider a subtree $T(\mu)$ corresponding to a node $\mu$ of $\Upsilon$. In the degenerate case, since a vertex of $T(\mu)$ may hold multiple uncertain point locations of $\mathcal{P}$, we define the size $|T(\mu)|$ to be the total number of all uncertain point locations in $T(\mu)$. In this way, the algorithm and the analysis follow similarly as before. In fact, the performance of the algorithm becomes even better in the degenerate case since the height of the decomposition tree $\Upsilon$ becomes smaller (specifically, it is bounded by $O(\log t)$, where $t$ is the number of vertices of $T$, and $t<M$ in the degenerate case). The above has solved the vertex-constrained case problem (including the degenerate case). 
In the general case, a location of $\mathcal{P}$ may be in the interior of an edge of $T$ and a vertex of $T$ may not hold any location of $\mathcal{P}$. The following lemma solves the general case by reducing it to the vertex-constrained case. The reduction is almost the same as the one given in [40] for the one-center problem and we include it here for completeness. Lemma 15 The center-coverage problem on $\mathcal{P}$ and $T$ is solvable in $O(\tau+M+|T|)$ time, where $\tau$ is the time for solving the same problem on $\mathcal{P}$ and $T$ if this were a vertex-constrained case. Proof We reduce the problem to an instance of the vertex-constrained case and then apply our algorithm for the vertex-constrained case. More specifically, we will modify the tree $T$ to obtain another tree $T^{\prime}$ of size $\Theta(M)$. We will also compute another set $\mathcal{P}^{\prime}$ of $n$ uncertain points on $T^{\prime}$, which correspond to the uncertain points of $\mathcal{P}$ with the same weights, but each uncertain point $P_{i}$ of $\mathcal{P}^{\prime}$ has at most $2m_{i}$ locations on $T^{\prime}$. Further, each location of $\mathcal{P}^{\prime}$ is at a vertex of $T^{\prime}$ and each vertex of $T^{\prime}$ holds at least one location of $\mathcal{P}^{\prime}$, i.e., it is the vertex-constrained case. We will show that we can obtain $T^{\prime}$ and $\mathcal{P}^{\prime}$ in $O(M+|T|)$ time. Finally, we will show that given a set of centers on $T^{\prime}$ for $\mathcal{P}^{\prime}$, we can find a corresponding set of the same number of centers on $T$ for $\mathcal{P}$ in $O(M+|T|)$ time. The details are given below. We assume that for each edge $e$ of $T$, all locations of $\mathcal{P}$ on $e$ have been sorted (otherwise we sort them first, which would introduce an additional $O(M\log M)$ time to the problem reduction).
We traverse $T$, and for each edge $e$, if $e$ contains some locations of $\mathcal{P}$ in its interior, we create a new vertex in $T$ for each such location. In this way, we create at most $M$ new vertices for $T$. The above can be done in $O(M+|T|)$ time. We use $T_{1}$ to denote the new tree. Note that $|T_{1}|=O(M+|T|)$. For each vertex $v$ of $T_{1}$, if $v$ does not hold any location of $\mathcal{P}$, we call $v$ an empty vertex. Next, we modify $T_{1}$ in the following way. First, for each leaf $v$ of $T_{1}$, if $v$ is empty, then we remove $v$ from $T_{1}$. We keep doing this until no leaf of the remaining tree is empty. Let $T_{2}$ denote the tree after the above step (e.g., see Fig. 6(b)). Second, for each internal vertex $v$ of $T_{2}$, if the degree of $v$ is $2$ and $v$ is empty, then we remove $v$ from $T_{2}$ and merge its two incident edges into a single edge whose length is equal to the sum of the lengths of the two incident edges of $v$. We keep doing this until no degree-2 vertex of the remaining tree is empty. Let $T^{\prime}$ represent the remaining tree (e.g., see Fig. 6(c)). The above two steps can be implemented in $O(|T_{1}|)$ time, e.g., by a post-order traversal of $T_{1}$. We omit the details. Notice that every location of $\mathcal{P}$ is at a vertex of $T^{\prime}$ and every vertex of $T^{\prime}$ except those whose degrees are at least three holds a location of $\mathcal{P}$. Let $V$ denote the set of all vertices of $T^{\prime}$ and let $V_{3}$ denote the set of the vertices of $T^{\prime}$ whose degrees are at least three. Clearly, $|V_{3}|\leq|V\setminus V_{3}|$. Since each vertex in $V\setminus V_{3}$ holds a location of $\mathcal{P}$, we have $|V\setminus V_{3}|\leq M$, and thus $|V_{3}|\leq M$.
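The two modification steps from $T_{1}$ to $T^{\prime}$ can be sketched as follows. This is a minimal illustration under an assumed adjacency-map representation `{v: {u: length}}` of the tree; the function name and representation are ours, and the paper's $O(|T_{1}|)$ post-order implementation achieves the same effect by a single traversal.

```python
# Sketch of the contraction T_1 -> T': peel empty leaves, then splice out
# empty degree-2 vertices, merging their incident edges into one of summed
# length.  adj maps each vertex to {neighbor: edge_length}; 'empty' is the
# set of vertices holding no location of P.
def contract(adj, empty):
    # Step 1: repeatedly remove empty leaves.
    stack = [v for v in adj if len(adj[v]) == 1 and v in empty]
    while stack:
        v = stack.pop()
        if v not in adj or len(adj[v]) != 1:
            continue                          # degree changed meanwhile
        (u, _), = adj[v].items()
        del adj[u][v]
        del adj[v]
        if len(adj[u]) == 1 and u in empty:
            stack.append(u)                   # u just became an empty leaf
    # Step 2: splice out empty degree-2 vertices.
    for v in [v for v in adj if len(adj[v]) == 2 and v in empty]:
        if v not in adj or len(adj[v]) != 2:
            continue
        (a, la), (b, lb) = adj[v].items()
        del adj[a][v]; del adj[b][v]; del adj[v]
        adj[a][b] = adj[b][a] = la + lb       # merged edge keeps total length
    return adj
```

Splicing a chain of empty degree-2 vertices works because each splice leaves the next chain vertex at degree 2, so it is handled when its turn in the snapshot comes.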
To make every vertex of $T^{\prime}$ contain a location of an uncertain point, we first arbitrarily pick $m_{1}$ vertices from $V_{3}$ and remove them from $V_{3}$, and set a “dummy” location for $P_{1}$ at each of these vertices with zero probability. We then pick the next $m_{2}$ vertices from $V_{3}$ for $P_{2}$ and continue this procedure until $V_{3}$ becomes empty. Since $|V_{3}|\leq M$, the above procedure will eventually make $V_{3}$ empty before we “use up” all $n$ uncertain points of $\mathcal{P}$. We let $\mathcal{P}^{\prime}$ be the set of new uncertain points. The uncertain point $P^{\prime}_{i}\in\mathcal{P}^{\prime}$ corresponding to each $P_{i}\in\mathcal{P}$ has at most $2m_{i}$ locations on $T^{\prime}$. Since now every vertex of $T^{\prime}$ holds a location of $\mathcal{P}^{\prime}$ and every location of $\mathcal{P}^{\prime}$ is at a vertex of $T^{\prime}$, we obtain an instance of the vertex-constrained case on $T^{\prime}$ and $\mathcal{P}^{\prime}$. Hence, we can use our algorithm for the vertex-constrained case to compute a set $C^{\prime}$ of centers on $T^{\prime}$ in $O(\tau)$ time. In the following, for each center $c^{\prime}\in C^{\prime}$, we find a corresponding center $c$ on the original tree $T$ such that $P_{i}$ is covered by $c$ on $T$ if and only if $P^{\prime}_{i}$ is covered by $c^{\prime}$ on $T^{\prime}$. Observe that every vertex $v$ of $T^{\prime}$ also exists as a vertex in $T_{1}$, and every edge $(u,v)$ of $T^{\prime}$ corresponds to the simple path in $T_{1}$ between $u$ and $v$. Suppose $c^{\prime}$ is on an edge $(u,v)$ of $T^{\prime}$ and let $\delta$ be the distance between $u$ and $c^{\prime}$ along this edge. We locate a corresponding $c_{1}$ in $T_{1}$ on the simple path from $u$ to $v$ at distance $\delta$ from $u$. On the other hand, by our construction from $T$ to $T_{1}$, if an edge $e$ of $T$ does not appear in $T_{1}$, then $e$ is broken into several edges in $T_{1}$ whose total length is equal to that of $e$. Hence, every point of $T$ corresponds to a point on $T_{1}$.
We find the point on $T$ that corresponds to $c_{1}$ of $T_{1}$, and let the point be $c$. Let $C$ be the set of points $c$ on $T$ corresponding to all $c^{\prime}\in C^{\prime}$ on $T^{\prime}$, as defined above. Let $C_{1}$ be the set of points $c_{1}$ on $T_{1}$ corresponding to all $c^{\prime}\in C^{\prime}$ on $T^{\prime}$. To compute $C$, we first compute $C_{1}$. This can be done by traversing both $T^{\prime}$ and $T_{1}$, i.e., for each edge $e$ of $T^{\prime}$ that contains centers $c^{\prime}$ of $C^{\prime}$, we find the corresponding points $c_{1}$ in the path of $T_{1}$ corresponding to the edge $e$. Since the paths of $T_{1}$ corresponding to the edges of $T^{\prime}$ are pairwise edge-disjoint, the runtime for computing $C_{1}$ is $O(|T_{1}|+|T^{\prime}|)$. Next, we compute $C$, and similarly this can be done by traversing both $T_{1}$ and $T$ in $O(|T_{1}|+|T|)$ time. Hence, the total time for computing $C$ is $O(|T|+M)$ since both $|T_{1}|$ and $|T^{\prime}|$ are bounded by $O(|T|+M)$. In summary, we can find an optimal solution for the center-coverage problem on $T$ and $\mathcal{P}$ in $O(\tau+M+|T|)$ time. The lemma thus follows. ∎ 6 The $k$-Center Problem The $k$-center problem is to find a set $C$ of $k$ centers on $T$ minimizing the value $\max_{1\leq i\leq n}d(C,P_{i})$, where $d(C,P_{i})=\min_{c\in C}d(c,P_{i})$. Let $\lambda_{opt}=\max_{1\leq i\leq n}d(C,P_{i})$ for an optimal solution $C$, and we call $\lambda_{opt}$ the optimal covering range. As with the center-coverage problem, we can also reduce the general $k$-center problem to the vertex-constrained case. The reduction is similar to the one in Lemma 15 and we omit the details. In the following, we only discuss the vertex-constrained case and we assume the problem on $T$ and $\mathcal{P}$ is a vertex-constrained case. Let $\tau$ denote the running time of the center-coverage algorithm on $T$ and $\mathcal{P}$.
To solve the $k$-center problem, the key is to compute $\lambda_{opt}$, after which we can compute $k$ centers in additional $O(\tau)$ time using our algorithm for the center-coverage problem with $\lambda=\lambda_{opt}$. To compute $\lambda_{opt}$, there are two main steps. In the first step, we find a set $S$ of $O(n^{2})$ candidate values such that $\lambda_{opt}$ must be in $S$. In the second step, we compute $\lambda_{opt}$ in $S$. Below we first compute the set $S$. For any two medians $p_{i}^{*}$ and $p_{j}^{*}$ on $T_{m}$, observe that as $x$ moves on $\pi(p_{i}^{*},p_{j}^{*})$ from $p_{i}^{*}$ to $p_{j}^{*}$, $\mathsf{E}\mathrm{d}(x,P_{i})$ is monotonically increasing and $\mathsf{E}\mathrm{d}(x,P_{j})$ is monotonically decreasing (e.g., see Fig. 7); we define $c_{ij}$ to be a point on the path $\pi(p_{i}^{*},p_{j}^{*})$ with $\mathsf{E}\mathrm{d}(c_{ij},P_{i})=\mathsf{E}\mathrm{d}(c_{ij},P_{j})$, and we let $c_{ij}=\emptyset$ if such a point does not exist on $\pi(p_{i}^{*},p_{j}^{*})$. We have the following lemma. Lemma 16 Either $\lambda_{opt}=\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})$ for some uncertain point $P_{i}$ or $\lambda_{opt}=\mathsf{E}\mathrm{d}(c_{ij},P_{i})=\mathsf{E}\mathrm{d}(c_{ij},P_{j})$ for two uncertain points $P_{i}$ and $P_{j}$. Proof Consider any optimal solution and let $C$ be the set of all centers. For each $c\in C$, let $Q(c)$ be the set of uncertain points that are covered by $c$ with respect to $\lambda_{opt}$, i.e., for each $P_{i}\in Q(c)$, $\mathsf{E}\mathrm{d}(c,P_{i})\leq\lambda_{opt}$. Let $C^{\prime}$ be the subset of all centers $c\in C$ such that $Q(c)$ has an uncertain point $P_{i}$ with $\mathsf{E}\mathrm{d}(c,P_{i})=\lambda_{opt}$ and there is no other center $c^{\prime}\in C$ with $\mathsf{E}\mathrm{d}(c^{\prime},P_{i})<\lambda_{opt}$. For each $c\in C^{\prime}$, let $Q^{\prime}(c)$ be the set of all uncertain points $P_{i}$ such that $\mathsf{E}\mathrm{d}(c,P_{i})=\lambda_{opt}$.
If there exists a center $c\in C^{\prime}$ with an uncertain point $P_{i}\in Q^{\prime}(c)$ such that $c$ is at $p_{i}^{*}$, then the lemma follows since $\lambda_{opt}=\mathsf{E}\mathrm{d}(c,P_{i})=\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})$. Otherwise, if there exists a center $c\in C^{\prime}$ with two uncertain points $P_{i}$ and $P_{j}$ in $Q^{\prime}(c)$ such that $c$ is at $c_{ij}$, then the lemma also follows since $\lambda_{opt}=\mathsf{E}\mathrm{d}(c_{ij},P_{i})=\mathsf{E}\mathrm{d}(c_{ij},P_{j})$. Otherwise, if we move each $c\in C^{\prime}$ towards the median $p_{j}^{*}$ for any $P_{j}\in Q^{\prime}(c)$, then $\mathsf{E}\mathrm{d}(c,P_{i})$ does not increase for any $P_{i}\in Q^{\prime}(c)$. During the above movements of all $c\in C^{\prime}$, one of the following two cases must happen (since otherwise we would obtain another set $C^{\prime\prime}$ of $k$ centers with $\max_{1\leq i\leq n}d(C^{\prime\prime},P_{i})<\lambda_{opt}$, contradicting the fact that $\lambda_{opt}$ is the optimal covering range): either a center $c$ of $C^{\prime}$ arrives at a median $p_{i}^{*}$ with $\lambda_{opt}=\mathsf{E}\mathrm{d}(c,P_{i})=\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})$, or a center $c$ of $C^{\prime}$ arrives at $c_{ij}$ for two uncertain points $P_{i}$ and $P_{j}$ with $\lambda_{opt}=\mathsf{E}\mathrm{d}(c_{ij},P_{i})=\mathsf{E}\mathrm{d}(c_{ij},P_{j})$. In either case, the lemma follows. ∎ In light of Lemma 16, we let $S=S_{1}\cup S_{2}$ with $S_{1}=\{\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})\ |\ 1\leq i\leq n\}$ and $S_{2}=\{\mathsf{E}\mathrm{d}(c_{ij},P_{i})\ |\ 1\leq i,j\leq n\}$ (if $c_{ij}=\emptyset$ for a pair $i$ and $j$, then let $\mathsf{E}\mathrm{d}(c_{ij},P_{i})=0$). Hence, $\lambda_{opt}$ must be in $S$ and $|S|=O(n^{2})$. We assume the data structure $\mathcal{A}_{3}$ has been computed in $O(M\log M)$ time. Then, computing the values of $S_{1}$ can be done in $O(n\log M)$ time by using $\mathcal{A}_{3}$.
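The second step, finding $\lambda_{opt}$ in $S$, amounts to a binary search over the sorted candidate values with the center-coverage algorithm as a feasibility oracle. In the Python sketch below, `min_centers` is a hypothetical stand-in for that oracle (here faked with a monotone step function); only the search structure mirrors the text:

```python
# Find the smallest feasible candidate value: lambda is feasible when the
# center-coverage decision procedure needs at most k centers.
def smallest_feasible(S, k, min_centers):
    S = sorted(set(S))
    lo, hi, ans = 0, len(S) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if min_centers(S[mid]) <= k:  # feasible: try a smaller lambda
            ans, hi = S[mid], mid - 1
        else:                         # infeasible: need a larger lambda
            lo = mid + 1
    return ans

# Hypothetical oracle: a larger lambda never needs more centers (monotone).
fake_oracle = lambda lam: 10 - int(lam)
lam_opt = smallest_feasible([1.0, 3.0, 5.0, 7.0, 9.0], 5, fake_oracle)
```

The monotonicity of the oracle (a larger covering range never needs more centers) is what makes the binary search valid; with $|S|=O(n^{2})$, the search makes $O(\log n)$ oracle calls.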
The following lemma computes $S_{2}$ in $O(M+n^{2}\log n\log M)$ time. Lemma 17 After $O(M)$ time preprocessing, we can compute $\mathsf{E}\mathrm{d}(c_{ij},P_{i})$ in $O(\log n\cdot\log M)$ time for any pair $i$ and $j$. Proof As preprocessing, we do the following. First, we compute a lowest common ancestor query data structure on $T_{m}$ in $O(M)$ time such that given any two vertices of $T_{m}$, their lowest common ancestor can be found in $O(1)$ time [7, 21]. Second, for each vertex $v$ of $T_{m}$, we compute the length $d(v,r)$, i.e., the number of edges in the path of $T_{m}$ from $v$ to the root $r$ of $T_{m}$. Note that $d(v,r)$ is also the depth of $v$. Computing $d(v,r)$ for all vertices $v$ of $T_{m}$ can be done in $O(M)$ time by a depth-first traversal of $T_{m}$ starting from $r$. For each vertex $v\in T_{m}$ and any integer $d\in[0,d(v,r)]$, we use $\alpha(v,d)$ to denote the ancestor of $v$ whose depth is $d$. We build a level ancestor query data structure on $T_{m}$ in $O(M)$ time that can compute $\alpha(v,d)$ in constant time for any vertex $v$ and any $d\in[0,d(v,r)]$ [8]. The total time of the above preprocessing is $O(M)$. Consider any pair $i$ and $j$. We present an algorithm to compute $c_{ij}$ in $O(\log n\cdot\log M)$ time, after which $\mathsf{E}\mathrm{d}(c_{ij},P_{i})$ can be computed in $O(\log M)$ time by using the data structure $\mathcal{A}_{3}$. Observe that $c_{ij}\neq\emptyset$ if and only if $\mathsf{E}\mathrm{d}(p_{i}^{*},P_{i})\leq\mathsf{E}\mathrm{d}(p_{i}^{*},P_{j})$ and $\mathsf{E}\mathrm{d}(p_{j}^{*},P_{j})\leq\mathsf{E}\mathrm{d}(p_{j}^{*},P_{i})$. Using $\mathcal{A}_{3}$, we can compute the four expected distances in $O(\log M)$ time and thus determine whether $c_{ij}=\emptyset$. If so, we simply return zero. Otherwise, we proceed as follows. Note that $c_{ij}$ is a point $x\in\pi(p_{i}^{*},p_{j}^{*})$ minimizing the value $\max\{\mathsf{E}\mathrm{d}(x,P_{i}),\mathsf{E}\mathrm{d}(x,P_{j})\}$ (e.g., see Fig. 7).
To compute $c_{ij}$, by using a lowest common ancestor query, we find the lowest common ancestor $v_{ij}$ of $p_{i}^{*}$ and $p_{j}^{*}$ in constant time. Then, we search $c_{ij}$ on the path $\pi(p_{i}^{*},v_{ij})$, as follows (we will search the path $\pi(p_{j}^{*},v_{ij})$ later). To simplify the notation, let $\pi=\pi(p_{i}^{*},v_{ij})$. By using the level ancestor queries, we can find the middle edge of $\pi$ in $O(1)$ time. Specifically, we find the two vertices $v_{1}=\alpha(p_{i}^{*},k)$ and $v_{2}=\alpha(p_{i}^{*},k+1)$, where $k=\lfloor(d(p_{i}^{*},r)+d(v_{ij},r))/2\rfloor$. Note that the two values $d(p_{i}^{*},r)$ and $d(v_{ij},r)$ are computed in the preprocessing. Hence, $v_{1}$ and $v_{2}$ can be found in constant time by the level ancestor queries. Clearly, the edge $e=(v_{1},v_{2})$ is the middle edge of $\pi$. After $e$ is obtained, by using the data structure $\mathcal{A}_{3}$ and as remarked in Section 5.2, we can obtain the two functions $\mathsf{E}\mathrm{d}(x,P_{i})$ and $\mathsf{E}\mathrm{d}(x,P_{j})$ on $x\in e$ in $O(\log M)$ time, and both functions are linear in $x$ for $x\in e$. As $x$ moves in $e$ from one end to the other, one of $\mathsf{E}\mathrm{d}(x,P_{i})$ and $\mathsf{E}\mathrm{d}(x,P_{j})$ is monotonically increasing and the other is monotonically decreasing. Therefore, we can determine in constant time whether $c_{ij}$ is on $\pi_{1}$, $\pi_{2}$, or $e$, where $\pi_{1}$ and $\pi_{2}$ are the sub-paths of $\pi$ partitioned by $e$. If $c_{ij}$ is on $e$, then $c_{ij}$ can be computed immediately by the two functions and we can finish the algorithm. Otherwise, the binary search proceeds on either $\pi_{1}$ or $\pi_{2}$ recursively. For the runtime, the binary search has $O(\log n)$ iterations and each iteration runs in $O(\log M)$ time. So the total time of the binary search on $\pi(p_{i}^{*},v_{ij})$ is $O(\log n\log M)$. The binary search will either find $c_{ij}$ or determine that $c_{ij}$ is at $v_{ij}$. 
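The binary search just described can be sketched with an array-indexed path standing in for the level-ancestor machinery. In the Python sketch below, `f` and `g` are hypothetical monotone stand-ins for $\mathsf{E}\mathrm{d}(x,P_{i})$ (increasing along the path) and $\mathsf{E}\mathrm{d}(x,P_{j})$ (decreasing); the search locates the edge on which the two functions cross:

```python
# Binary search for the crossing edge of an increasing function f and a
# decreasing function g sampled at the n vertices of a path (indexed 0..n-1).
# Returns the index v of the edge (v, v+1) containing the balance point,
# i.e., where max(f, g) is minimized, or None if f and g never cross.
def crossing_edge(f, g, n):
    lo, hi = 0, n - 2                  # edges are (v, v+1) for v in [0, n-2]
    while lo <= hi:
        mid = (lo + hi) // 2
        if f(mid) <= g(mid) and f(mid + 1) >= g(mid + 1):
            return mid                 # f overtakes g inside this edge
        if f(mid) > g(mid):            # already past the crossing: go left
            hi = mid - 1
        else:                          # still before the crossing: go right
            lo = mid + 1
    return None

f = lambda v: 2.0 * v                  # increasing, like E d(x, P_i)
g = lambda v: 10.0 - v                 # decreasing, like E d(x, P_j)
edge = crossing_edge(f, g, n=8)
```

In the paper's setting each probe of the "middle edge" costs $O(\log M)$ via $\mathcal{A}_{3}$ rather than $O(1)$, which is where the $O(\log n\log M)$ bound comes from; a `None` result corresponds to the crossing lying on the other half-path $\pi(p_{j}^{*},v_{ij})$.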
The latter case actually implies that $c_{ij}$ is in the path $\pi(p_{j}^{*},v_{ij})$, and thus we apply a similar binary search on $\pi(p_{j}^{*},v_{ij})$, which will eventually compute $c_{ij}$. Thus, the total time for computing $c_{ij}$ is $O(\log n\log M)$. The lemma thus follows. ∎ The following theorem summarizes our algorithm. Theorem 6.1 An optimal solution for the $k$-center problem can be found in $O(n^{2}\log n\log M+M\log^{2}M\log n)$ time. Proof Assume the data structure $\mathcal{A}_{3}$ has been computed in $O(M\log M)$ time. Computing $S_{1}$ can be done in $O(n\log M)$ time. Computing $S_{2}$ takes $O(M+n^{2}\log n\log M)$ time. After $S$ is computed, we find $\lambda_{opt}$ from $S$ as follows. Given any $\lambda$ in $S$, we can use our algorithm for the center-coverage problem to find the minimum number $k^{\prime}$ of centers with respect to $\lambda$. If $k^{\prime}\leq k$, then we say that $\lambda$ is feasible. Clearly, $\lambda_{opt}$ is the smallest feasible value in $S$. To find $\lambda_{opt}$ from $S$, we first sort all values in $S$ and then do a binary search using our center-coverage algorithm as a decision procedure. In this way, $\lambda_{opt}$ can be found in $O(n^{2}\log n+\tau\log n)$ time. Finally, we can find an optimal solution using our algorithm for the covering problem with $\lambda=\lambda_{opt}$ in $O(\tau)$ time. Therefore, the total time of the algorithm is $O(n^{2}\log n\log M+\tau\log n)$, which is $O(n^{2}\log n\log M+M\log^{2}M\log n)$ by Theorem 3.1. ∎ References [1] P.K. Agarwal, S.-W. Cheng, Y. Tao, and K. Yi. Indexing uncertain data. In Proc. of the 28th Symposium on Principles of Database Systems (PODS), pages 137–146, 2009. [2] P.K. Agarwal, A. Efrat, S. Sankararaman, and W. Zhang. Nearest-neighbor searching under uncertainty. In Proc. of the 31st Symposium on Principles of Database Systems (PODS), pages 225–236, 2012. [3] P.K. Agarwal, S. Har-Peled, S. Suri, H. Yıldız, and W. Zhang.
Convex hulls under uncertainty. In Proc. of the 22nd Annual European Symposium on Algorithms (ESA), pages 37–48, 2014. [4] P.K. Agarwal and M. Sharir. Efficient algorithms for geometric optimization. ACM Computing Surveys, 30(4):412–458, 1998. [5] I. Averbakh and S. Bereg. Facility location problems with uncertainty on the plane. Discrete Optimization, 2:3–34, 2005. [6] I. Averbakh and O. Berman. Minimax regret $p$-center location on a network with demand uncertainty. Location Science, 5:247–254, 1997. [7] M. Bender and M. Farach-Colton. The LCA problem revisited. In Proc. of the 4th Latin American Symposium on Theoretical Informatics, pages 88–94, 2000. [8] M.A. Bender and M. Farach-Colton. The level ancestor problem simplified. Theoretical Computer Science, 321:5–12, 2004. [9] S. Bereg, B. Bhattacharya, S. Das, T. Kameda, P.R.S. Mahapatra, and Z. Song. Optimizing squares covering a set of points. Theoretical Computer Science, in press, 2015. [10] G. Brodal and R. Jacob. Dynamic planar convex hull. In Proc. of the 43rd IEEE Symposium on Foundations of Computer Science (FOCS), pages 617–626, 2002. [11] T.M. Chan and N. Hu. Geometric red–blue set cover for unit squares and related problems. Computational Geometry, 48(5):380–385, 2015. [12] B. Chazelle and L. Guibas. Fractional cascading: I. A data structuring technique. Algorithmica, 1(1):133–162, 1986. [13] R. Cheng, J. Chen, and X. Xie. Cleaning uncertain data with quality guarantees. Proceedings of the VLDB Endowment, 1(1):722–735, 2008. [14] R. Cheng, Y. Xia, S. Prabhakar, R. Shah, and J.S. Vitter. Efficient indexing methods for probabilistic threshold queries over uncertain data. In Proc. of the 30th International Conference on Very Large Data Bases (VLDB), pages 876–887, 2004. [15] R. Cole. Slowing down sorting networks to obtain faster sorting algorithms. Journal of the ACM, 34(1):200–208, 1987. [16] M. de Berg, M. Roeloffzen, and B. Speckmann. Kinetic 2-centers in the black-box model. In Proc.
of the 29th Annual Symposium on Computational Geometry (SoCG), pages 145–154, 2013. [17] X. Dong, A.Y. Halevy, and C. Yu. Data integration with uncertainty. In Proceedings of the 33rd International Conference on Very Large Data Bases, pages 687–698, 2007. [18] G.N. Frederickson. Parametric search and locating supply centers in trees. In Proc. of the 2nd International Workshop on Algorithms and Data Structures (WADS), pages 299–319, 1991. [19] G.N. Frederickson and D.B. Johnson. Finding $k$th paths and $p$-centers by generating and searching good data structures. Journal of Algorithms, 4(1):61–80, 1983. [20] T. F. Gonzalez. Covering a set of points in multidimensional space. Information Processing Letters, 40(4):181–188, 1991. [21] D. Harel and R.E. Tarjan. Fast algorithms for finding nearest common ancestors. SIAM Journal on Computing, 13:338–355, 1984. [22] D.S. Hochbaum and W. Maass. Approximation schemes for covering and packing problems in image processing and VLSI. Journal of the ACM, 32(1):130–136, 1985. [23] L. Huang and J. Li. Stochastic $k$-center and $j$-flat-center problems. In Proc. of the 28th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 110–129, 2017. [24] A. Jørgensen, M. Löffler, and J.M. Phillips. Geometric computations on indecisive points. In Proc. of the 12th Algorithms and Data Structures Symposium (WADS), pages 536–547, 2011. [25] P. Kamousi, T.M. Chan, and S. Suri. Closest pair and the post office problem for stochastic points. In Proc. of the 12th International Workshop on Algorithms and Data Structures (WADS), pages 548–559, 2011. [26] P. Kamousi, T.M. Chan, and S. Suri. Stochastic minimum spanning trees in Euclidean spaces. In Proc. of the 27th Annual Symposium on Computational Geometry (SoCG), pages 65–74, 2011. [27] O. Kariv and S. Hakimi. An algorithmic approach to network location problems. II: The $p$-medians. SIAM Journal on Applied Mathematics, 37(3):539–560, 1979. [28] O. Kariv and S.L. Hakimi.
An algorithmic approach to network location problems. I: The $p$-centers. SIAM Journal on Applied Mathematics, 37(3):513–538, 1979. [29] S.-S. Kim, S.W. Bae, and H.-K. Ahn. Covering a point set by two disjoint rectangles. International Journal of Computational Geometry and Applications, 21:313–330, 2011. [30] M. Löffler and M. van Kreveld. Largest bounding box, smallest diameter, and related problems on imprecise points. Computational Geometry: Theory and Applications, 43(4):419–433, 2010. [31] N. Megiddo. Linear-time algorithms for linear programming in $R^{3}$ and related problems. SIAM Journal on Computing, 12(4):759–776, 1983. [32] N. Megiddo and A. Tamir. New results on the complexity of $p$-centre problems. SIAM Journal on Computing, 12(4):751–758, 1983. [33] N. Megiddo, A. Tamir, E. Zemel, and R. Chandrasekaran. An $O(n\log^{2}n)$ algorithm for the $k$-th longest path in a tree with applications to location problems. SIAM Journal on Computing, 10:328–337, 1981. [34] N.H. Mustafa and S. Ray. PTAS for geometric hitting set problems via local search. In Proc. of the 25th Annual Symposium on Computational Geometry (SoCG), pages 17–22, 2009. [35] S. Suri and K. Verbeek. On the most likely Voronoi diagram and nearest neighbor searching. In Proc. of the 25th International Symposium on Algorithms and Computation (ISAAC), pages 338–350, 2014. [36] S. Suri, K. Verbeek, and H. Yıldız. On the most likely convex hull of uncertain points. In Proc. of the 21st European Symposium on Algorithms (ESA), pages 791–802, 2013. [37] Y. Tao, X. Xiao, and R. Cheng. Range search on multidimensional uncertain data. ACM Transactions on Database Systems, 32, 2007. [38] H. Wang. Minmax regret 1-facility location on uncertain path networks. European Journal of Operational Research, 239:636–643, 2014. [39] H. Wang and J. Zhang. One-dimensional $k$-center on uncertain data. Theoretical Computer Science, 602:114–124, 2015. [40] H. Wang and J. Zhang.
Computing the center of uncertain points on tree networks. Algorithmica, 78(1):232–254, 2017. [41] M.L. Yiu, N. Mamoulis, X. Dai, Y. Tao, and M. Vaitis. Efficient evaluation of probabilistic advanced spatial queries on existentially uncertain data. IEEE Transactions on Knowledge and Data Engineering, 21:108–122, 2009.
Layer Ensembles: A Single-Pass Uncertainty Estimation in Deep Learning for Segmentation Kaisar Kushibar Corresponding author: kaisar.kushibar@ub.edu University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Víctor Manuel Campello University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Lidia Garrucho Moras University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Akis Linardos University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Petia Radeva University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Karim Lekadir University of Barcelona, Department of Mathematics and Computer Science, Barcelona, 08007, Spain. Abstract Uncertainty estimation in deep learning has become a leading research field in medical image analysis due to the need for safe utilisation of AI algorithms in clinical practice. Most approaches for uncertainty estimation require sampling the network weights multiple times during testing or training multiple networks. This leads to higher training and testing costs in terms of time and computational resources. In this paper, we propose Layer Ensembles, a novel uncertainty estimation method that uses a single network and requires only a single forward pass to estimate the network's predictive uncertainty. Moreover, we introduce an image-level uncertainty metric, which is more beneficial for segmentation tasks compared to the commonly used pixel-wise metrics such as entropy and variance. We evaluate our approach on 2D and 3D, binary and multi-class medical image segmentation tasks. Our method shows competitive results with state-of-the-art Deep Ensembles, requiring only a single network and a single pass.
1 Introduction Despite the success of Deep Learning (DL) methods in medical image analysis, their black-box nature makes it more challenging to gain trust from both clinicians and patients [26]. Modern DL approaches are unreliable when encountering new situations, where a DL model silently fails or produces an overconfident wrong prediction. Uncertainty estimation can overcome these common pitfalls, increasing the reliability of models by assessing the certainty of their predictions and alerting users about potentially erroneous reports. Several methods in the literature address uncertainty estimation in DL [1]. General approaches include: 1) Monte-Carlo Dropout (MCDropout) [7], which requires several forward passes with dropout layers enabled in the network during test time; 2) Bayesian Neural Networks (BNN) [6], which directly represent network weights as probability distributions; and 3) Deep Ensembles (DE) [13], which combine the outputs of several networks to produce uncertainty estimates. MCDropout and BNN have been argued to be unreliable on real-world datasets [14]. Nonetheless, MCDropout is one of the most commonly used methods, often favourable when carefully tuned [8]. Despite their inefficiency in terms of memory and time, evidence shows that DE is the most reliable uncertainty estimation method [4, 1]. There have been successful attempts to minimise the training cost of the original DE. For example, snapshot-ensembles [12] train a single network until convergence, and then train beyond that point, storing the model weights after each additional epoch to obtain $M$ different models. In doing so, the training time is reduced drastically. However, multiple networks need to be stored and the testing remains the same as in DE. Additionally, the deep sub-ensembles [23] method uses $M$ segmentation heads on top of a single model. This method is particularly similar to our proposal – Layer Ensembles (LE).
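To make the ensemble-style aggregation concrete, the sketch below (numpy; random maps stand in for the foreground probabilities predicted by $M$ trained networks) combines $M$ outputs into a mean prediction and a pixel-wise variance, the basic recipe shared by DE and, over segmentation heads, by LE:

```python
import numpy as np

rng = np.random.default_rng(0)
M, H, W = 5, 4, 4                        # 5 models, 4x4 probability maps
probs = rng.uniform(size=(M, H, W))      # hypothetical per-model foreground probabilities

mean_prob = probs.mean(axis=0)           # ensemble prediction
variance = probs.var(axis=0)             # pixel-wise predictive uncertainty map
mask = mean_prob > 0.5                   # final binary segmentation
image_level = variance.sum()             # summed image-level uncertainty score
```

The summed variance is one of the image-level scores used later for ranking segmentations; LE replaces the $M$ independent networks with $M$ sub-networks sharing one trunk.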
However, LE, in contrast to existing methods, exhibits the following benefits: • Scalable, intuitive, simple to train and test. The number of additional parameters is small compared to BNN approaches that double the number of parameters; • Single network compared to the state-of-the-art DE; • Unlike multi-pass BNN and MCDropout approaches, uncertainties can be calculated using a single forward pass, which would greatly benefit real-time applications; • Produces global (image-level) as well as pixel-wise uncertainty measures; • Allows estimating the difficulty of a target sample for segmentation (example difficulty), which could be used to detect outliers; • Similar performance to DE regarding accuracy and confidence calibration. 2 Methodology Our method is inspired by the state-of-the-art DE [13] for uncertainty estimation as well as a more recent work [3] that estimates example difficulty through prediction depth. In this section, we provide a detailed explanation of how our LE method differs from other works [13, 3], taking the best from both concepts and introducing a novel method for uncertainty estimation in DL. Furthermore, we introduce how LE can be used to obtain a single image-level uncertainty metric that is more useful for segmentation tasks compared to the commonly used pixel-wise variance, entropy, and mutual information (MI) metrics. 2.1 Prediction depth Prediction Depth (PD) [3] measures example difficulty by training k-NN classifiers using the feature maps after each layer. Given a network with $N$ layers, the PD for an input image $x$ is $L\in[0,N]$ if the k-NN prediction for the $L^{th}$ layer differs from that at layer $L-1$ and is the same for all subsequent layer predictions. The authors demonstrated that easy samples have small PD, whereas difficult ones have high PD, linking this to the known phenomena in DL that early layers converge faster [15] and that networks learn easy data first [22].
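The PD rule can be sketched directly on a sequence of per-layer predictions. The Python snippet below (hypothetical class labels, one per layer probe) returns the first depth whose prediction agrees with every later one:

```python
# Prediction Depth over a list of per-layer predictions: the first depth L
# whose prediction differs from the one at L-1 but agrees with all later
# predictions (0 if the prediction never changes).
def prediction_depth(preds):
    final = preds[-1]
    L = len(preds) - 1
    # Walk backwards to the earliest constant suffix equal to the final prediction.
    while L > 0 and preds[L - 1] == final:
        L -= 1
    return L

easy = [1, 1, 1, 1, 1]   # stabilises immediately: low PD
hard = [0, 2, 2, 1, 1]   # flips late: high PD
```

An easy sample stabilises at depth 0, while the late flip in `hard` pushes its PD to 3; Section 4.3 uses an agreement-threshold variant of this rule over segmentation heads instead of exact label equality.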
Using PD for estimating example difficulty is appealing; however, it requires training additional classifiers on top of a pre-trained network. Moreover, using traditional machine learning classifiers (e.g., k-NN) for a segmentation task is not trivial. We extend the idea of PD to a more efficient segmentation method. Instead of k-NN classifiers, we attach a segmentation head after each layer output in the network as shown in Figure 1. We use a CNN following the U-Net [20] architecture with different modules in the decoder and encoder blocks. Specifically, we use residual connections [10] in the encoder and squeeze-and-excite attention [11] modules in the decoder blocks. Our approach is architecture agnostic, and the choice of U-Net was due to its wide use and high performance on different medical image segmentation tasks. 2.2 Ensembles of networks of different depths DE has been used widely in the literature for predictive uncertainty estimation. The original method assumes a collection of $M$ networks with different initialisations trained with the same data. Then, the outputs of each of these $M$ models can be used to extract uncertainty measurements (e.g. variance). As shown in Figure 1, ten segmentation heads were added, one after each layer output. LE is thus a collection of $M$ sub-networks of different depths. Since each of the segmentation heads is randomly initialised, it is sufficient to cause each of the sub-networks to make partially independent errors [9]. The outputs from each of the segmentation heads can then be combined to produce the final segmentation and estimate the uncertainties, similarly to DE. Hence, LE can be considered equivalent to DE, but using only one network model. 2.3 Layer agreement as an image-level uncertainty metric As we have stated above, LE is a combination of sub-networks of different depths.
It can also be viewed as stacked networks where the parameters of a network $f_{t}$ are shared by $f_{t+1}$ for all $t\in\mathopen{[}0,N\mathclose{)}$, where $N$ is the total number of outputs. This sequential connection of $N$ sub-networks allows us to observe the progression of segmentation through the outputs of each segmentation head. We can measure the agreement between adjacent layer outputs – e.g. using the Dice coefficient – to obtain a layer agreement curve. When the network is uncertain, the agreement between layers will be low, especially for the early layers (Figure 2). We propose to use the Area Under Layer Agreement curve (AULA) as an image-level uncertainty metric. In the following sections, we demonstrate that AULA is a good uncertainty measure to detect poor segmentation quality, both in binary and multi-class problems. 3 Materials and Implementation We evaluate our proposal on two actively studied medical image segmentation tasks: 1) breast mass segmentation (binary, 2D); and 2) cardiac structure segmentation in MRI (multi-class, 3D). We assess LE in terms of segmentation accuracy, segmentation quality control using the AULA metric, and example difficulty estimation using PD. For all the experiments, except for example difficulty, LE is compared against the state-of-the-art DE approach and also a plain network without uncertainty estimation (referred to as Plain). 3.1 Datasets We use two publicly available datasets for the selected segmentation problems. The Breast Cancer Digital Repository (BCDR) [16] contains 886 MedioLateral Oblique (MLO) and CranioCaudal (CC) view mammogram images of 394 patients with manual segmentation masks for masses. Original images have a matrix size of $3328\times 4084$ or $2560\times 3328$ pixels (unknown resolution). We crop and re-sample all masses to patches of $256\times 256$ pixels with the masses centred, as is common practice [19].
We randomly split the BCDR dataset into train (576), validation (134), and test (176) sets so that images from the same patient are always in the same set. For cardiac segmentation, the M&Ms challenge (MnM) [5] dataset is utilised. We use the same split as in the original challenge – 175 training, 40 validation, and 160 testing. All the images come annotated at the End-Diastolic (ED) and End-Systolic (ES) phases for the Left Ventricle (LV), MYOcardium (MYO), and Right Ventricle (RV) heart structures in the short-axis view. In our experiments, both time-points are evaluated together. All MRI scans are kept in their original in-plane resolution, varying from isotropic $0.85mm$ to $1.45mm$, with slice thickness varying from $0.92mm$ to $10mm$. We crop the images to $128\times 128\times 10$ dimensions so that the heart structures are centred. 3.2 Training The same training routine is used for all the experiments, with the only exception being the batch size: 10 for breast mass and 1 for cardiac structure segmentation. The network is trained for 200 epochs using the Adam optimiser to minimise the generalised Dice [21] and Cross-Entropy (CE) losses for breast mass and cardiac structure segmentation, respectively. For the multi-class segmentation, CE is weighted by $0.1$ for background and $0.3$ for each cardiac structure. An initial learning rate of $0.001$ is set with a decay by a factor of $0.5$ when the validation loss reaches a plateau. Common data augmentations are applied, including random flip, rotation, and random swap of mini-patches of size $10\times 10$. Images are normalised to have zero mean and unit standard deviation. A single NVIDIA GeForce RTX 2080 GPU with 8GB of memory is used. The source code with dependencies, training, and evaluation is publicly available (a GitHub link will be presented soon). 3.3 Evaluation Testing is done using the weights that give the best validation loss during training.
The final segmentation masks are obtained by averaging the outputs of the individual networks in DE ($M=5$). For LE, we tried both averaging the sub-network outputs and using the well-known Simultaneous Truth And Performance Level Estimation (STAPLE) algorithm [24], which uses weighted voting. Both gave similar results, and for brevity we present only the STAPLE version. We evaluate LE and DE using the common uncertainty metrics in the literature – the pixel-wise variance, entropy, and MI – which are summed over all pixels/voxels in cases where an image-level uncertainty is required. The AULA metric is used for LE. The network confidence calibration is evaluated using the Negative Log-Likelihood (NLL) metric. It is a standard measure of a probabilistic model’s quality that penalises wrong predictions that have small uncertainty [18]. Note that AULA can also be calculated by skipping some of the initial segmentation heads. 4 Results 4.1 Segmentation performance and confidence calibration Table 1 compares the segmentation performance of LE with the DE and Plain models in terms of Dice Similarity Coefficient (DSC) and Modified Hausdorff Distance (MHD). A two-sided paired t-test is used to measure statistically significant differences. In breast mass segmentation, LE performs similarly to the DE and Plain models for both the DSC and MHD metrics. The NLL of LE, however, is significantly better than that of the others $(p<0.001)$. For cardiac structure segmentation, the combined DSCs for all methods are similar and the MHD of Plain is slightly better. The NLL of DE ($0.157\pm 0.33$) is significantly better than ours ($0.173\pm 0.37$) $(p<0.05)$; however, LE can achieve an NLL of $0.140\pm 0.23$ by skipping fewer layers without compromising segmentation performance (see Figure 3, right).
In our experiments, skipping the first three and five outputs gave the best results in terms of correlation between uncertainty metrics and segmentation performance for the breast mass and cardiac structure tasks, respectively (see Table 2). Skipping all but the last segmentation head in LE is equivalent to the Plain network. In terms of structure-wise DSC, all methods are similar for all the structures. The Plain method has a slightly better MHD than DE and LE for the LV and MYO structures $(p>0.05)$, and DE is better than LE for the RV structure $(p>0.05)$. The average ranks across all metrics in Table 1 are: LE (1.58), DE (2.08), and Plain (2.33), showing that LE is better on average. Overall, the segmentation performance of all three methods is similar, and LE has better confidence calibration. 4.2 Segmentation quality control We evaluate our uncertainty estimation proposal for segmentation quality control, similarly to [17], and compare it to the state-of-the-art DE. We use the proposed AULA uncertainty metric to detect poor segmentation masks and the variance metric for DE. Figure 3 shows the fraction of remaining images with poor segmentation after a fraction of the poor-quality segmentations is flagged for manual correction. We consider DSCs below $0.90$ as poor quality for both segmentation tasks. We set the threshold for the cardiac structures following the inter-operator agreement identified in [2], and use the same threshold value for masses. As proposed in [17], the areas under these curves can be used to compare different methods. It can be seen that LE and DE are similar in terms of detecting poor-quality segmentations, with LE achieving slightly better AUC for all the cases – mass, combined and structure-wise cardiac segmentations. Table 2 supports this statement by confirming the high correlation between AULA and the segmentation metrics. In BCDR, both methods are somewhat close to the averaged ideal line.
For the cardiac structure segmentation, all the curves take a steep decline, initially staying close to the averaged ideal line, indicating that severe cases are detected first. Moreover, as can be seen in Figure 4, DE’s uncertainty maps are overconfident, while LE manages to highlight the difficult areas. We believe that having such meaningful heatmaps is more helpful for clinicians (e.g. for manual correction). More visual examples, including entropy and MI, are given in the Appendix. 4.3 Example difficulty estimation We evaluate example difficulty estimation using PD by perturbing a proportion of the images in the test set. We added random Gaussian noise to the MnM dataset and used Random Convolutions [25] in BCDR, as the model was robust to noise in mammogram images. Examples of perturbed images are provided in the Appendix. Then, for a given sample, PD is the largest $L$ corresponding to one of the $N$ segmentation heads in a network, where the agreement between the outputs at $L$ and $L-1$ is smaller than a threshold (the same threshold as in segmentation quality control). In this sense, PD represents the minimum number of layers after which the network reaches a consensus segmentation. Figure 5 shows how the distribution of PD shifts towards higher values as the number of corrupted images increases in the test set for both the BCDR and MnM datasets. Overall, cardiac structure segmentation is more difficult than breast mass segmentation, while the latter is more robust to noise. This demonstrates how PD can be used to evaluate example difficulty for the segmentation task and detect outliers. 5 Conclusions We proposed a novel uncertainty estimation approach that exhibits results competitive with the state-of-the-art DE method while using only a single network. Compared to DE, our approach produces more meaningful uncertainty heatmaps and allows estimating example difficulty in a single pass.
Experimental results showed the effectiveness of the proposed AULA metric as an image-level uncertainty measure. The capabilities of both AULA and PD were demonstrated in segmentation and image quality control experiments. We believe that the efficient and reliable uncertainty estimation that LE demonstrates will pave the way for more trustworthy DL applications in healthcare. 6 Acknowledgements This study has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 952103. References [1] Abdar, M., Pourpanah, F., Hussain, S., Rezazadegan, D., Liu, L., Ghavamzadeh, M., Fieguth, P., Cao, X., Khosravi, A., Acharya, U.R., et al.: A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion 76, 243–297 (2021) [2] Bai, W., Sinclair, M., Tarroni, G., Oktay, O., Rajchl, M., Vaillant, G., Lee, A.M., Aung, N., Lukaschuk, E., Sanghvi, M.M., et al.: Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. Journal of Cardiovascular Magnetic Resonance 20(1), 1–12 (2018) [3] Baldock, R., Maennel, H., Neyshabur, B.: Deep learning through the lens of example difficulty. Advances in Neural Information Processing Systems 34 (2021) [4] Beluch, W.H., Genewein, T., Nürnberger, A., Köhler, J.M.: The power of ensembles for active learning in image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 9368–9377 (2018) [5] Campello, V.M., Gkontra, P., Izquierdo, C., Martín-Isla, C., Sojoudi, A., Full, P.M., Maier-Hein, K., Zhang, Y., He, Z., Ma, J., et al.: Multi-centre, multi-vendor and multi-disease cardiac segmentation: the M&Ms challenge. IEEE Transactions on Medical Imaging 40(12), 3543–3554 (2021) [6] Cinelli, L.P., Marins, M.A., Barros da Silva, E.A., Netto, S.L.: Bayesian neural networks. In: Variational Methods for Machine Learning with Applications to Deep Networks, pp.
65–109. Springer (2021) [7] Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In: International Conference on Machine Learning. pp. 1050–1059. PMLR (2016) [8] Gal, Y., et al.: Uncertainty in deep learning (2016) [9] Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016), http://www.deeplearningbook.org [10] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016) [11] Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 7132–7141 (2018) [12] Huang, G., Li, Y., Pleiss, G., Liu, Z., Hopcroft, J.E., Weinberger, K.Q.: Snapshot ensembles: Train 1, get M for free. In: International Conference on Learning Representations (2017), https://openreview.net/forum?id=BJYwwY9ll [13] Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems 30 (2017) [14] Liu, Y., Pagliardini, M., Chavdarova, T., Stich, S.U.: The peril of popular deep learning uncertainty estimation methods. In: Bayesian Deep Learning (BDL) Workshop at NeurIPS 2021 (2021) [15] Morcos, A., Raghu, M., Bengio, S.: Insights on representational similarity in neural networks with canonical correlation. Advances in Neural Information Processing Systems 31 (2018) [16] Moura, D.C., López, M.A.G., Cunha, P., Posada, N.G.d., Pollan, R.R., Ramos, I., Loureiro, J.P., Moreira, I.C., Araújo, B.M., Fernandes, T.C.: Benchmarking datasets for breast cancer computer-aided diagnosis (CADx). In: Iberoamerican Congress on Pattern Recognition. pp. 326–333. Springer (2013) [17] Ng, M., Guo, F., Biswas, L., Wright, G.A.: Estimating uncertainty in neural networks for segmentation quality control. In: 32nd Conf. Neural Inf. 
Process. Syst. (NeurIPS 2018), Montréal, Canada, pp. 3–6 (2018) [18] Quinonero-Candela, J., Rasmussen, C.E., Sinz, F., Bousquet, O., Schölkopf, B.: Evaluating predictive uncertainty challenge. In: Machine Learning Challenges Workshop. pp. 1–27. Springer (2005) [19] Rezaei, Z.: A review on image-based approaches for breast cancer detection, segmentation, and classification. Expert Systems with Applications 182, 115204 (2021) [20] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234–241. Springer (2015) [21] Sudre, C.H., Li, W., Vercauteren, T., Ourselin, S., Jorge Cardoso, M.: Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations. In: Deep learning in medical image analysis and multimodal learning for clinical decision support, pp. 240–248. Springer (2017) [22] Toneva, M., Sordoni, A., des Combes, R.T., Trischler, A., Bengio, Y., Gordon, G.J.: An empirical study of example forgetting during deep neural network learning. In: International Conference on Learning Representations (2019), https://openreview.net/forum?id=BJlxm30cKm [23] Valdenegro-Toro, M.: Deep sub-ensembles for fast uncertainty estimation in image classification. In: Bayesian Deep Learning (BDL) Workshop at NeurIPS (2019) [24] Warfield, S.K., Zou, K.H., Wells, W.M.: Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation. IEEE transactions on medical imaging 23(7), 903–921 (2004) [25] Xu, Z., Liu, D., Yang, J., Raffel, C., Niethammer, M.: Robust and generalizable visual representation learning via random convolutions. 
In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=BVSM0x3EDK6 [26] Young, A.T., Amara, D., Bhattacharya, A., Wei, M.L.: Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. The Lancet Digital Health 3(9), e599–e611 (2021) Appendix: Corrupted image examples to increase prediction depth; Qualitative examples (figures omitted).
Refueled and shielded - The early evolution of Tidal Dwarf Galaxies Bernhard Baumschlager,${}^{1,2}$ Gerhard Hensler,${}^{1}$ Patrick Steyrleithner${}^{1}$ and Simone Recchi${}^{1}$ ${}^{1}$Department of Astrophysics, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria ${}^{2}$Institute of Theoretical Astrophysics, University of Oslo, Postboks 1029, Blindern, 0315 Oslo, Norway E-mail: bernhard.baumschlager@astro.uio.no (Accepted XXX. Received YYY; in original form ZZZ) Abstract We present, for the first time, numerical simulations of young tidal dwarf galaxies (TDGs) that include a self-consistent treatment of the tidal arm in which they are embedded. We thereby do not rely on idealised initial conditions, as the initial data of the presented simulations emerge from a galaxy interaction simulation. By comparing models which are either embedded in or isolated from the tidal arm, we demonstrate the arm's importance for the evolution of TDGs, as an additional source of gas which can be accreted and is available for subsequent conversion into stars. During the initial collapse of the proto-TDG, which lasts a few 100 Myr, the evolution of the embedded and isolated TDGs is indistinguishable. Significant differences appear after the collapse has halted and the further evolution is dominated by the possible accretion of material from the surroundings of the TDGs. The inclusion of the tidal arm in the simulation of TDGs results in roughly a doubling of the gas mass ($M_{\mathrm{gas}}$) and gas fraction ($f_{\mathrm{gas}}$), an increase in stellar mass by a factor of 1.5 and a $\sim 3$ times higher star formation rate (SFR) compared to the isolated case. Moreover, we perform a parametric study on the influence of different environmental effects, i.e. the tidal field and ram pressure. Due to the orbit of the chosen initial conditions, no clear impact of the environmental effects on the evolution of the TDG candidates can be found. 
keywords: methods: numerical – hydrodynamics – galaxies: dwarf – galaxies: evolution 1 Introduction During gravitational interactions of galaxies, long filamentary arms are drawn out of the galaxies' main bodies by tidal forces (e.g. Toomre & Toomre, 1972). Within these so-called tidal arms or tails, accumulations of gas and stars can be found. These clumps are also detectable through their H$\alpha$ and FUV emission, tracers of recent or ongoing star formation (SF), and harbour molecular gas, the reservoir for SF (e.g. Mirabel et al., 1991, 1992; Braine et al., 2000, 2001, 2004). If these objects are massive enough, i.e. have masses above $10^{8}~{}\mathrm{M}_{\sun}$ (Duc, 2012), they are commonly referred to as tidal dwarf galaxies (TDGs). Owing to their formation out of tidally stripped material and their shallow potential wells, TDGs are expected to be almost entirely free of dark matter (DM), implying mass-to-light ratios close to one (e.g. Barnes & Hernquist, 1992; Wetzstein et al., 2007), in agreement with observations (e.g. Duc & Mirabel, 1994). Nevertheless, TDGs share several properties with primordial dwarf galaxies (DGs), i.e. those formed in low-mass DM halos in the standard model of cosmology. Despite their different formation mechanism, TDGs show sizes, masses and properties similar to normal primordial DGs, with effective radii of the order of 1 kpc and stellar masses in the range $10^{6}\leq\mathrm{M}_{\star}/\mathrm{M}_{\sun}\leq 10^{10}$, and therefore follow the same mass-radius (MR) relation (Dabringhausen & Kroupa, 2013, and references therein). The luminosity range typical for observed TDGs is $-14<M_{B}<-18$ (e.g. Mirabel et al., 1992; Duc & Mirabel, 1994; Weilbacher et al., 2003), comparable to the primordial DG population (e.g. Mo et al., 2010). 
As TDGs form out of preprocessed material expelled from their parent galaxies, they are expected to deviate from the mass-metallicity relation of DGs, being more metal-rich than primordial DGs of the same mass (Duc & Mirabel, 1994; Croxall et al., 2009; Sweet et al., 2014). However, Recchi et al. (2015) showed that this is only true for recently formed TDGs, because they are formed out of substantially pre-enriched material. This could also mean that the mass-metallicity relation is an effect of different TDG-formation times. TDGs formed in the early universe, emerging from relatively metal-poor material, could constitute the low-metallicity end of this relation, as self-enrichment is the dominant process in their chemical evolution. Many TDGs show a bimodal distribution in the ages of their stellar populations, a combination of old stars expelled from the parent galaxies and a young in-situ formed population. Thereby, not more than 50% of the stars in a TDG emerge from the parent galaxies to constitute the old population. The majority of the young population of stars is born during a starburst at the time the TDG is formed. Further SF occurs with star-formation rates (SFRs) as expected for DGs of comparable size (e.g. Elmegreen et al., 1993; Hunter et al., 2000). Observationally derived SFRs of TDGs range from $10^{-4}$ to $10^{-1}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ (e.g. Duc & Mirabel, 1998; Lee-Waddell et al., 2014; Lisenfeld et al., 2016). Up to now, many observations have established the existence of TDGs and their properties, but there still remain unanswered questions, such as: How do TDGs form within the tidal arms? What is the production rate of TDGs? How many of them survive disruptive events, like stellar feedback and the parent galaxies’ tidal field? How many of the TDGs formed during a galaxy interaction fall back to the parent galaxies, dissolve to the field or become satellite galaxies of the merger remnant? 
What is the contribution of TDGs to the present-day DG population? These questions can typically be addressed only by numerical simulations. The first galaxy interaction simulation which showed the possibility of DG formation within tidal arms was performed by Barnes & Hernquist (1992). The TDGs produced in these N-body/SPH simulations were formed by the accumulation of stars which form a gravitationally bound structure and bind gas to them, whereas in the simulations of Elmegreen et al. (1993) the gas within the tidal arm forms Jeans-unstable clouds which collapse and enable the formation of stars. Wetzstein et al. (2007) investigated numerical resolution effects on the formation of TDGs and concluded that, within high-resolution N-body/SPH simulations, the presence of a dissipative component, i.e. gas, is required in order to form TDGs, but also that the spatial extent of the gaseous disk has to be large enough. Due to the absence of a supporting DM halo, TDGs are suspected to be very vulnerable to internal and external disruptive effects like stellar feedback, ram pressure or a tidal field (e.g. Bournaud, 2010). But both observations and simulations have shown that TDGs can indeed be long-lived objects and are able to survive for several billion years. Within a large set of galaxy merger simulations, Bournaud & Duc (2006) found that half of the formed TDGs with masses greater than $10^{8}~{}\mathrm{M}_{\sun}$ can survive for a Hubble time. Recchi et al. (2007) performed 2D simulations of TDGs and found that these objects can survive an initial starburst for several 100 Myr and might turn into long-lived dwarf spheroidal galaxies. Hensler et al. (2004) had already demonstrated, by means of 1D simulations, which are more vulnerable to supernova explosions, that DGs remain bound even without the gravitational aid of a DM halo but become gas-free, similar to dwarf spheroidal galaxies (dSph). 
The potentially devastating effect of ram-pressure stripping (RPS) on isolated TDGs has been investigated by Smith et al. (2013). They found that for wind speeds in excess of $200\,\mathrm{km\,s}^{-1}$ even the stellar disk of a TDG can be heavily disrupted due to the loss of gas caused by RPS. As the gas is removed from the TDG's disk, the gravitational potential is weakened and more and more stars become unbound. Observationally, the oldest confirmed TDGs have been found by Duc et al. (2014), with approximate ages of about 4 billion years. More recent detailed chemodynamical simulations on the survivability of TDGs have shown that, despite their lack of a supporting DM halo, TDGs can survive disruptive events like starbursts or the tidal field of their parent galaxies and can reach ages of more than 3 Gyr (Ploeckinger et al., 2014, 2015). Recently, Ploeckinger et al. (2018) demonstrated the possibility of identifying TDGs within the high-resolution cosmological simulation runs of the EAGLE suite (Schaye et al., 2015), without any further conclusions about the TDG formation rate or survivability, due to the limited temporal and spatial resolution. The fraction of TDGs among the total present-day DG population is estimated at $5-10\%$, depending on the detailed assumptions about the TDG production and survival rates (e.g. Bournaud, 2010; Wen et al., 2012; Kaviraj et al., 2012). The most extreme estimates, considering higher production rates during gas-rich merger events in the early universe (Okazaki & Taniguchi, 2000), or based on the structural similarities of TDGs and DGs (Dabringhausen & Kroupa, 2013), predict that the whole DG population is of tidal origin. All the previous theoretical and numerical work on TDGs either focuses on the formation during galaxy interactions, their production rate and long-term survivability, or on the evolution of TDGs as individual objects decoupled from the tidal tail. 
In all these studies the influence of the tidal tail, as a dynamically developing live environment itself, is neglected or has not been taken into account in the first place. In contrast to this, we include for the first time a self-consistent treatment of the tidal arm in detailed numerical studies of TDGs, in order to study TDGs in the dynamical course of the tidal arms. This paper is structured as follows. In Section 2, a short overview of the simulation code and the implemented physical processes is provided. Section 3 describes the different models and the setup of the simulation runs. The results are presented in Section 4 and a final discussion is given in Section 5. 2 The Code The simulations presented in this work make use of an extended version of the Adaptive Mesh-Refinement (AMR) code flash 3.3 (Fryxell et al., 2000), applying the Monotonic Upstream-centred Scheme for Conservation Laws (MUSCL) Hancock scheme in combination with the HLLC Riemann solver and the van Leer slope limiter for hydrodynamics. The simulations are advanced on the minimal timestep according to the CFL condition (Courant et al., 1967); throughout this work the CFL constant is set to 0.1 for stability reasons. In simulations including the effect of RPS, the high density and velocity contrast at the interface between the tidal arm and the circumgalactic medium (CGM) can lead to numerical instabilities resulting in unphysical negative densities. In addition, the timescales of cooling and SF are temporarily shorter than the dynamical timescale and, by this, these processes are insufficiently resolved in time. All these pitfalls are avoided by choosing a lower value for the CFL factor than in pure hydrodynamics, which, however, makes the numerical modelling computationally expensive. Self-gravity within the simulation box is solved using the Multigrid Poisson solver (Ricker, 2008) provided by flash. The code was extended by Ploeckinger et al. 
(2014, 2015) in order to account for metal-dependent cooling of gas, SF, stellar feedback, a tidal field and RPS. A short overview of the extensions and their models is provided below, while for a more detailed description the reader is referred to Ploeckinger et al. (2014, 2015). 2.1 Cooling In order to account for composition-dependent radiative cooling, the abundances of H, He, C, N, O, Ne, Mg, Si, S and Fe of each grid cell are traced. The cooling function is split into two different temperature regimes at $10^{4}\,\mathrm{K}$. For gas temperatures above $10^{4}\,\mathrm{K}$, cooling functions from Boehringer & Hensler (1989) for an optically thin plasma in collisional ionisation equilibrium, taking H, He, C, N, O, Ne, Mg, Si, S and Fe into account, are used. At temperatures below $10^{4}\,\mathrm{K}$, the cooling functions from Schure et al. (2009) and Dalgarno & McCray (1972) are evaluated for C, N, O, Si, S and Fe. The energy loss due to radiative cooling is calculated at every time-step by solving an implicit equation by means of the Newton-Raphson method. 2.2 Stars Within the presented simulations stars are represented by particles, which are advanced in the potential of the TDG by a variable-timestep leapfrog algorithm. Thereby one stellar particle represents a whole star cluster, instead of a single star, with its underlying stellar population described by an invariant initial mass function (IMF). 2.2.1 Star formation The SF is fully self-regulated following the description of Koeppen et al. (1995), where the SFR is determined by the stellar birth function: $$\psi(\rho,T)=C_{2}\,\rho^{2}\,e^{-T/\mathrm{T}_{\mathrm{s}}},$$ (1) where $\rho$ and $T$ are the gas density and temperature, $\mathrm{T}_{\mathrm{s}}~{}=~{}1000\,\mathrm{K}$ and $C_{2}=2.575\times 10^{8}\,\mathrm{cm}^{3}\,\mathrm{g}^{-1}\,\mathrm{s}^{-1}$. 
Thereby, the exponential term acts as a temperature-dependent efficiency function which allows for a smooth transition between a high SFR at low temperatures and a low SFR at temperatures above $10^{4}\,\mathrm{K}$. This formulation depends only on the local properties of a grid cell, i.e. the gas density and temperature, and was derived as a self-regulated recipe. Therefore, this approach does not require any global scaling relation and was successfully applied to massive spirals by Samland et al. (1997) and to dwarf irregular galaxies by Hensler & Rieschick (2002), leading to realistic SFRs and chemical abundances. In parallel studies we compare this self-regulation recipe with that of density and temperature thresholds as described by e.g. Stinson et al. (2006) (Keuhtreiber et al. in preparation), and in others with respect to the initial mass function at low SFRs (Hensler et al., 2017, Steyrleithner et al. in preparation). In principle, Equation 1 allows for very low SFRs in hot and low-density regions, which would result in large numbers of very low mass star clusters. To avoid these low-mass stellar particles, a threshold on the stellar birth function is applied in the form of $$\psi_{\mathrm{thresh}}=\theta_{\mathrm{sf}}\;\frac{3\,M_{\mathrm{min}}}{4\,\pi\,r^{3}_{\mathrm{GMC}}\,\tau_{\mathrm{cl}}},$$ (2) where $M_{\mathrm{min}}=100\,\mathrm{M}_{\sun}$ is the minimal cluster mass, $\tau_{\mathrm{cl}}~{}=~{}1\,\mathrm{Myr}$ the cluster formation time, $r_{\mathrm{GMC}}=160\,\mathrm{pc}$ the size of a giant molecular cloud and $\theta_{\mathrm{sf}}$ a dimensionless factor. For $\theta_{\mathrm{sf}}=1$, which is used here, and assuming a constant density and temperature during $\tau_{\mathrm{cl}}$, this threshold translates into a requirement on the minimal star cluster mass, i.e. $M_{\mathrm{min}}=100\,\mathrm{M}_{\sun}$ for the chosen parameters. 
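A minimal numerical sketch of the birth function and threshold, Equations 1 and 2. The exponential is written here with an explicitly negative exponent, $e^{-T/\mathrm{T}_{\mathrm{s}}}$, so that the efficiency decreases with temperature as described, and the cgs unit-conversion constants are standard values assumed for illustration.

```python
import math

# Constants quoted in the text (Eqs. 1-2).
C2 = 2.575e8       # cm^3 g^-1 s^-1
T_S = 1000.0       # K
MSUN_G = 1.989e33  # g   (assumed conversion)
PC_CM = 3.086e18   # cm  (assumed conversion)
MYR_S = 3.156e13   # s   (assumed conversion)

def birth_rate(rho, temp):
    """psi [g cm^-3 s^-1] for gas density rho [g cm^-3] and T [K] (Eq. 1)."""
    return C2 * rho**2 * math.exp(-temp / T_S)

def sf_threshold(theta_sf=1.0, m_min=100.0, r_gmc=160.0, tau_cl=1.0):
    """psi_thresh [g cm^-3 s^-1]; m_min in Msun, r_gmc in pc, tau_cl in Myr."""
    return theta_sf * 3.0 * m_min * MSUN_G / (
        4.0 * math.pi * (r_gmc * PC_CM)**3 * tau_cl * MYR_S)

# SF proceeds in a cell only where birth_rate(rho, T) >= sf_threshold().
```

The quadratic density dependence and the exponential temperature cut are what make the recipe self-regulating: feedback-heated cells fall below the threshold and stop forming stars.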
Whenever the SF criterion, in the form of Equations 1 and 2, is fulfilled and no other actively star-forming particle is located within $r_{\mathrm{GMC}}$, a new star cluster is spawned. From this time on, the star cluster accumulates material from its surroundings according to Equation 1, until either the formation time $\tau_{\mathrm{cl}}$ expires or the IMF is filled. As soon as a cluster is completed, stellar feedback sets in, influencing the nearby grid cells. If the SF criteria are still fulfilled within a grid cell after an already existing cluster has been closed for SF, a new particle is created within $r_{\mathrm{GMC}}$, while the feedback from the existing cluster already influences the stellar birth function as the temperature within the grid cell increases. 2.2.2 Initial Mass Function The underlying stellar population of one stellar particle is described by an invariant multi-part power-law IMF (Kroupa, 2001) $$\xi(m)=k\;m^{-\alpha}$$ (3) where $k$ is a normalisation constant, $m$ the mass of a star and $\alpha$ the mass-dependent power-law slope of the IMF $$\alpha=\begin{cases}0.3~{}~{}~{}\dots~{}~{}~{}0.01\leq m/\mathrm{M}_{\sun}<0.08\\ 1.3~{}~{}~{}\dots~{}~{}~{}0.08\leq m/\mathrm{M}_{\sun}<0.5\\ 2.3~{}~{}~{}\dots~{}~{}~{}0.5\leq m/\mathrm{M}_{\sun}<100.\end{cases}$$ (4) The number of stars $N$ and the mass $M_{\mathrm{cl}}$ of a star cluster can be calculated by: $$N=\int\limits_{m_{\mathrm{min}}}^{m_{\mathrm{max}}}\xi(m)\;dm$$ (5) $$M_{\mathrm{cl}}=\int\limits_{m_{\mathrm{min}}}^{m_{\mathrm{max}}}m\;\xi(m)\;dm$$ (6) For the presented simulations the IMF is split into 64 equally spaced logarithmic mass bins in the mass range $0.1\leq m/\mathrm{M}_{\sun}\leq 120$. All stars within one mass bin are represented by the average mass of the corresponding mass bin. The IMF of all star clusters, regardless of the cluster mass, is always filled up to the highest mass bin, even if this implies unphysical fractions of high-mass stars. 
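As a concrete illustration of Equations 3-6, the following sketch discretises the multi-part power-law IMF into 64 logarithmic mass bins over $0.1$-$120\,\mathrm{M}_{\sun}$ and normalises the bin populations so that the total mass equals a given cluster mass. The continuity constants between the power-law segments and the geometric-mean bin mass are choices of this sketch, not prescriptions from the paper.

```python
import numpy as np

def kroupa_xi(m):
    """Un-normalised Kroupa (2001) IMF, piecewise continuous in m [Msun]."""
    if m < 0.08:
        return m**-0.3
    if m < 0.5:
        # the factors 0.08 and 0.08*0.5 make the segments join continuously
        return 0.08 * m**-1.3
    return 0.08 * 0.5 * m**-2.3

def binned_imf(m_cl, m_lo=0.1, m_hi=120.0, nbins=64):
    """Return (representative bin masses, star counts per bin) for a cluster
    of total mass m_cl [Msun]."""
    edges = np.logspace(np.log10(m_lo), np.log10(m_hi), nbins + 1)
    mids = np.sqrt(edges[:-1] * edges[1:])   # geometric-mean bin mass
    xi = np.array([kroupa_xi(m) for m in mids])
    counts = xi * np.diff(edges)             # unnormalised N per bin (Eq. 5)
    k = m_cl / np.sum(counts * mids)         # fix normalisation via Eq. 6
    return mids, k * counts
```

For a $100\,\mathrm{M}_{\sun}$ minimal cluster the high-mass bins then contain (unphysical) fractional star counts, which is exactly the "maximum feedback" situation discussed next.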
This treatment of the IMF, in combination with the short cluster formation time, resembles the maximum feedback case of Ploeckinger et al. (2015). 2.2.3 Stellar feedback For stars more massive than $8~{}\mathrm{M}_{\sun}$ the feedback of type II supernovae (SNII), stellar winds and ionizing UV radiation is considered. Below $8~{}\mathrm{M}_{\sun}$, the release of energy and metals from type Ia supernovae (SNIa) and the enrichment of the interstellar medium (ISM) by Asymptotic Giant Branch (AGB) stars are included. The different feedback mechanisms are considered at the end of a star's lifetime, except for the ionizing UV radiation emitted by massive stars, which is continuously calculated at every timestep as long as a star cluster hosts stars above $8~{}\mathrm{M}_{\sun}$. The released energy of all these processes is injected into the gas as thermal energy. The metallicity-dependent lifetimes of stars are taken from Portinari et al. (1998) and the stellar yields are combined from Marigo et al. (1996), Portinari et al. (1998) and, for SNIa, from Travaglio et al. (2004, W7 model). Stellar Wind: During their evolution, high-mass stars lose a fraction of their mass due to radiation-driven winds, which heat and enrich the surrounding ISM. The metal-dependent mass-loss rate due to stellar winds of OB stars is given by (Hensler, 1987; Theis et al., 1992) $$\dot{m}=10^{-15}\,\left(\frac{Z}{\mathrm{Z}_{\odot}}\right)^{1/2}\left(\frac{L}{\mathrm{L}_{\odot}}\right)^{1.6}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1},$$ (7) where $L$ is the stellar luminosity, derived from the mass-luminosity relation of Maeder (1996). 
The heating of the ISM by stellar winds is determined by the winds' kinetic power $$\dot{E}_{\mathrm{kin}}=\frac{1}{2}\,\dot{m}\,v^{2}_{\infty},$$ (8) with the final wind velocity, depending on the stellar mass $m$, $$v_{\infty}=3\times 10^{3}\left(\frac{m}{\mathrm{M}_{\sun}}\right)^{0.15}\left(\frac{Z}{\mathrm{Z}_{\odot}}\right)^{0.08}\,.$$ (9) The total wind energy is then given by $$E_{\mathrm{wind}}=\epsilon_{\mathrm{wind}}\,\dot{E}_{\mathrm{kin}}\,\tau_{\mathrm{cl}}\,N_{\star}\,,$$ (10) where $\epsilon_{\mathrm{wind}}=5\%$ is the wind efficiency, $\tau_{\mathrm{cl}}$ is the age of a stellar particle and $N_{\star}$ is the number of stars in a mass bin. The kinetic heating of the ISM by stellar winds is typically one to two orders of magnitude lower than the heating by ionizing UV radiation; therefore it is only considered at the end of a star's lifetime. The additional energy injected into the ISM by the UV radiation of massive stars is treated separately. Stellar Radiation: Although the applied SF recipe of Koeppen et al. (1995) already includes self-regulation by stellar energy feedback, in recent simulations (Ploeckinger et al., 2015) we have implemented HII regions as an additional effect on SF self-regulation. This recipe can also be used for momentum transfer and radiation-pressure driven turbulence (Renaud et al., 2013). On galaxy scales this treatment (with a feedback scheme similar to ours) also produces galactic outflows (Bournaud et al., 2014). The ionizing UV radiation of massive stars ($m\geq 8~{}\mathrm{M}_{\sun}$) is considered throughout their lifetime. Therefore, it is treated separately from the winds driven by these stars. 
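The wind terms of Equations 7-10 can be evaluated as sketched below. Since the Maeder (1996) mass-luminosity relation is not reproduced in the text, a simple $L\propto m^{3.5}$ scaling is substituted here purely as a placeholder, and the wind velocity of Equation 9 is assumed to be in $\mathrm{km\,s}^{-1}$.

```python
# Sketch of the stellar-wind feedback terms (Eqs. 7-10), in cgs units.
MSUN_G = 1.989e33   # g
YR_S = 3.156e7      # s
KM_CM = 1.0e5       # cm

def wind_energy(m, z_rel=0.3, tau_myr=1.0, n_star=1, eps=0.05):
    """Total wind energy [erg] injected by n_star stars of mass m [Msun]
    at metallicity z_rel = Z/Zsun over an age tau_myr [Myr]."""
    lum = m**3.5                                     # L/Lsun (assumed scaling)
    mdot = 1e-15 * z_rel**0.5 * lum**1.6             # Msun/yr       (Eq. 7)
    v_inf = 3e3 * m**0.15 * z_rel**0.08 * KM_CM      # cm/s          (Eq. 9)
    e_dot = 0.5 * (mdot * MSUN_G / YR_S) * v_inf**2  # erg/s         (Eq. 8)
    return eps * e_dot * (tau_myr * 1e6 * YR_S) * n_star           # (Eq. 10)
```

The steep $L^{1.6}$ dependence of the mass-loss rate makes the wind power rise quickly with stellar mass, consistent with the statement that winds matter mainly for the most massive stars.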
Using the approach of a Strömgren sphere, the mass of ionised H around a single star, in dependence on the stellar mass, is calculated by $$M_{\mathrm{\ion{H}{II}}}\left(m\right)=\frac{S_{\star}\left(m\right)\,\mu^{2}\,m_{\mathrm{H}}^{2}}{\beta_{2}\,\rho\,f_{\mathrm{H}}^{2}}\,,$$ (11) where $\rho$ and $\mu$ are the density and mean molecular weight of the ambient ISM, $m_{\mathrm{H}}$ is the atomic mass of hydrogen, $\beta_{2}$ is the recombination coefficient, $f_{\mathrm{H}}$ is the mass fraction of H in the ISM and the ionizing photon flux is derived as (Ploeckinger et al., 2015) $$S_{\star}=3.6\times 10^{42}\left(\frac{m}{\mathrm{M}_{\sun}}\right)^{4}.$$ (12) The total mass of hydrogen ionized by one stellar particle is obtained by summing over all Strömgren spheres of the underlying high-mass population. The temperature of the Strömgren spheres is set to $2\times 10^{4}\,\mathrm{K}$ and the temperature of a grid cell is calculated as the mass-weighted average of all Strömgren sphere temperatures and the temperature of the remaining mass in the grid cell. Furthermore, the amount of ionized gas within a grid cell is tracked and excluded from SF until the last star more massive than 8 $\mathrm{M}_{\sun}$ contained in the grid cell has exploded as a SNII. This ansatz is aimed only at reducing the cold gas mass that is available for SF. Since the Strömgren equation inherently includes the heating and cooling balance, neither its spatial resolution nor its contribution to the gas cooling needs to be considered. Type II Supernovae: When a star more massive than $8~{}\mathrm{M}_{\sun}$ ends its life, it explodes as a core-collapse supernova, thereby heating the surrounding ISM and enriching it with heavy elements. 
The energy injected into the ISM by the stars inhabiting one IMF mass bin is given by $$E_{\mathrm{SNII,tot}}=\epsilon_{\mathrm{SNII}}\,E_{\mathrm{SNII}}\,N_{\star},$$ (13) where $\epsilon_{\mathrm{SNII}}$ is the SNII efficiency, $E_{\mathrm{SNII}}=10^{51}\,\mathrm{erg}$ the energy of one SNII and $N_{\star}$ the number of stars in one mass bin. Due to the lack of a value reliably derived on a theoretical basis or from numerical studies (Thornton et al. (1998) used simplified 1D supernova explosion models), and although it must depend on the environmental state and therefore vary in time (Recchi, 2014), $\epsilon_{\mathrm{SNII}}$ is set to 5%. The mass of the remaining neutron star or black hole is locked in the stellar particle and is not available for further SF. The remnant masses for stars in the mass range $8\leq\mathrm{m}/\mathrm{M}_{\sun}\leq 120$ with solar metallicities range between $1.3$ and $2.1\,\mathrm{M}_{\sun}$. Type Ia Supernovae: Based on the SNIa rate equations of Matteucci & Recchi (2001) and Recchi et al. (2009), the probability of a star to be the secondary in a binary system is calculated as (Ploeckinger, 2014) $$P(m_{2})=2^{1+\gamma}\left(1+\gamma\right)A\sum\limits_{i=i(m_{2})}^{\mathrm{nimf}}\frac{\xi(m_{i})}{\xi(m_{2})}\mu_{i}^{\gamma}\Delta\mu\,,$$ (14) where $A=0.09$ is a normalisation constant, $\mu_{i}=m_{2}/(m_{2}+m_{i})$ is the binary mass ratio and $\xi$ is the IMF for the stellar masses $m_{2}$ and $m_{i}$, both in the mass range of $1.4-8~{}\mathrm{M}_{\sun}$. The distribution function of binary masses $$f(\mu)=2^{1+\gamma}\left(1+\gamma\right)\mu^{\gamma}$$ (15) with $\gamma=2$, favouring equal-mass systems, is incorporated in Equation 14. The number of SNIa per IMF mass bin is then $N_{\mathrm{SNIa}}=P(m_{2})\,N_{\star}$ and each SN injects $10^{51}~{}\mathrm{erg}$ with an efficiency of $\epsilon_{\mathrm{SNIa}}=5\%$ into the ISM. As the main source of iron, SNIa also contribute significantly to the chemical evolution of a galaxy. 
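A minimal sketch of the Strömgren-sphere ionised-mass estimate of Equations 11-12 above. The case-B recombination coefficient $\beta_{2}\approx 2.6\times 10^{-13}\,\mathrm{cm^{3}\,s^{-1}}$ (appropriate near $10^{4}\,\mathrm{K}$) and the default values of $\mu$ and $f_{\mathrm{H}}$ are assumed standard numbers, not taken from the paper.

```python
# Sketch of Eqs. 11-12: ionised hydrogen mass around one massive star.
M_H = 1.6726e-24   # g, hydrogen atomic mass
BETA2 = 2.6e-13    # cm^3 s^-1, assumed case-B recombination coefficient

def ionised_hydrogen_mass(m_star, rho, mu=0.6, f_h=0.76):
    """Ionised H mass [g] around a star of m_star [Msun] embedded in
    ambient gas of density rho [g cm^-3]; mu and f_h are assumed defaults."""
    s_star = 3.6e42 * m_star**4                               # Eq. 12
    return s_star * mu**2 * M_H**2 / (BETA2 * rho * f_h**2)   # Eq. 11
```

The $m^{4}$ scaling of the photon flux means that summing over a cluster's high-mass bins is dominated by the few most massive stars.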
Asymptotic Giant Branch Stars: During the AGB phase, stars suffer strong mass loss; this is accounted for by the release of metals back to the ISM. This enrichment of the ISM is considered at the end of an AGB star's lifetime. However, the energy injected into the ISM is neglected for this kind of stars. In accordance with the treatment of SNIa, a fraction of these stars will eventually end their life as SNIa. 2.3 Environmental effects In contrast to isolated DGs, TDGs form and evolve in the vicinity of their interacting parent galaxies; therefore, they feel their gravitational potential and the resulting tidal forces. As the tidally stripped material expands and orbits around the host galaxy, it moves relative to the CGM and is therefore exposed to ram pressure. For simplicity the CGM is assumed to be at rest with respect to the merger. 2.3.1 Tidal field Although the mass distribution of merging galaxies varies with time, a time-constant gravitational potential of a point mass is assumed for the representation of the interacting host galaxies. As the simulations are carried out in the rest frame of the TDG, the simulation box moves along an orbit around the interacting galaxies. Therefore, the net effect of the tidal field ($\textbf{{a}}_{\mathrm{tidal}}$) on a given grid cell is calculated as the difference between the gravitational acceleration exerted by the external potential onto the grid cell ($\textbf{{g}}_{\mathrm{cell}}$) and the acceleration felt by the simulation box ($\textbf{{g}}_{\mathrm{SB}}$), i.e. $$\textbf{{a}}_{\mathrm{tidal}}=\textbf{{g}}_{\mathrm{cell}}-\textbf{{g}}_{\mathrm{SB}}.$$ (16) 2.3.2 Ram Pressure The simulations are carried out in the rest frame of the TDG, therefore a wind tunnel-like configuration is present. In this setup the TDG stays at rest and the ambient CGM flows around it. The resulting relative velocity of the CGM equals the negative orbital velocity of the TDG, i.e. 
$v_{\mathrm{rp}}=-v_{\mathrm{orb}}$, and is updated at every timestep. The velocity of a grid cell is only modified to account for the varying velocity of the ambient medium when the fraction of material originating from the tidal arm is less than $10^{-4}$. 3 The Models 3.1 Initial Conditions The initial conditions for the presented simulations emerge from a series of galaxy interaction simulations, which are presented in Hammer et al. (2010); Fouquet et al. (2012); Hammer et al. (2013); Yang et al. (2014). Already 1.5 Gyr after the first pericentre passage, the time at which the initial data are extracted, numerous forming TDG candidates can be identified. Out of the 15 TDG candidates of these simulations, we present follow-up zoom-in simulations of two of the TDG candidates, which are highlighted by blue boxes and denoted by 2 and 3 in Figure 1. These TDG candidates were already included in a comparative study of all candidates by Ploeckinger et al. (2015) with respect to their orientation and kinematics and were denoted there as TDG-c (3) and TDG-k (2). 3.2 Data extraction Out of the many possible TDG candidates of the original Fouquet et al. (2012) simulation, the candidates to test are selected according to the following criteria: (1) An apparent over-density in the gaseous and stellar component along the tidal arm. (2) A minimal gas mass within 2.5 kpc around the centre of the TDG of $M_{\mathrm{g,min}}\ga 10^{8}~{}\mathrm{M}_{\sun}$. (3) No other candidate within a distance of 15 kpc from the TDG's centre, to ensure that no overlap of the edge of the simulation box with a neighbouring TDG candidate can occur. The mass centre of each TDG candidate is calculated iteratively. 
Starting from an initial guess of its position, the mass centre within a sphere of radius $r_{\mathrm{TDG}}=4\,\mathrm{kpc}$ is calculated by $$\textbf{{R}}_{\mathrm{M}}=\frac{1}{\sum\limits_{i}m_{i}}\sum\limits_{i}m_{i}\,\textbf{{r}}_{i},$$ (17) where $\textbf{{R}}_{\mathrm{M}}$ is the mass centre measured from the interacting galaxies, and $m_{i}$ and $\textbf{{r}}_{i}$ are the mass and position vector of the $i$-th SPH particle. This position is then used as the centre of a new sphere, within which the mass centre is again calculated; the process is repeated until convergence. The orbital velocity of the TDG, and hence of the simulation box, is calculated as the mass-weighted average of all stellar and gas SPH particles within $r_{\mathrm{TDG}}$. This results in initial positions of the TDGs relative to the interacting host galaxies of $(X,Y,Z)~{}=~{}(59.9,-133.9,69.8)~{}\mathrm{and}~{}(30.2,289.4,114.1)\,\mathrm{kpc}$ and orbital velocities of $\textbf{{v}}_{\mathrm{orb}}~{}=~{}(104.7,-29.6,12.2)$ and $(-35.2,117.8,64.8)\,\mathrm{km\,s}^{-1}$ for the candidates 2 and 3, respectively. All particles within a box of 22 kpc side length, centred at the mass centre of the TDG, are then selected as the initial data. To ensure a correct data mapping at the edges of the simulation box, this box is 10% larger than the later simulation box. 3.3 Initial gas distribution As the initial conditions for the presented simulations originate from an SPH simulation, the gas properties are mapped to the AMR grid of flash via the standard kernel function of gadget2 (Springel, 2005): $$W(r,h)=\frac{8}{\pi~{}h^{3}}\times\begin{cases}1-6\left(\frac{r}{h}\right)^{2}+6\left(\frac{r}{h}\right)^{3}&\dots~{}0\leq\frac{r}{h}\leq 0.5\\ 2\left(1-\frac{r}{h}\right)^{3}&\dots~{}0.5\leq\frac{r}{h}\leq 1\\ 0&\dots~{}\frac{r}{h}>1,\end{cases}$$ (18) where $r$ is the distance of a particle to a given point and $h$ is the smoothing length.
Throughout this work $h$ is defined as the distance of the 50th nearest particle to the centre of a given cell; in cases where more than 50 particles fall within a grid cell, all particles within that cell are mapped onto it. For example, the initial density of a grid cell $\rho_{\mathrm{cell}}$ is then calculated by $$\rho_{\mathrm{cell}}=\sum\limits_{i=1}^{\mathrm{N}}m_{i}W(r_{i},h),$$ (19) where $m_{i}$ is the mass of an SPH gas particle, $r_{i}$ is its distance from the cell centre and $N$ is the number of considered particles. To account for the pre-enrichment of the ISM, the initial metallicity of the TDG and the tidal arm is set to $Z=0.3~{}\mathrm{Z}_{\odot}$ with solar element abundance ratios. 3.4 Initial stellar population Within the presented simulations star clusters are treated as particles; therefore, the pre-existing stellar population is initialised with the positions, velocities and masses of the stellar SPH particles. The stellar particles within the original simulation have a constant mass of $27500~{}\mathrm{M}_{\sun}$ each. Therefore, the 338 stellar particles within the simulation box of candidate 2 provide a total initial stellar mass of $9.295\times 10^{6}~{}\mathrm{M}_{\sun}$, and the 80 stellar particles within the simulation box of candidate 3 provide $2.20\times 10^{6}~{}\mathrm{M}_{\sun}$. As for the ISM, the metallicity of the old stellar population, already formed within the interacting galaxies, is set to $Z=0.3~{}\mathrm{Z}_{\odot}$ with solar element abundance ratios, because the tidal arms stretch from the outermost stellar disk with the lowest disk abundance. 3.5 Simulation setup The initial gas distribution is mapped, according to the description of Section 3.3, to the AMR grid of the 20 kpc cubed simulation box. We use 5 levels of refinement, resulting in a maximum resolution of 78 pc.
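A minimal sketch of the two initialisation steps described above, the iterative centring of Eq. (17) and the kernel deposition of Eqs. (18) and (19), could look as follows. The function names and toy data are illustrative only and are not part of the actual flash/gadget2 pipeline; the special case of more than 50 particles inside one cell is ignored here.

```python
import numpy as np

def mass_centre(masses, positions, guess, r_tdg=4.0, tol=1e-3, max_iter=100):
    """Iterative mass centre (Eq. 17): average the particle positions inside
    a sphere of radius r_tdg [kpc], re-centre the sphere and repeat until
    the centre converges (or max_iter is reached)."""
    centre = np.asarray(guess, dtype=float)
    for _ in range(max_iter):
        inside = np.linalg.norm(positions - centre, axis=1) < r_tdg
        new_centre = np.average(positions[inside], axis=0, weights=masses[inside])
        if np.linalg.norm(new_centre - centre) < tol:
            break
        centre = new_centre
    return new_centre

def w_cubic(r, h):
    """GADGET-2 cubic spline kernel, Eq. (18); integrates to unity over r <= h."""
    q = np.asarray(r, dtype=float) / h
    w = np.where(q <= 0.5,
                 1.0 - 6.0 * q**2 + 6.0 * q**3,
                 np.where(q <= 1.0, 2.0 * (1.0 - q)**3, 0.0))
    return 8.0 / (np.pi * h**3) * w

def cell_density(cell_centre, positions, masses, n_neigh=50):
    """Grid-cell density (Eq. 19): h is the distance to the n_neigh-th
    nearest particle, then the kernel-weighted masses are summed."""
    d = np.linalg.norm(positions - cell_centre, axis=1)
    nearest = np.argsort(d)[:n_neigh]
    h = d[nearest[-1]]
    return np.sum(masses[nearest] * w_cubic(d[nearest], h))

# toy data: a Gaussian clump of equal-mass particles plus a sparse background
rng = np.random.default_rng(42)
clump = rng.normal([10.0, -5.0, 2.0], 0.5, size=(2000, 3))
background = rng.uniform(-20.0, 20.0, size=(500, 3))
pos = np.vstack([clump, background])
m = np.full(len(pos), 1.0)
centre = mass_centre(m, pos, guess=[8.0, -4.0, 1.0])
```

On a uniform particle lattice this deposition recovers the mean density to within a few per cent, which is the usual accuracy of SPH kernel estimates with $\sim 50$ neighbours.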
Depending on the considered TDG candidate and the extent of the corresponding section of the tidal arm, this process can lead to an almost completely refined simulation box. To avoid a fully refined simulation box already at the start of the simulation runs, the density distribution is truncated below a cutoff density $\rho_{\mathrm{cut}}$. The resulting gas distribution is then embedded in a homogeneous CGM. The CGM parameters are chosen so that they approximately resemble the Milky Way’s hot X-ray halo with $T\approx 10^{6}\,\mathrm{K}$ and $n\approx 10^{-4}\,\mathrm{cm}^{-3}$, e.g. Gupta et al. (2012). Keeping the CGM temperature fixed at $10^{6}\,\mathrm{K}$, the density is adjusted so that the pressure within the tidal arm is slightly higher than the pressure within the CGM. This prevents additional forces acting on the tidal arm which could trigger its collapse. The cutoff density $\rho_{\mathrm{cut}}$ and the density of the CGM $\rho_{\mathrm{CGM}}$ are listed in Table 1 for the different simulation runs. The effect of the different density cuts on the mass distribution can be seen in Figure 2. As boundary conditions flash’s standard outflow boundaries are used. These boundary conditions are insensitive to the flow direction across the boundary; therefore, they also allow for a flow into the simulation box. This inflow is used to mimic the flow along the tidal arm across the physical boundaries of the simulation box; by this, for the first time, the TDG evolution within the tidal arm is self-consistently coupled to the gas and stellar dynamics of the tidal-arm environment. In a series of six simulation runs for each TDG candidate we expose the TDGs to different combinations of environmental effects to study their impact on the formation and evolution of TDGs. In three of these runs the TDG is embedded in the tidal arm, indicated by model names starting with e; the remaining three runs are considered isolated from the tidal arm, denoted by i.
Throughout this work the term isolated refers to the non-existence of the tidal arm rather than the spatial isolation from any neighbouring galaxy. The inclusion of the different environmental effects is indicated by the suffix of the model name, where t indicates the presence of the tidal field and r the activity of ram pressure. Among the models of candidate 2, e2tr is the most realistic model of a young TDG, as it includes the tidal arm and the effects of the tidal field and ram pressure. In contrast, the model i2 can be considered a model of a DM-free and fully isolated DG, without an additional gas reservoir or any environmental influences. An overview of the different models and some of their initial parameters and settings is provided in Table 1. The discrepancies in the CGM density between the e and i models are caused by the necessity to keep the isolated DGs, with the same central density and mass, bound by the external gas pressure. This is not necessary for the embedded models due to the tidal-arm gas dynamics. 4 Results 4.1 Mass assembly The presented simulations start $1.5\,\mathrm{Gyr}$ after the first close encounter of the parent galaxies; at this time a Jeans-unstable gas cloud embedded in the tidal arm has already formed (see Figure 2). During the first few tens of Myr a high-density core develops within the cloudy distribution of gas, resulting in a short phase of low SF. As the core accumulates more and more gas that condenses dissipatively, it forces the surrounding matter to collapse. During this phase arm-like structures are created which effectively funnel the gas to the centre of the TDG. This leads to very high central densities, a high SFR, and a very short gas consumption time. After the collapse-like mass assembly, which ends at a simulation time of $t_{\mathrm{sim}}=300\,\mathrm{Myr}$, the majority of gas has been converted into stars.
Further mass accumulation during the rest of the simulation time is caused by accretion along the tidal arm, if it is taken into account in the simulation run (e models). By this time the isolated models have reached their final total mass. Figures 3 and 4 show the evolution of the relative mass fractions, normalised to the total initial mass, within spheres of 1.0 (left panels) and 2.5 kpc (right panels) radius around the TDG’s centre for candidates 2 and 3, respectively. During the collapse the total mass within 2.5 kpc increases to approximately $1.8-2.3$ times the initial mass for candidate 2 and $1.1-1.5$ times for candidate 3, depending on the presence of the tidal arm. During the initial collapse the central regions ($r\leq 1.0\,\mathrm{kpc}$) grow by factors of $\sim 18$ for the embedded and $\sim 15$ for the isolated models of candidate 2. Due to the less extended tidal arm surrounding candidate 3, these models only increase their central mass by a factor of $4-6$. The transition from a gas- to a stellar-mass-dominated system is indicated by the intersection of the dashed (gas) and dot-dashed (stars) lines at $t_{\mathrm{sim}}\approx 150\,\mathrm{Myr}$. The influence of the tidal arm on the mass assembly can clearly be seen, as the embedded models continue to increase their mass until the end of the simulation time. The embedded models of candidate 2 increase their mass within the inner 1.0 kpc of radius by $20-25$ times the initial mass. The accretion rates in $\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$, averaged over 10 Myr, through spherical surfaces with radii of 1.0, 2.5 and 5.0 kpc around the TDG are shown in Figure 5. The accretion rate through the 5 kpc sphere generally remains low, below $1.0~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$, in all models of candidate 2.
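An accretion rate through a spherical surface, as used for Figure 5, can be estimated from the radial mass flux of the resolution elements in a thin shell. The following sketch, with illustrative units and a hypothetical helper function, is an assumption about how such a diagnostic might be computed, not the implementation actually used in flash.

```python
import numpy as np

KMS_TO_KPC_PER_MYR = 1.0227e-3  # 1 km/s expressed in kpc/Myr

def accretion_rate(pos, vel, mass, centre, radius, shell=0.1):
    """Mass flux [M_sun/Myr] through a sphere of the given radius [kpc]:
    sum m * v_r over a thin shell and divide by the shell thickness.
    Inflow (negative radial velocity) counts as positive accretion."""
    rvec = pos - centre
    r = np.linalg.norm(rvec, axis=1)
    in_shell = np.abs(r - radius) < 0.5 * shell
    # radial velocity component of each element in the shell
    v_r = np.einsum('ij,ij->i', vel[in_shell], rvec[in_shell]) / r[in_shell]
    return -np.sum(mass[in_shell] * v_r) * KMS_TO_KPC_PER_MYR / shell

# toy check: 100 particles of 1 M_sun on a 1 kpc sphere, falling in at 1 km/s
rng = np.random.default_rng(1)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1)[:, None]
rate = accretion_rate(dirs, -dirs, np.ones(100), np.zeros(3), radius=1.0)
```

Dividing the result by $10^{6}$ converts it to the $\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ units quoted in the text.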
During the initial collapse the embedded models of this candidate reach accretion rates of $\sim 3\,\mathrm{and}\,\sim 2~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ at distances of 1.0 and 2.5 kpc, respectively. As the tidal arm is less extended in the embedded models of candidate 3, the accretion rates are much smaller than in the models of candidate 2. Through the outer 5 kpc sphere maximal accretion rates of $\sim 0.1~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ are reached, and for the 2.5 and 1 kpc spheres accretion rates of $\sim 0.7$ and $\sim 0.2~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ are reached during the collapse phase. The isolated models behave qualitatively similarly to the embedded models during this phase, with slightly lower accretion rates. For the central sphere their accretion rate drops to zero after roughly $300-500$ Myr, as, due to the absence of the tidal arm, no further material is available for accretion. The embedded models continue to grow in mass by the accumulation of matter along the tidal arm, with accretion rates around $0.5~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ and $0.05~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ for candidates 2 and 3, respectively. The dynamics within the tidal arm of the model e3t at $t_{\mathrm{sim}}=150~{}\mathrm{and}~{}300~{}\mathrm{Myr}$ is illustrated in Figure 6. At $t_{\mathrm{sim}}=150~{}\mathrm{Myr}$, the peak of the collapse phase, an almost spherical accretion is present, while at $t_{\mathrm{sim}}=300~{}\mathrm{Myr}$, close to the end of the collapse phase, streams with higher densities have formed within the tidal arm, along which the gas is transported towards the TDG. The thin spikes in Figure 5 correspond to captured or ejected substructures, such as dense gas clouds and/or massive star clusters, which form in the close vicinity of the TDG at distances beyond $2.5\,\mathrm{kpc}$ from its centre.
These spikes occur exclusively in candidate 2, which is much more affected by gas streams and mass assembly from the tidal arm than candidate 3. The spikes are more pronounced at 2.5 kpc from the mass centre than at 5 kpc, which means that clumps fall towards the innermost region; that the clumps form between these radii can be concluded from the total mass increase (see Figure 3). The mass distribution among gas and stars is listed in Table 2 for the final simulation time ($t_{\mathrm{sim}}=1500\,\mathrm{Myr}$). 4.2 Star formation Regardless of the environmental impact factors, i.e. the tidal field and ram pressure or the presence of the embedding tidal arm, the star-formation history (SFH) of all runs is similar and can be described by three different stages: an initial SF episode, followed by a strong starburst triggered by the collapse of the proto-TDG, and a subsequent long-lived phase of self-regulated SF. These different SF episodes follow the evolutionary phases described in the previous Section 4.1. The low initial SFR is caused by the early build-up of the TDG’s core. During this phase the SFR remains low, at a level of about $10^{-2}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$. As the evolution of the TDG proceeds, the surrounding material collapses within a free-fall time, triggering the starburst at $t_{\mathrm{sim}}\approx 130\,\mathrm{Myr}$ for a period of about 100 Myr, before the SFR starts to decline again. Thereby, SFRs of up to $3.5~{}\mathrm{and}~{}1.2~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ are reached for candidates 2 and 3, respectively. Depending on the availability of gas, the SFR approaches an approximate equilibrium level after 300 to 500 Myr.
In this phase, the SFR of the models embedded in the tidal arm stays around $0.5~{}\mathrm{and}~{}0.1~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ for candidates 2 and 3, respectively, whereas the self-regulated equilibrium SFR of the isolated models, lacking further material to be accreted from the tidal arm reservoir, is roughly one half to one order of magnitude lower. In Figure 7 the SFHs of the different simulation runs are compared, sorted by the included environmental effects, from top to bottom: the tidal field and ram pressure, the tidal field only, and neither of these effects. The oscillation in the SFR of the model i3tr between $700$ and $1100~{}\mathrm{Myr}$ is caused by the removal of gas due to RPS. The stars formed in situ dominate the stellar mass of both TDG candidates; until the end of the simulation at $t_{\mathrm{sim}}~{}=~{}1500~{}\mathrm{Myr}$, the mass fraction of the pre-existing stellar population decreases to $1\,\%$ of the total stellar mass for the embedded models and to $2\,\%$ for the isolated models. For comparison the final SFRs, peak SFRs and average SFRs are compiled in Table 2. 4.3 Environmental influence Ram pressure, caused by the motion of the TDG through the CGM of the parent galaxies, scales with the square of its orbital velocity, while the strength of the tidal field weakens with the cube of the distance to these galaxies. Therefore, the environmental effects are strongly affected by the actual orbit of each TDG candidate around the interacting host galaxies. With a distance of $162.5~{}\mathrm{kpc}$ from the mass centre of the interacting galaxies and an orbital velocity of $109.5~{}\mathrm{km\,s}^{-1}$ at the initial simulation time, the TDG candidate 2 remains on a bound eccentric orbit around its parents; its orbital period amounts to $8.9~{}\mathrm{Gyr}$.
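The two dependencies just described, the $v_{\mathrm{orb}}^{2}$ scaling of the ram pressure and the $d^{-3}$ decay of the point-mass tidal term of Eq. (16), can be sketched as follows; the host mass is an assumed illustrative value, not taken from the simulations.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def g_point(pos, m_host):
    """Acceleration exerted by a point-mass host at the origin (Section 2.3.1)."""
    pos = np.asarray(pos, dtype=float)
    r = np.linalg.norm(pos)
    return -G * m_host * pos / r**3

def tidal_accel(cell_pos, box_pos, m_host):
    """Eq. (16): pull on the grid cell minus pull on the simulation box."""
    return g_point(cell_pos, m_host) - g_point(box_pos, m_host)

def ram_pressure(rho_cgm, v_orb):
    """Ram pressure rho * v^2, in any consistent unit system."""
    return rho_cgm * v_orb**2

# illustrative host mass; the cell sits 1 kpc outside the box centre
M_HOST = 1.0e11  # M_sun, an assumed value
a_near = tidal_accel([101.0, 0.0, 0.0], [100.0, 0.0, 0.0], M_HOST)
a_far = tidal_accel([201.0, 0.0, 0.0], [200.0, 0.0, 0.0], M_HOST)
```

Doubling the distance to the host weakens the tidal acceleration by roughly a factor of $2^{3}=8$, while doubling the orbital velocity quadruples the ram pressure.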
By the end of the simulation time of $1500~{}\mathrm{Myr}$ this TDG reaches a distance of $223.2~{}\mathrm{kpc}$ at a velocity of $71.4~{}\mathrm{km\,s}^{-1}$ as it approaches its orbital apocentre. The TDG candidate 3, at an initial distance of $312.5~{}\mathrm{kpc}$ with an orbital velocity of $187.0~{}\mathrm{km\,s}^{-1}$, is on an unbound orbit, escaping from the parent galaxies. At the final simulation time it reaches a distance of $575.1~{}\mathrm{kpc}$ with a velocity of $167.9~{}\mathrm{km\,s}^{-1}$. 4.3.1 The tidal field The compressive or disruptive effects of tidal forces exerted on TDGs by their parent galaxies’ gravitational potential are believed to play a crucial role in the formation and evolution of TDGs (Duc et al., 2004). This effect is investigated by comparing the simulation runs e2 to e2t and i2 to i2t. The models e2 and i2 lack any tidal acceleration and ram pressure, whereas the models e2t and i2t are exposed to the tidal field. The embedded models of candidate 2 (e2 and e2t) do not show strong differences in their SFRs (Figure 7) or gas and stellar mass fractions (Figure 3) which could be attributed to the tidal field. The differences between the two embedded models of candidate 2 emerging after $600-700\,\mathrm{Myr}$ are attributed to the different accretion histories of these models. A super star cluster with a mass of the order of $10^{7}\,\mathrm{M}_{\sun}$ forms close to the TDG within the tidal arm under different local conditions, i.e. density and streaming motion, and at different times, and is therefore accreted at different times, corresponding to jumps in the right-hand panel of Figure 3 and the thin spikes in the left panel of Figure 5. The resulting structural differences in the stellar distribution are illustrated in Figure 8 for the models e2tr and e2 in the simulation time interval $400-1000~{}\mathrm{Myr}$.
The two isolated models i2 and i2t do not show any significant differences in their evolution, neither in their SFR nor in their mass composition. The corresponding embedded models of candidate 3 (e3 and e3t), with $f_{\mathrm{gas}}=0.21$ and $\mathrm{SFR}=0.05~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ after 1.5 Gyr for both, show an even smaller impact of the parent galaxies’ tidal field on their evolution. Due to the unbound orbit of candidate 3 the gravitational influence of the parent galaxies decreases with time. 4.3.2 Ram pressure Ram pressure stripping (RPS) is thought to be one of the main drivers of DG transformation, from gas-rich to gas-poor, even for DM-dominated DGs. Therefore, TDGs, which are supposed to be free of DM, should be especially sensitive to RPS. This is only partly true, as young TDGs in the process of formation and during their early evolution, such as those simulated in this work, are embedded in a gaseous tidal arm. Before the ram pressure can directly act on the TDG itself, the tidal arm has to dissolve and mostly decouple from the TDG. This causes a significant delay before a young TDG is actually affected. During this delay time, typically a few 100 Myr within the presented simulations, a large fraction of gas is already converted into stars, leaving behind a stellar-dominated system even before RPS of the TDG starts. As long as the arm and the TDG are not completely separated, some of the TDG’s gas is continuously stripped while simultaneously the TDG is refueled by material accreted along the tidal arm. Due to the compression of the tidal arm by ram pressure, the accretion of gas along the tidal arm can be higher in these models compared to those without ram pressure. Thereby, the accretion rate can be even higher than the mass-loss rate due to RPS. After the majority of low-density gas has been removed from the upwind side of the TDG, ram pressure starts to influence the high-density disk.
Since the TDG’s spin vector should be oriented perpendicular to the orbital plane of the tidal arm, ram pressure typically acts edge-on onto the TDG’s gas disk, leading to its deformation and compression while some of its gas is lost. A comparison of the density slices of the two embedded models e3t (left) and e3tr (right) in Figure 9 illustrates the effect of ram pressure. Model e3t only feels the tidal field of the parent galaxies, whereas e3tr is additionally exposed to ram pressure. During the early evolutionary phases ($t_{\mathrm{sim}}\la 500\,\mathrm{Myr}$) the rarefied gas of the tidal arm is pushed around the TDG from the upwind side and stripped away at the lee side. The high-density TDG gas remains mostly unaffected until most of the low-density gas is removed. At this stage ram pressure starts acting on the high-density disk of the TDG. Due to the inclined orientation of the ram pressure, a hammerhead-shaped deformation of the gaseous disk appears ($t_{\mathrm{sim}}=500\,\mathrm{Myr}$ panel of model e3tr). Further mass loss due to continuous stripping occurs throughout the whole simulation time at very low rates and is compensated by accretion along the tidal arm. The small differences in $M_{\mathrm{gas}}$ between $500\leq t_{\mathrm{sim}}/\mathrm{Myr}\leq 1500$ of the models e3t and e3tr are balanced until the end of the simulation by slightly higher accretion rates along the compressed tidal arm. As a result of ram pressure, the TDG’s high-density disk in the e3tr model is thinner and elongated towards the negative z-direction compared to the model e3t at $t_{\mathrm{sim}}=1500\,\mathrm{Myr}$. The density peak at $z\approx 4\,\mathrm{kpc}$ in the $t_{\mathrm{sim}}~{}=~{}1500\,\mathrm{Myr}$ panel of model e3tr does not belong to the TDG itself but is part of the tidal arm. Due to ram pressure the tidal arm in this model is compressed and bent behind the TDG.
The differences in the evolution, both in SFR and mass distribution, of the models e2t and e2tr are attributed to the different accretion histories of these models, which are partly influenced by the compression of the tidal arm due to ram pressure. The only model severely influenced by ram pressure is the model i3tr, a model mostly comparable to a DM-free satellite DG feeling the tidal field of the host galaxy and ram pressure. This model loses a large fraction of its ISM. By the end of the simulation time it retains only $4\times 10^{5}~{}\mathrm{M}_{\sun}$ of gas, corresponding to a gas fraction of only 4%, whereas the other isolated models of candidate 3 have gas masses of $\sim 2\times 10^{6}~{}\mathrm{M}_{\sun}$ and a corresponding gas fraction of $\sim 20\%$. Due to the lower relative velocity of the TDG with respect to the CGM in the models of candidate 2, the impact of RPS is less pronounced than in the models of candidate 3. The model i2tr retains a gas mass of $2.3\times 10^{7}\,\mathrm{M}_{\sun}$, corresponding to a gas fraction of 5%, whereas the gas mass of i2t, $3.1\times 10^{7}\,\mathrm{M}_{\sun}$, and its gas fraction of 7% are only marginally larger. 4.3.3 The tidal arm Within the presented simulations, the presence of the additional gas reservoir within the tidal arm shows the strongest impact on the evolution of forming TDGs. It influences the early evolution of TDGs in different ways. Due to the additional supply of gas, it increases the fraction of gas contained in a TDG and thus also enhances the SFR until the end of the simulation time. Additionally, it shields the TDG from its parents’ CGM, through which it is travelling, and thus protects the TDG from the direct influence of ram pressure. Within the embedded models it appears that the combination of the tidal arm and ram pressure can even increase the gas mass within a TDG.
As the tidal arm gets compressed by ram pressure, the density within the arm is locally increased, allowing for higher accretion rates along the arm. These effects have not been taken into account in previous numerical studies of TDGs, like those by Smith et al. (2013); Yang et al. (2014); Ploeckinger et al. (2014, 2015). Initially all models of one candidate start with the same central gas distribution, deviating only at densities below $5\times 10^{-26}~{}\mathrm{g~{}cm}^{-3}$, i.e. the cutoff density of the isolated models (see Table 1), resulting in almost equal gas masses within $r=2.5\,\mathrm{kpc}$. During the initial collapse the mass within that radius remains approximately equal between the embedded and isolated models. After the collapse of the proto-cloud, around $t_{\mathrm{sim}}\approx 250~{}\mathrm{and}~{}180\,\mathrm{Myr}$ for candidates 2 and 3, respectively, the gas mass of the isolated models declines significantly more strongly than that of the models embedded in the tidal arm. This behaviour is caused by the lack of surrounding material which could be accreted by the TDGs. At the final simulation time ($t_{\mathrm{sim}}=1500\,\mathrm{Myr}$) the embedded models have retained twice as much gas as those in isolation within $r=1.0\,\mathrm{kpc}$. Going to larger distances, the deviations can become as large as a factor of 3 or 5 for $r=2.5\,\mathrm{kpc}$ and $r=5.0\,\mathrm{kpc}$, respectively. In the embedded and the isolated models the SFR follows the available amount of gas; therefore, a strong starburst associated with the collapse of the TDG is present. After the collapse the SFR of the isolated models declines due to gas consumption and the missing gas reservoir, an effect called starvation (Larson et al., 1980). As the embedded models are constantly fed with new material along the tidal arm, their SFRs stay at higher levels throughout the remaining simulation time.
This results in a three times higher SFR at the end of the simulations if the tidal arm is included: the isolated models of candidate 2 have final SFRs of about $6\times 10^{-2}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$, compared to $2\times 10^{-1}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ if the tidal arm is included. The results for candidate 3 are similar, where the embedded models, with SFRs of the order of $5\times 10^{-2}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$, have 2.5 times higher SFRs at the end of the simulation runs compared to $2\times 10^{-2}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ for the isolated models. The embedded models generally double their stellar mass between a simulation time of $t_{\mathrm{sim}}\approx 300~{}\mathrm{Myr}$ and the end of the simulations. Starting from the same time, the stellar masses of the isolated models approach an equilibrium level with only a marginal increase in stellar mass until $1.5~{}\mathrm{Gyr}$ (see bottom panels of Figure 10). These characteristics show the importance of the tidal arm during the evolution of TDGs, as the tidal arm provides a rich reservoir of gas available for accretion and subsequent conversion into stars. The accumulation of additional material allows the simulated TDGs to substantially increase their total mass within a radius of 2.5 kpc (see Figures 3 and 4). 5 Discussion In order to explore the evolution of TDGs by consistently involving the impact of the tidal arm, we performed for the first time high-resolution chemodynamical simulations of TDGs embedded in the tidal arm and compared these simulations to simulation runs of isolated TDGs, i.e. simulations with the same properties of the proto-TDG but neglecting the extended gas distribution of the tidal arm. Moreover, the TDGs’ initial structure is taken from a large-scale merger simulation instead of starting from idealized isolated and symmetric models (e.g. Ploeckinger et al., 2014; Recchi et al., 2015).
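Since the proto-TDGs collapse once their mass exceeds the local Jeans mass (Section 4.1), a back-of-the-envelope sketch of the Jeans mass and free-fall time may be helpful; the temperature, density and mean molecular weight used below are illustrative assumptions, not values taken from the simulations.

```python
import numpy as np

# physical constants (cgs)
G = 6.674e-8       # gravitational constant
K_B = 1.381e-16    # Boltzmann constant
M_H = 1.673e-24    # hydrogen mass
M_SUN = 1.989e33   # solar mass in g
MYR = 3.156e13     # 1 Myr in seconds

def jeans_mass(T, n, mu=1.3):
    """Jeans mass [M_sun] for temperature T [K] and number density n [cm^-3]:
    M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    rho = mu * M_H * n
    c_term = 5.0 * K_B * T / (G * mu * M_H)
    return c_term**1.5 * (3.0 / (4.0 * np.pi * rho))**0.5 / M_SUN

def free_fall_time(n, mu=1.3):
    """Free-fall time [Myr] for number density n [cm^-3]:
    t_ff = sqrt(3 pi / (32 G rho))."""
    rho = mu * M_H * n
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho)) / MYR

# assumed warm tidal-arm gas: T = 1e4 K, n = 0.1 cm^-3
m_j = jeans_mass(1.0e4, 0.1)
t_ff = free_fall_time(1.0)
```

For these assumed conditions the Jeans mass comes out of the order of $10^{8}\,\mathrm{M}_{\sun}$, comparable to the proto-TDG Jeans masses quoted in this section, and the free-fall time at $n=1\,\mathrm{cm}^{-3}$ is a few tens of Myr, consistent with collapse on a $\sim 100\,\mathrm{Myr}$ scale at lower densities.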
As TDGs form and evolve in the vicinity of their parent galaxies, the influence of their environment has to be taken into account when studying this class of galaxies. Therefore, we expose our TDG candidates to different combinations of environmental effects, such as the tidal field of the parent galaxies and/or ram pressure caused by the motion through the CGM. The evolution of the TDG candidates under study can be described by two principal phases. At first, the initially already Jeans-unstable proto-TDGs, with masses of $M_{\mathrm{TDG}}=2.60\times 10^{8}$ and $1.12\times 10^{8}\,\mathrm{M}_{\sun}$ and Jeans masses of $M_{\mathrm{J}}=6.37\times 10^{7}$ and $6.70\times 10^{7}\,\mathrm{M}_{\sun}$ within a radius of $2.5\,\mathrm{kpc}$, for candidates 2 and 3, respectively, rapidly collapse on the scale of a free-fall time. During this evolutionary stage a strong starburst occurs, with a peak SFR of $\sim 3.5\,\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ for the models of candidate 2 and $\sim 1.3\,\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ for candidate 3. After approximately $250-300\,\mathrm{Myr}$ the collapse subsides and the further evolution is determined by the amount of gas available for accretion. As the embedded models are able to refuel their gas reservoir, these models generally retain higher gas masses. In this phase, the SFR closely follows the available amount of gas, with a stronger decline for the isolated models (starvation) and an order of magnitude lower rates at the final simulation time compared to the embedded models. This behaviour favours the formation of TDGs out of Jeans-unstable gas clouds, as already described by Elmegreen et al. (1993), rather than a clustering of stars in the tidal arm followed by gas accretion (e.g. Barnes & Hernquist, 1992). Nonetheless, an unambiguous conclusion on the initial formation process of the proto-TDG might not be found from the presented work, as the unstable cloud had already formed in the Fouquet et al.
(2012) simulations at a timestep well before the initial data were extracted, and therefore the details of its formation cannot be explored at the same level of detail. At first sight, the peak SFRs of the order of $1~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ during the collapse of the proto-TDGs seem too high compared to observations, but they are in agreement with the general evolution of TDGs as described by Hunter et al. (2000). Furthermore, these peak SFRs of our simulations confirm the findings of Recchi et al. (2007); Ploeckinger et al. (2014, 2015), showing that TDGs can survive strong starbursts without being disrupted, despite their lack of DM. The final self-regulated equilibrium SFRs in the range of $10^{-2}~{}\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$ agree well with observationally derived SFRs. RPS is expected to be one of the main drivers of the morphological transformation of DGs, from gas-rich dwarf irregular galaxies to dEs, during the infall into galaxy clusters (e.g. Lisker et al., 2006; Boselli & Gavazzi, 2014). Due to the absence of a supporting DM halo in TDGs, this class of dwarf galaxies should be particularly vulnerable to RPS, as has been demonstrated by Smith et al. (2013). They found that even the stellar component can be severely affected if the gas is removed fast enough and to a large mass fraction. However, young TDGs in the formation process and during their early evolution are embedded in a tidal arm consisting of gas and stars. With its extended gas distribution it shields the TDG from the CGM; therefore, ram pressure acts on the tidal arm and not directly on the TDG, at least as long as the arm is not completely dissolved. Even after the arm has vanished, ram pressure does not act face-on onto the gas disk of the TDG. In a more realistic case, coplanar rotation of the TDG and the tidal arm must be assumed, so that ram pressure would hit the TDG edge-on.
Within the 15 TDG candidates of the original Fouquet et al. (2012) simulation the average angle between the TDGs’ orbit and internal rotation direction is $25^{\circ}\,\pm\,13^{\circ}$ and the orbital velocities are low, with $v_{\mathrm{orb}}\,=\,148\,\pm\,40~{}\mathrm{km~{}s}^{-1}$ (see Ploeckinger et al., 2015, Table A.1). These numbers, in combination with the presence of the tidal arm, strengthen the presented results, which suggest that RPS only has a minor influence on the early evolution of TDGs. Moreover, the effect of RPS might be over-estimated by the presented simulations of the embedded models due to the truncation of the tidal arm at low densities during the initialisation of the simulation runs and the related adjustment of the CGM density. The assumption of a static CGM with respect to the merger remnant poses another source of uncertainty on the impact of ram pressure on the evolution of embedded TDGs. As RPS and the tidal field only have minor influence on the early evolution of TDGs, the presence of the tidal arm has the largest impact on their juvenile evolution, i.e. after the initial collapse until the decoupling from the arm. It provides a large reservoir of gas which can be accreted after the initial collapse and the related starburst. The refuelling during the juvenile formation phase leads to an almost doubling of their stellar mass and up to 3 times higher SFRs, while still retaining higher gas masses and fractions throughout most of the simulation time, compared to the isolated models. Under the assumption of a static CGM with respect to the merger remnant, the tidal arm further shields the TDGs from the CGM and protects them against RPS caused by the motion along their orbits around the host galaxies. The extended gas distribution of the tidal arm has to dissolve or be removed before ram pressure can act on a young TDG directly, leading to a significant delay before RPS sets in.
This allows TDGs to further accumulate matter, grow in mass, and stabilise themselves against the parent's tidal field and RPS. Within the presented simulations the delay time is of the order of a few $100~{}\mathrm{Myr}$, which most likely underestimates the true time scale of the decoupling of TDGs from their host tidal arm, due to the truncation of the arm's density distribution (for details see Section 3.5). Nevertheless, it provides enough time to convert the majority of a TDG's initial gas content into stars before ram pressure can act on the TDG itself. Former numerical models had already demonstrated that isolated TDGs, i.e. those detached from the gaseous tidal arm, can, though DM free, survive high SFRs and remain bound as dwarf galaxies. The new studies extend the treatment of TDGs to their environment as embedded TDGs and have shown that TDGs • grow substantially in mass, due to the refuelling along the tidal arm. Depending on the structure, i.e. density and extent, of the tidal arm, TDGs can grow by more than one order of magnitude in mass within the central region and are able to double or triple their mass within a radius of $2.5~{}\mathrm{kpc}$. • convert gas efficiently into stars at high SFRs. TDGs survive even strong starbursts with SFRs of several $\mathrm{M}_{\sun}~{}\mathrm{yr}^{-1}$. • are less sensitive to RPS than previous work (e.g. by Smith et al., 2013) suggests. During their earliest evolution, the potentially devastating effect of RPS is strongly reduced by the shielding of the embedding tidal arm. At later times, when the gaseous tidal arm has disappeared, RPS could act efficiently. Due to the fast conversion into star-dominated systems, however, the potential of being harmed by RPS is reduced, because the lower gas fraction makes them less vulnerable to destruction by this gas-gas interaction. • remain bound and survive as star-dominated dwarf galaxies. 
Although the models presented here are only a small sample of the numerous possible initial conditions, they are representative of surviving TDGs which either escape from their parent galaxies or remain on bound orbits around them. The TDG candidates were selected because they had already survived in large-scale simulations (Yang et al., 2014), since our modelling aims at zooming in on survivor candidates and tracing their mass accretion and evolution with respect to the influence of the tidal field and of ram pressure acting under realistic tidal-arm conditions. It was therefore beyond our scope to derive an initial-to-final mass relation or any conclusions on the influence of the tidal-tail conditions on the TDG evolution. This interesting aspect cannot be addressed statistically with our few models but must be studied with an upcoming, more comprehensive set of models. Acknowledgements The authors thank Francois Hammer and Yanbin Yang for kindly providing their simulation data which served as initial data for our simulations. The authors are grateful to Sylvia Ploeckinger for her advice on the numerical scheme and the referee Frederic Bournaud for his enthusiastic and supportive comments. The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC). References Barnes & Hernquist (1992) Barnes J. E., Hernquist L., 1992, Nature, 360, 715 Boehringer & Hensler (1989) Boehringer H., Hensler G., 1989, A&A, 215, 147 Boselli & Gavazzi (2014) Boselli A., Gavazzi G., 2014, A&ARv, 22, 74 Bournaud (2010) Bournaud F., 2010, Adv. Astron., 2010 Bournaud & Duc (2006) Bournaud F., Duc P.-A., 2006, A&A, 456, 481 Bournaud et al. (2014) Bournaud F., et al., 2014, ApJ, 780, 57 Braine et al. (2000) Braine J., Lisenfeld U., Duc P.-A., Leon S., 2000, Nature, 403, 867 Braine et al. 
(2001) Braine J., Duc P.-A., Lisenfeld U., Charmandaris V., Vallejo O., Leon S., Brinks E., 2001, A&A, 378, 51 Braine et al. (2004) Braine J., Lisenfeld U., Duc P.-A., Brinks E., Charmandaris V., Leon S., 2004, A&A, 418, 419 Courant et al. (1967) Courant R., Friedrichs K., Lewy H., 1967, IBM Journal of Research and Development, 11, 215 Croxall et al. (2009) Croxall K. V., van Zee L., Lee H., Skillman E. D., Lee J. C., Côté S., Kennicutt Jr. R. C., Miller B. W., 2009, ApJ, 705, 723 Dabringhausen & Kroupa (2013) Dabringhausen J., Kroupa P., 2013, MNRAS, 429, 1858 Dalgarno & McCray (1972) Dalgarno A., McCray R. A., 1972, ARA&A, 10, 375 Duc (2012) Duc P.-A., 2012, Astrophysics and Space Science Proceedings, 28, 305 Duc & Mirabel (1994) Duc P.-A., Mirabel I. F., 1994, A&A, 289, 83 Duc & Mirabel (1998) Duc P.-A., Mirabel I. F., 1998, A&A, 333, 813 Duc et al. (2004) Duc P.-A., Bournaud F., Masset F., 2004, A&A, 427, 803 Duc et al. (2014) Duc P.-A., Paudel S., McDermid R. M., Cuillandre J.-C., Serra P., Bournaud F., Cappellari M., Emsellem E., 2014, MNRAS, 440, 1458 Elmegreen et al. (1993) Elmegreen B. G., Kaufman M., Thomasson M., 1993, ApJ, 412, 90 Fouquet et al. (2012) Fouquet S., Hammer F., Yang Y., Puech M., Flores H., 2012, MNRAS, 427, 1769 Fryxell et al. (2000) Fryxell B., et al., 2000, ApJS, 131, 273 Gupta et al. (2012) Gupta A., Mathur S., Krongold Y., Nicastro F., Galeazzi M., 2012, ApJ, 756, L8 Hammer et al. (2010) Hammer F., Yang Y. B., Wang J. L., Puech M., Flores H., Fouquet S., 2010, ApJ, 725, 542 Hammer et al. (2013) Hammer F., Yang Y., Fouquet S., Pawlowski M. S., Kroupa P., Puech M., Flores H., Wang J., 2013, MNRAS, 431, 3543 Hensler (1987) Hensler G., 1987, Mitt. der Astron. Ges., 70, 141 Hensler & Rieschick (2002) Hensler G., Rieschick A., 2002, in Grebel E. K., Brandner W., eds, ASP Conf. Ser. Vol. 285, Modes of Star Formation and the Origin of Field Populations. p. 341 Hensler et al. (2004) Hensler G., Theis C., Gallagher III. J. 
S., 2004, A&A, 426, 25 Hensler et al. (2017) Hensler G., Steyrleithner P., Recchi S., 2017, in Gil de Paz A., Knapen J. H., Lee J. C., eds, IAU Symp. Vol. 321, Formation and Evolution of Galaxy Outskirts. p. 99, doi:10.1017/S1743921316011261 Hunter et al. (2000) Hunter D. A., Hunsberger S. D., Roye E. W., 2000, ApJ, 542, 137 Kaviraj et al. (2012) Kaviraj S., Darg D., Lintott C., Schawinski K., Silk J., 2012, MNRAS, 419, 70 Koeppen et al. (1995) Koeppen J., Theis C., Hensler G., 1995, A&A, 296, 99 Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231 Larson et al. (1980) Larson R. B., Tinsley B. M., Caldwell C. N., 1980, ApJ, 237, 692 Lee-Waddell et al. (2014) Lee-Waddell K., et al., 2014, MNRAS, 443, 3601 Lisenfeld et al. (2016) Lisenfeld U., Braine J., Duc P. A., Boquien M., Brinks E., Bournaud F., Lelli F., Charmandaris V., 2016, A&A, 590, 92 Lisker et al. (2006) Lisker T., Grebel E. K., Binggeli B., 2006, AJ, 132, 497 Maeder (1996) Maeder A., 1996, in Leitherer C., Fritze-von-Alvensleben U., Huchra J., eds, ASP Conf. Ser. Vol. 98, From Stars to Galaxies: the Impact of Stellar Physics on Galaxy Evolution. p. 141 Marigo et al. (1996) Marigo P., Bressan A., Chiosi C., 1996, A&A, 313, 545 Matteucci & Recchi (2001) Matteucci F., Recchi S., 2001, ApJ, 558, 351 Mirabel et al. (1991) Mirabel I. F., Lutz D., Maza J., 1991, A&A, 243, 367 Mirabel et al. (1992) Mirabel I. F., Dottori H., Lutz D., 1992, A&A, 256, L19 Mo et al. (2010) Mo H., van den Bosch F. C., White S., 2010, Galaxy Formation and Evolution. Cambridge University Press Okazaki & Taniguchi (2000) Okazaki T., Taniguchi Y., 2000, ApJ, 543, 149 Ploeckinger (2014) Ploeckinger S., 2014, PhD thesis, University of Vienna Ploeckinger et al. (2014) Ploeckinger S., Hensler G., Recchi S., Mitchell N., Kroupa P., 2014, MNRAS, 437, 3980 Ploeckinger et al. (2015) Ploeckinger S., Recchi S., Hensler G., Kroupa P., 2015, MNRAS, 447, 2512 Ploeckinger et al. (2018) Ploeckinger S., Sharma K., Schaye J., Crain R. 
A., Schaller M., Barber C., 2018, MNRAS, 474, 580 Portinari et al. (1998) Portinari L., Chiosi C., Bressan A., 1998, A&A, 334, 505 Recchi (2014) Recchi S., 2014, Adv. Astron., 2014, 750754 Recchi et al. (2007) Recchi S., Theis C., Kroupa P., Hensler G., 2007, A&A, 470, L5 Recchi et al. (2009) Recchi S., Calura F., Kroupa P., 2009, A&A, 499, 711 Recchi et al. (2015) Recchi S., Kroupa P., Ploeckinger S., 2015, MNRAS, 450, 2367 Renaud et al. (2013) Renaud F., et al., 2013, MNRAS, 436, 1836 Ricker (2008) Ricker P. M., 2008, ApJS, 176, 293 Samland et al. (1997) Samland M., Hensler G., Theis C., 1997, ApJ, 476, 544 Schaye et al. (2015) Schaye J., et al., 2015, MNRAS, 446, 521 Schure et al. (2009) Schure K. M., Kosenko D., Kaastra J. S., Keppens R., Vink J., 2009, A&A, 508, 751 Smith et al. (2013) Smith R., Duc P. A., Candlish G. N., Fellhauer M., Sheen Y.-K., Gibson B. K., 2013, MNRAS, 436, 839 Springel (2005) Springel V., 2005, MNRAS, 364, 1105 Stinson et al. (2006) Stinson G., Seth A., Katz N., Wadsley J., Governato F., Quinn T., 2006, MNRAS, 373, 1074 Sweet et al. (2014) Sweet S. M., Drinkwater M. J., Meurer G., Bekki K., Dopita M. A., Kilborn V., Nicholls D. C., 2014, ApJ, 782, 35 Theis et al. (1992) Theis C., Burkert A., Hensler G., 1992, A&A, 265, 465 Thornton et al. (1998) Thornton K., Gaudlitz M., Janka H.-T., Steinmetz M., 1998, ApJ, 500, 95 Toomre & Toomre (1972) Toomre A., Toomre J., 1972, ApJ, 178, 623 Travaglio et al. (2004) Travaglio C., Hillebrandt W., Reinecke M., Thielemann F.-K., 2004, A&A, 425, 1029 Weilbacher et al. (2003) Weilbacher P. M., Duc P.-A., Fritze-v. Alvensleben U., 2003, A&A, 397, 545 Wen et al. (2012) Wen Z.-Z., Zheng X.-Z., Zhao Y.-H., Gao Y., 2012, Ap&SS, 337, 729 Wetzstein et al. (2007) Wetzstein M., Naab T., Burkert A., 2007, MNRAS, 375, 805 Yang et al. (2014) Yang Y., Hammer F., Fouquet S., Flores H., Puech M., Pawlowski M. S., Kroupa P., 2014, MNRAS, 442, 2419
Particle-in-cell simulations of high energy electron production by intense laser pulses in underdense plasmas Susumu Kato†, Eisuke Miura†, Mitsumori Tanimoto‡, Masahiro Adachi§, Kazuyoshi Koyama† † National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan ‡ Department of Electrical Engineering, Meisei University, Hino, Tokyo 191-8506, Japan § Graduate School of Advanced Science of Matter, Hiroshima University, Higashi-Hiroshima, Hiroshima 739-8530, Japan Abstract The propagation of intense laser pulses and the generation of high energy electrons in underdense plasmas are investigated using two-dimensional particle-in-cell simulations. When the ratio of the laser power to the critical power for relativistic self-focusing takes its optimal value, the pulse propagates stably and the electrons reach maximum energies. 1 Introduction The interaction of an intense ultrashort laser pulse with underdense plasmas has attracted much interest for compact accelerators. Using intense laser systems whose peak powers exceed 10 TW, electrons with energies greater than 100 MeV have been observed over a wide range of low densities, with electron densities from $6\times 10^{16}\mathrm{cm}^{-3}$ to $2.5\times 10^{19}\mathrm{cm}^{-3}$ [1, 2]. On the other hand, using terawatt-class Ti:sapphire laser systems, electrons with energies greater than several mega-electron-volts have been observed from moderately underdense plasmas, with densities up to near the quarter-critical density [3, 4]. In moderately underdense plasmas, the electron energies exceed the maximum energies set by the dephasing length. It has recently been suggested that the acceleration occurs by direct laser acceleration [5], which includes stochastic or chaotic processes. In this paper, we study the propagation of the laser pulses and the generation of high energy electrons in underdense plasmas using two-dimensional particle-in-cell (2D PIC) simulations. 
The laser power $P_{\mathrm{L}}$ must exceed the critical power $P_{\mathrm{cr}}$, because self-focusing is important for long-distance propagation [6]. Here, $P_{\mathrm{cr}}\simeq 17\left(n_{\mathrm{c}}/n_{\mathrm{e}}\right)\mathrm{GW}$, where $n_{\mathrm{e}}$ and $n_{\mathrm{c}}$ are the electron density and the critical density, respectively. We assume a terawatt-class Ti:sapphire laser system as a compact laser system in the simulation parameters, because the plasma electron densities are $n_{e}=(1\sim 20)\times 10^{19}\textrm{cm}^{-3}$. 2 2D PIC Simulation Results We use 2D PIC simulations with immobile ions. The peak irradiated intensity, pulse length, and spot size are $5\times 10^{18}$ $\textrm{W/cm}^{2}$, $50$ fs, and 3.6 $\mu$m, respectively. $P_{\mathrm{L}}=2$ TW, i.e. the pulse energy is $100$ mJ, when cylindrical symmetry is assumed, although we use two-dimensional Cartesian coordinates. The Rayleigh length is $L_{\mathrm{R}}=50\,\mu$m. The plasma electron densities $n_{e}=(1\sim 20)\times 10^{19}\textrm{cm}^{-3}$ correspond to $n_{\mathrm{e}}/n_{\mathrm{c}}=0.0057\sim 0.11$, where $n_{\mathrm{c}}=1.7\times 10^{21}\textrm{cm}^{-3}$ for the wavelength $\lambda_{0}=0.8\mu\textrm{m}$. These simulation parameters are almost the same as in the experiments with a compact laser system [4]. The laser power $P_{\mathrm{L}}=2$ TW exceeds the critical power $P_{\mathrm{cr}}$ of relativistic self-focusing for $n_{e}\geqslant 1\times 10^{19}\textrm{cm}^{-3}$. Figures 1(a)-(e) show the intensities of the laser pulses after propagating $2.5L_{\mathrm{R}}$ for electron densities $n_{e}=20,10,5,2$, and $1\times 10^{19}\textrm{cm}^{-3}$, respectively. For $n_{\mathrm{e}}=1$ and $2\times 10^{19}\textrm{cm}^{-3}$, i.e. $P_{\mathrm{L}}/P_{\mathrm{cr}}\simeq 1$, the pulses propagate stably without modulation. Electrons with energies greater than 2 MeV are hardly observed. For $n_{\mathrm{e}}=5\times 10^{19}$$\textrm{cm}^{-3}$, the back of the pulse is modulated. 
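The $P_{\mathrm{L}}/P_{\mathrm{cr}}$ ratios quoted in this and the following paragraphs follow directly from the scaling $P_{\mathrm{cr}}\simeq 17(n_{\mathrm{c}}/n_{\mathrm{e}})$ GW. A minimal sketch of that arithmetic (the precise critical density $1.74\times 10^{21}\,\mathrm{cm}^{-3}$ for 0.8 $\mu$m light is our assumption, consistent with the rounded value given in the text):

```python
# Sketch: critical power for relativistic self-focusing, P_cr ~ 17 (n_c/n_e) GW,
# and the ratio P_L/P_cr for the simulated densities (densities from the text).
N_C = 1.74e21   # critical density for 0.8 um light [cm^-3] (assumed precise value)
P_L = 2.0e3     # laser power: 2 TW expressed in GW

def p_cr_gw(n_e_cm3):
    """Critical power for relativistic self-focusing [GW]."""
    return 17.0 * N_C / n_e_cm3

densities = (1e19, 2e19, 5e19, 1e20, 2e20)
ratios = {n_e: P_L / p_cr_gw(n_e) for n_e in densities}
for n_e in densities:
    print(f"n_e = {n_e:.0e} cm^-3 : P_L/P_cr = {ratios[n_e]:.1f}")
```

The resulting ratios (roughly 0.7, 1.4, 3.4, 6.8, and 13.5) match the regimes discussed in the text: $P_{\mathrm{L}}/P_{\mathrm{cr}}\simeq 1$ for the two lowest densities, $\simeq 3$ at $5\times 10^{19}\,\textrm{cm}^{-3}$, and $\geqslant 10$ at the highest density.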
Electrons gain energies greater than 20 MeV, as shown later. For $n_{\mathrm{e}}=1\times 10^{20}$$\textrm{cm}^{-3}$, the pulse separates into bunches whose size is about the plasma wavelength. The pulse breaks up and no longer propagates stably for $n_{\mathrm{e}}=2\times 10^{20}$$\textrm{cm}^{-3}$, i.e. $P_{\mathrm{L}}/P_{\mathrm{cr}}\geqslant 10$. The electron energy spectra for electron densities $n_{\mathrm{e}}=20,10$, and $5\times 10^{19}\textrm{cm}^{-3}$ are shown in Figs. 2(a)-(c), respectively. The maximum energy is greater than $20$ MeV for $n_{\mathrm{e}}=5\times 10^{19}\textrm{cm}^{-3}$. The maximum electron energies have already saturated, at 20 MeV and 10 MeV for $n_{\mathrm{e}}=1$ and $2\times 10^{20}\textrm{cm}^{-3}$ respectively, before the pulses have propagated about one and two Rayleigh lengths. 3 Concluding Remarks We have studied the propagation of intense laser pulses and the generation of high energy electrons in moderately underdense plasmas using 2D PIC simulations. For $P_{\mathrm{L}}/P_{\mathrm{cr}}\simeq 3$, a laser pulse whose power and pulse length are $2$ TW and $50$ fs propagates stably with modulation. As a result, high energy electrons with energies greater than 20 MeV are observed, and their energies have not saturated; that is, electrons can gain higher energies by propagating with the intense laser pulse through longer plasmas. For $P_{\mathrm{L}}/P_{\mathrm{cr}}\leqslant 2$, although the pulses propagate stably, no high energy electrons are observed. On the other hand, for $P_{\mathrm{L}}/P_{\mathrm{cr}}\geqslant 5$, high energy electrons with energies up to 20 MeV are observed, although the pulses do not propagate stably. The simulated dependence of the maximum electron energy on the plasma density explains the latest experiment well qualitatively [4]. In the simulations, the maximum propagation distance of $3L_{\mathrm{R}}$ is limited by the performance of the computer and the simulation code. 
Since the pulse propagates sufficiently stably up to $2.5L_{\mathrm{R}}$ for plasma densities less than $5\times 10^{19}$$\textrm{cm}^{-3}$, simulations with a longer propagation distance are required. A part of this study was financially supported by the Budget for Nuclear Research of the Ministry of Education, Culture, Sports, Science and Technology (MEXT), based on the screening and counseling by the Atomic Energy Commission, and by the Advanced Compact Accelerator Development Program of the MEXT. References [1] Z. Najmudin et al., Phys. Plasmas, 10 (2003) 2071. [2] Y. Kitagawa et al., Phys. Rev. Lett., 92 (2004) 205002. [3] C. Gahn et al., Phys. Rev. Lett., 83 (1999) 4772. [4] E. Miura et al., Bull. Am. Phys. Soc., 48 (2003) 195; K. Koyama et al., Int. J. Appl. Electron., 14 (2001/2002) 263; M. Adachi et al., Proceedings of the 31st EPS Conference on Plasma Phys. London, 28 June - 2 July 2004 ECA Vol. 28G, P-5.024 (2004) [5] M. Tanimoto et al., Phys. Rev. E, 68 (2003) 026401; T. Nakamura et al., Phys. Plasmas, 9 (2002) 1801; Z.-M. Sheng et al., Phys. Rev. Lett., 88 (2002) 055004; J. Meyer-ter-Vehn and Z. M. Sheng, Phys. Plasmas, 6 (1999) 641; A. Pukhov et al., Phys. Plasmas, 6 (1999) 2847. [6] G. Schmidt and W. Horton, Comments Plasma Phys. Controlled Fusion 9, 85 (1985); P. Sprangle et al., IEEE Trans. Plasma Sci. PS-15 (1987) 145; G.-Z. Sun et al., Phys. Fluids 30 (1987) 526.
On Exact Solutions to the Cylindrical Poisson-Boltzmann Equation with Applications to Polyelectrolytes (Dedicated to Benjamin Widom on the occasion of his seventieth birthday.) Craig A. Tracy, Department of Mathematics and Institute of Theoretical Dynamics, University of California, Davis, CA 95616, USA, e-mail address: tracy@itd.ucdavis.edu Harold Widom, Department of Mathematics, University of California, Santa Cruz, CA 95064, USA, e-mail address: widom@math.ucsc.edu Abstract Using exact results from the theory of completely integrable systems of the Painlevé/Toda type, we examine the consequences for the theory of polyelectrolytes in the (nonlinear) Poisson-Boltzmann approximation. 1. Introduction
A polyelectrolyte is a macromolecule with a large number of ionizable groups which in a solvent becomes highly charged [1, 2]. This highly charged structure is referred to as the polyion. If the polyion has charge $Q=Ze$, then in solution there are $Z$ counterions. (We take the polyion to have negative charge, and thus the counterions have positive charge.) Typically the solvent also contains a salt, so the total number of counterions is $Z$ plus the number of positive ions from the salt. The negative salt ions are called coions. Typically the excess salt is a 1–1 salt, e.g. NaCl, though other salts are considered. ($\mbox{MgCl}_{2}$ is an example of a 2–1 salt.) One widely studied class of polyelectrolytes consists of slender, rod-like particles. For example, the tobacco mosaic virus, a polyelectrolyte, has a diameter of about 18 nm and a length of 300 nm. For such systems the idealized model is an infinite cylinder of radius $a$ of uniform linear charge density, with counterions and coions treated as point particles. This neglects several important effects, such as the interaction between polyions, the flexibility degrees of freedom of the polyion, and finite size effects, to name but a few. 
Nevertheless, this model has been extensively studied (for reviews see [3, 4]). In this idealized model there are two additional length scales: the electrostatic length scale (Bjerrum length) $\ell_{B}=e^{2}/\varepsilon k_{B}T$ ($\varepsilon$ is the solvent permittivity) and the Debye-Hückel screening parameter $\kappa$ (an inverse length), where $\kappa^{2}=8\pi\ell_{B}I$ and $I={1\over 2}\sum n_{j}q_{j}^{2}$ is the ionic strength, $n_{j}$ being the molar concentration of ion $j$ of (integer-valued) charge $q_{j}$, with the convention that $q_{j}<0$ for counterions and $q_{j}>0$ for coions. From these two lengths we form the dimensionless Manning parameter [5, 6] $\xi=\ell_{B}/b$, where $b$ is the average spacing between charges on the polyion, and the dimensionless distance parameter $r=\kappa R$, where $R$ is the cylindrical distance variable. A mean field theory approach to this model results in the Poisson-Boltzmann (PB) equation; that is, the electrostatic potential $\Psi$ is assumed to satisfy the Poisson equation of electrostatics with the density of the various ions given in terms of Boltzmann factors [3, 4, 7, 8, 9]. Introducing the reduced potential $y=e\Psi/k_{B}T$, the PB equation becomes $$\Delta y=-4\pi\ell_{B}\sum_{j}q_{j}n_{j}e^{-q_{j}y}\,.$$ In terms of $r$ and for cylindrical geometry the PB equation is, explicitly for the cases of 1–1 and 2–1 “excess of salt” [7], $${d^{2}y\over dr^{2}}+{1\over r}{dy\over dr}=\left\{\begin{array}[]{ll}\sinh y&\mbox{1--1 salt,}\\ {1\over 3}\left(e^{2y}-e^{-y}\right)&\mbox{2--1 salt.}\end{array}\right.$$ (1.1) One boundary condition for (1.1) is obtained by applying Gauss’s Law at the surface of the polyion: $$\lim_{r\rightarrow a}\ r{dy\over dr}=-2\xi.$$ (1.2) (We use the same symbol $a$ for both the polyion radius and the dimensionless polyion radius.)  The second boundary condition depends upon whether the system is closed or open. 
For a closed system one requires that the electric field vanish at some finite distance, whereas for an open system with a finite concentration of added salt one requires that $$y(r)\rightarrow 0\mbox{ as }r\rightarrow\infty.$$ (1.3) We will consider only the case of an open system. A further simplification, and one we will also assume, is to take the polyion radius $a\rightarrow 0$, which means we model the polyion as a line charge. The mathematical problems that we address in this paper are now well formulated: to solve (1.1) subject to the boundary conditions (1.2) and (1.3). 2. Exact Solutions for a 1–1 and 2–1 Salt
First note that the solutions to the linearized versions of (1.1) satisfying (1.2) and (1.3) are in both cases the familiar Debye-Hückel solution $$y_{DH}(r)=2\xi K_{0}(r)$$ (2.1) where $K_{0}$ is the modified Bessel function. The exact solution for the 1–1 salt is [10, 11, 12] $$y_{11}(r)=2\log\det\left(I+\lambda K\right)-2\log\det\left(I-\lambda K\right)=4\sum_{j=0}^{\infty}{\lambda^{2j+1}\over 2j+1}\mbox{Tr}\left(K^{2j+1}\right)$$ (2.2) where $K$ is the integral operator on ${\bf R}^{+}$ with kernel $${\exp\left(-(r/2)(x+1/x)\right)\over x+y}$$ and $$\lambda={1\over\pi}\sin{\pi\xi\over 2}\mbox{ \quad for }\xi\leq 1.$$ Note that $\mbox{Tr}(K)=K_{0}(r)$ and that $\mbox{Tr}(K^{j})=\mbox{O}(e^{-jr})$ as $r\rightarrow\infty$. For $r\rightarrow 0$ it has been proved that [10, 13] $$y_{11}(r)=-2\xi\log r+6\xi\log 2+2\log{\Gamma\left({1+\xi\over 2}\right)\over\Gamma\left({1-\xi\over 2}\right)}+\mbox{o}(1)\mbox{ \quad for }\xi<1$$ (2.3) where $\Gamma$ is the gamma function. 
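Two of the facts just quoted lend themselves to a quick numerical spot check (our illustrative sketch, not part of the paper): the trace identity $\mbox{Tr}(K)=K_{0}(r)$, obtained by integrating the kernel along its diagonal $x=y$, and the Gauss-law condition (1.2) applied to the Debye-Hückel solution (2.1).

```python
# Illustrative numerical checks of two facts quoted in the text:
# 1) Tr(K) = K0(r): the diagonal of the kernel is exp(-(r/2)(x+1/x))/(2x),
#    and its integral over (0, inf) reproduces the modified Bessel function K0.
# 2) Gauss's law for the Debye-Hueckel solution y_DH = 2*xi*K0(r):
#    r * dy_DH/dr = -2*xi*r*K1(r) -> -2*xi as r -> 0.
import numpy as np
from scipy.integrate import quad
from scipy.special import k0, k1

def trace_K(r):
    """Tr(K) = integral of exp(-(r/2)(x+1/x))/(2x) over x in (0, inf)."""
    val, _ = quad(lambda x: np.exp(-0.5 * r * (x + 1.0 / x)) / (2.0 * x),
                  0.0, np.inf, limit=200)
    return val

r = 1.3
tr = trace_K(r)
print(tr, k0(r))          # the two values agree

xi = 0.7
r_small = 1e-6
gauss = -2.0 * xi * r_small * k1(r_small)   # r * dy_DH/dr
print(gauss)              # close to -2*xi = -1.4
```

The substitution $x=e^{t}$ turns the diagonal integral into the standard representation $K_{0}(r)=\int_{0}^{\infty}e^{-r\cosh t}\,dt$, which is why the agreement is exact.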
The exact solution for the 2–1 salt is [12] $$y_{21}(r)=\log\det\left(I-\lambda K_{3}\right)-\log\det\left(I-\lambda K_{2}\right)=\sum_{j=1}^{\infty}{\lambda^{j}\over j}\left(\mbox{Tr}\left(K_{2}^{j}\right)-\mbox{Tr}\left(K_{3}^{j}\right)\right)$$ (2.4) where $K_{j}$ ($j=2,3$) are integral operators on ${\bf R}^{+}$ with kernel $$\omega^{j-2}(1-\omega)\exp\left[-(r/2\sqrt{3})\left((1-\omega)x+(1-\omega^{-1})x^{-1}\right)\right]\,\left(-\omega x+y\right)^{-1}+\omega^{2(j-2)}(1-\omega^{2})\exp\left[-(r/2\sqrt{3})\left((1-\omega^{2})x+(1-\omega^{-2})x^{-1}\right)\right]\,\left(-\omega^{2}x+y\right)^{-1}.$$ Here $\omega=e^{2\pi i/3}$ and $${\lambda\over\lambda_{c}}=2\sin\left({2\pi\over 3}(\xi+1/4)\right)-1,\quad\lambda_{c}={1\over 2\sqrt{3}\pi},\mbox{ \quad for }\xi\leq{1\over 2}.$$ Again note that $$\mbox{Tr}\left(K_{2}\right)-\mbox{Tr}\left(K_{3}\right)=6K_{0}(r).$$ (2.5) An elementary calculation produces a representation for the trace of the square of the operators that is simpler than the definition: $$\mbox{Tr}\left(K_{2}^{2}\right)-\mbox{Tr}\left(K_{3}^{2}\right)=9\int_{0}^{\infty}\int_{0}^{\infty}{\exp\left(-(r/2)(x_{1}+1/x_{1}+x_{2}+1/x_{2})\right)\over x_{1}^{2}+x_{1}x_{2}+x_{2}^{2}}\,dx_{1}dx_{2}\,.$$ (2.6) The computations (2.5) and (2.6) lead one to suspect that for all positive integers $n$ the quantities $\mbox{Tr}(K_{2}^{n})-\mbox{Tr}(K_{3}^{n})$, as functions of $r$, are monotonically decreasing. Indeed, one can derive an alternative matrix kernel representation of the operators $K_{j}$ in which the monotonic decay is manifestly clear. 
For $r\rightarrow 0$ it has been proved that [13, 14] $$y_{21}(r)=-2\xi\log r+\left(2\log 2+3\log 3\right)\xi+\log{\Gamma\left({1+\xi\over 3}\right)\Gamma\left({2+2\xi\over 3}\right)\over\Gamma\left({2-\xi\over 3}\right)\Gamma\left({1-2\xi\over 3}\right)}+\mbox{o}(1)\mbox{ \quad for }\xi<1/2.$$ (2.7) Higher order terms in both (2.3) and (2.7) can be computed from the differential equations, and all additional constants appearing can be expressed in terms of the quantities above. 3. Asymptotics at the Critical Value of $\xi$—Counterion Condensation
The Oosawa-Manning arguments [1, 5, 6] for counterion condensation are well known and need not be repeated here. These arguments, when applied to a 1–1 salt and a 2–1 salt, predict critical values of $\xi$ at $1$ and $1/2$, respectively. Recall that in the theory of counterion condensation, the meaning of $\xi>\xi_{c}$ is that the average charge spacing $b$ on the polyion is increased by counterion condensation (onto or around the polyion) until $\xi=\xi_{c}$ is achieved. Thus one must distinguish between the stoichiometric value of $\xi$, computed using only the charge groups on the polyion, and the lower value of $\xi$ achieved through counterion condensation. Mathematically, the critical value of $\xi$ is seen through the qualitative change in the small-$r$ asymptotics (2.3) and (2.7). For the 1–1 salt case we have as $r\rightarrow 0$ [10, 13] $$\exp\left(-y_{11}(r)/2\right)=-{r\over 2}\Omega_{1}-{r^{5}\over 2^{12}}\left(8\Omega_{1}^{3}-8\Omega_{1}^{2}+4\Omega_{1}-1\right)+\mbox{O}(r^{9}\Omega_{1}^{5})$$ (3.1) where $$\Omega_{1}=\Omega_{1}(r)=\log(r/8)+\gamma$$ and $\gamma$ is Euler’s constant. Thus the potential $y_{11}(r)$ develops an additional $\log\log r$ singularity at the critical Manning parameter. 
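A short numerical illustration of the leading term in (3.1) (our sketch, not from the paper): since $\Omega_{1}(r)=\log(r/8)+\gamma$ is negative for $r<8e^{-\gamma}\approx 4.49$, the leading Boltzmann factor $-(r/2)\Omega_{1}$ stays positive throughout the small-$r$ regime where the expansion applies.

```python
# Sketch: leading small-r behaviour of (3.1) at the critical coupling xi = 1,
#   exp(-y_11(r)/2) ~ -(r/2) * Omega_1(r),  Omega_1(r) = log(r/8) + gamma.
import math

GAMMA = 0.5772156649015329  # Euler's constant

def omega1(r):
    return math.log(r / 8.0) + GAMMA

def leading_boltzmann(r):
    """Leading term of exp(-y_11/2) in the r -> 0 expansion (3.1)."""
    return -(r / 2.0) * omega1(r)

r_sign_change = 8.0 * math.exp(-GAMMA)  # where Omega_1 changes sign
print(r_sign_change)                    # ~ 4.49
print(leading_boltzmann(0.1))           # positive in the small-r regime
```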
Similarly for the 2–1 case, as $r\rightarrow 0$ [13, 14], $$\exp\left(-y_{21}(r)\right)=-{r\over\sqrt{3}}\Omega_{2}+{r^{4}\over 81}\left(\Omega_{2}^{2}-{2\over 3}\Omega_{2}+{2\over 9}\right)+\mbox{O}(r^{7}\Omega_{2}^{3})$$ (3.2) where $$\Omega_{2}=\Omega_{2}(r)=\log(r/8)+\gamma+{1\over 3}\log 2+{3\over 2}\log 3.$$ For $\xi>\xi_{c}$ the above solutions are no longer physically valid. For example, the Boltzmann factor $\exp\left(-y_{11}(r)\right)$ will become negative for small enough $r$. 4. Exact Electrostatic Free Energy for 1–1 and 2–1 Salt Cases
At constant temperature and pressure the free energy is the work done in placing charges on the polyion. This is the familiar “charging process”: one imagines an increment of charge $dq$ placed at the surface of the polyion, so that the infinitesimal work done is $\Psi(a)dq$, where the electrostatic potential is evaluated at $R=a$ [15, 16]. Thus $$w^{\mbox{el}}=\int_{0}^{Q}\Psi(a)\,dq$$ is the free energy associated with a single line charge. Our solutions $y_{11}$ and $y_{21}$ are of the form $$y(r)=-2\xi\log r+y_{0}(\xi)+\mbox{o}(1)\mbox{ \quad as }r\rightarrow 0$$ and are for the limiting case of $a\rightarrow 0$. We take, therefore, for the value of $e\Psi(a)/k_{B}T$ the quantity $-2\xi\log(\kappa a)+y_{0}(\xi)$. Re-writing the above expression for $w^{\mbox{el}}$ in terms of dimensionless quantities and multiplying the result by $N_{p}$, the total number of polyions in a solution of volume $V$, we obtain for the free energy, $W^{\mbox{el}}$, of the entire solution [5, 6] $${W^{\mbox{el}}\over Vk_{B}T}=-n_{p}\left(\log(\kappa a)\xi-{1\over\xi}\int_{0}^{\xi}y_{0}(\xi^{\prime})\,d\xi^{\prime}\right):=-n_{p}f(\xi)$$ where $n_{p}=N_{p}/V$ is the polyion concentration. 
Using the expressions (2.3) and (2.7) we calculate $f(\xi)$ in these two cases: $$f_{11}(\xi)=\left(\log(\kappa a)-3\log 2\right)\xi+{2\over\xi}\int_{0}^{\xi}\log{\Gamma(1/2-\xi^{\prime}/2)\over\Gamma(1/2+\xi^{\prime}/2)}\,d\xi^{\prime}=\left(\log(\kappa a)-\log 2+\gamma\right)\xi-\sum_{n=1}^{\infty}{\psi^{(2n)}(1/2)\over 2^{2n-1}(2n+2)!}\xi^{2n+1}=\left(\log(\kappa a)+1-3\log 2\right)\xi+2\log{\Gamma({1-\xi\over 2})\over\Gamma({1+\xi\over 2})}+{2\over\xi}\log{\Gamma\left({1-\xi\over 2}\right)\Gamma\left({1+\xi\over 2}\right)\over\Gamma^{2}\left(1/2\right)}+{4\over\xi}\log{G({1-\xi\over 2})G({1+\xi\over 2})\over G^{2}(1/2)},$$ (4.1) $$f_{21}(\xi)=\left(\log(\kappa a)-\log 2-{3\over 2}\log 3\right)\xi+{1\over\xi}\int_{0}^{\xi}\log{\Gamma\left(2/3-\xi^{\prime}/3\right)\Gamma\left(1/3-2\xi^{\prime}/3\right)\over\Gamma\left(1/3+\xi^{\prime}/3\right)\Gamma\left(2/3+2\xi^{\prime}/3\right)}\,d\xi^{\prime}=\left(\log(\kappa a)-\log 2+\gamma\right)\xi+{1\over 18}\left(\psi^{(1)}(1/3)-\psi^{(1)}(2/3)\right)\xi^{2}-{1\over 72}\left(\psi^{(2)}(1/3)+\psi^{(2)}(2/3)\right)\xi^{3}+\mbox{O}(\xi^{4}).$$ (4.2) In the above, $\psi(x)=\Gamma^{\prime}(x)/\Gamma(x)$ and $\psi^{(n)}(x)$ is the $n^{\mbox{th}}$ derivative of $\psi$. The function $G(s)$ is the Barnes $G$-function [17], which is an entire function of $s$ defined by $$G(s+1)=(2\pi)^{s/2}\exp\left(-s/2-(1+\gamma)s^{2}/2\right)\prod_{k=1}^{\infty}\left(1+{s\over k}\right)^{k}\exp\left(-s+{s^{2}\over 2k}\right).$$ The $G$-function satisfies the functional equation $G(s+1)=\Gamma(s)G(s)$ and has the special value $G(1)=1$. 
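As a consistency sketch (ours, not the authors'), the integral and series representations of $f_{11}(\xi)$ in (4.1) can be compared numerically once the common $\log(\kappa a)\xi$ term is dropped; mpmath supplies the polygamma values $\psi^{(2n)}(1/2)$.

```python
# Illustrative cross-check of two representations of f_11(xi) in (4.1),
# with the common log(kappa*a)*xi term dropped:
#   integral form: -3*log(2)*xi
#                  + (2/xi) * Int_0^xi log[Gamma((1-t)/2)/Gamma((1+t)/2)] dt
#   series form:   (gamma - log 2)*xi
#                  - sum_{n>=1} psi^(2n)(1/2) * xi^(2n+1) / (2^(2n-1)*(2n+2)!)
import mpmath as mp

mp.mp.dps = 30
xi = mp.mpf("0.3")

integral_form = (-3 * mp.log(2) * xi
                 + (2 / xi) * mp.quad(
                     lambda t: mp.loggamma((1 - t) / 2) - mp.loggamma((1 + t) / 2),
                     [0, xi]))

def term(n):
    n = int(n)
    return (mp.polygamma(2 * n, mp.mpf(1) / 2)
            / (2 ** (2 * n - 1) * mp.factorial(2 * n + 2)) * xi ** (2 * n + 1))

series_form = (mp.euler - mp.log(2)) * xi - mp.nsum(term, [1, mp.inf])

print(integral_form, series_form)   # the two representations agree
```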
It arises in the present context through the integral $$\int_{0}^{z}\log\Gamma(x+1)\,dx={z\over 2}\log 2\pi-{1\over 2}z(z+1)+z\log\Gamma(z+1)-\log G(z+1).$$ (Clearly one can also express $f_{21}$ in terms of the $G$-function.) In the small-$\xi$ expansion of both $f_{11}$ and $f_{21}$, the linear term is the contribution from the Debye-Hückel theory. Observe that since the volume dependence resides solely in the $\log\kappa a$ term, derived quantities such as the osmotic pressure (which is expressible in terms of a first derivative of $W^{\mbox{el}}$ with respect to $V$) will be identical to those derived from the Debye-Hückel theory. The mean activity coefficients $\gamma_{\pm}$, expressible in terms of the function $f$, will have different numerical values from the Debye-Hückel theory; however, their dependence on $n_{p}$ will be identical. Though this has been understood and used by previous workers, we stress these facts since they emphasize the wider validity of the linear theory than one might a priori expect. Both $f_{11}$ and $f_{21}$ are singular at the critical Manning parameter and have leading singularities of the form $(\xi_{c}-\xi)\log(\xi_{c}-\xi)$. The graphs of $f_{11}$ and $f_{21}$ are shown in Fig. 1, where for convenience we have set $\kappa a=1$. 5. Partial Equilibrium Structure Factors
Within the Poisson-Boltzmann approximation, the density distributions of counterions ($+$) and coions ($-$) are given by $$n_{\pm}(r)=n_{\pm}\exp\left(\mp q_{j}y(r)\right)$$ where $n_{\pm}$ are the concentrations at infinity. Light scatters from the local concentration fluctuations of the counterions and coions, and the observed scattering intensity in the static approximation is expressible in terms of the partial structure factors, which are, in the cylindrical PB approximation, the (two-dimensional) Fourier transforms of $\left(n_{\pm}(r)/n_{\pm}-1\right)$ [18, 19, 20]. 
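The log-gamma integral quoted earlier in this passage is a classical identity and can be spot-checked numerically; mpmath's built-in Barnes $G$-function `barnesg` is used here (an illustrative check of ours, not part of the paper).

```python
# Numerical spot check of the integral identity for the Barnes G-function:
#   Int_0^z log Gamma(x+1) dx
#     = (z/2) log 2*pi - z(z+1)/2 + z log Gamma(z+1) - log G(z+1)
import mpmath as mp

mp.mp.dps = 25
z = mp.mpf("0.7")

lhs = mp.quad(lambda x: mp.loggamma(x + 1), [0, z])
rhs = ((z / 2) * mp.log(2 * mp.pi) - z * (z + 1) / 2
       + z * mp.loggamma(z + 1) - mp.log(mp.barnesg(z + 1)))

print(lhs, rhs)   # both sides agree
```

At $z=1$ both sides reduce to Raabe's integral $\int_{1}^{2}\log\Gamma(t)\,dt={1\over 2}\log 2\pi-1$, a convenient hand check.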
Thus we examine, for the 1–1 salt, $$S_{\pm}(q,\xi)={\cal F}(e^{\pm y}-1)=2\pi\int_{0}^{\infty}J_{0}(qr)\left(e^{\pm y(r)}-1\right)r\,dr$$ (5.1) where $q$ is the dimensionless wave number and $J_{0}$ is the Bessel function of zeroth order. For the 2–1 salt we replace $e^{y}\rightarrow e^{2y}$ in the above expression. At $q=0$, $$S_{+}(0,\xi)=\left\{\begin{array}[]{ll}\;4\pi\xi(1+\xi/2)&\mbox{1--1 salt,}\\ \;8\pi\xi(1+\xi/2)&\mbox{2--1 salt,}\end{array}\right.$$ (5.2) $$S_{-}(0,\xi)=\left\{\begin{array}[]{ll}\!-4\pi\xi(1-\xi/2)&\mbox{1--1 salt,}\\ \!-4\pi\xi(1-\xi)&\mbox{2--1 salt.}\end{array}\right.$$ (5.3) The differences $S_{+}(0,\xi)-S_{-}(0,\xi)$ follow directly from the differential equations (1.1) and the boundary condition (1.2). The individual terms require additional identities (for the 1–1 case see eq. (6) in [11]). In the Debye-Hückel approximation (2.1), $S_{\pm}(0,\xi)$ is $\pm 4\pi\xi$ for the 1–1 salt, and similarly for the 2–1 salt. For the 1–1 salt, $S_{-}(q,\xi)=S_{+}(q,-\xi)$. 
To understand further how these results differ from the Debye-Hückel approximation, we expand the exponentials in (5.1) and use the expansions (2.2) and (2.4) to deduce the expansions: $$S_{\pm}(q,\xi)=\sum_{j=1}^{\infty}\lambda^{j}S_{\pm,j}(q).$$ (5.4) For the 1–1 salt we find, for example, $$S_{\pm,1}(q)=\pm{\cal F}\left(4\mbox{Tr}(K)\right)=\pm{8\pi\over 1+q^{2}},$$ $$S_{\pm,2}(q)={\cal F}\left(8\mbox{Tr}(K)^{2}\right)=8\pi{\log\left(q/2+\sqrt{(q/2)^{2}+1}\right)\over(q/2)\sqrt{(q/2)^{2}+1}},$$ $$S_{\pm,3}(q)=\pm{\cal F}\left({4\over 3}(\mbox{Tr}(K^{3})+8\mbox{Tr}(K)^{3})\right)=\pm{32\pi\over 3}\int_{0}^{\infty}\int_{0}^{\infty}\int_{0}^{\infty}{(x_{1}+x_{2}+x_{3})(1/x_{1}+1/x_{2}+1/x_{3})\over(x_{1}+x_{2})(x_{2}+x_{3})(x_{3}+x_{1})}\times{(x_{1}+1/x_{1}+x_{2}+1/x_{2}+x_{3}+1/x_{3})\over\left[4q^{2}+(x_{1}+1/x_{1}+x_{2}+1/x_{2}+x_{3}+1/x_{3})\right]^{3/2}}\,dx_{1}\,dx_{2}\,dx_{3}.$$ From (5.2) it follows that $S_{\pm,3}(0)=\pm 4\pi^{3}/3$. In general $S_{\pm,2p+1}$ and $S_{\pm,2p+2}$ will involve the $\mbox{Tr}(K^{2\ell+1})$ ($\ell=1,\ldots,p$) and they will have, in the complex $q$-plane, branch point singularities at $q=\pm ip$. Since $\lambda=\mbox{O}(\xi)$, $\xi\rightarrow 0$, we see, for $\xi\ll 1$, that the Debye-Hückel approximation of taking the first term in the sum (5.4) is valid for bounded $q$. Similar expansions can be derived for the 2–1 salt where we note that $S_{\pm,1}(q)$ will again be the Debye-Hückel term (see (2.5)) but now $S_{\pm,2}(q)$ will contain irreducible “two-particle” effects (see (2.6)). 
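The leading term $S_{\pm,1}(q)=\pm 8\pi/(1+q^{2})$ rests on the two-dimensional Fourier transform identity $2\pi\int_{0}^{\infty}J_{0}(qr)K_{0}(r)\,r\,dr=2\pi/(1+q^{2})$ (here assuming, as in the Debye-Hückel limit, that $\mbox{Tr}(K)$ reduces to the Bessel kernel $K_{0}(r)$). A minimal numerical sketch of this identity, using SciPy's special functions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, k0

def hankel0(f, q, rmax=60.0):
    """2D Fourier transform of a radial function:
    F(f)(q) = 2*pi * int_0^inf J0(q r) f(r) r dr  (integral truncated at rmax)."""
    val, _ = quad(lambda r: j0(q * r) * f(r) * r, 0.0, rmax, limit=400)
    return 2.0 * np.pi * val

# Check F(K0)(q) = 2*pi / (1 + q^2) at a few wave numbers.
qs = [0.0, 0.5, 1.0, 2.0]
numeric = [hankel0(k0, q) for q in qs]
exact = [2.0 * np.pi / (1.0 + q * q) for q in qs]
err = max(abs(n - e) for n, e in zip(numeric, exact))
```

With $\lambda=\mbox{O}(\xi)$, this first term reproduces the Debye-Hückel form discussed above.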
For any scattering intensity $I(q)$, one experimentally accessible quantity is the normalized second moment $\mu$ in the small $q$ expansion $$\left[{I(q)\over I(0)}\right]^{-1}=1+\mu q^{2}+\mbox{O}(q^{4}).$$ For the partial structure factors $S_{\pm}$ one similarly defines $\mu_{\pm}(\xi)$ and in Fig. 2 we graph $\mu_{\pm}(\xi)$. We have defined the inverse correlation length by the distance to the nearest pole in $S(q)$. (The choice of the dimensionless wave number $q=k/\kappa$ fixes the location to $\pm i$.) For the Debye-Hückel approximation $\mu_{\pm}\equiv 1$, in contrast with the PB equation where $\mu_{+}$ ($\mu_{-}$) decreases (increases) with increasing $\xi$. The partial structure functions themselves are shown in Figs. 3 and 4 for various values of $\xi$. Note that with increasing $\xi$ there is a significant increase in $S_{+}(q,\xi)/S_{+}(0,\xi)$ for large $q$ whereas $S_{-}(q,\xi)/S_{-}(0,\xi)$ shows only a slow decrease for increasing $\xi$. To determine the large $q$ asymptotics of $S_{\pm}(q,\xi)$, we use the fact that if $$f(r)\sim\,r^{\lambda}\mbox{ as }r\rightarrow 0,$$ then $${\cal F}(f)(q)\sim C(\lambda)\,q^{-2-\lambda}\mbox{ as }q\rightarrow\infty$$ with $C(\lambda)=-2^{2+\lambda}\sin(\pi\lambda/2)\,\Gamma(1+\lambda/2)^{2}$. From (2.3) and the above fact we deduce for the 1–1 salt that as $q\rightarrow\infty$, for fixed $\xi<1$, $$S_{\pm}(q,\xi)\sim{C_{\pm}\over q^{2\mp 2\xi}}\ $$ (5.5) where $C_{\pm}=\exp\left(\pm y_{0}(\xi)\right)C(\mp 2\xi)$. At the critical value $\xi=1$ the short distance asymptotics of $e^{y_{11}}$ are no longer of the above form (see (3.1)). 
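The constant $C(\lambda)$ can be checked against the classical Mellin-type transform of $J_{0}$, which converges for $-2<\lambda<-1/2$ and extends elsewhere by analytic continuation; a short derivation:

```latex
% Tabulated transform: \int_0^\infty x^{s-1} J_0(x)\,dx
%   = 2^{s-1}\,\Gamma(s/2)/\Gamma(1-s/2).
% With x = qr and s = 2+\lambda,
{\cal F}(r^{\lambda})(q) = 2\pi\int_0^\infty J_0(qr)\,r^{1+\lambda}\,dr
  = \frac{2\pi}{q^{2+\lambda}}\,
    \frac{2^{1+\lambda}\,\Gamma(1+\lambda/2)}{\Gamma(-\lambda/2)}.
% The reflection formula
% \Gamma(-\lambda/2)\,\Gamma(1+\lambda/2) = -\pi/\sin(\pi\lambda/2)
% then gives
{\cal F}(r^{\lambda})(q)
  = -2^{2+\lambda}\sin(\pi\lambda/2)\,\Gamma(1+\lambda/2)^{2}\,q^{-2-\lambda}
  = C(\lambda)\,q^{-2-\lambda}.
```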
At the critical value $\xi=1$ we find, as $q\rightarrow\infty$, $$S_{+}(q,1)/S_{+}(0,1)={4\over 3}\,{1\over\log q}-{8\log 2\over 3}\,{1\over\log^{2}q}+\mbox{O}\left({1\over\log^{3}q}\right),$$ $$S_{-}(q,1)/S_{-}(0,1)={2\log q\over q^{4}}+{2(2\log 2-1)\over q^{4}}+\mbox{lower order terms.}$$ Similar large $q$ expansions hold for the 2–1 salt. In critical light or neutron scattering from simple fluid or magnetic systems, the large $q$ behaviour of $S(q)$ defines the critical exponent $\eta$ [21, 22]. For mean field theories of critical scattering, $\eta=0$. We remark that even though the PB equation is a mean field theory, we have a nonzero “$\eta$” ($\eta=\pm 2\xi$, $\xi<1$, for the 1–1 salt), which is a reflection of the fact that the short distance potential is the bare Coulomb potential. As the critical value of the Manning parameter is approached, the Poisson-Boltzmann theory predicts an enhanced scattering at large wave numbers from the concentration fluctuations of the counterions while at the same time predicting little change in the scattering from the concentration fluctuations of the coions. As is the case in critical scattering [23, 24], the measurement of these effects may well prove difficult. Acknowledgements This work was supported in part by the National Science Foundation through grants DMS–9303413 and DMS–9424292. References [1] Polyelectrolytes, F. Oosawa (Marcel Dekker, New York, 1971). [2] J.-L. Barrat and J.-F. Joanny, in: Advances in Chemical Physics: Polymeric Systems, Vol. 94, I. Prigogine and S. A. Rice, eds. (John Wiley, New York, 1996), pg. 1. [3] C. F. Anderson and M. T. Record, in: Annual Review of Physical Chemistry, Vol. 33, B. S. Rabinovitch, J. M. Schurr, and H. L. Strauss, eds. (Annual Reviews, Palo Alto, 1982), pg. 191. [4] C. F. Anderson and M. T. Record, in: Annual Review of Biophysics and Biophysical Chemistry, Vol. 19 (Annual Reviews, Palo Alto, 1990), pg. 
423. [5] G. S. Manning, J. Chem. Phys.  51 (1969) 924. [6] G. S. Manning, in: Polyelectrolytes, E. Sélégny, M. Mandel, U. P. Strauss, eds. (Reidel Publ., Dordrecht, 1974), pg. 9. [7] A. Katchalsky, Z. Alexandrowicz, and O. Kedem, in: Chemical Physics of Ionic Solutions, B. E. Conway and R. G. Barradas, eds. (John Wiley, New York, 1965), pg. 295. [8] M. Fixman, J. Chem. Phys. 70 (1979) 4995. [9] H. Qian and J. A. Schellman, Transformed Poisson-Boltzmann Equations and Ionic Distributions Around Linear Polyelectrolytes, preprint. [10] B. M. McCoy, C. A. Tracy, T. T. Wu, J. Math. Phys. 18 (1977) 1058. [11] C. A. Tracy and H. Widom, Commun. Math. Phys. 179 (1996) 1. [12] H. Widom, Some Classes of Solutions to the Toda Lattice Hierarchy, to appear in Commun. Math. Phys., solv-int/9602001. [13] C. A. Tracy and H. Widom, Asymptotics of a Class of Solutions to the Cylindrical Toda Equations, solv-int/9701003. [14] A. V. Kitaev, J. Sov. Math. 46 (1989) 2077. [15] T. L. Hill, Arch. Biochem. and Biophys. 57 (1955) 229. [16] C. Tanford, Physical Chemistry of Macromolecules, (John Wiley, New York, 1961), Ch. 7. [17] E. W. Barnes, Quart. J. Pure and Appl. Math. 31 (1900) 264. [18] J. J. Hermans, Rec. des Travaux Chim. des Pays-Bas et de la Belgique, 68 (1949) 859. [19] B. J. Berne and R. Pecora, Dynamic Light Scattering (John Wiley, New York, 1976), Ch. 9. [20] T. Odijk, in: Light Scattering: Principles and Development, W. Brown, ed. (Clarendon Press, Oxford, 1996), pg. 103. [21] M. E. Fisher, J. Math. Phys. 5 (1964) 944. [22] J. Zinn-Justin, Quantum Field Theory and Critical Phenomena, 2nd ed. (Clarendon Press, Oxford, 1993). [23] C. A. Tracy and B. M. McCoy, Phys. Rev. B 12 (1975) 368. [24] R. F. Chang, H. Burstyn, and J. V. Sengers, Phys. Rev. A 19 (1979) 866.
Spontaneous emission enhancement in metal-dielectric metamaterials Ivan Iorsh${}^{1,2,*}$, Alexander Poddubny${}^{1,3}$, Alexey Orlov${}^{1}$, Pavel Belov${}^{1,4}$, and Yuri S. Kivshar${}^{1,5}$ Abstract We study the spontaneous emission of a dipole emitter embedded into a layered metal-dielectric metamaterial. We demonstrate ultra-high values of the Purcell factor in such structures due to a high density of states with hyperbolic isofrequency surfaces. We reveal that the traditional effective-medium approach greatly underestimates the value of the Purcell factor due to the presence of an effective nonlocality, and we present an analytical model which agrees well with numerical calculations. \address ${}^{1}$St. Petersburg University of Information Technologies, Mechanics and Optics (ITMO), St. Petersburg 197101, Russia ${}^{2}$St. Petersburg Academic University — Nanotechnology Research and Education Centre, St. Petersburg 194021, Russia ${}^{3}$Ioffe Physical Technical Institute, St. Petersburg 194021, Russia ${}^{4}$School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, U.K. ${}^{5}$Nonlinear Physics Center and Centre for Ultrahigh-bandwidth Devices for Optical Systems (CUDOS), Research School of Physics and Engineering, Australian National University, Canberra ACT 0200, Australia ${}^{*}$Corresponding author: iorsh86@yandex.ru \ocis 160.1190, 160.3918. The spontaneous emission rate of a light source can be tuned by engineering its environment. The ratio of the decay rate in a medium to that in vacuum is usually termed the Purcell factor [1], and the possibility of a controllable change of the radiative lifetime has been demonstrated in various optical systems, including photonic crystals and plasmonic nanostructures [2, 3, 4]. In metamaterials the Purcell factor can be greatly enhanced, with two possible mechanisms of achieving high values of the Purcell factor. 
In the first case, one places a dipole emitter into a metamaterial that is described as a uniform hyperbolic medium. Such hyperbolic media are uniaxial media characterized by a permittivity tensor of diagonal form with principal components of opposite signs [5, 6]. The density of photonic states in such a system diverges. As a result, a huge Purcell factor can be reached, with its value determined either by losses and inhomogeneity of the medium [6] or by the spatial extent of the source [7]. The second mechanism of achieving high Purcell factors employs the excitation of surface plasmon polaritons (SPPs) at the metal-dielectric interfaces inside a metamaterial [4, 8]. In this Letter, we study the spontaneous emission of a dipole emitter embedded into a layered metal-dielectric metamaterial (see Fig. 1) and reveal the possibility of ultrahigh values of the Purcell factor due to a high density of states with hyperbolic isofrequency surfaces. In particular, we demonstrate the dramatic dependence of the Purcell factor near the SPP resonance on the ratio of the layer thicknesses and the dipole orientation, so that the Purcell factor can have either a maximum or a minimum at the plasmon frequency for high and low metal filling factors, respectively. This mechanism of Purcell-factor enhancement has an interface nature, and thus it cannot be described within any traditional homogenization approach. However, here we present an analytical model which agrees well with our numerical results. The hyperbolic medium can be realized at certain frequency ranges by layered metal-dielectric nanostructures. Within the conventional effective-medium approach, a metal-dielectric nanostructure formed by pairs of layers with permittivities $\varepsilon_{1},\varepsilon_{2}$ and thicknesses $d_{1},d_{2}$ (see Fig. 
1) can be modeled as a uniaxial anisotropic medium with a permittivity tensor of the form: $$\varepsilon_{\mathrm{eff}}=\left(\begin{matrix}\varepsilon_{\bot}&0&0\\ 0&\varepsilon_{\|}&0\\ 0&0&\varepsilon_{\|}\end{matrix}\right),\quad\varepsilon_{\|}=\dfrac{\varepsilon_{1}d_{1}+\varepsilon_{2}d_{2}}{d_{1}+d_{2}},\quad\varepsilon_{\bot}=\left(\dfrac{\varepsilon_{1}^{-1}d_{1}+\varepsilon_{2}^{-1}d_{2}}{d_{1}+d_{2}}\right)^{-1}.$$ At the same time, one of the key properties of such a medium is its strong nonlocality due to the excitation of SPP modes at the metal-dielectric interfaces [9]. Such structures have been the subject of extensive theoretical and experimental studies, being suggested for many applications such as superlenses with subwavelength resolution [10], a simple realization of the so-called hyperlens [11], optical nanocircuits [12], and invisibility cloaks [13]. For a certain ratio of the layer thicknesses, one can reach the hyperbolic regime where high Purcell factors can be expected [8]. In particular, it was demonstrated that the effective-medium approach is not always applicable for the calculation of the radiation emission, and that it can overestimate the Purcell factor [8]. In the regime considered here, the effective-medium approach instead underestimates the Purcell factor, so that high Purcell factors can be attained even in structures which are not hyperbolic when treated as effective media. We study the system shown schematically in Fig. 1. A point dipole emitter is placed in the center of a dielectric layer of the periodic metal-dielectric layered nanostructure, the simplest version of a metamaterial. In such a planar system, we consider independently two orientations of the dipole: parallel ($p_{x}$) and perpendicular ($p_{z}$) to the interface planes. 
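As a minimal numerical sketch (ours, not code from the paper), the effective-medium formulas above can be evaluated to test for the hyperbolic regime, in which the two principal permittivity components have opposite signs; the sample permittivity values below are purely illustrative:

```python
def eps_parallel(eps1, eps2, d1, d2):
    """In-plane effective permittivity: thickness-weighted average."""
    return (eps1 * d1 + eps2 * d2) / (d1 + d2)

def eps_perp(eps1, eps2, d1, d2):
    """Out-of-plane effective permittivity: thickness-weighted harmonic average."""
    return (d1 + d2) / (d1 / eps1 + d2 / eps2)

def is_hyperbolic(eps1, eps2, d1, d2):
    # Hyperbolic regime: the principal components have opposite signs.
    return eps_parallel(eps1, eps2, d1, d2) * eps_perp(eps1, eps2, d1, d2) < 0

# Illustrative values: dielectric eps1 = 1, metal eps2 = -5, equal thicknesses.
ep = eps_parallel(1.0, -5.0, 1.0, 1.0)   # -> -2.0
et = eps_perp(1.0, -5.0, 1.0, 1.0)       # -> 2.5
```

Here the averaged components indeed carry opposite signs, so the stack is hyperbolic in the effective-medium sense.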
Although the system is periodic, it can be formally treated as a cavity, where the central dielectric layer (labeled as layer 1) with the dipole is surrounded by two identical mirrors. One can then employ the standard results of cavity theory [14] to calculate the Purcell factors $F_{p}^{x}=1+R_{x}$ and $F_{p}^{z}=1+R_{z}$, where $$R_{x}=\frac{3}{2}\mathop{\mathrm{Re}}\nolimits\int\limits_{0}^{\infty}\frac{dk_{\parallel}k_{\parallel}}{kk_{z}^{(1)}}\left[\frac{r_{s}}{(1-r_{s})}+\frac{r_{p}(k_{z}^{(1)})^{2}}{k^{2}(1-r_{p})}\right],$$ (1a) $$R_{z}=-3\mathop{\mathrm{Re}}\nolimits\int\limits_{0}^{\infty}\frac{dk_{\parallel}k_{\parallel}^{3}}{k_{z}^{(1)}k^{3}}\frac{r_{p}}{(1+r_{p})}\>,$$ (1b) with $k=\sqrt{\varepsilon_{1}}\omega/c$ and $k_{z}^{(1,2)}=\{\varepsilon_{1,2}\omega^{2}/c^{2}-k_{\parallel}^{2}\}^{1/2}$. Integration in Eqs. (1) is performed over the in-plane component of the wavevector $k_{\parallel}$. The amplitude reflection coefficients of the semi-infinite periodic mirrors for $s$ and $p$ polarizations, $r_{s}$ and $r_{p}$, are readily calculated (see, e.g., [15]): $$r^{s,p}=\frac{r_{1}^{s,p}}{1-t_{1}^{s,p}{\rm e}^{{\rm i}k_{\perp}D}}{\rm e}^{{\rm i}k_{z}^{(1)}d_{1}},$$ (2) where $r_{1}$, $t_{1}$ are the reflection and transmission coefficients for one period and a given polarization, $D=d_{1}+d_{2}$ is the structure period, and $k_{\perp}$ is the Bloch wavevector defined by the dispersion relation $$\cos(k_{\perp}D)=\cos(k_{z}^{(1)}d_{1})\cos(k_{z}^{(2)}d_{2})-\frac{1}{2}\left(\frac{Z_{1}^{s,p}}{Z_{2}^{s,p}}+\frac{Z_{2}^{s,p}}{Z_{1}^{s,p}}\right)\sin(k_{z}^{(1)}d_{1})\sin(k_{z}^{(2)}d_{2})\>,$$ (3) where $Z_{i}^{s}=k_{z}^{(i)}$, $Z_{i}^{p}={\varepsilon_{i}}/{k_{z}^{(i)}}$, $i=1,2$. 
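The dispersion relation (3) is straightforward to evaluate numerically. A minimal sketch (our own illustration, not the paper's code) computes the right-hand side of Eq. (3); as a sanity check, for two identical layers it must collapse to $\cos(k_{z}D)$, since then $Z_{1}=Z_{2}$:

```python
import cmath

def kz(eps, omega_c, kpar):
    """Out-of-plane wavevector in a layer with permittivity eps; omega_c = omega/c."""
    return cmath.sqrt(eps * omega_c ** 2 - kpar ** 2)

def cos_bloch(eps1, eps2, d1, d2, omega_c, kpar, pol="p"):
    """Right-hand side of the Bloch dispersion relation, Eq. (3)."""
    kz1 = kz(eps1, omega_c, kpar)
    kz2 = kz(eps2, omega_c, kpar)
    if pol == "s":
        z1, z2 = kz1, kz2                  # Z^s = k_z
    else:
        z1, z2 = eps1 / kz1, eps2 / kz2    # Z^p = eps / k_z
    return (cmath.cos(kz1 * d1) * cmath.cos(kz2 * d2)
            - 0.5 * (z1 / z2 + z2 / z1) * cmath.sin(kz1 * d1) * cmath.sin(kz2 * d2))

# Sanity check: identical layers reduce to a uniform medium, cos(k_z (d1 + d2)).
lhs = cos_bloch(2.0, 2.0, 0.3, 0.7, omega_c=1.0, kpar=0.5)
rhs = cmath.cos(kz(2.0, 1.0, 0.5) * 1.0)
```

Values of $|\cos(k_{\perp}D)|>1$ then signal band gaps, where $k_{\perp}$ acquires an imaginary part.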
In our numerical calculations we choose silver as the metal, with a dielectric constant described by the Drude model [16], $\varepsilon_{2}=\varepsilon_{\infty}-{\omega_{p}^{2}}/{\omega(\omega+{\rm i}\gamma)}$ with $\varepsilon_{\infty}\approx 4.96$, $\omega_{p}\approx 8.98~{}{\rm eV}$, $\gamma\approx 0.018~{}{\rm eV}$, while the dielectric constant $\varepsilon_{1}$ is set to unity. The structure period is fixed at $D=30$ nm, and only the ratio of the silver and dielectric layer thicknesses, $\eta=d_{2}/d_{1}$, is varied. The dependence of the Purcell factors (1) on the frequency for different metal thicknesses is shown in Fig. 2. For better presentation, the Purcell factor is scaled by the cube of the light frequency, since the vacuum decay rate $\tau_{\rm vac}^{-1}$ is proportional to $\omega^{3}$ [17]. We observe that when the silver layer is thicker than the dielectric layer, there is a sharp maximum of the Purcell factor at the frequency of the SPP resonance, where the condition $\mathop{\mathrm{Re}}\nolimits\varepsilon_{2}=-1$ is fulfilled. However, for a wider dielectric layer the picture changes dramatically, and we observe a local minimum at the frequency of the SPP resonance. Next, we examine the latter case in detail. The solid line in Fig. 3(a) depicts the Purcell factor calculated for the in-plane dipole for $\eta=0.5$. These results reveal two characteristic effects. First, the Purcell factor can be enhanced even in the non-hyperbolic case. Second, the Purcell factor exhibits a sharp dip near the SPP resonance frequency. To explain this puzzling behavior, in Fig. 3(b) we plot the isofrequency contours $k_{\perp}(k_{\parallel})$ for close-to-resonance frequencies. We observe two distinct contours: an elliptic contour $k_{\perp}^{2}/\varepsilon_{\perp}+k_{\parallel}^{2}/\varepsilon_{\parallel}=(\omega/c)^{2}$ predicted by the effective-medium theory, and an additional hyperbolic contour. 
The importance of this extra contour was noted in Ref. [9], where the isofrequency contours of metal-dielectric nanostructures were studied by both the exact and the effective-medium approaches, and it was attributed to the nonlocal response of such layered structures. It is this extra contour that determines the dramatic enhancement of the Purcell factor in Fig. 2 in the regime when the structure is not hyperbolic. Next, we derive a closed-form expression for the corresponding contribution to the Purcell factor, $$F_{p}^{z}=\frac{3\pi}{8}\frac{(k_{\parallel}^{(h)})^{2}\delta k_{\parallel}}{(\omega/c)^{3}},$$ (4) $$k_{\parallel}^{(h)}=\frac{1}{d_{1}}\ln\xi,\quad\delta k_{\parallel}=\xi^{-\frac{d_{1}-d_{2}}{2d_{1}}},\quad\xi=-\frac{\varepsilon_{1}\varepsilon_{2}}{(\varepsilon_{1}+\varepsilon_{2})^{2}},$$ (5) which is valid in the vicinity of the plasmonic resonance, where $\xi\gg 1$. Here $k_{\parallel}^{(h)}$ is the center and $\delta k_{\parallel}$ is the width of the hyperbolic contour. Naturally, the Purcell factor is determined by the density of states $(k_{\parallel}^{(h)})^{2}\delta k_{\parallel}$, and the extra factors in Eq. (4) depend on the orientation and position of the dipole in the dielectric layer. The approximate expression (4), shown in Fig. 3(a), reproduces well the Purcell factor calculated numerically. The position of the center of the hyperbolic contour, found numerically from the equation $\cos[k_{\perp}(k_{\parallel}^{(h)})D]=0$, is plotted in Fig. 3(c), and it is well described by the analytical expression (4). The width of this contour as a function of the frequency is also reproduced by the function $\delta k_{\parallel}$ in Eq. (4) [see Fig. 3(d)]. 
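A small numerical sketch of Eqs. (4)-(5) (ours, with purely illustrative parameter values) makes the competition between the two factors explicit: the contour center grows only logarithmically in $\xi$, while for $d_{1}>d_{2}$ the width decays as a power of $\xi$, so their product eventually falls off as the resonance ($\xi\to\infty$) is approached:

```python
import math

def purcell_z(xi, d1, d2, omega_c=1.0):
    """Hyperbolic-contour contribution to the Purcell factor, Eqs. (4)-(5).
    Valid only in the vicinity of the resonance, xi >> 1."""
    k_center = math.log(xi) / d1                 # k_parallel^(h), Eq. (5)
    k_width = xi ** (-(d1 - d2) / (2.0 * d1))    # delta k_parallel, Eq. (5)
    return (3.0 * math.pi / 8.0) * k_center ** 2 * k_width / omega_c ** 3

# Wide dielectric layer (d1 = 2 d2, i.e. eta = 0.5): the contribution first
# grows with xi, then vanishes as xi -> infinity -- the dip at the SPP resonance.
f_small = purcell_z(1e1, d1=2.0, d2=1.0)
f_moderate = purcell_z(1e2, d1=2.0, d2=1.0)
f_resonant = purcell_z(1e8, d1=2.0, d2=1.0)
```

The non-monotonic sequence of these three values mirrors the behavior seen in Fig. 3(a).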
These results thus uncover the origin of the dip in the Purcell factor at the surface plasmon frequency: while the position of the center of the hyperbolic contour grows logarithmically as $\xi\to\infty$ on approaching the plasmon frequency, the width of the hyperbolic contour decreases as a power law. Therefore, being the product of these two quantities, the Purcell factor should vanish at the SPP resonance; it remains finite only due to losses in silver. It is also worth mentioning that the enhancement of the Purcell factor demonstrated here is a purely nonlocal effect, since the effective-medium approach predicts only the first, elliptic contour [dashed line in Fig. 3(b)], for which the resulting integral (1) is small. In conclusion, we have studied the spontaneous emission process and radiation rate of a dipole embedded into a layered metal-dielectric metamaterial. We have demonstrated that the effective-medium approximation underestimates the radiative decay rate in such structures, which is greatly increased in comparison with a bulk dielectric or metal. We have predicted, both analytically and numerically, ultrahigh values of the Purcell factor in such metal-dielectric structures due to a high density of states with hyperbolic isofrequency surfaces. This work was supported by the Ministry of Education and Science of the Russian Federation, RFBR and Dynasty Foundation (Russia), EPSRC (UK), and the Australian Research Council. The authors acknowledge useful discussions with C.R. Simovski and J. Sipe. References [1] E. M. Purcell, Phys. Rev. 69, 681 (1946). [2] E. Yablonovitch, Phys. Rev. Lett. 58, 2059 (1987). [3] V. S. C. Manga Rao and S. Hughes, Phys. Rev. B 75, 205437 (2007). [4] Y. C. Jun, R. D. Kekatpure, J. S. White, and M. L. Brongersma, Phys. Rev. B 78, 153111 (2008). [5] H. Xie, P. Leung, and D. Tsai, Solid State Comm. 149, 625 (2009). [6] Z. Jacob, J. Kim, G. V. Naik, A. Boltasseva, E. E. Narimanov, and V. M. Shalaev, Appl. Phys. 
B: Lasers and Optics 100, 215 (2010). [7] A. N. Poddubny, P. A. Belov, and Y. S. Kivshar, Phys. Rev. A 85 (2011). [8] S. Zhukovskiy, O. Kidwai, and J. E. Sipe, Opt. Lett. 36, 2530 (2011). [9] A. A. Orlov, P. M. Voroshilov, P. A. Belov, and Yu. S. Kivshar, Phys. Rev. B 84, 045424 (2011). [10] P. A. Belov and Y. Hao, Phys. Rev. B 73, 113110 (2006). [11] Z. Liu, H. Lee, Y. Xiong, C. Sun, and X. Zhang, Science 315, 1686 (2007). [12] N. Engheta, Science 317, 1698 (2007). [13] W. Cai, U. K. Chettiar, A. V. Kildishev, and V. M. Shalaev, Opt. Express 16, 5444 (2008). [14] F. De Martini, M. Marrocco, P. Mataloni, L. Crescentini, and R. Loudon, Phys. Rev. A 43, 2480 (1991). [15] A. Yariv and P. Yeh, Optical Waves in Crystals (Wiley, New Jersey, 2002). [16] P. B. Johnson and R. W. Christy, Phys. Rev. B 6, 4370 (1972). [17] A. Kavokin, J. Baumberg, G. Malpuech, and F. Laussy, Microcavities (Oxford Univ. Press, Oxford, 2011).
Phonon and crystal field excitations in geometrically frustrated rare earth titanates T.T.A. Lummen Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands    I.P. Handayani Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands    M.C. Donker Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands    D. Fausti Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands    G. Dhalenne Laboratoire de Physico-Chimie de l’Etat Solide, CNRS, UMR8182, Université Paris-Sud, Bâtiment 414, 91405 Orsay, France    P. Berthet Laboratoire de Physico-Chimie de l’Etat Solide, CNRS, UMR8182, Université Paris-Sud, Bâtiment 414, 91405 Orsay, France    A. Revcolevschi Laboratoire de Physico-Chimie de l’Etat Solide, CNRS, UMR8182, Université Paris-Sud, Bâtiment 414, 91405 Orsay, France    P.H.M. van Loosdrecht P.H.M.van.Loosdrecht@rug.nl Zernike Institute for Advanced Materials, University of Groningen, Nijenborgh 4, 9747 AG Groningen, The Netherlands (December 8, 2020) Abstract The phonon and crystal field excitations in several rare earth titanate pyrochlores are investigated. Magnetic measurements on single crystals of Gd${}_{2}$Ti${}_{2}$O${}_{7}$, Tb${}_{2}$Ti${}_{2}$O${}_{7}$, Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ are used for characterization, while Raman spectroscopy and terahertz time domain spectroscopy are employed to probe the excitations of the materials. The lattice excitations are found to be analogous across the compounds over the whole temperature range investigated (295-4 K). The resulting full phononic characterization of the R${}_{2}$Ti${}_{2}$O${}_{7}$ pyrochlore structure is then used to identify crystal field excitations observed in the materials. 
Several crystal field excitations have been observed in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ with Raman spectroscopy for the first time, including all of the previously reported excitations. The presence of additional crystal field excitations, however, suggests the presence of two inequivalent Tb${}^{3+}$ sites in the low temperature structure. Furthermore, the crystal field level at approximately 13 cm${}^{-1}$ is found to be both Raman and dipole active, indicating broken inversion symmetry in the system and thus undermining its current symmetry interpretation. In addition, evidence is found for a significant crystal field-phonon coupling in Tb${}_{2}$Ti${}_{2}$O${}_{7}$. These findings call for a careful reassessment of the low temperature structure of Tb${}_{2}$Ti${}_{2}$O${}_{7}$, which may serve to improve its theoretical understanding. pacs: 63.20.dd, 71.70.Ch, 75.50.Lk, 75.30.Cr I Introduction The term ’geometrical frustration’ Ramirez (1994); Schiffer and Ramirez (1996); Diep (1994) applies to a system when it is unable to simultaneously minimize all of its magnetic exchange interactions solely due to its geometry. Magnetically interacting spins residing on such lattices are unable to order into a unique magnetic ground state because of the competing magnetic interactions between different lattice sites. Instead of selecting a single, unique magnetic ground state at low temperatures, a purely frustrated magnetic system has a macroscopically degenerate ground state. In real systems, however, any secondary, smaller term in the system’s Hamiltonian (arising from single-ion or exchange anisotropy, further neighbor interactions, dipolar interactions, small lattice distortions or a magnetic field, for example) can favor certain magnetic ground states at very low temperatures, thereby (partially) lifting this peculiar degeneracy. 
In this fact lies the origin of the vast richness and diversity of the low temperature magnetic behavior of different frustrated systems in nature Greedan (2006, 2001); Collins and Petrenko (1997); Ramirez (1994); Moessner (2001). Geometries suitable to exhibit frustration typically consist of infinite networks of triangles or tetrahedra which share one or more lattice sites. One of the most common structures known to induce magnetic frustration is that of the pyrochlores, A${}_{2}$B${}_{2}$O${}_{7}$, where both the A${}^{3+}$ ions (rare earth element, coordinated to 8 O atoms) and the B${}^{4+}$ ions (transition metal element, coordinated to 6 O atoms) reside on a lattice of corner-sharing tetrahedra, known as the pyrochlore lattice. Thus, if either A${}^{3+}$ or B${}^{4+}$ is a magnetic species, frustration may occur due to competing interactions. A subclass of the pyrochlores is formed by the rare earth titanate family, R${}_{2}$Ti${}_{2}$O${}_{7}$, where the R${}^{3+}$ ion is the only (para-)magnetic species, since Ti${}^{4+}$ is diamagnetic ($3d^{0}$). For the pyrochlore lattice, both theory Villain (1979); Reimers et al. (1991); Moessner and Chalker (1998) and Monte Carlo simulations Moessner and Chalker (1998); Reimers (1992) predict a ’collective paramagnetic’ ground state, i.e., the absence of long range magnetic ordering, for classical Heisenberg spins at finite temperature. The quantum Heisenberg spin ($S=1/2$) model for the pyrochlore lattice also predicts a quantum disordered system at finite temperatures, a state often referred to as a ’spin liquid’ Canals and Lacroix (1998). However, in reality the different perturbative terms in the corresponding Hamiltonian result in quite diverse low temperature magnetic behavior among the different rare earth titanates Greedan (2006), of which the Gd, Tb, Ho and Dy variants are studied here. The supposedly least complex case is that of gadolinium titanate, Gd${}_{2}$Ti${}_{2}$O${}_{7}$. 
The Gd${}^{3+}$ ion has, in contrast to the Tb${}^{3+}$, Ho${}^{3+}$ and Dy${}^{3+}$ ions, a spin-only ${}^{8}S_{7/2}$ ($L=0$) ground state, rendering the influence of crystal field levels and possible induced Ising-like anisotropy insignificant in Gd${}_{2}$Ti${}_{2}$O${}_{7}$. The experimentally determined Curie-Weiss temperature of Gd${}_{2}$Ti${}_{2}$O${}_{7}$ is $\simeq-10$ K Raju et al. (1999); Cashion et al. (1968); Ramirez et al. (2002), indicating antiferromagnetic nearest neighbor interactions. Thus, Gd${}_{2}$Ti${}_{2}$O${}_{7}$ could be considered an ideal realization of the frustrated Heisenberg antiferromagnet with dipolar interactions. Experimentally, Gd${}_{2}$Ti${}_{2}$O${}_{7}$ has been found to undergo a magnetic ordering transition at $\simeq$ 1 K Raju et al. (1999). However, this transition corresponds to only partial ordering of the magnetic structure, as only 3 spins per tetrahedron order Champion et al. (2001). In this partially ordered state the spins residing on the [111] planes of the crystal (which can be viewed as Kagomé planes) are ordered in a $120^{\circ}$ configuration, parallel to the Kagomé plane, while the spins residing on the interstitial sites remain either statically or dynamically disordered Stewart et al. (2004). Subsequent experimental investigations revealed a second ordering transition at $\simeq$ 0.7 K, corresponding to the partial ordering of the interstitial disordered spins Stewart et al. (2004); Ramirez et al. (2002), which, however, do remain (partially) dynamic down to 20 mK Yaouanc et al. (2005); Dunsiger et al. (2006). Despite Gd${}_{2}$Ti${}_{2}$O${}_{7}$ supposedly being well approximated by the Heisenberg antiferromagnet with dipolar interactions, theoretical justification for this complex magnetic behavior remains difficult Raju et al. (1999); Palmer and Chalker (2000); Yaouanc et al. (2005). 
In Tb${}_{2}$Ti${}_{2}$O${}_{7}$, the dominant interactions are antiferromagnetic, as indicated by the experimentally determined Curie-Weiss temperature, $\theta_{CW}\simeq-19$ K Gardner et al. (1999). A study of the diluted compound (Tb${}_{0.02}$Y${}_{0.98}$)${}_{2}$Ti${}_{2}$O${}_{7}$ revealed that the contribution to $\theta_{CW}$ due to exchange and dipolar interactions is $\simeq-13$ K, comparable to the $\theta_{CW}$ value found in Gd${}_{2}$Ti${}_{2}$O${}_{7}$ Gingras et al. (2000). Despite the energy scale of these interactions, the Tb${}^{3+}$ moments do not show long range magnetic order down to as low as 50 mK, making it the system closest to a real 3D spin liquid to date Gardner et al. (1999, 2003). However, crystal field (CF) calculations indicate a ground state doublet and Ising-like easy axis anisotropy for the (${}^{7}F_{6}$) Tb${}^{3+}$ magnetic moments along their local $<\!\!111\!\!>$ directions (the direction towards the center of the tetrahedron the particular atom is in), which would dramatically reduce the degree of frustration in the system Gingras et al. (2000); Rosenkranz et al. (2000a); Mirebeau et al. (2007); Malkin et al. (2004); Gardner et al. (2001). Theoretical models taking this anisotropy into account predict magnetic ordering temperatures of about 1 K Gingras et al. (2000); den Hertog and Gingras (2000). Subsequent theoretical work suggests that the magnetic moment anisotropy is more isotropic than Ising-like, which could suppress magnetic ordering Kao et al. (2003). Recently, Tb${}_{2}$Ti${}_{2}$O${}_{7}$ was argued to be in a quantum mechanically fluctuating ’spin ice’ state Molavian et al. (2007). Virtual quantum mechanical CF excitations (the first excited CF doublet is separated by only $\simeq 13$ cm${}^{-1}$ from the ground state doublet Gingras et al. (2000); Mirebeau et al. (2007); Gardner et al. 
(2001)) are proposed to rescale the effective theoretical model from the unfrustrated Ising antiferromagnet to a frustrated resonating spin ice model. Nevertheless, the experimentally observed lack of magnetic ordering down to the millikelvin range and the true magnetic ground state of Tb${}_{2}$Ti${}_{2}$O${}_{7}$ remain as yet enigmatic Enjalran et al. (2004); Molavian et al. (2007); Ehlers et al. (2004); Curnoe (2007). Illustrating the diversity in magnetic behavior due to the subtle differences in the rare earth species of the titanates, the situation in both Dy${}_{2}$Ti${}_{2}$O${}_{7}$ Ramirez et al. (1999); Snyder et al. (2004); Ramirez et al. (2000) and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ Harris et al. (1997); Bramwell et al. (2001); Cornelius and Gardner (2001); Petrenko et al. (2003) is again different. The R${}^{3+}$ ions in these compounds have a ${}^{6}H_{15/2}$ (Dy${}^{3+}$) and a ${}^{5}I_{8}$ (Ho${}^{3+}$) ground state, respectively, with corresponding free ion magnetic moments $\mu=10.65\;\mu_{B}$ (Dy${}^{3+}$) and $\mu=10.61\;\mu_{B}$ (Ho${}^{3+}$). These systems were first thought to have weak ferromagnetic nearest neighbor exchange interactions, as indicated by the small positive values of $\theta_{CW}$, $\simeq 2\;$K and $\simeq 1\;$K for Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$, respectively Bramwell et al. (2000). More recently, however, the nearest neighbor exchange interactions in Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ were argued to be antiferromagnetic Melko and Gingras (2004). The effective ferromagnetic interaction between the spins is in fact shown to be due to the dominant ferromagnetic long-range magnetic dipole-dipole interactions den Hertog and Gingras (2000); Bramwell and Gingras (2001). 
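The free ion moments quoted above follow directly from the Hund's-rule ground terms via the Landé g-factor, $\mu = g_{J}\sqrt{J(J+1)}\,\mu_{B}$. The following sketch (an illustration, not part of the paper) reproduces the quoted numbers for all four ions discussed here:

```python
# Free-ion magnetic moments mu = g_J * sqrt(J(J+1)) mu_B for the R3+ ions
# discussed in the text (a sanity check, not the authors' code).
from math import sqrt

def lande_g(S, L, J):
    """Lande g-factor for a free ion with quantum numbers S, L, J."""
    return 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

def free_ion_moment(S, L, J):
    """Effective paramagnetic moment in units of mu_B."""
    return lande_g(S, L, J) * sqrt(J * (J + 1))

ions = {                      # (S, L, J) for the Hund's-rule ground terms
    "Gd3+ (8S7/2)":  (7 / 2, 0, 7 / 2),
    "Tb3+ (7F6)":    (3, 3, 6),
    "Dy3+ (6H15/2)": (5 / 2, 5, 15 / 2),
    "Ho3+ (5I8)":    (2, 6, 8),
}
for name, (S, L, J) in ions.items():
    print(f"{name}: mu = {free_ion_moment(S, L, J):.2f} mu_B")
# Gd3+ -> 7.94, Tb3+ -> 9.72, Dy3+ -> 10.65, Ho3+ -> 10.61 mu_B
```

Note that the spin-only Gd${}^{3+}$ case ($L=0$, so $g_{J}=2$) is exactly why Gd${}_{2}$Ti${}_{2}$O${}_{7}$ serves as the crystal-field-free reference later in the paper.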
The R${}^{3+}$ ions in both Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ are well described by a well separated Ising doublet (the first excited states are at $\sim 266$ and $\sim 165$ cm${}^{-1}$, respectively Rosenkranz et al. (2000a)) with a strong single ion anisotropy along the local $<\!\!111\!\!>$ directions. Unlike antiferromagnetically interacting spins with local $<\!\!111\!\!>$ Ising anisotropy, ferromagnetically interacting Ising spins on a pyrochlore lattice should be highly frustrated Harris et al. (1997); Bramwell and Harris (1998). As Anderson already pointed out half a century ago Anderson (1956), the resulting model is analogous to Pauling’s ice model Pauling (1935), which earned both Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ the title of ’spin ice’ compounds Harris et al. (1997); Bramwell and Harris (1998); Bramwell and Gingras (2001); Isakov et al. (2005). Although numerical simulations predict long range order at low temperatures for this model Melko and Gingras (2004), experimental studies report no transition to a long range ordered state for either Dy${}_{2}$Ti${}_{2}$O${}_{7}$ den Hertog and Gingras (2000); Ramirez et al. (1999); Fukazawa et al. (2002); Fennell et al. (2002) or Ho${}_{2}$Ti${}_{2}$O${}_{7}$ Harris et al. (1997, 1998); Mirebeau et al. (2006), down to temperatures as low as 50 mK. As is apparent from the above considerations, the low temperature magnetic behavior of the rare earth titanates is dictated by the smallest of details in the structure and interactions of the material. Therefore, a comprehensive experimental study of the structural, crystal field and magnetic properties of these systems may serve to clarify unanswered questions in their understanding. 
In this paper, dc magnetic susceptibility measurements, polarized Raman scattering experiments and terahertz time domain spectroscopy on the aforementioned members of the rare earth titanates family are employed to gain more insight into the details that drive them towards such diverse behavior. Raman scattering allows for simultaneous investigation of structural and crystal field (CF) properties through the observation of both phononic and CF excitations, while the comparison between the various members helps identify the nature of the different excitations observed. II Experimental II.1 Sample Preparation Polycrystalline samples of R${}_{2}$Ti${}_{2}$O${}_{7}$ (where R = Gd, Tb, Dy, Ho) were synthesized by firing stoichiometric amounts of high purity ($>99.9\%$) TiO${}_{2}$ and the appropriate rare earth oxide (Gd${}_{2}$O${}_{3}$, Tb${}_{4}$O${}_{7}$, Dy${}_{2}$O${}_{3}$ or Ho${}_{2}$O${}_{3}$, respectively), in air, for several days with intermittent grindings. The resulting polycrystalline powder was subsequently prepared for single crystal growth, using the method described by Gardner, Gaulin and Paul Gardner et al. (1998). The subsequent single crystal growth (also performed as described in ref. 49) using the floating zone technique yielded large, high quality single crystals of all of the R${}_{2}$Ti${}_{2}$O${}_{7}$ variants. Discs ($\simeq$ 1 mm thickness) with a,b-plane surfaces were cut from oriented single crystals and subsequently polished, in order to optimize the scattering experiments. The Tb${}_{2}$Ti${}_{2}$O${}_{7}$ sample used in the Raman experiments was subsequently polished down to $\simeq$ 250 $\mu$m thickness to facilitate THz transmission measurements. 
II.2 Instrumentation X-ray Laue diffraction, using a Philips PW 1710 diffractometer equipped with a Polaroid XR-7 system, was employed to orient the single crystal samples of Gd${}_{2}$Ti${}_{2}$O${}_{7}$ and Tb${}_{2}$Ti${}_{2}$O${}_{7}$ for the polarized Raman spectroscopy experiments, while simultaneously confirming the single crystallinity of the samples. The Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ single crystals were oriented using an Enraf Nonius CAD4 diffractometer. The magnetic susceptibilities of the obtained rare earth titanates were measured using the Quantum Design MPMS-5 SQUID magnetometer of the ’Laboratoire de Physico-Chimie de l’Etat Solide’ (LPCES), CNRS, UMR8182, at the Université Paris-Sud in Orsay, France. The R${}_{2}$Ti${}_{2}$O${}_{7}$ samples, about 100 mg of single crystal (in the form of discs of approximately 4 mm diameter and 1 mm thickness), were placed in cylindrical plastic tubes and locked in position. Next, the samples were zero-field-cooled down to 1.7 K, after which the magnetization of the sample was measured as a function of the temperature in an applied magnetic field of 100 Oe, while warming the sample. Polarization controlled, inelastic light scattering experiments were performed on all oriented R${}_{2}$Ti${}_{2}$O${}_{7}$ samples. The experiments were performed in a $180^{\circ}$ backscattering configuration, using a triple grating micro-Raman spectrometer (T64000 Jobin-Yvon), consisting of a double grating monochromator (acting as a spectral filter) and a polychromator which disperses the scattered light onto a liquid nitrogen cooled CCD detector. The frequency resolution was better than 2 cm${}^{-1}$ for the frequency region considered. The samples were placed in a liquid helium cooled optical flow-cryostat (Oxford Instruments). The temperature was stabilized with an accuracy of 0.1 K over the whole range of measured temperatures (from 2.5 to 295 K). 
The 532.6 nm (frequency doubled) output of a Nd:YVO${}_{4}$ laser was focused on the Gd${}_{2}$Ti${}_{2}$O${}_{7}$, Tb${}_{2}$Ti${}_{2}$O${}_{7}$ and Dy${}_{2}$Ti${}_{2}$O${}_{7}$ samples using a 50x microscope objective and served as the excitation source in the scattering experiments. A Krypton laser (676.4 nm) was used as the excitation source for the scattering experiments on the Ho${}_{2}$Ti${}_{2}$O${}_{7}$ sample, since 532.6 nm excitation (resonant at low temperatures in the case of Ho${}_{2}$Ti${}_{2}$O${}_{7}$) results in fluorescence dominating the inelastic scattering spectrum in the 5-800 cm${}^{-1}$ spectral range. The power density on the samples was of the order of 50 $\mu$W/$\mu$m${}^{2}$ in all cases. The polarization was controlled both on the incoming and outgoing beam. Parallel ($\|$) and perpendicular ($\bot$) measurements on Gd${}_{2}$Ti${}_{2}$O${}_{7}$ and Tb${}_{2}$Ti${}_{2}$O${}_{7}$ were performed along crystallographic axes of the a,b surface of the samples, Porto notations c(aa)c and c(ab)c, respectively. Unfortunately, the orientation of the a and b axes in the (a,b) plane of the Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ surfaces with respect to the light polarizations was not known. Analogous Porto notations are c(xx)c ($\|$) and c(xy)c ($\bot$), respectively, where x is a direction in the (a,b) plane of the sample making an undetermined angle $\alpha$ with the a axis, while y, in the same a,b plane of the crystal, is perpendicular to the x direction. Raman spectra were fitted with Lorentzian lineshapes to extract mode parameters. Terahertz time domain spectroscopy (THz TDS) Schmuttenmaer (2004) was performed on Tb${}_{2}$Ti${}_{2}$O${}_{7}$ using a home-made setup similar to those described elsewhere Beard et al. (2000); Schmuttenmaer (2004). 
THz pulses (pulse duration of several ps, frequency range 0.3-2.5 THz) were generated through a difference frequency generation process in a ZnTe single crystal upon pulsed excitation (120 fs, 800 nm) by an amplified Ti:sapphire system. The magnitude of the time dependent electric field transmitted through the sample (w.r.t. that transmitted through vacuum) was measured at various temperatures, through electro-optic sampling in a second ZnTe single crystal, using 800 nm pulses of approximately 120 fs. The sample used was a thin slice of single crystalline Tb${}_{2}$Ti${}_{2}$O${}_{7}$ (the same sample as used in the Raman experiments), which was mounted on a copper plate with an aperture ($\varnothing$ 2 mm) and placed in a liquid helium cooled optical flow-cryostat (Oxford Instruments). The polarization of the THz radiation was parallel to the crystallographic a-axis. III Results and Discussion III.1 Magnetic Measurements The magnetic susceptibility $\chi$, defined as the ratio of the magnetization of the sample to the applied magnetic field, of all the rare earth titanate samples was measured in a 100 Oe applied magnetic field. Since the samples used were plate-like discs, the data have been corrected by a demagnetization factor as calculated for flat, cylindrical plates Sato and Ishii (1989). Fig. 1 shows the inverse molar susceptibilities of all samples in the low temperature regime. For each sample, the data were fitted to a Curie-Weiss form for the molar magnetic susceptibility of an antiferromagnet: $$\chi_{m}=\frac{2C}{T-\theta}+B,$$ (1) where $C$ is the Curie constant in CGS units ($C=N_{A}\mu^{2}\mu_{B}^{2}/3k_{B}\simeq\mu^{2}/8$, in [emu$\cdot$K$\cdot$mol${}^{-1}$]), $\theta$ is the expected transition temperature (giving an indication of the sign of the magnetic interactions) and $B$ is a temperature independent Van Vleck contribution to the susceptibility. 
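The fitting procedure described above can be sketched in a few lines. The example below (a minimal illustration on synthetic, noise-free data, not the authors' analysis code) recovers $\theta$ and $\mu$ from linear regression of $\chi^{-1}_{m}$ against $T$, neglecting the small Van Vleck term $B$:

```python
# Minimal sketch of the Curie-Weiss analysis: recover theta and mu from
# synthetic high-temperature susceptibility data, chi_m = 2C/(T - theta),
# neglecting the Van Vleck term B (assumed small in the fit window).
import numpy as np

mu_true, theta_true = 9.72, -19.0      # Tb2Ti2O7-like input values
C = mu_true**2 / 8                     # Curie constant per R ion, emu K/mol
T = np.linspace(100, 300, 101)         # fit window used in the text: T >= 100 K
chi = 2 * C / (T - theta_true)         # molar susceptibility, 2 R ions per f.u.

# 1/chi = T/(2C) - theta/(2C) is linear in T: fit slope and intercept.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / (2 * slope)
theta_fit = -intercept / slope
mu_fit = np.sqrt(8 * C_fit)
print(theta_fit, mu_fit)               # ~ -19.0 K and ~ 9.72 mu_B
```

On real data the demagnetization correction and the Van Vleck term shift these values slightly, which is why the paper restricts the fit to $T\geq 100$ K, where the correction is at the 1% level.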
The model was fitted to high temperature experimental data (100 K and up, where the demagnetization correction is of the order of 1 %) and linear regression analysis of $\chi^{-1}_{m,i}$ yielded the experimental values for $\theta_{i}$, $\mu_{i}$ and $B_{i}$. These are tabulated below in table 1, together with several values reported in the literature. In general, the experimentally obtained data compare (where possible) favorably with the various values reported in the literature (table 1). The extracted $\theta$ parameters for Dy${}_{2}$Ti${}_{2}$O${}_{7}$ and Ho${}_{2}$Ti${}_{2}$O${}_{7}$ deviate slightly from literature values, presumably due to the estimation of the demagnetization factor (fits to the present uncorrected data yield $\theta$ values of approximately 1 and 2 K, respectively). The experimentally determined paramagnetic moments obtained for Gd${}_{2}$Ti${}_{2}$O${}_{7}$ and Tb${}_{2}$Ti${}_{2}$O${}_{7}$ are also in excellent agreement with the corresponding free ion values, which are $\mu$ = 7.94 $\mu_{B}$ and $\mu$ = 9.72 $\mu_{B}$ for the Gd${}^{3+}$ (${}^{8}S_{7/2}$) and Tb${}^{3+}$ (${}^{7}F_{6}$) free ions, respectively. The large negative Curie-Weiss temperatures for Gd${}_{2}$Ti${}_{2}$O${}_{7}$ and Tb${}_{2}$Ti${}_{2}$O${}_{7}$ indicate antiferromagnetic exchange coupling. In contrast, the small, positive $\theta$ values for Ho${}_{2}$Ti${}_{2}$O${}_{7}$ and Dy${}_{2}$Ti${}_{2}$O${}_{7}$ initially led to the assumption of weak ferromagnetic exchange interactions between nearest neighbor Dy${}^{3+}$ and Ho${}^{3+}$ ions Bramwell et al. (2000). As stated above, however, since the Ho${}^{3+}$ and Dy${}^{3+}$ ions have a large magnetic moment (free ion values are $\mu$ = 10.607 $\mu_{B}$ (${}^{5}I_{8}$ ground state) and $\mu$ = 10.646 $\mu_{B}$ (${}^{6}H_{15/2}$ ground state), respectively), the dipolar interactions between neighboring R${}^{3+}$ ions dominate the effective nearest neighbor (n.n.) interactions. The n.n. 
exchange interactions are in fact antiferromagnetic Melko and Gingras (2004), while the dominant dipolar n.n. interactions are of ferromagnetic nature den Hertog and Gingras (2000); Bramwell and Gingras (2001). Consequently, the effective n.n. interactions are slightly ferromagnetic, resulting in the positive $\theta$ values. Another consequence of the dominance of the dipolar interactions is that extracting the real values of $\mu$ and $B$ from the inverse susceptibility curves becomes non-trivial, since more elaborate models taking the dipolar interaction into account are needed. III.2 Raman Spectroscopy III.2.1 Room temperature spectra To determine the Raman active vibrations of the single crystals, group theory analysis was employed. This predicts that, for the cubic rare earth titanate structure (R${}_{2}$Ti${}_{2}$O${}_{7}$) of space group Fd$\bar{3}$m (O${}_{h}^{7}$), the sublattices of the unit cell span the following irreducible representations: $$\begin{array}[]{rlcl}16(c)\text{-site}:&\mathrm{Ti}^{4+}\text{-sublattice}&=&A_{2u}+E_{u}+2F_{1u}+F_{2u}\\ 16(d)\text{-site}:&\mathrm{R}^{3+}\text{-sublattice}&=&A_{2u}+E_{u}+2F_{1u}+F_{2u}\\ 48(f)\text{-site}:&\mathrm{O}(1)\text{-sublattice}&=&A_{1g}+E_{g}+2F_{1g}+3F_{2g}+\\ &&&A_{2u}+E_{u}+3F_{1u}+2F_{2u}\\ 8(a)\text{-site}:&\mathrm{O}(2)\text{-sublattice}&=&F_{1u}+F_{2g}\end{array}$$ This yields the following decomposition into zone center normal modes (excluding the F${}_{1u}$ acoustic mode): $$\Gamma=A_{1g}+E_{g}+2F_{1g}+4F_{2g}+3A_{2u}+3E_{u}+7F_{1u}+4F_{2u}$$ Of these normal modes, only the $A_{1g}$, $E_{g}$ and the 4 $F_{2g}$ modes are Raman active. The 7 $F_{1u}$ modes are infrared active and the remaining modes are optically inactive. The symmetry coordinates for the optically active normal modes are given by Gupta et al. Gupta et al. (2001a). 
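As a bookkeeping check (an illustration only, not part of the paper), summing the site-symmetry contributions listed above and removing one acoustic $F_{1u}$ branch reproduces the total zone-center decomposition and the count of six Raman active modes:

```python
# Sum the per-sublattice irreducible representations of R2Ti2O7 and verify
# the total decomposition Gamma quoted in the text (sanity check).
from collections import Counter

site_reps = {
    "Ti(16c)": {"A2u": 1, "Eu": 1, "F1u": 2, "F2u": 1},
    "R(16d)":  {"A2u": 1, "Eu": 1, "F1u": 2, "F2u": 1},
    "O1(48f)": {"A1g": 1, "Eg": 1, "F1g": 2, "F2g": 3,
                "A2u": 1, "Eu": 1, "F1u": 3, "F2u": 2},
    "O2(8a)":  {"F1u": 1, "F2g": 1},
}

total = Counter()
for rep in site_reps.values():
    total.update(rep)
total["F1u"] -= 1            # remove the acoustic F1u branch

print(dict(total))
# A1g + Eg + 2 F1g + 4 F2g + 3 A2u + 3 Eu + 7 F1u + 4 F2u
raman_active = total["A1g"] + total["Eg"] + total["F2g"]
print(raman_active)          # 6 Raman active modes (A1g + Eg + 4 F2g)
```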
Based on the symmetries of the Raman active modes, the A${}_{1g}$ and E${}_{g}$ modes are expected to be observed in parallel polarization ($\|$) spectra, while the F${}_{2g}$ modes are expected in the perpendicular polarization ($\bot$) spectra. Surprisingly, the room temperature spectra of R${}_{2}$Ti${}_{2}$O${}_{7}$ show all the Raman active modes in both polarizations. Angle dependent Raman measurements (varying the angle between the incoming light polarization and the b-axis from $-45^{\circ}$ to $45^{\circ}$) reveal no orientation in which the theoretical selection rules are fully obeyed. Polycrystallinity and/or crystalline disorder is, however, not believed to be the cause, since other techniques (X-ray Laue diffraction, magnetic measurements) reveal no such signs. Also, to the best of our knowledge, no Raman spectrum of any of these rare earth titanates fulfilling their phononic selection rules has been published. Although no full compliance with the phononic selection rules was observed, the nature of the Raman active modes was clearly identified through their angular dependence. Their assignment is indicated in fig. 2 as well as in table 2. The figure depicts the room temperature (RT), parallel polarization spectra of the rare earth titanates and the corresponding modes. The assignment of the mode symmetries is based on comparison with assignments in previous works on these Saha et al. (2006); Zhang and Saxena (2005); Mori et al. (2003); Hess et al. (2002); Gupta et al. (2001a); Vandenborre and Husson (1983) and related Brown et al. (2003); Glerup et al. (2001); Gupta et al. (2001b, 2002); Vandenborre and Husson (1983); Vandenborre et al. (1983) compounds (see table 2), on the aforementioned measurements (not shown) of the angular dependence of mode intensities and on the temperature dependence of the modes (vide infra). 
Comparison between the RT Raman spectra yields the conclusion that the nature of the R${}^{3+}$ ion has only slight influence on the Raman active vibrational modes. This is not surprising, since in all the Raman active modes only the oxygen atoms are displaced Saha et al. (2006); Gupta et al. (2001a). Consequently, there is no obvious systematic variation of the phonon frequencies with the mass of the respective rare earth ions, as has been noted before for the rare earth titanates Gupta et al. (2001a), hafnates Gupta et al. (2002), manganates Brown et al. (2003) and stannates Gupta et al. (2001b). The assignment of the R${}_{2}$Ti${}_{2}$O${}_{7}$ modes in the literature has been mostly consistent (see Table 2), yet there are a few debated details. There is general agreement on the nature of the modes at $\simeq$ 210 cm${}^{-1}$ (F${}_{2g}$, O(2)-sublattice mode Saha et al. (2006)), $\simeq$ 519 cm${}^{-1}$ (A${}_{1g}$, R-O stretching mode Mori et al. (2003); Zhang and Saxena (2005)) and $\simeq$ 556 cm${}^{-1}$ (F${}_{2g}$, O(1)-sublattice mode Saha et al. (2006)). Temperature and angle dependent Raman measurements show the band around $\simeq$ 315 cm${}^{-1}$ to consist of two modes, an F${}_{2g}$ mode around 310 cm${}^{-1}$ (O-R-O bending mode Mori et al. (2003); Zhang and Saxena (2005)) and an E${}_{g}$ mode around 327 cm${}^{-1}$ (O(1)-sublattice mode Saha et al. (2006)), as recognized by Saha et al. (2006) and Vandenborre et al. (1983). Earlier works either interchanged the mode assignment within this band Zhang and Saxena (2005); Gupta et al. (2001a) or ascribed the whole band to only one of these modes Mori et al. (2003); Hess et al. (2002). However, our temperature and angle dependent measurements confirm the assignment made by Saha et al. The last expected phonon, an F${}_{2g}$ mode, has been either not accounted for Gupta et al. (2001a), combined with the A${}_{1g}$ mode in one band Saha et al. 
(2006); Vandenborre and Husson (1983) or ascribed to low intensity peaks around 105 Mori et al. (2003), 450 Zhang and Saxena (2005) or 680 Hess et al. (2002) cm${}^{-1}$ in previous works. Here, it is ascribed to a broad, low intensity mode around $\simeq$ 260 cm${}^{-1}$. This mode is not clearly resolved in the RT spectra because it overlaps largely with the neighboring, strong F${}_{2g}$ modes at $\simeq$ 210 cm${}^{-1}$ and $\simeq$ 309 cm${}^{-1}$. Fitting with those two peaks only, however, does not adequately reproduce the experimental spectral shape in the 200-300 cm${}^{-1}$ window. Additionally, as the temperature is lowered, the phonon modes sharpen and the existence of this excitation becomes obvious in the spectra. Worth noting are also the two anomalous modes in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ at $\simeq$ 303 (F${}_{2g}$) and $\simeq$ 313 cm${}^{-1}$ (E${}_{g}$), which have lower frequencies and wider lineshapes compared to their counterparts in the other, isostructural rare earth titanates. Next to the expected Raman active vibrations, the spectra in this work show some very weak scattering intensity at low wavenumbers (first two rows in table 2), which has been reported before Vandenborre and Husson (1983); Mori et al. (2003). Vandenborre et al. Vandenborre and Husson (1983) were unable to account for this intensity in their calculations, while Mori et al. (2003) offer the plausible assignment to trace R${}_{2}$O${}_{3}$ in the system. The latter assignment is also tentatively adopted here. Mori et al. also suggested the ’missing’ $F_{2g}$ mode may be responsible for some of this low wavenumber intensity. Additionally, a weak mode is observed at 450 cm${}^{-1}$ in most R${}_{2}$Ti${}_{2}$O${}_{7}$ compounds, as was also seen before. Zhang et al. Zhang and Saxena (2005) and Hess et al. (2002) ascribed the ’missing’ $F_{2g}$ mode to this feature. Alternatively, Mori et al. 
(2003) interpreted it as being due to trace amounts of the starting compound TiO${}_{2}$, which is known Porto et al. (1967) to have a phonon at 447 cm${}^{-1}$. The true origin of this mode is at present unclear. Finally, there is some weak scattering intensity at higher wavenumbers, around 700 cm${}^{-1}$ (last two rows of table 2). This intensity has been observed before Mori et al. (2003); Vandenborre and Husson (1983); Saha et al. (2006); Hess et al. (2002); Zhang and Saxena (2005) and is ascribed to forbidden IR modes made active by slight, local non-stoichiometry in the system Mori et al. (2003); Saha et al. (2006). III.2.2 Temperature dependence Raman spectra of the R${}_{2}$Ti${}_{2}$O${}_{7}$ crystals were recorded at temperatures ranging from RT to 4 K. Figure 3 depicts the evolution of both the parallel ($\|$) and perpendicular ($\bot$) polarization Raman spectra of the R${}_{2}$Ti${}_{2}$O${}_{7}$ crystals with decreasing temperature. Going down in temperature, several spectral changes occur in the Raman spectra. The evolution of the phononic excitations with temperature is very similar in the different R${}_{2}$Ti${}_{2}$O${}_{7}$ lattices. Again, this is not surprising, since only oxygen atoms are displaced in the Raman active phonons Saha et al. (2006); Gupta et al. (2001a). Firstly, the lowest frequency phonon ($F_{2g}$, $\omega$ $\simeq$ 210 cm${}^{-1}$ at RT) shows a strong softening ($\omega\simeq$ 170 cm${}^{-1}$ at 4 K) and sharpening with decreasing temperature, revealing the previously unresolved $F_{2g}$ mode ($\omega_{RT}$ $\simeq$ 260 cm${}^{-1}$), which also softens ($\omega_{4K}$ $\simeq$ 190 cm${}^{-1}$), but does not show a strong narrowing. Secondly, the sharpening of both the $\simeq$ 309 cm${}^{-1}$ F${}_{2g}$ and the $\simeq$ 324 cm${}^{-1}$ E${}_{g}$ mode decreases their spectral overlap, clearly justifying the two-mode interpretation of the $\simeq$ 315 cm${}^{-1}$ band at RT. 
Additionally, both modes show a slight softening upon cooling. Also the A${}_{1g}$ phonon ($\omega_{RT}$ $\simeq$ 519 cm${}^{-1}$) shows the familiar softening ($\omega_{4K}$ $\simeq$ 511 cm${}^{-1}$) and sharpening trend on cooling. Finally, due to its large width and low intensity, the temperature evolution of the highest frequency F${}_{2g}$ phonon proves rather difficult to describe, though it seems to soften slightly. Comparison of the R${}_{2}$Ti${}_{2}$O${}_{7}$ spectra in fig. 3 yields the observation that the anomalous phonons (at $\simeq$ 303 (F${}_{2g}$) and $\simeq$ 313 cm${}^{-1}$ (E${}_{g}$)) in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ remain wide throughout the temperature range, in contrast to the corresponding modes in the other titanates. Additionally, these modes shift in opposite directions in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ only: the F${}_{2g}$ mode softens ($\omega_{4K}$ $\simeq$ 295 cm${}^{-1}$) while the E${}_{g}$ mode considerably hardens ($\omega_{4K}$ $\simeq$ 335 cm${}^{-1}$). An explanation for this anomalous behavior could be coupling of these phonons to low frequency crystal field excitations of the Tb${}^{3+}$ ions (vide infra). III.2.3 Crystal field excitations Aside from the phonons in the Raman spectra of R${}_{2}$Ti${}_{2}$O${}_{7}$, several spectra also show crystal field (CF) excitations of the R${}^{3+}$ ions at low temperatures, as shown in fig. 4. The CF level splitting in the different rare earth ions depends on their electronic configuration and their local surroundings. In the R${}_{2}$Ti${}_{2}$O${}_{7}$ family, the simplest case is that of the Gd${}^{3+}$ ion (4$f^{7}$), which has a spin-only ${}^{8}S_{7/2}$ ($L=0$) ground state, resulting in the absence of a level splitting due to the local crystal field. Consequently, the Raman spectrum of Gd${}_{2}$Ti${}_{2}$O${}_{7}$ shows no CF excitations, making it a suitable ’template’ of the R${}_{2}$Ti${}_{2}$O${}_{7}$ Raman spectrum with lattice excitations only. 
Combined with the strong correspondence of the phononic excitations in the R${}_{2}$Ti${}_{2}$O${}_{7}$ spectra, it allows for quick identification of CF modes in the other compounds. More complicated is the CF level splitting in the Tb${}^{3+}$ ion (4$f^{8}$), which has a ${}^{7}F_{6}$ ground state. Several studies calculating the crystal field for the Tb${}^{3+}$ ion in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ Gingras et al. (2000); Mirebeau et al. (2007); Gardner et al. (2001); Malkin et al. (2004), based on inelastic neutron scattering results Gingras et al. (2000); Gardner et al. (2001); Gaulin et al. (1998); Kanada et al. (1999); Gardner et al. (1999); Mirebeau et al. (2007), yielded slightly differing energy level schemes for the lowest crystal field levels of Tb${}^{3+}$ in Tb${}_{2}$Ti${}_{2}$O${}_{7}$, as is schematically depicted in fig. 5. Here, solid lines depict CF levels observed experimentally, while dotted lines indicate CF levels obtained through CF calculations. Shown in fig. 6, which is a zoom-in on the low-wavenumber region of the 4 K perpendicular polarization spectrum of Tb${}_{2}$Ti${}_{2}$O${}_{7}$, are the CF excitations that are observed using inelastic light scattering. As is also clear from fig. 5, all CF excitations previously observed using inelastic neutron scattering are also observed here. Furthermore, additional low-lying excitations can be seen, which are easily identified as CF levels by comparison with the ’lattice-only template’ spectrum of Gd${}_{2}$Ti${}_{2}$O${}_{7}$. The excitations from the crystal field ground state to the higher crystal field levels, of approximate calculated values 13, 60, 83 and 118 cm${}^{-1}$ (see fig. 5), are all observed at very similar frequencies, at 12.9, 60.8, 83.1 and 119.2 cm${}^{-1}$, respectively. The 60.8 cm${}^{-1}$ level has not been observed experimentally before, though it did follow from the CF calculation made by Gingras et al. (2000). 
Conversely, the $\simeq$ 135.2 cm${}^{-1}$ mode has been observed through inelastic neutron scattering, yet has not been accounted for in CF calculations. Although Gardner et al. (2001) interpreted it as an optical phonon, this excitation is clearly identified as a CF excitation here, through the isostructural comparison with the other titanates. The latter assignment is also made by Mirebeau et al. (2007). Additionally, new CF excitations are observed at 7.7 and 47.2 cm${}^{-1}$. While the former is recognized as a CF excitation from the CF ground state to a new CF energy level, the 47.2 cm${}^{-1}$ mode could also be interpreted as an excitation from the excited CF level at 12.9 cm${}^{-1}$ to the higher excited CF level at 60.8 cm${}^{-1}$. While such an excitation is Raman active (see below), its occurrence at 4 K seems unlikely, since at this temperature the excited CF level (12.9 cm${}^{-1}$ $\simeq$ 19 K) is not expected to be populated enough to give rise to a measurable Raman signal. Additionally, were it an excitation from an excited state, its intensity would decrease upon cooling. Instead, the intensity of this mode steadily increases with decreasing temperature. Using the $E_{g}$ irreducible representation for the ground state and the $E_{g}$, $A_{2g}$ and $A_{1g}$ representations for the excited CF levels Gingras et al. (2000); Malkin et al. (2004), it can be confirmed that indeed all these ground state to excited state transitions are expected to be Raman active. Through analogous considerations, the lowest excitation ($E_{g}$ $\rightarrow$ $E_{g}$, 13 cm${}^{-1}$) is found to be symmetry forbidden in a direct dipole transition. In this respect, terahertz time domain spectroscopy (THz TDS) in the range of 10 to 50 cm${}^{-1}$ (0.3 to 1.5 THz) was employed to probe the absorption of Tb${}_{2}$Ti${}_{2}$O${}_{7}$ at various temperatures. 
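The population argument above can be made quantitative with the standard unit conversions ($1$ cm$^{-1}$ $= hc/k_{B} \approx 1.4388$ K, $1$ THz $\approx 33.356$ cm$^{-1}$). The snippet below (an illustrative check, not from the paper) shows that the 12.9 cm$^{-1}$ level is indeed essentially depopulated at 4 K:

```python
# Check of the thermal-population argument: at 4 K a CF level at
# 12.9 cm^-1 (~19 K) is barely occupied, so hot transitions out of it
# should not produce measurable Raman intensity.
from math import exp

CM_TO_K = 1.4388           # 1 cm^-1 = hc/k_B = 1.4388 K
CM_TO_THZ = 1 / 33.356     # 1 THz = 33.356 cm^-1

E_cm = 12.9                # first excited CF level of Tb3+
print(E_cm * CM_TO_K)      # ~18.6 K, the "~19 K" quoted in the text
print(15 * CM_TO_THZ)      # ~0.45 THz, the THz TDS absorption frequency

# Two-level Boltzmann occupation of the excited doublet at T = 4 K:
T = 4.0
boltz = exp(-E_cm * CM_TO_K / T)
p_excited = boltz / (1 + boltz)
print(p_excited)           # ~0.01, i.e. only about 1% occupation
```

By the same estimate the occupation reaches tens of percent above ~20 K, consistent with the mode growing, not shrinking, on cooling being decisive evidence against the hot-transition interpretation.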
Using this technique it is possible to extract complex optical quantities in the THz range through the direct measurement of the time trace of the transmitted THz pulse. The obtained curves for the real ($\epsilon_{1}(\nu)$) and imaginary ($\epsilon_{2}(\nu)$) parts of the dielectric constant at various temperatures are plotted in fig. 7. The plots clearly show an absorption around 0.45 THz (15 cm${}^{-1}$) at low temperatures, corresponding to the 13 cm${}^{-1}$ CF level also observed in the Raman spectra of Tb${}_{2}$Ti${}_{2}$O${}_{7}$. This is corroborated by the identical temperature dependence of the mode in both methods: its spectral signature decreases with increasing temperature, vanishing around 90 K. The fact that the 13 cm${}^{-1}$ CF level is observed in a direct dipole transition indicates that this level is not a true $E_{g}$ $\rightarrow$ $E_{g}$ transition, since such a dipole transition would be symmetry forbidden. Overall, the existence of the additional CF levels in Tb${}_{2}$Ti${}_{2}$O${}_{7}$, unaccounted for by CF calculations, suggests the presence of a second Tb${}^{3+}$ site in the structure, with an energy level scheme different from those reported previously. Three additional CF levels are observed experimentally, while four would naively be expected for a slightly differing Tb site in this low wavenumber region. A fourth new CF level might be unresolved due to the strong CF level at 83 cm${}^{-1}$ or might simply be symmetry forbidden in a Raman transition. Moreover, the fact that the $\sim$ 13 cm${}^{-1}$ CF level is simultaneously Raman and dipole active indicates the breaking of inversion symmetry in the system, which calls the validity of its current symmetry interpretation into question. Recently, the exact symmetry of the Tb${}_{2}$Ti${}_{2}$O${}_{7}$ lattice has been extensively studied. Han et al. 
(2004) performed neutron powder diffraction and x-ray absorption fine-structure experiments down to 4.5 K, revealing a perfect pyrochlore lattice, within experimental error. Ofer et al. (2007) find no static lattice distortions on the timescale of 0.1 $\mu$s, down to 70 mK. Most recently, however, Ruff et al. (2007) found finite structural correlations at temperatures below 20 K, indicative of fluctuations above a very low temperature structural transition. The present experimental results suggest these fluctuations may even induce a minute static disorder, resulting in the observed CF level diagram. In Dy${}_{2}$Ti${}_{2}$O${}_{7}$, the Dy${}^{3+}$ ions (4$f^{9}$) have a ${}^{6}H_{15/2}$ ground state. Crystal field calculations have been performed by Jana et al. (2002), who deduced an energy level scheme consisting of eight Kramers doublets, with a first excited state separation of $\simeq$ 100 cm${}^{-1}$. Malkin et al. (2004) and Rosenkranz et al. (2000b) estimated the first excited state gap to be $>$ 200 cm${}^{-1}$ and $\simeq$ 266 cm${}^{-1}$, respectively. The Raman spectrum of Dy${}_{2}$Ti${}_{2}$O${}_{7}$ shows only one extra excitation compared to the ’lattice template’ spectrum of Gd${}_{2}$Ti${}_{2}$O${}_{7}$, at an energy of $\sim 287$ cm${}^{-1}$. This excitation is tentatively ascribed to the first excited CF level, comparing most favorably to the estimate of Rosenkranz et al. For Ho${}_{2}$Ti${}_{2}$O${}_{7}$, ground state ${}^{5}I_{8}$ (Ho${}^{3+}$, 4$f^{10}$), several CF energy level schemes have been calculated Rosenkranz et al. (2000b); Malkin et al. (2004); Siddharthan et al. (1999); Jana and Ghosh (2000), all with first excited state separations around 150 cm${}^{-1}$. However, the Raman spectrum of Ho${}_{2}$Ti${}_{2}$O${}_{7}$ does not show any clear inelastic light scattering from CF levels to compare these calculations to. 
Although there is some weak intensity around $\simeq$ 150 cm${}^{-1}$, which compares favorably with all of the calculated level schemes, the intensity of this scattering is insufficient to definitively ascribe it to a CF level. IV Conclusions To summarize, several members of the rare earth titanates family R${}_{2}$Ti${}_{2}$O${}_{7}$ were studied using magnetic susceptibility measurements and polarized inelastic light scattering experiments. Lattice excitations were found to vary only slightly between crystals with different rare earth ions. Temperature dependent measurements also revealed completely analogous behavior of the phononic excitations, except for two anomalous phonons in Tb${}_{2}$Ti${}_{2}$O${}_{7}$, which seem to be coupled to the low energy crystal field excitations in that compound. Such crystal field excitations were observed in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ and possibly also in Dy${}_{2}$Ti${}_{2}$O${}_{7}$. Only one non-phononic excitation is clearly observed in Dy${}_{2}$Ti${}_{2}$O${}_{7}$, with an energy consistent with estimates of the first excited crystal field level. For Tb${}_{2}$Ti${}_{2}$O${}_{7}$, all of the previously determined crystal field energy levels were confirmed. Moreover, the resulting energy level diagram was expanded by three newly observed CF levels, only one of which has been calculated before. Also, one previously reported level was found to be both Raman and dipole active, contradicting its currently presumed symmetry. These findings may reflect the existence of two inequivalent Tb sites in the low temperature structure, or a static symmetry reduction of another nature, suggesting that the recently found structural correlations induce a minute static disorder in Tb${}_{2}$Ti${}_{2}$O${}_{7}$ at very low temperatures. The new crystal field information may serve to help elucidate the complex theoretical enigma of Tb${}_{2}$Ti${}_{2}$O${}_{7}$. Acknowledgements. The authors would like to thank F.
van der Horst for his help using the PW 1710 diffractometer. We also acknowledge fruitful discussions with M. Mostovoi, D. Khomskii and S. Singh. This work is part of the research programme of the 'Stichting voor Fundamenteel Onderzoek der Materie (FOM)', which is financially supported by the 'Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO)'. References Ramirez (1994) A. P. Ramirez, Annu. Rev. Mater. Sci. 24, 453 (1994). Schiffer and Ramirez (1996) P. Schiffer and A. P. Ramirez, Comments Condens. Matter Phys. 18, 21 (1996). Diep (1994) H. T. Diep, ed., Magnetic Systems with Competing Interactions (World Scientific, Singapore, 1994). Greedan (2006) J. E. Greedan, J. Alloys Compd. 408-412, 444 (2006). Greedan (2001) J. E. Greedan, J. Mater. Chem. 11, 37 (2001). Collins and Petrenko (1997) M. F. Collins and O. A. Petrenko, Can. J. Phys. 75, 605 (1997). Moessner (2001) R. Moessner, Can. J. Phys. 79, 1283 (2001). Villain (1979) J. Villain, Z. Phys. B 33, 31 (1979). Reimers et al. (1991) J. N. Reimers, A. J. Berlinsky, and A. C. Shi, Phys. Rev. B 43, 865 (1991). Moessner and Chalker (1998) R. Moessner and J. T. Chalker, Phys. Rev. Lett. 80, 2929 (1998). Reimers (1992) J. N. Reimers, Phys. Rev. B 45, 7287 (1992). Canals and Lacroix (1998) B. Canals and C. Lacroix, Phys. Rev. Lett. 80, 2933 (1998). Raju et al. (1999) N. P. Raju, M. Dion, M. J. P. Gingras, T. E. Mason, and J. E. Greedan, Phys. Rev. B 59, 14489 (1999). Cashion et al. (1968) J. D. Cashion, A. H. Cooke, M. J. M. Leask, T. L. Thorp, and M. R. Wells, J. Mater. Sci. 3, 402 (1968). Ramirez et al. (2002) A. P. Ramirez, B. S. Shastry, A. Hayashi, J. J. Krajewski, D. A. Huse, and R. J. Cava, Phys. Rev. Lett. 89, 067202 (2002). Champion et al. (2001) J. D. M. Champion, A. S. Wills, T. Fennell, S. T. Bramwell, J. S. Gardner, and M. A. Green, Phys. Rev. B 64, 140407(R) (2001). Stewart et al. (2004) J. R. Stewart, G. Ehlers, A. S. Wills, S. T. Bramwell, and J. S. Gardner, J. Phys.: Condens.
Matter 16, L321 (2004). Yaouanc et al. (2005) A. Yaouanc, P. Dalmas de Réotier, V. Glazkov, C. Marin, P. Bonville, J. A. Hodges, P. C. M. Gubbens, S. Sakarya, and C. Baines, Phys. Rev. Lett. 95, 047203 (2005). Dunsiger et al. (2006) S. R. Dunsiger, R. F. Kiefl, J. A. Chakhalian, J. E. Greedan, W. A. MacFarlane, R. I. Miller, G. D. Morris, A. N. Price, N. P. Raju, and J. E. Sonier, Phys. Rev. B 73, 172418 (2006). Palmer and Chalker (2000) S. E. Palmer and J. T. Chalker, Phys. Rev. B 62, 488 (2000). Gardner et al. (1999) J. S. Gardner, S. R. Dunsiger, B. D. Gaulin, M. J. P. Gingras, J. E. Greedan, R. F. Kiefl, M. D. Lumsden, W. A. MacFarlane, N. P. Raju, J. E. Sonier, et al., Phys. Rev. Lett. 82, 1012 (1999). Gingras et al. (2000) M. J. P. Gingras, B. C. den Hertog, M. Faucher, J. S. Gardner, S. R. Dunsiger, L. J. Chang, B. D. Gaulin, N. P. Raju, and J. E. Greedan, Phys. Rev. B 62, 6496 (2000). Gardner et al. (2003) J. S. Gardner, A. Keren, G. Ehlers, C. Stock, E. Segal, J. M. Roper, B. Fåk, M. B. Stone, P. R. Hammar, D. H. Reich, et al., Phys. Rev. B 68, 180401(R) (2003). Rosenkranz et al. (2000a) S. Rosenkranz, A. P. Ramirez, A. Hayashi, R. J. Cava, R. Siddharthan, and B. S. Shastry, J. Appl. Phys. 87, 5914 (2000a). Mirebeau et al. (2007) I. Mirebeau, P. Bonville, and M. Hennion, Phys. Rev. B 76, 184436 (2007). Malkin et al. (2004) B. Z. Malkin, A. R. Zakirov, M. N. Popova, S. A. Klimin, E. P. Chukalina, E. Antic-Fidancev, P. Goldner, P. Aschehoug, and G. Dhalenne, Phys. Rev. B 70, 075112 (2004). Gardner et al. (2001) J. S. Gardner, B. D. Gaulin, A. J. Berlinsky, P. Waldron, S. R. Dunsiger, N. P. Raju, and J. E. Greedan, Phys. Rev. B 64, 224416 (2001). den Hertog and Gingras (2000) B. C. den Hertog and M. J. P. Gingras, Phys. Rev. Lett. 84, 3430 (2000). Kao et al. (2003) Y. J. Kao, M. Enjalran, A. Del Maestro, H. R. Molavian, and M. J. P. Gingras, Phys. Rev. B 68, 172407 (2003). 
(30) The ground state configuration of the spin ice model is formed by satisfying conditions analogous to Pauling’s ’ice rules’: every tetrahedron has both two spins pointing into and two spins pointing out of the tetrahedron. This (under)constraint on the system again results in a macroscopically large number of degenerate ground states. Molavian et al. (2007) H. R. Molavian, M. J. P. Gingras, and B. Canals, Phys. Rev. Lett. 98, 157204 (2007). Enjalran et al. (2004) M. Enjalran, M. J. P. Gingras, Y. J. Kao, A. del Maestro, and H. R. Molavian, J. Phys.: Condens. Matter 16, S673 (2004). Ehlers et al. (2004) G. Ehlers, A. L. Cornelius, T. Fennell, M. Koza, S. T. Bramwell, and J. S. Gardner, Physica B 385-386, 307 (2004). Curnoe (2007) S. H. Curnoe, Phys. Rev. B 75, 212404 (2007). Ramirez et al. (1999) A. P. Ramirez, A. Hayashi, R. J. Cava, R. Siddharthan, and B. S. Shastry, Nature 399, 333 (1999). Snyder et al. (2004) J. Snyder, B. G. Ueland, J. S. Slusky, H. Karunadasa, R. J. Cava, and P. Schiffer, Phys. Rev. B 69, 064414 (2004). Ramirez et al. (2000) A. P. Ramirez, C. L. Broholm, R. J. Cava, and G. R. Kowach, Physica B 280, 290 (2000). Harris et al. (1997) M. J. Harris, S. T. Bramwell, D. F. McMorrow, T. Zeiske, and K. W. Godfrey, Phys. Rev. Lett. 79, 2554 (1997). Bramwell et al. (2001) S. T. Bramwell, M. J. Harris, B. C. den Hertog, M. J. P. Gingras, J. S. Gardner, D. F. McMorrow, A. R. Wildes, A. L. Cornelius, J. D. M. Champion, R. G. Melko, et al., Phys. Rev. Lett. 87, 047205 (2001). Cornelius and Gardner (2001) A. L. Cornelius and J. S. Gardner, Phys. Rev. B 64, 060406(R) (2001). Petrenko et al. (2003) O. A. Petrenko, M. R. Lees, and G. Balakrishnan, Phys. Rev. B 68, 012406 (2003). Bramwell et al. (2000) S. T. Bramwell, M. N. Field, M. J. Harris, and I. P. Parkin, J. Phys.: Condens. Matter 12, 483 (2000). Melko and Gingras (2004) R. G. Melko and M. J. P. Gingras, J. Phys.: Condens. Matter 16, R1277 (2004). Bramwell and Gingras (2001) S. T. Bramwell and M. J. P. 
Gingras, Science 294, 1495 (2001). Bramwell and Harris (1998) S. T. Bramwell and M. J. Harris, J. Phys.: Condens. Matter 10, L215 (1998). Anderson (1956) P. W. Anderson, Phys. Rev. 102, 1008 (1956). Pauling (1935) L. Pauling, J. Am. Chem. Soc. 57, 2680 (1935). Isakov et al. (2005) S. V. Isakov, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. 95, 217201 (2005). Fukazawa et al. (2002) H. Fukazawa, R. G. Melko, R. Higashinaka, Y. Maeno, and M. J. P. Gingras, Phys. Rev. B 65, 054410 (2002). Fennell et al. (2002) T. Fennell, O. A. Petrenko, G. Balakrishnan, S. T. Bramwell, J. D. M. Champion, B. Fåk, M. J. Harris, and D. M. Paul, Appl. Phys. A 74, S889 (2002). Harris et al. (1998) M. J. Harris, S. T. Bramwell, T. Zeiske, D. F. McMorrow, and P. J. C. King, J. Magn. Magn. Mater. 177, 757 (1998). Mirebeau et al. (2006) I. Mirebeau, A. Apetrei, I. N. Goncharenko, and R. Moessner, J. Phys.: Condens. Matter 16, S635 (2006). Gardner et al. (1998) J. S. Gardner, B. D. Gaulin, and D. M. Paul, J. Crystal Growth 191, 740 (1998), and references therein. Schmuttenmaer (2004) C. A. Schmuttenmaer, Chem. Rev. 104, 1759 (2004). Beard et al. (2000) M. C. Beard, G. M. Turner, and C. A. Schmuttenmaer, Phys. Rev. B 62, 15764 (2000). Sato and Ishii (1989) M. Sato and Y. Ishii, J. Appl. Phys. 66, 983 (1989). Gupta et al. (2001a) H. C. Gupta, S. Brown, N. Rani, and V. B. Gohel, J. Raman Spectrosc. 32, 41 (2001a). Saha et al. (2006) S. Saha, D. V. S. Muthu, C. Pascanut, N. Dragoe, R. Suryanarayanan, G. Dhalenne, A. Revcolevschi, S. Karmakar, S. M. Sharma, and A. K. Sood, Phys. Rev. B 74, 064109 (2006). Zhang and Saxena (2005) F. X. Zhang and S. K. Saxena, Chem. Phys. Lett. 413, 248 (2005). Mori et al. (2003) M. Mori, G. M. Tompsett, N. M. Sammes, E. Suda, and Y. Takeda, Solid State Ionics 158, 79 (2003), and references therein. Hess et al. (2002) N. J. Hess, B. D. Begg, S. D. Conradson, D. E. McCready, P. L. Gassman, and W. J. Weber, J. Phys. Chem. B 106, 4663 (2002).
Vandenborre and Husson (1983) M. T. Vandenborre and E. Husson, J. Solid State Chem. 50, 362 (1983). Brown et al. (2003) S. Brown, H. C. Gupta, J. A. Alonso, and M. J. Martinez-Lope, J. Raman Spectrosc. 34, 240 (2003). Glerup et al. (2001) M. Glerup, O. F. Nielsen, and W. F. Poulsen, J. Solid State Chem. 160, 25 (2001). Gupta et al. (2001b) H. C. Gupta, S. Brown, N. Rani, and V. B. Gohel, Int. J. Inorg. Mater. 3, 983 (2001b). Gupta et al. (2002) H. C. Gupta, S. Brown, N. Rani, and V. B. Gohel, J. Phys. Chem. Solids 63, 535 (2002). Vandenborre et al. (1983) M. T. Vandenborre, E. Husson, J. P. Chatry, and D. Michel, J. Raman Spectrosc. 14, 63 (1983). Porto et al. (1967) S. P. S. Porto, P. A. Fleury, and T. C. Damen, Phys. Rev. 154, 522 (1967). Gaulin et al. (1998) B. D. Gaulin, J. S. Gardner, S. R. Dunsiger, Z. Tun, M. D. Lumsden, R. F. Kiefl, N. P. Raju, J. N. Reimers, and J. E. Greedan, Physica B 241-243, 511 (1998). Kanada et al. (1999) M. Kanada, Y. Yasui, M. Ito, H. Harashina, M. Sato, H. Okumura, and K. Kakurai, J. Phys. Soc. Japan 68, 3802 (1999). Han et al. (2004) S. W. Han, J. S. Gardner, and C. H. Booth, Phys. Rev. B 69, 024416 (2004). Ofer et al. (2007) O. Ofer, A. Keren, and C. Baines, J. Phys.: Condens. Matter 19, 145270 (2007). Ruff et al. (2007) J. P. C. Ruff, B. D. Gaulin, J. P. Castellan, K. C. Rule, J. P. Clancy, J. Rodriguez, and H. A. Dabkowska, Phys. Rev. Lett. 99, 237202 (2007). Jana et al. (2002) Y. M. Jana, A. Sengupta, and D. Ghosh, J. Magn. Magn. Mater. 248, 7 (2002). Rosenkranz et al. (2000b) S. Rosenkranz, A. P. Ramirez, A. Hayashi, R. J. Cava, R. Siddharthan, and B. S. Shastry, J. Appl. Phys. 87, 5914 (2000b). Siddharthan et al. (1999) R. Siddharthan, B. S. Shastry, A. P. Ramirez, A. Hayashi, R. J. Cava, and S. Rosenkranz, Phys. Rev. Lett. 83, 1854 (1999). Jana and Ghosh (2000) Y. M. Jana and D. Ghosh, Phys. Rev. B 61, 9657 (2000).
Robotic Telescopes, Student Research and Education (RTSRE) Proceedings. Conference Proceedings, San Diego, California, USA, Jun 18-21, 2017. Fitzgerald, M., James, C.R., Buxner, S., Eds. Vol. 1, No. 1 (2018). ISSN XXXX-XXXX (online) / doi: doidoidoidoi / CC BY-NC-ND license. Peer Reviewed Article. rtsre.org/ojs A Robotic Telescope For University-Level Distance Teaching Kolb U.1*, Brodeur M.1, Braithwaite NStJ.1, and Minocha S.2 Keywords: robotic telescopes; practical science; group working; distance teaching Abstract: We present aspects of the deployment of a remotely operable telescope for teaching practical science to distance learning undergraduate students. We briefly describe the technical realization of the facility, PIRATE, in Mallorca and elaborate on how it is embedded in the Open University curriculum. The PIRATE teaching activities were studied as part of a wider research project into the importance of realism, sociability and meta-functionality for the effectiveness of virtual and remote laboratories in teaching practical science. We find that students accept virtual experiments (e.g. a telescope simulator) when they deliver genuine, "messy" data, clarify how they differ from a realistic portrayal, and are flagged as training tools. A robotic telescope is accepted in place of on-site practical work when realistic activities are included, the internet connection is stable, and when there is at least one live video feed. The robotic telescope activity should include group work and facilitate social modes of learning. Virtual experiments, though normally considered as asynchronous tools, should also include social interaction. To improve student engagement and learning outcomes, tools providing greater situational awareness of the robotic telescope setting should be devised.
We conclude this report with a short account of the current status of PIRATE after its relocation from Mallorca to Tenerife, and its integration into the OpenScience Observatories. Introduction The Open University has developed its own small-aperture robotic telescope facilities to cater for Open University (OU) distance learning undergraduate and postgraduate students. The OU is the UK's largest provider of part-time distance learning qualifications and derives its name from the practice of open access to any module regardless of prior education. Clear advice and guidance on the hierarchical nature of the Science disciplines, particularly in the Physical Sciences, and on recommended study pathways is given, but there is no formal barrier to enrollment. The student cohort is diverse in background and age, with a median age between 30 and 40 years, and most students holding down a full-time or part-time job. Studying individual modules as a one-off is possible, but students aspiring to a degree in the astronomy domain can opt for a BSc (Hons) entitled "Natural Sciences (Astronomy and Planetary Sciences)" which includes online practical activities at Stages 2 and 3 (OU Stages 2 and 3 are roughly equivalent to years 2 and 3 at a traditional university with full-time students). The strict scheduling needs of time-limited observing activities within the curriculum for student cohorts of order 100, and the specific need to cater for OU students, made it desirable to establish an OU-owned and administered facility rather than buying in to existing robotic telescope networks, which come with their own set of constraints and limitations. To this end we initiated a pilot project in 2008, the Physics Innovation Robotic Telescope Explorer (PIRATE), funded by the then Physics Innovation Centre of Excellence in Teaching and Learning (piCETL), one of a number of CETLs created by the Higher Education Funding Council for England (HEFCE).
The initial configuration (Lucas and Kolb (2011); Kolb et al. (2010)) consisted of a Celestron-14 on a robotic German mount by Software Bisque, housed in a simple roll-off shed on the roof of the catering building of the Observatori Astronomic de Mallorca (OAM; N $39^{\circ}$ $38^{\prime}$ $34.31^{\prime\prime}$, E $2^{\circ}$ $57^{\prime}$ $3.34^{\prime\prime}$, 160 m above sea level), as seen in Figure 1 (left). The OAM is a teaching observatory that hosted, at the time, several week-long courses in observational astronomy per year for those OU students who were able to travel to Mallorca. PIRATE underwent upgrades over the years as described in Kolb (2014) (see also Holmes et al. (2011)), moving into a robotic 3.5m clam-shell Baader Planetarium dome at the top of the main observatory tower and replacing the Celestron-14 with a PlaneWave CDK17 17 inch astrograph (Figure 1, right). The study of the educational use of PIRATE, which forms the core of this report, relates to this consolidated Mallorca phase of the facility. Curriculum Use There are currently two modules in the OU's curriculum where optical robotic telescopes are deployed to support the teaching of practical science in general, and techniques in observational astronomy in particular. The 30 credit Stage 2 module SXPA288 (previously also SXP288) Practical science: physics and astronomy comprises four different 6-7 week long topics in the physical sciences, each with their distinct on-line or virtual practical activities. For context, full-time students would complete 120 credits per year, while most OU students study at a rate of 60 credits per year. The astronomy topic in SXPA288/SXP288 gives the students the choice between two projects. The first project deploys the OU's remotely operable 3m radio dish ARROW (A Robotic Radio telescope Over the Web) to map the distribution of neutral hydrogen in the Galaxy.
The second project, which is the focus of the subsequent discussion on how the learning activity was perceived by the students, constructs colour magnitude diagrams of star clusters, using PIRATE, to study the cluster distances and ages. In both projects the primary aim is the learning of basic experimental (observational) skills including planning and conducting an experiment, record keeping, awareness of the role of uncertainties, and report writing. This is facilitated by the undertaking of a real scientific observational investigation. The secondary aim is to teach basic methods and techniques in observational astronomy, such as, for the PIRATE strand, CCD image data reduction and aperture photometry. Whilst the data analysis part of either project can be achieved with archival data alone, the centerpiece of each strand is a series of online sessions where the students remotely control the hardware in real-time to acquire their own data for later analysis. The practical activity thus conveys the challenges of, in the case of PIRATE, night-sky imaging; it provides ownership of the process and hence a powerful motivation for learners. The 4 hr long PIRATE observing sessions are shared between an observer team of 4 or 5 students, both to increase the number of students with access to the telescope, and to share the workload and responsibilities during the session by peer support. The assessment of the activity, a short write-up presenting and interpreting the colour magnitude diagrams of three star clusters, is prepared by each student individually and contributes to the overall module grade roughly in proportion to the time spent on the activity. The 30 credit Stage 3 module S382 Astrophysics includes a 10 week long astrophysical data analysis project making up a third of the module. As for SXPA288 the students have a choice between two project flavors, the PIRATE strand and an archival data strand.
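The aperture photometry that underlies the colour magnitude diagrams can be illustrated with a minimal sketch: sum the counts in a circular aperture around a star, subtract the sky level estimated in a surrounding annulus, and convert to a magnitude. The function name, the fixed zero point and the simple median-sky estimate are assumptions for illustration, not the module's actual reduction pipeline.

```python
import numpy as np


def aperture_magnitude(image, x, y, r_ap=5.0, r_in=8.0, r_out=12.0, zp=25.0):
    """Simple single-star aperture photometry (illustrative sketch).

    Sums counts in a circular aperture of radius r_ap, subtracts the
    median sky level measured in the annulus r_in <= r <= r_out, and
    converts the net flux to an instrumental magnitude
    m = zp - 2.5*log10(flux). The zero point `zp` is a placeholder.
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x, yy - y)          # distance of each pixel from the star
    ap = r <= r_ap                        # aperture mask
    annulus = (r >= r_in) & (r <= r_out)  # sky annulus mask
    sky = np.median(image[annulus])       # robust sky estimate per pixel
    flux = image[ap].sum() - sky * ap.sum()
    return zp - 2.5 * np.log10(flux)
```

A colour magnitude diagram point for one star would then be, e.g., (m_B - m_V, m_V) from frames taken in two filters.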
The former involves the acquisition of photometric data of a periodic variable star coincident with an X-ray source, to classify or help improve the classification of the source, while the latter makes use of the Sloan Digital Sky Survey to construct and interpret a composite spectrum of a quasar. The S382 project is structured into weekly activities and involves groups of about 10 students who collaborate to achieve the project goal. Collaborative working is promoted by activities that explicitly require dividing tasks up between group members, and coordination within the group. To this end, in each study week, one group member takes on the role of project manager. The outcome of the project is a 3000 word collaborative project report, in the style of a scientific paper, written using a wiki. The communication during the project is mediated by synchronous meetings, mostly via Skype or Blackboard Collaborate, and by asynchronous forums and a wiki. The PIRATE project has three phases. The three week long induction phase culminates in a target selection exercise where the group compiles a shortlist of suitable sources from a catalog of periodic variable stars identified by SuperWASP that are coincident with a ROSAT source (Norton et al., 2007). The academic module lead considers all individual shortlists and approves the actual targets, often doubling up between groups so that in case of bad weather data acquired by a different group can supplement a group's own data. The 4 or 5 week long data acquisition phase typically sees 1 full night of observing per week for each of the groups. Literature research and data analysis are advanced in parallel. The review phase during the last 2-3 weeks of the project is devoted to writing up and editing the final wiki report. The grade on the project is determined from two components, with equal weight.
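The period search needed to phase up the photometry of a periodic variable star can be sketched with a crude string-length statistic: fold the light curve on each trial period, sort by phase, and sum the magnitude jumps between neighbouring points; the true period gives the smoothest folded curve. This is an illustrative stand-in for the standard tools students might use (e.g. Lomb-Scargle periodograms), and all names are hypothetical.

```python
import numpy as np


def best_period(t, mag, trial_periods):
    """Pick the trial period minimising the phase-folded string length.

    t             : observation times (days)
    mag           : magnitudes at those times
    trial_periods : candidate periods to test (days)
    """
    best_p, best_len = None, np.inf
    for p in trial_periods:
        phase = (t / p) % 1.0               # fold onto [0, 1)
        order = np.argsort(phase)           # sort points by phase
        length = np.abs(np.diff(mag[order])).sum()  # total "string length"
        if length < best_len:
            best_p, best_len = p, length
    return best_p
```

For a sinusoidal light curve the string length is minimal near the true period, with the familiar caveat that harmonics and aliases can produce secondary minima.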
The first component is the project report; this is common to all group members, but down-weighted for those (rare) cases of students who have not satisfactorily engaged with the collaborative aspects of the project. The second assessment component is a portfolio of weekly progress reports, prepared by each student separately, that provide specific evidence of how the student achieved the module learning outcomes for the project. Observing Sessions In both curriculum uses of PIRATE the observing sessions are set up as an interactive experience with real-time, online control, allowing a team of 2-5 student observers to conduct the session's observing program while also linked to each other via voice-only audio. The control system adopted for PIRATE in Mallorca was based on a Windows PC running the observatory control software ACP by DC-3 Dreams, with relevant auxiliary software, including MaxIm DL by Diffraction Limited to control the camera, FocusMax to administer the auto-focus procedure, and TheSkyX as the driver for the mount. The ACP web interface served as the student observer team's main communication portal with the telescope, displaying real-time status information and thumbnail previews of the latest images, while allowing the user to submit commands and to download their raw data frames as FITS files. All observer team members have simultaneous access to the control interface. Further external tools not linked into ACP (a webcam video and audio stream, a periodically updating all-sky camera, and weather information from a Boltwood cloud sensor) completed the diagnostic information available to the observer team. The S382 PIRATE project students also make use of a "virtual telescope", in the form of the PIRATE simulator, where the control software ACP is set up in its simulator mode. The user interacts with the simulator via the ACP web interface in the same way as the user of the real telescope.
The simulator pretends to go through the motions of slewing the telescope, focusing, tracking, image acquisition, guiding, image download, etc.; no hardware is actually involved. The image returned to the user is a mocked-up FITS file of a star field, generated on the basis of a star catalog and an assumed point spread function. The PIRATE simulator was set up behind a booking system offering 2 hour long training slots. Each student was expected to work through at least one training slot in good time before their real observing session, to familiarize themselves with the control interface and the routine activities they would encounter during the observing run. At the beginning of the real-time observing session the members of the observer team are assigned specific roles, such as telescope operator, log keeper and data analyst, to help organize their collaboration. Meeting fellow students online and speaking to them via the computer has been a novel experience for many SXPA288 and S382 students. The traditional OU distance learner at the time, and in many cases still now, would have been used to exchanging messages on a forum, emailing and perhaps speaking to a tutor on the phone, and more recently attending module-wide online lectures or tutorials using the virtual classroom software Blackboard Collaborate. But even those synchronous events allow reluctant participants to take a backseat, attending but with limited or no interaction, and a majority often watch the recording rather than take part in the live event, either out of choice or because they were unavailable. The real-time observing sessions thus uniquely foster team-building and synchronous collaborative working among distance learners. Weather permitting, the S382 sessions ran from dusk to the early hours, for as long as a team was willing to persist.
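The idea behind the simulator's mocked-up star-field images, rendered from a star catalog and an assumed point spread function, can be sketched as follows. This is an illustration of the concept only; the function name, the Gaussian PSF and all parameters are assumptions, not the ACP simulator's actual implementation.

```python
import numpy as np


def mock_star_field(shape, catalog, fwhm=3.0, sky=200.0):
    """Render a mock CCD frame from a star catalog (illustrative sketch).

    Each catalog entry (x, y, flux) is drawn as a flux-normalised 2-D
    Gaussian of the given FWHM (pixels) on top of a flat sky level.
    In a real simulator the result would be written out as a FITS file.
    """
    sigma = fwhm / 2.355                 # FWHM -> Gaussian sigma
    yy, xx = np.indices(shape)
    image = np.full(shape, sky, dtype=float)
    for x, y, flux in catalog:
        g = np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
        image += flux * g / (2 * np.pi * sigma ** 2)  # integrates to `flux`
    return image
```

A production version would additionally add photon and read noise and stamp a proper FITS header, so that the training data behave like real, "messy" frames.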
Anecdotal evidence suggests that once the observing program was well underway conversations on science and studying in general progressed to a deeper philosophical level, or to more light-hearted exchanges, and some teams developed a friendship which outlasted the module. When an observer team begins a real-time session for the first time an astronomy tutor with a higher level of access to the remote telescope system, known to the students as the Night Duty Astronomer (NDA), welcomes the team on Skype, provides a basic briefing on safe observing, and verifies that the team have a reasonable grasp of what is expected from them. It is also the NDA’s responsibility to ensure that the observatory is powered on and fully connected, or to arrange for technical help if that is not the case. The student guidance notes for the observing session are self-contained and written for distance learners who study independently. In principle there should therefore be no need to provide tutor support throughout the session, so the NDA is meant to withdraw after the initial briefing, but has to remain on call in case of technical problems, or any other issues that would prevent the successful conduct of the session. In practice, the NDA often has been playing a more active role, for two main reasons. Perhaps not unexpectedly some observer teams turned up with insufficient time spent on preparing for the session, and some regarded the NDA as a convenient demonstrator who knew all the answers. In such cases the NDA sought to find a balance between helping students along towards acquiring useful data for the project and giving pointers that would allow the team to achieve the learning outcomes in the way intended, by team work using the teaching resource provided. The second reason for occasionally more active NDA involvement was due to the system set-up of PIRATE in Mallorca. The OAM is not connected to the electricity grid and obtains power from an on-site generator. 
At the time S382 and SXPA288 were presented from Mallorca, neither the power supply nor the internet connection speed was perfectly stable, giving rise to occasional system glitches where the telescope would suddenly appear to go offline, or would refuse to execute remote commands. The generally high humidity at the low altitude Mediterranean location increased the hardware's vulnerability to the occasional loss of connection or loss of communication with its control computer. The vast majority of these issues could be swiftly resolved by the NDA using a remote-desktop style connection that allows reconnecting components or rebooting software, or even the whole system. The observer teams by and large did not mind as long as the NDA was swiftly at the task, but the overall observing experience was clearly affected by these events. Equally important, the NDA post is a limited resource, not a paid full-time support astronomer position, nor would the distance teaching of observational astronomy in the OU context be affordable if such a full-time post were required. A sustainable use of remote telescopes in the way the OU curriculum demands requires a robust site infrastructure. The OAM is awaiting investment to achieve this, but recent developments, as described in Section 5, ensured that PIRATE is now on such a robust footing. The rationale for asking observer teams to stay online with PIRATE throughout the session, even into the early hours and when only one target is being monitored in a long time series, was that this is the most effective way to ensure that the data quality remains good throughout the run, that observations can safely resume after intermittent clouds, and that a technical malfunction can be acted upon quickly. At the same time the real-time element fosters team building and enhances collaboration, thus creating a highly motivational learning environment quite unlike what OU distance learners would have been used to.
The observer team setting is also accessible to solitary learners, less outgoing students or students who prefer to limit their social interactions, as it is possible to contribute to and benefit from those sessions by text chat. Evaluation The curriculum deployment of PIRATE in Mallorca was evaluated over the period 2013-2015 as part of a PhD research project at the Open University (e.g. Brodeur (2016); see also Brodeur et al. (2014a) and Brodeur et al. (2014b)). Specifically, the research considered the importance of authenticity, sociability and meta-functionality for the effectiveness of virtual experiments (such as a telescope simulator) and remote-access experiments (such as a robotic telescope) in teaching practical science to undergraduate students. This contributes to a wider critical discussion on the educational merits of remote labs, where some professional researchers and educators have cast doubt on the effectiveness of remote observatories in teaching observational astronomy (e.g. Privon et al. (2009); Jacobi et al. (2008)), while others found that remote experiments can be as effective as proximal ones (e.g. Corter et al. (2004); Lowe et al. (2013)). Our study deployed a mixed-methods approach (Greene and Caracelli, 1997) on a sample of Open University students, and for the focus on the astronomy aspect included 40 students at Stage 2 and 160 students at Stage 3. The quantitative data comprised anonymized demographic information and assessment scores from student records, as well as pre- and post-activity electronic surveys with ranking questions and Likert-style queries. Qualitative data were obtained from the open-ended survey questions and in semi-structured individual or group interviews conducted via Skype (nine at Stage 2, eight at Stage 3). A detailed account of this study will be published elsewhere (Brodeur et al., in preparation). Here we report only some of the results pertinent to the use of robotic telescopes.
Quantitative Findings The analysis of the survey responses does not reveal any stand-out, strong correlations between student demographics and perceived experience, but there are two notable trends. The first one is that with increasing age students display an increased agreement with the sentiments that both virtual experiments (VEs) and remote experiments (REs) make practical science enjoyable. By contrast, younger students expressed a preference for being co-located with practical equipment. This may be surprising at first glance, as the younger generation has grown up in an environment where digital devices are ubiquitous and accessing the virtual, online world is an integral part of the daily routine. The older generation have necessarily embraced online technology later in life. Yet younger students may find real labs more desirable just because they are different from the virtual world, which has become too commonplace. Older students may value remote labs more because they are more easily accessible and more comfortable to use, at a time and place that suits their busy lives. The second trend seen in the surveys relates to the assessment outcome, i.e. the grades achieved on the module with the embedded telescope activity. Students who agree with the notion that VEs make practical work more enjoyable tend to obtain higher grades. It is difficult to disentangle if this trend is simply a consequence of the fact that more motivated students are doing better overall, or if indeed those more engaged students who take time to prepare the observing session on the simulator are better prepared for the real observing session to such an extent that they ultimately more readily achieve the overall learning outcomes. Mirroring this, students who replied that REs are not effective for teaching collaboration tend to obtain lower grades on the module.
As the telescope activities have a significant on-line collaborative element, it is likely that the affected students are either learners who feel that they are being negatively impacted by fellow students in their group (because they perceive them as not contributing sufficiently, or because some students dominate the group or are so far ahead that their own confidence and motivation suffer), or learners who are genuinely solitary and are unduly challenged by the need for group working. Qualitative findings Analysis of the free-text survey responses and of interview transcripts reveals insights into the perceived importance of realism, sociability and meta-functionality for PIRATE and its simulator. The ACP web interface and the webcam stream were seen as "realistic", sufficiently so that there was no desire to have access to a virtual reproduction or animation of the hardware control panels that some students might be familiar with in the context of an amateur telescope, such as a handset used to drive the mount. Yet the single webcam trained on the telescope, which shows only darkness when the IR beam is turned off for the actual image acquisition, is not enough to convey a sense of being "connected" to the remote telescope. The students also enjoy and demand realistic data and scenarios, as opposed to merely re-tracing idealized steps that deliver perfect versions of the desired measurements or astronomical images. One student said "You learn more when things go wrong", implying a desire to be exposed to real-world, messy data. Realism is important, but data-realism, not photorealism. The students valued the provision of opportunities for social learning. 
The Skype-mediated voice-only meetings were not seen to be as real or as collaborative as an observer team that is co-located at an observatory, but these synchronous remote meetings clearly provide a very different experience from an asynchronous forum, the normal collaborative tool for OU students. Including talking heads in the online meeting was not seen as a potential improvement, because the students felt that there were already enough visual cues to deal with during an observing session, including the ACP web interface, the webcam, the weather diagnostics, the observer's log, the FITS file viewer and instructions on the Virtual Learning Environment or in PDF documents. A student on the autism spectrum also stated that voice-only sessions are preferable to video links. Various suggestions for improving the collaborative learning were made, such as that the team should meet and form before the observing session, and that the simulator session should be offered as a collaborative activity just like the real observing run, rather than as a session for a single user. A meta-functional element included in the simulator is the accelerated completion of processes that take considerable time in the real world, such as long image exposures or the focus run. Most students appeared to appreciate that this saves time, but they also felt that such meta-functionality needs to be clearly signposted so as to manage expectations for the real run. Some students would have preferred a toggle to pro-actively choose between "real" behavior and "accelerated" mode. Summary and Recommendations Students accept virtual experiments (telescope simulator) for "hands-on" activities when VEs approximate the interfaces used to operate real-world instruments, when VEs deliver genuine, "messy" data, clarify how they differ from a realistic portrayal, and are flagged as training tools. 
Students accept remote experiments (the robotic telescope) in place of on-site practical work when REs incorporate realistic activities, a stable internet connection, and at least one live video feed. They should include group work and facilitate social modes of learning. Several recommendations for improving student engagement and learning outcomes from virtual and remote experiments emerge from our study: It is preferable to devise greater situational awareness rather than rely on video conferencing to mimic co-located practical work. The experiments should enable in-person social interactions, and these should be facilitated to form before the actual remote experiment activity. Virtual experiments, though normally considered as asynchronous tools, should also include social interaction. OpenScience Observatories at Tenerife We end this account with a brief summary of the latest developments of the PIRATE project that commenced after the end of the curriculum use period considered above. Our evaluation highlighted the importance of the resilience of the hardware set-up and the stability of the internet connection for providing a satisfactory student experience. Technical glitches may deliver some of the desired real-world, messy data and may represent a good learning opportunity, but if pushed too far, or if encountered too frequently, any potential benefit is lost by instilling confusion and denting student confidence. An opportunity to address this issue for PIRATE arose in 2015, when a HEFCE-funded capital infrastructure award enabled a new Open University taught postgraduate qualification, the MSc Space Science and Technology. This opened the door for a step-change that has now completely transformed the PIRATE project and established unprecedented opportunities for teaching and research in observational astronomy for OU students at all Stages. 
The funding for moving PIRATE to a prime observing site became available in 2015/6, at about the same time that discussions began to transfer the Bradford Robotic Telescope (BRT) (Baruch, 2015) to the OU. Negotiations with the Instituto de Astrofísica de Canarias (Spain) led to the relocation of an upgraded PIRATE to the Observatorio del Teide (Tenerife), and to the construction of a second, new facility, COAST (COmpletely Autonomous Service Telescope), to replace the aging hardware of the BRT and to handle observing requests from the public portal telescope.org. PIRATE is now housed in a Baader Planetarium 4.5 m All-Sky dome, a robotic dome in clam-shell design (Figure 2). The optical tube assembly is a 17 inch PlaneWave CDK-17 corrected Dall-Kirkham astrograph with focal ratio f/6.8, mounted on 10Micron's GM4000 German equatorial mount. The FLI ProLine KAF-16803 imaging camera has $4096^{2}$ 9 micron pixels, giving a $43^{\prime}$ field of view and $0.63^{\prime\prime}$/px plate scale. The camera is equipped with Baader LRGB broadband filters and H$\alpha$, SII and OIII narrowband filters. The control software ABOT by Sybilla Technologies, a much developed version of what is described in Sybilski et al. (2014), provides a web control interface for real-time use and manages the fully autonomous queue-scheduled mode of operation. The public front-end telescope.org inherited from the BRT project passes scheduled requests to ABOT for execution. The second facility, COAST, resides next to PIRATE in a smaller Baader Planetarium 3.5 m All-Sky dome. A 14 inch Celestron-14 Schmidt-Cassegrain telescope with focal ratio f/11 is mounted on a GM4000. The legacy SBIG STL-1001E imaging camera has $1024^{2}$ 24 $\mu$m pixels, giving a $29^{\prime}$ field of view and $1.7^{\prime\prime}$/px plate scale. The camera is equipped with Baader Johnson-Cousins BVR broadband filters and H$\alpha$, SII and OIII narrow-band filters. The control software is also ABOT. 
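The quoted PIRATE plate scale and field of view can be cross-checked from the optics with the standard small-angle formula, plate scale = 206265 × pixel size / focal length (an illustrative sketch; the helper name is ours):

```python
def plate_scale_arcsec_per_px(aperture_mm, focal_ratio, pixel_um):
    """Plate scale in arcsec/pixel: 206265 * pixel size / focal length."""
    focal_length_mm = aperture_mm * focal_ratio
    return 206265.0 * (pixel_um * 1e-3) / focal_length_mm

# PIRATE: 17-inch (431.8 mm) CDK-17 at f/6.8 with 9 micron pixels
scale = plate_scale_arcsec_per_px(431.8, 6.8, 9.0)
fov_arcmin = scale * 4096 / 60.0  # 4096-pixel detector side
print(round(scale, 2), round(fov_arcmin))  # -> 0.63 43
```

Both numbers agree with the quoted $0.63^{\prime\prime}$/px and $43^{\prime}$ field of view.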
The large number of usable dark hours per year makes the facilities ideally suited for the highly constrained time-tabled use required for the wider OU curriculum. An assessment of the effectiveness of our new facilities and the ABOT control interface in teaching observational techniques will be the subject of a forthcoming study. PIRATE and COAST now form the Tenerife facilities of the OpenScience Observatories, the Astronomy "wing" of the OpenSTEM Labs, an award-winning major Open University initiative to bring practical science to distance learners world-wide. Acknowledgements The PIRATE project is a team effort and would not have been feasible without the many contributions by current and former members of the PIRATE crew - too numerous to list here (see http://pirate.open.ac.uk/contact.html and http://www.telescope.org/contact-us.php). We are grateful to the staff and volunteers of the Observatori Astronomic de Mallorca who helped maintain the facility during its Mallorca period and who provided local emergency assistance. Baader Planetarium played a leading role in the planning and construction of the Tenerife facilities. We thank their staff for their continuous support of the project, and in particular Ladislav Rehak who spent countless hours keeping PIRATE afloat for its curriculum use while it was temporarily based in Mammendorf. We are indebted to John Baruch who made the move of the pioneering telescope.org project to the OU possible. Don Pollacco and John Baruch planted the seeds for the recent relocation of PIRATE to Tenerife. We thank Piotr Sybilski and his team at Sybilla Technologies for their collaboration in moving the project forward. Finally we acknowledge again funding by the HEFCE for the OpenSTEM Labs initiative, and by the Wolfson Foundation. References Baruch (2015) Baruch, J. (2015). A robotic telescope for science and education. Astronomy & Geophysics, 56(2):2.18–2.21. Brodeur et al. 
(2014a) Brodeur, M., Kolb, U., Minocha, S., and Braithwaite, N. (2014a). Teaching undergraduate astrophysics with PIRATE. Revista Mexicana de Astronomía y Astrofísica, 45:129–132. Brodeur et al. (2014b) Brodeur, M., Kolb, U., Minocha, S., and Braithwaite, N. (2014b). Teaching undergraduate astrophysics with PIRATE. Revista Mexicana de Astronomía y Astrofísica, 45:133–134. Brodeur (2016) Brodeur, M. S. (2016). Design priorities for online laboratories in undergraduate practical science. PhD thesis, Open University. Corter et al. (2004) Corter, J. E., Nickerson, J. V., Esche, S. K., and Chassapis, C. (2004). Remote versus hands-on labs: A comparative study. In Frontiers in Education, 2004. FIE 2004. 34th Annual. IEEE. F1G-17-21. Greene and Caracelli (1997) Greene, J. C. and Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New directions for evaluation, 1997(74):5–17. Holmes et al. (2011) Holmes, S., Kolb, U., Haswell, C., Burwitz, V., Lucas, R., Rodriguez, J., Rolfe, S., Rostron, J., and Barker, J. (2011). PIRATE: a remotely operable telescope facility for research and education. Publications of the Astronomical Society of the Pacific, 123(908):1177–1187. Jacobi et al. (2008) Jacobi, I. C., Newberg, H. J., Broder, D., Finn, R. A., Milano, A. J., Newberg, L. A., Weatherwax, A. T., and Whittet, D. C. (2008). Effect of night laboratories on learning objectives for a nonmajor astronomy class. Astronomy Education Review, 7:66–73. Kolb (2014) Kolb, U. (2014). The PIRATE facility: at the crossroads of research and teaching. Revista Mexicana de Astronomía y Astrofísica, 45:16–19. Kolb et al. (2010) Kolb, U., Lucas, R., Burwitz, V., Holmes, S., Haswell, C., Rodgriguez, J., Rolfe, S., Rostron, J., and Barker, J. (2010). PIRATE-the piCETL astronomical telescope explorer. In: Norton, Andrew ed., Electronic Resources for Teaching and Learning. Milton Keynes: The Open University, pp 58-64. (http://oro.open.ac.uk/26018/). Lowe et al. 
(2013) Lowe, D., Newcombe, P., and Stumpers, B. (2013). Evaluation of the use of remote laboratories for secondary school science education. Research in Science Education, 43(3):1197–1219. Lucas and Kolb (2011) Lucas, R. and Kolb, U. (2011). Software architecture for an unattended remotely controlled telescope. Journal of the British Astronomical Association, 121(5):265–269. Norton et al. (2007) Norton, A. J., Wheatley, P., West, R., Haswell, C., Street, R., Cameron, A. C., Christian, D., Clarkson, W., Enoch, B., Gallaway, M., et al. (2007). New periodic variable stars coincident with ROSAT sources discovered using SuperWASP. Astronomy & Astrophysics, 467(2):785–905. Privon et al. (2009) Privon, G. C., Beaton, R. L., Whelan, D. G., Yang, A., Johnson, K., and Condon, J. (2009). The importance of hands-on experience with telescopes for students. arXiv preprint arXiv:0903.3447. Sybilski et al. (2014) Sybilski, P. W., Pawłaszek, R., Kozłowski, S. K., Konacki, M., Ratajczak, M., and Hełminiak, K. G. (2014). Software for autonomous astronomical observatories: challenges and opportunities in the age of big data. In Software and Cyberinfrastructure for Astronomy III, volume 9152, page 91521C. International Society for Optics and Photonics.
$\tau$-FPL: Tolerance-Constrained Learning in Linear Time ††thanks: A preliminary version of this work appeared in the Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018. \nameAo Zhang \emailaz.aozhang@gmail.com \addrSchool of Computer Science and Software Engineering East China Normal University, Shanghai, China \AND\nameNan Li \emailnanli.ln@alibaba-inc.com \addrInstitute of Data Science and Technologies Alibaba Group, Hangzhou, China \AND\nameJian Pu \emailjianpu@sei.ecnu.edu.cn \addrSchool of Computer Science and Software Engineering East China Normal University, Shanghai, China \AND\nameJun Wang \emailjwang@sei.ecnu.edu.cn \addrSchool of Computer Science and Software Engineering East China Normal University, Shanghai, China \AND\nameJunchi Yan \emailyanesta13@163.com \addrIBM Research – China, Shanghai, China School of Computer Science and Software Engineering East China Normal University, Shanghai, China \AND\nameHongyuan Zha \emailzha@cc.gatech.edu \addrCollege of Computing, Georgia Institute of Technology, Atlanta, USA School of Computer Science and Software Engineering East China Normal University, Shanghai, China Abstract Learning a classifier with control on the false-positive rate plays a critical role in many machine learning applications. Existing approaches either introduce a label cost that depends on prior knowledge, or tune the parameters of traditional classifiers; both lack methodological consistency, because they do not strictly adhere to the false-positive rate constraint. In this paper, we propose a novel scoring-thresholding approach, $\tau$-False Positive Learning ($\tau$-FPL), to address this problem. We show that the scoring problem, which takes the false-positive rate tolerance into account, can be efficiently solved in linear time, and that an out-of-bootstrap thresholding method can transform the learned ranking function into a low false-positive classifier. 
Both theoretical analysis and experimental results show the superior performance of the proposed $\tau$-FPL over existing approaches. Keywords: Neyman-Pearson Classification, False Positive Rate Control, Bipartite Ranking, Partial-AUC Optimization, Euclidean Projection 1 Introduction In real-world applications, such as spam filtering (Drucker et al., 1999) and medical diagnosing (Huang et al., 2010), the loss of misclassifying a positive instance and a negative instance can be rather different. For instance, in medical diagnosing, misdiagnosing a patient as healthy is more dangerous than misclassifying a healthy person as sick. Meanwhile, in reality, it is often very difficult to define an accurate cost for these two kinds of errors (Liu and Zhou, 2010; Zhou and Zhou, 2016). 
In such situations, it is more desirable to keep the classifier working under a small tolerance of false-positive rate (FPR) $\tau$, i.e., to allow the classifier to misclassify no more than $\tau$ percent of negative instances. Traditional classifiers trained by maximizing classification accuracy or AUC are not suitable, due to their mismatched goals. In the literature, classification under a constrained false-positive rate is known as the Neyman-Pearson (NP) classification problem (Scott and Nowak, 2005; Lehmann and Romano, 2006; Rigollet and Tong, 2011), and existing approaches can be roughly grouped into several categories. One common approach is cost-sensitive learning, which assigns different costs to different classes; representatives include cost-sensitive SVM (Osuna et al., 1997; Davenport et al., 2006, 2010), cost-interval SVM (Liu and Zhou, 2010) and cost-sensitive boosting (Masnadi-Shirazi and Vasconcelos, 2007, 2011). Though effective and efficient in handling different misclassification costs, it is usually difficult to find the appropriate misclassification cost for a specific FPR tolerance. Another group of methods formulates this problem as a constrained optimization problem, with the FPR tolerance as an explicit constraint (Mozer et al., 2002; Gasso et al., 2011; Mahdavi et al., 2013). These methods often need to find the saddle point of the Lagrange function, leading to time-consuming alternating optimization. Moreover, a surrogate loss is often used to simplify the optimization problem, possibly leaving the tolerance constraint unsatisfied in practice. The third line of research is scoring-thresholding methods, which train a scoring function first and then find a threshold to meet the target FPR tolerance (Drucker et al., 1999). In practice, the scoring function can be trained by either class-conditional density estimation (Tong, 2013) or bipartite ranking (Narasimhan and Agarwal, 2013b). 
However, density estimation is itself another difficult problem, and most bipartite ranking methods are less scalable, with super-linear training complexity. Additionally, there are methods paying special attention to the positive class. For example, asymmetric SVM (Wu et al., 2008) maximizes the margin between the negative samples and the core of the positive samples, while one-class SVM (Ben-Hur et al., 2001) finds the smallest ball enclosing the positive samples. However, these do not incorporate the FPR tolerance into the learning procedure either. In this paper, we address the tolerance-constrained learning problem by proposing $\tau$-False Positive Learning ($\tau$-FPL). Specifically, $\tau$-FPL is a scoring-thresholding method. In the scoring stage, we explicitly learn a ranking function which optimizes the probability of ranking any positive instance above the centroid of the worst $\tau$ percent of negative instances. We then show that, with the help of our newly proposed Euclidean projection algorithm, this ranking problem can be solved in linear time under the projected gradient framework. It is worth noting that this Euclidean projection problem generalizes a large family of projection problems, and our proposed linear-time algorithm based on bisection and divide-and-conquer is one to three orders of magnitude faster than existing state-of-the-art methods. In the thresholding stage, we devise an out-of-bootstrap thresholding method to transform the aforementioned ranking function into a low false-positive classifier. This method is much less prone to overfitting than existing thresholding methods. Theoretical analysis and experimental results show that the proposed method achieves superior performance over existing approaches. 
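As a minimal illustration of the thresholding idea (a toy sketch with hypothetical names, not the paper's out-of-bootstrap procedure): on held-out data, the smallest threshold whose empirical FPR does not exceed $\tau$ is the $(\lfloor\tau n\rfloor+1)$-th largest negative score.

```python
import math

def fpr_threshold(neg_scores, tau):
    """Smallest threshold b with empirical FPR <= tau: at most
    floor(tau*n) negatives may score strictly above b."""
    k = math.floor(tau * len(neg_scores))
    return sorted(neg_scores, reverse=True)[k]

neg = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
b = fpr_threshold(neg, tau=0.2)              # k = 2 -> b = 0.7
fpr = sum(s > b for s in neg) / len(neg)
print(b, fpr)  # -> 0.7 0.2
```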
2 From Constrained Optimization to Ranking In this section, we show that the FPR tolerance problem can be transformed into a ranking problem, and then formulate a convex ranking loss which is tighter than existing relaxation approaches. Let $\mathcal{X}=\{\bm{x}\mid\bm{x}\in\mathbb{R}^{d}:||\bm{x}||\leq 1\}$ be the instance space for some norm $||\cdot||$ and $\mathcal{Y}=\{-1,+1\}$ be the label set, and let $\mathcal{S}=\mathcal{S}_{+}\cup\mathcal{S}_{-}$ be a set of training instances, where $\mathcal{S}_{+}=\{\bm{x}_{i}^{+}\in\mathcal{X}\}_{i=1}^{m}$ and $\mathcal{S}_{-}=\{\bm{x}_{j}^{-}\in\mathcal{X}\}_{j=1}^{n}$ contain $m$ and $n$ instances independently sampled from distributions $\mathbb{P}^{+}$ and $\mathbb{P}^{-}$, respectively. Let $0\leq\tau\ll 1$ be the maximum tolerance of false-positive rate. Consider the following Neyman-Pearson classification problem, which aims at minimizing the false-negative rate of the classifier under a constraint on the false-positive rate: $$\min_{f,b}~\mathbb{P}_{\bm{x}^{+}\sim\mathbb{P}^{+}}\left(f(\bm{x}^{+})\leq b\right)\quad\textrm{\small s.t.}\quad\mathbb{P}_{\bm{x}^{-}\sim\mathbb{P}^{-}}\left(f(\bm{x}^{-})>b\right)\leq\tau,$$ (1) where $f:\mathcal{X}\to\mathbb{R}$ is a scoring function and $b\in\mathbb{R}$ is a threshold. With finite training instances, the corresponding empirical risk minimization problem is $$\begin{split}\min_{f,b}&~{\cal L}_{emp}(f,b)=\frac{1}{m}\sum\nolimits_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})\leq b\right)\\ \textrm{\small s.t.}&~\frac{1}{n}\sum\nolimits_{j=1}^{n}\mathbb{I}\left(f(\bm{x}_{j}^{-})>b\right)\leq\tau,\end{split}$$ (2) where $\mathbb{I}(u)$ is the indicator function. The empirical optimization problem is difficult to handle due to the non-continuous constraint; we introduce an equivalent form below. Proposition 1. 
Define $f(\bm{x}_{[j]}^{-})$ to be the $j$-th largest value in the multiset $\{f(\bm{x}_{i})\mid\bm{x}_{i}\in\mathcal{S}_{-}\}$ and $\lfloor\cdot\rfloor$ the floor function. The constrained optimization problem (2) shares the same optimal solution $f^{*}$ and optimal objective value with the following ranking problem $$\min_{f}\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})-f(\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})\leq 0\right).$$ (3) Proof. For a fixed $f$, it is clear that the constraint in (2) is equivalent to $b\geq f(\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})$. Since the objective in (2) is a non-decreasing function of $b$, its minimum is achieved at $b=f(\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})$. From this, we can transform the original problem (2) into its equivalent form (3) by substitution. ∎ Proposition 1 reveals the connection between the constrained optimization (2) and the ranking problem (3). Intuitively, problem (3) makes a comparison between each positive sample and the $(\lfloor\tau n\rfloor+1)$-th largest negative sample. We give a further interpretation of its form: it is equivalent to maximizing the partial AUC near the risk area. Although it seems that partial-AUC optimization considers fewer samples than the full AUC, optimizing (3) is intractable even if we replace $\mathbb{I}(\cdot)$ by a convex loss, since the operation $[\cdot]$ is non-convex when $\lfloor\tau n\rfloor>0$. Indeed, Theorem 1 further shows that whatever $\tau$ is chosen, even for some weak hypothesis sets for $f$, the corresponding optimization problem is NP-hard. Definition 1. A surrogate function of $\mathbb{I}(u\leq 0)$ is a continuous and non-increasing function $L:\mathbb{R}\to\mathbb{R}$ that satisfies (i) $\forall u\leq 0$, $L(u)\geq L(0)\geq 1$; (ii) $\forall u>0$, $0\leq L(u)<L(0)$; (iii) $L(u)\rightarrow 0$ as $u\rightarrow+\infty$. Theorem 1. 
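On toy scores, the 0-1 objective of the ranking problem (3) can be evaluated directly (an illustrative sketch; the function name and data are ours):

```python
import math

def rank_risk(pos_scores, neg_scores, tau):
    """Empirical 0-1 risk of problem (3): fraction of positives that do not
    score above the (floor(tau*n)+1)-th largest negative score."""
    k = math.floor(tau * len(neg_scores))        # zero-based index of pivot
    pivot = sorted(neg_scores, reverse=True)[k]  # f(x^-_[floor(tau n)+1])
    return sum(p <= pivot for p in pos_scores) / len(pos_scores)

pos = [0.95, 0.75, 0.55, 0.35]
neg = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
print(rank_risk(pos, neg, 0.2))  # pivot = 0.7; 0.55 and 0.35 fail -> 0.5
```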
For fixed $\lambda>0$, $0<\tau<1$ and surrogate function $L(\cdot)$, the optimization problem $\tau$-$OPT_{L}^{\lambda}$ $$\min_{f\in\mathcal{H}^{d}}\frac{1}{m}\sum_{i=1}^{m}L\left(f(\bm{x}_{i}^{+})-f(\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})\right)$$ with hyperplane hypothesis set $\mathcal{H}^{d}=\{f(\bm{x})=\bm{w}^{\top}\bm{x}\mid\bm{w}\in\mathbb{R}^{d},||\bm{w}||\leq\lambda\}$ is NP-hard. This intractability motivates us to consider the following upper-bound approximation of (3): $$\min_{f}\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})-\frac{1}{\lfloor\tau n\rfloor+1}\sum_{j=1}^{\lfloor\tau n\rfloor+1}f(\bm{x}_{[j]}^{-})\leq 0\right)$$ (4) which prefers the scores on positive examples to exceed the mean of the scores on the worst $\tau$-proportion of negative examples. If $\tau=0$, optimization (4) is equivalent to the original problem (3). In general, equality can also hold when the scores of the worst $\tau$-proportion of negative examples are all equal. The advantages of considering ranking problem (4) include (details given later): • For an appropriate hypothesis set for $f$, replacing $\mathbb{I}(\cdot)$ by a convex surrogate produces a tight convex upper bound of the original minimization problem (3), in contrast to cost-sensitive classification, which may only offer an insecure lower bound. This upper bound is also tighter than the convex relaxation of the original constrained minimization problem (2) (see Proposition 2); • By designing an efficient learning algorithm, we can find the globally optimal solution of this ranking problem in linear time, which makes it well suited to large-scale scenarios and outperforms most traditional ranking algorithms; • It explicitly takes $\tau$ into consideration, and no additional cost hyper-parameters are required; • The generalization performance of the optimal solution can be theoretically established. 
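The relaxation (4) can be evaluated the same way, replacing the pivot negative score with the centroid of the worst $\lfloor\tau n\rfloor+1$ negative scores; since that mean is at least the $(\lfloor\tau n\rfloor+1)$-th largest score, the resulting 0-1 risk upper-bounds that of (3). A toy sketch (hypothetical names and data):

```python
import math

def topk_mean_risk(pos_scores, neg_scores, tau):
    """0-1 risk of relaxation (4): compare each positive score to the mean
    of the worst floor(tau*n)+1 negative scores; this mean is >= the pivot
    used in problem (3), so the risk here is an upper bound."""
    k = math.floor(tau * len(neg_scores)) + 1
    centroid = sum(sorted(neg_scores, reverse=True)[:k]) / k
    return sum(p <= centroid for p in pos_scores) / len(pos_scores)

pos = [0.95, 0.75, 0.55, 0.35]
neg = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
print(topk_mean_risk(pos, neg, 0.2))  # centroid ~ 0.8 -> risk 0.75
```

On these scores the relaxed risk is 0.75, versus 0.5 for (3), consistent with the upper-bound relation.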
2.1 Comparison to Alternative Methods Before introducing our algorithm, we briefly review some related approaches to FPR-constrained learning, and show that our approach can be seen as a tighter upper bound while maintaining linear training complexity. Cost-sensitive Learning One alternative approach to eliminating the constraint in (2) is to approximate it by introducing asymmetric costs for the different types of error into the classification learning framework: $$\min_{f,b}{\cal L}_{emp}^{C}(f,b)=\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})\leq b\right)+\frac{C}{n}\sum_{j=1}^{n}\mathbb{I}\left(f(\bm{x}_{j}^{-})>b\right),$$ where $C\geq 0$ is a hyper-parameter that penalizes false-positive errors. Although reasonable for handling different misclassification costs, we point out that methods under this framework in fact minimize a lower bound of problem (2). This can be verified by formulating (2) in the unconstrained form ${\cal L}_{emp}^{\prime}(f,b)$: $$\begin{split}{\cal L}_{emp}^{\prime}(f,b)&\triangleq\max_{\lambda\geq 0}\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})\leq b\right)+\lambda\left(\frac{1}{n}\sum_{j=1}^{n}\mathbb{I}\left(f(\bm{x}_{j}^{-})>b\right)-\tau\right)\\ &\geq\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})\leq b\right)+C\left(\frac{1}{n}\sum_{j=1}^{n}\mathbb{I}\left(f(\bm{x}_{j}^{-})>b\right)-\tau\right)\\ &={\cal L}_{emp}^{C}(f,b)-C\tau.\end{split}$$ Thus, for a fixed $C$, minimizing ${\cal L}_{emp}^{C}$ is equivalent to minimizing a lower bound of ${\cal L}_{emp}$. In other words, cost-sensitive learning methods are insecure in this setting: a mismatched $C$ can easily violate the constraint on the FPR or make it excessively strict, and the optimal $C$ for a specified $\tau$ is only known after solving the original constrained problem, which can be proved NP-hard. 
In practice, one also has to make a continuous approximation of the constraint in (2) for tractable cost-sensitive training; this multi-level approximation makes the relationship between the new problem and the original one unclear, and blocks any straightforward theoretical justification. Constrained Optimization  The main difficulty of employing Lagrangian-based methods to solve problem (2) lies in the fact that both the constraint and the objective are non-continuous. In order to make the problem tractable while satisfying the FPR constraint strictly, the standard solution approach relies on replacing $\mathbb{I}(\cdot)$ with a convex surrogate function (see Definition 1) to obtain a convex constrained optimization problem 111For precisely approximating the indicator function in the constraint, choosing a "bounded" loss such as the sigmoid or ramp loss (Collobert et al., 2006; Gasso et al., 2011) seems appealing. However, bounded losses are non-convex, and the corresponding problems are usually NP-hard (Yu et al., 2012). These difficulties limit both efficient training and effective theoretical guarantees for the learned model.. Interestingly, Proposition 2 shows that whichever surrogate function and hypothesis set are chosen, the resulting constrained optimization problem is a weaker upper bound than that of our approach. Proposition 2. 
For any non-empty hypothesis set $\mathcal{H}$ and $f\in\mathcal{H}$, and any convex surrogate function $L(\cdot)$, let $$\begin{split}\bar{R}_{0}&=\frac{1}{m}\sum\nolimits_{i=1}^{m}\mathbb{I}\left(f(\bm{x}_{i}^{+})-f(\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})\leq 0\right)\\ \bar{R}_{1}&=\frac{1}{m}\sum\nolimits_{i=1}^{m}L\left(f(\bm{x}_{i}^{+})-\frac{1}{\lfloor\tau n\rfloor+1}\sum_{j=1}^{\lfloor\tau n\rfloor+1}f(\bm{x}_{[j]}^{-})\right)\\ \bar{R}_{2}&=\min_{b\in\mathbb{R}}\frac{1}{m}\sum_{i=1}^{m}L\left(f(\bm{x}_{i}^{+})-b\right)~\textrm{s.t.}~\frac{1}{n}\sum_{j=1}^{n}L\left(b-f(\bm{x}_{j}^{-})\right)\leq\tau,\end{split}$$ then we have $$\bar{R}_{0}\leq\bar{R}_{1}\leq\bar{R}_{2}.$$ Thus, in general, there is an exact gap between our risk $\bar{R}_{1}$ and the convex constrained optimization risk $\bar{R}_{2}$. In practice one may prefer the tighter approximation, since it represents the original objective better. Moreover, our Theorem 5 also achieves a tighter bound on the generalization error rate by considering the empirical risk $\bar{R}_{1}$. Ranking-Thresholding Traditional bipartite ranking methods usually have super-linear training complexity. Compared with them, the main advantage of our algorithm comes from its linear-time complexity in each iteration, without sacrificing the convergence rate (please refer to Table 1). We name our algorithm $\tau$-FPL, and give a detailed description of how it works in the next sections. 3 Tolerance-Constrained False Positive Learning Based on the previous discussion, our method is divided into two stages, namely scoring and thresholding. In the scoring stage, a function $f(\cdot)$ is learnt to maximize the probability of giving a higher score to positive instances than to the centroid of the top $\tau$ percent of the negative instances. 
In the thresholding stage, a suitable threshold $b$ is chosen, and the final prediction for an instance $\bm{x}$ is obtained by $$y=\mathrm{sgn}\left(f(\bm{x})-b\right)\ .$$ (5) 3.1 Tolerance-Constrained Ranking In (4), we consider the linear scoring function $f(\bm{x})=\bm{w}^{\top}\bm{x}$, where $\bm{w}\in\mathbb{R}^{d}$ is the weight vector to be learned, add $\text{L}_{2}$ regularization, and replace $\mathbb{I}(u\leq 0)$ by one of its convex surrogates $l(\cdot)$. Kernel methods can be used for nonlinear ranking functions. As a result, the learning problem is $$\min\limits_{\bm{w}}\frac{1}{m}\sum\nolimits_{i=1}^{m}l\left(\bm{w}^{\top}\bm{x}_{i}^{+}-\frac{1}{k}\sum\nolimits_{j=1}^{k}\bm{w}^{\top}\bm{x}_{[j]}^{-}\right)+\frac{R}{2}\|\bm{w}\|^{2},$$ (6) where $R>0$ is the regularization parameter and $k=\lfloor\tau n\rfloor+1$. Directly minimizing (6) is challenging due to the $[\cdot]$ operator; we address this by considering its dual. Theorem 2. Let $\bm{X}^{+}=[\bm{x}_{1}^{+},...,\bm{x}_{m}^{+}]^{\top}$ and $\bm{X}^{-}=[\bm{x}_{1}^{-},...,\bm{x}_{n}^{-}]^{\top}$ be the matrices containing the positive and negative instances in their rows, respectively. The dual problem of (6) can be written as $$\min\limits_{(\bm{\alpha},\bm{\beta})\in\Gamma_{k}}g(\bm{\alpha},\bm{\beta})=\frac{1}{2mR}||\bm{\alpha}^{\top}\bm{X}^{+}-\bm{\beta}^{\top}\bm{X}^{-}||^{2}+\sum_{i=1}^{m}l^{*}(-\alpha_{i}),$$ (7) where $\bm{\alpha}$ and $\bm{\beta}$ are dual variables, $l^{*}(\cdot)$ is the convex conjugate of $l(\cdot)$, and the domain $\Gamma_{k}$ is defined as $$\Gamma_{k}=\left\{\bm{\alpha}\in\mathbb{R}_{+}^{m},\bm{\beta}\in\mathbb{R}_{+}^{n}\mid\sum_{i=1}^{m}\alpha_{i}=\sum_{j=1}^{n}\beta_{j};\ \beta_{j}\leq\frac{1}{k}\sum_{i=1}^{m}\alpha_{i},\ \forall j\right\}.$$ Let $\bm{\alpha}^{*}$ and $\bm{\beta}^{*}$ be the optimal solution of (7); then the optimal weight vector is $$\bm{w}^{*}=(mR)^{-1}(\bm{\alpha}^{*\top}\bm{X}^{+}-\bm{\beta}^{*\top}\bm{X}^{-})^{\top}.$$ (8) 
According to Theorem 2, learning the scoring function $f$ is equivalent to learning the dual variables $\bm{\alpha}$ and $\bm{\beta}$ by solving problem (7). Its optimization naturally lends itself to projected gradient methods. Here we choose $l(u)=[1-u]_{+}^{2}$, where $[x]_{+}\triangleq\max\{x,0\}$, because its convex conjugate has a simple form. The key steps are summarized in Algorithm 1. At each iteration, we first update the solution using the gradient of the objective function $g(\bm{\alpha},\bm{\beta})$, then project the dual solution onto the feasible set $\Gamma_{k}$. In the sequel, we show that this projection problem can be solved efficiently in linear time. In practice, since $g(\cdot)$ is smooth, we also leverage Nesterov’s method to further accelerate the convergence of our algorithm. Nesterov’s method (Nesterov, 2003) achieves an $O(1/T^{2})$ convergence rate for smooth objective functions, where $T$ is the number of iterations. 3.2 Linear Time Projection onto the Top-k Simplex One of our main technical results is a linear-time projection algorithm onto $\Gamma_{k}$, even when $k$ is close to $n$. For clarity of notation, we reformulate the projection problem as $$\begin{split}\min_{\bm{\alpha}\geq 0,\bm{\beta}\geq 0}&~\frac{1}{2}||\bm{\alpha}-\bm{\alpha}^{0}||^{2}+\frac{1}{2}||\bm{\beta}-\bm{\beta}^{0}||^{2}\\ \text{\small s.t.}&~\sum\nolimits_{i=1}^{m}\alpha_{i}=\sum\nolimits_{j=1}^{n}\beta_{j},\quad\beta_{j}\leq\frac{1}{k}\sum\nolimits_{i=1}^{m}\alpha_{i},\forall j.\end{split}$$ (9) It should be noted that many Euclidean projection problems studied in the literature can be seen as special cases of this problem.
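The accelerated projected-gradient loop underlying Algorithm 1 can be sketched generically (our illustration, not the paper's code; in the test, a toy projection onto the nonnegative orthant stands in for the $\Gamma_{k}$ projection):

```python
import math

def accelerated_pg(grad, project, x0, step, iters=500):
    """Nesterov-accelerated projected gradient: gradient step, projection
    onto the feasible set, then momentum extrapolation. Achieves O(1/T^2)
    objective decrease for smooth functions with a suitable step size."""
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = project([yi - step * gi for yi, gi in zip(y, g)])
        t_new = 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * t * t))
        mom = (t - 1.0) / t_new
        y = [xn + mom * (xn - xo) for xn, xo in zip(x_new, x)]
        x, t = x_new, t_new
    return x
```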
If the term $\sum_{i=1}^{m}\alpha_{i}$ is fixed, or replaced by a constant upper bound $C$, we obtain a well-studied case of the continuous quadratic knapsack problem (CQKP) $$\min_{\bm{\beta}}~||\bm{\beta}-\bm{\beta}^{0}||^{2}\quad\textrm{\small s.t.}~\sum\nolimits_{i=1}^{n}\beta_{i}\leq C,\ 0\leq\beta_{i}\leq C_{1},$$ where $C_{1}=C/k$. Several efficient methods based on median-selecting or variable-fixing techniques are available (Patriksson, 2008). On the other hand, if $k=1$, all upper-bound constraints are automatically satisfied and can be omitted. This special case has been studied, for example, in (Liu and Ye, 2009) and (Li et al., 2014), both of which achieve $O(n)$ complexity. Unfortunately, none of the above methods can be directly applied to the generalized case (9), because of its unfixed upper-bound constraint on $\bm{\beta}$ when $k>1$. To our knowledge, the only attempt to address the problem of an unfixed upper bound is (Lapin et al., 2015). They solve a similar (but simpler) problem $$\min_{\bm{\beta}}~||\bm{\beta}-\bm{\beta}^{0}||^{2}~~\textrm{\small s.t.}~~0\leq\beta_{j}\leq\frac{1}{k}\sum\nolimits_{i=1}^{n}\beta_{i}$$ based on sorting and exhaustive search, and their method achieves a runtime complexity of $O(n\log(n)+kn)$, which is super-linear, and even quadratic when $k$ and $n$ are linearly dependent. By contrast, our proposed method can be applied to both of the aforementioned special cases with minor changes and retains $O(n)$ complexity. The notable characteristic of our method is its efficient combination of bisection and divide-and-conquer: the former guarantees the worst-case complexity, and the latter significantly reduces the large constant factor of the bisection method. We first introduce the following theorem, which gives a detailed description of the solution of (9). Theorem 3.
$(\bm{\alpha}^{*}\in\mathbb{R}^{m},\bm{\beta}^{*}\in\mathbb{R}^{n})$ is the optimal solution of (9) if and only if there exist dual variables $C^{*}\geq 0$ and $\lambda^{*},\mu^{*}\in\mathbb{R}$ satisfying the following system of linear constraints: $$C^{*}=\sum\nolimits_{i=1}^{m}[\alpha_{i}^{0}-\lambda^{*}]_{+}$$ (10) $$C^{*}=\sum\nolimits_{j=1}^{n}\min\{[\beta_{j}^{0}-\mu^{*}]_{+},C^{*}/k\}$$ (11) $$0=\lambda^{*}+\mu^{*}+\frac{1}{k}\sum\nolimits_{j=1}^{n}\left[\beta_{j}^{0}-\mu^{*}-C^{*}/k\right]_{+}$$ (12) and $\alpha_{i}^{*}=[\alpha_{i}^{0}-\lambda^{*}]_{+}$, $\beta_{j}^{*}=\min\{[\beta_{j}^{0}-\mu^{*}]_{+},{C^{*}}/{k}\}$. Based on Theorem 3, the projection problem can be solved by finding values of the three dual variables $C$, $\lambda$ and $\mu$ that satisfy the above linear system. Here we first propose a basic bisection method which guarantees the worst-case time complexity. A similar method has also been used in (Liu and Ye, 2009). For brevity, we denote by $\alpha_{[i]}^{0}$ and $\beta_{[i]}^{0}$ the $i$-th largest dimensions of $\bm{\alpha^{0}}$ and $\bm{\beta^{0}}$ respectively, and define the functions $C(\lambda)$, $\mu(C)$, $\delta(C)$ and $f(\lambda)$ as follows222Indeed, for some $C$, $\mu(C)$ is not single-valued and thus needs a more precise definition.
Here we omit it for brevity and leave the details to Section 6.6.: $$C(\lambda)=\sum\nolimits_{i=1}^{m}[\alpha_{i}^{0}-\lambda]_{+}$$ (13) $$\mu(C)=\mu\text{ satisfying }(\ref{def mu})$$ (14) $$\delta(C)=\mu(C)+{C}/{k}$$ (15) $$f(\lambda)=k\lambda+k\mu(C(\lambda))+\sum\nolimits_{j=1}^{n}[\beta_{j}^{0}-\delta(C(\lambda))]_{+}.$$ (16) The main idea of leveraging bisection to solve the system in Theorem 3 is to find the root of $f(\lambda)=0$. In order to make bisection work, we need three conditions: $f$ should be continuous; the root of $f$ should be efficiently bracketed in an interval; and the values of $f$ at the two endpoints of this interval should have opposite signs. Fortunately, based on the following three lemmas, all of these requirements can be satisfied. Lemma 1. (Zero case)  $(\bm{0}^{m},\bm{0}^{n})$ is an optimal solution of (9) if and only if $k\alpha_{[1]}^{0}+\sum_{j=1}^{k}\beta_{[j]}^{0}\leq 0$. Lemma 2. (Bracketing $\lambda^{*}$) If $C^{*}>0$, then $\lambda^{*}\in(-\beta_{[1]}^{0},\ \alpha_{[1]}^{0})$. Lemma 3. (Monotonicity and convexity) 1. $C(\lambda)$ is convex, continuous and strictly decreasing on $(-\infty$, $\alpha_{[1]}^{0})$; 2. $\mu(C)$ is continuous and monotonically decreasing on $(0,+\infty)$; 3. $\delta(C)$ is continuous and strictly increasing on $(0,+\infty)$; 4. $f(\lambda)$ is continuous and strictly increasing on $(-\infty,\alpha_{[1]}^{0})$. Furthermore, we can define the inverse function of $C(\lambda)$ as $\lambda(C)$ and rewrite $f(\lambda)$ as $$f(\lambda(C))=k\lambda(C)+k\mu(C)+\sum\nolimits_{j=1}^{n}[\beta_{j}^{0}-\delta(C)]_{+};$$ (17) this is a convex function of $C$, strictly decreasing on $(0,+\infty)$. Lemma 1 deals with the special case of $C^{*}=0$.
Lemma 2 and Lemma 3 jointly ensure that bisection works when $C^{*}>0$: Lemma 2 bounds $\lambda^{*}$, and Lemma 3 shows that $f$ is continuous; since $f$ is also strictly increasing, its values at the two endpoints must have opposite signs. Basic method: bisection & leveraging convexity We start by selecting the current $\lambda$ in the range $(-\beta_{[1]}^{0},\ \alpha_{[1]}^{0})\triangleq(l,u)$. We then compute the corresponding $C$ by (13) in $O(m)$, and use the current $C$ to compute $\mu$ by (14). Computing $\mu$ can be completed in $O(n)$ by a well-designed median-selecting algorithm (Kiwiel, 2007). With the current (i.e. updated) $C$, $\lambda$ and $\mu$ in hand, we can evaluate the sign of $f(\lambda)$ in $O(n)$ and determine the new bound on $\lambda$. In addition, the special case of $C=0$ can be checked using Lemma 1 in $O(m+n)$ by a linear-time $k$-largest-element selection algorithm (Kiwiel, 2005). Since the bound on $\lambda$ is independent of $m$ and $n$, the number of iterations needed to find $\lambda^{*}$ is $\log(\frac{u-l}{\epsilon})$, where $\epsilon$ is the maximum tolerated error. Thus, the worst-case runtime of this algorithm is $O(m+n)$. Furthermore, we also leverage the convexity of $f(\lambda(C))$ and $C(\lambda)$ to further improve this algorithm; please refer to (Liu and Ye, 2009) for more details about the related techniques. Although bisection solves the projections in linear time, it may lead to a slow convergence rate. We further improve the runtime by reducing the constant factor $\log(\frac{u-l}{\epsilon})$. This technique benefits from exploiting the monotonicity of the functions $C(\lambda)$, $\mu(C)$, $\delta(C)$ and $f(\lambda)$ stated in Lemma 3. Notice that our method can also be used for finding the root of an arbitrary piecewise-linear and monotone function, without any requirement of convexity.
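The basic bisection scheme just described can be sketched as follows. This is an illustrative simplification, not the paper's algorithm: $\mu(C)$ is found by an inner bisection instead of the $O(n)$ median-selecting subroutine, and none of the convexity or divide-and-conquer speed-ups are applied, so it is slower but easier to follow.

```python
def project_top_k(alpha0, beta0, k, iters=100):
    """Project (alpha0, beta0) onto Gamma_k = {a >= 0, b >= 0 :
    sum(a) = sum(b), b_j <= sum(a)/k} by bisection on lambda (Theorem 3)."""
    b_top = sorted(beta0, reverse=True)[:k]
    a_max = max(alpha0)
    if k * a_max + sum(b_top) <= 0:          # zero case (Lemma 1)
        return [0.0] * len(alpha0), [0.0] * len(beta0)

    def C_of(lam):                            # Eq. (13)
        return sum(max(a - lam, 0.0) for a in alpha0)

    def mu_of(C):                             # solves Eq. (11) for mu
        lo, hi = min(min(beta0), 0.0) - C, max(beta0)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            s = sum(min(max(b - mid, 0.0), C / k) for b in beta0)
            lo, hi = (mid, hi) if s > C else (lo, mid)
        return 0.5 * (lo + hi)

    lo, hi = -b_top[0], a_max                 # bracketing (Lemma 2)
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        C = C_of(lam)
        if C <= 1e-12:                        # keep C strictly positive
            hi = lam
            continue
        mu = mu_of(C)
        f = k * lam + k * mu + sum(max(b - mu - C / k, 0.0) for b in beta0)
        lo, hi = (lam, hi) if f < 0 else (lo, lam)   # f is increasing
    lam = lo
    C = C_of(lam)
    mu = mu_of(C)
    return ([max(a - lam, 0.0) for a in alpha0],
            [min(max(b - mu, 0.0), C / k) for b in beta0])
```

A point already in $\Gamma_{k}$ should project to itself, and any output must satisfy the equality and upper-bound constraints up to the bisection tolerance.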
Improved method: endpoint divide & conquer Lemma 3 reveals an important chain monotonicity between the dual variables, which can be used to improve the performance of our baseline method. The key steps are summarized in Algorithm 2. Denote the value of a variable $z$ in iteration $t$ by $z_{t}$. For instance, if $\lambda_{t}>\lambda_{t-1}$, from Lemma 3 we have $C_{t}<C_{t-1}$, $\mu_{t}>\mu_{t-1}$ and $\delta_{t}<\delta_{t-1}$. This implies that we can maintain uncertainty intervals for $\lambda$, $C$, $\mu$ and $\delta$ simultaneously: as the interval of $\lambda$ shrinks, the lengths of all four intervals are reduced together. On the other hand, notice that $C(\lambda)$ is in fact a piecewise-linear function (with at most $m+1$ segments), so computing its value only involves a comparison between $\lambda_{t}$ and all of the $\alpha_{i}^{0}$s. By keeping a cache of the $\alpha_{i}^{0}$s and discarding in advance those elements which fall outside the current bound on $\lambda$, in each iteration we can reduce the expected number of comparisons by half. A more complex but similar procedure can also be applied to computing $\mu(C)$, $\delta(C)$ and $f(\lambda)$, because all of these functions are piecewise linear and their main cost is the comparison with $O(m+n)$ breakpoints. As a result, for an approximately linear function with evenly distributed breakpoints, if the first iteration of bisection costs $\gamma(m+n)$ time, the overall runtime of the projection algorithm will be $\gamma(m+n)+\gamma(m+n)/2+...\leq 2\gamma(m+n)$, which is much less than that of the original bisection algorithm, $\log(\frac{u-l}{\epsilon})\gamma(m+n)$. 3.3 Convergence and Computational Complexity Following immediately from the convergence result of Nesterov’s method, we have: Theorem 4.
Let $\bm{\alpha}_{T}$ and $\bm{\beta}_{T}$ be the output of the $\tau$-FPL algorithm after $T$ iterations; then $g(\bm{\alpha}_{T},\bm{\beta}_{T})\leq\min g(\bm{\alpha},\bm{\beta})+\epsilon$ whenever $T\geq O(1/\sqrt{\epsilon})$. Finally, the computational cost of each iteration is dominated by the gradient evaluation and the projection step. Since the complexity of the projection step is $O(m+n)$ and the cost of computing the gradient is $O(d(m+n))$, combining with Theorem 4 we have: to find an $\epsilon$-suboptimal solution, the total computational complexity of $\tau$-FPL is $O(d(m+n)/\sqrt{\epsilon})$. Table 1 compares the computational complexity of $\tau$-FPL with that of some state-of-the-art methods. The order of validation complexity corresponds to the number of hyper-parameters. From this, it is easy to see that $\tau$-FPL is asymptotically more efficient. 3.4 Out-of-Bootstrap Thresholding In the thresholding stage, the task is to identify the boundary between the positive instances and $(1-\tau)$ percent of the negative instances. Though thresholding on the training set is commonly used (Joachims, 1996; Davenport et al., 2010; Scheirer et al., 2013), it may result in overfitting. Hence, we propose an out-of-bootstrap method to find a more accurate and stable threshold. Each time, we randomly split the training set into two sets $\mathcal{S}_{1}$ and $\mathcal{S}_{2}$, train on $\mathcal{S}_{1}$, and select the threshold on $\mathcal{S}_{2}$. The procedure can be run for multiple rounds to make use of all the training data. Once the process is completed, we obtain the final threshold by averaging. The final scoring function, in turn, can be obtained in two ways: learn a scoring function on the full training set, or gather the weights learned in the previous rounds and average them.
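A minimal sketch of the out-of-bootstrap thresholding stage follows. The names and the concrete quantile rule are our illustration; `fit` stands for any scorer-training routine, such as the ranking stage above.

```python
import random
import statistics

def oob_threshold(X, y, fit, tau, rounds=10, seed=0):
    """Out-of-bootstrap thresholding sketch: per round, train the scorer on
    one half of the data and set the threshold on the held-out half at the
    (1 - tau) quantile of the held-out negative scores; average over rounds."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    thresholds = []
    for _ in range(rounds):
        rng.shuffle(idx)
        half = len(idx) // 2
        train, held = idx[:half], idx[half:]
        score = fit([X[i] for i in train], [y[i] for i in train])
        neg = sorted(score(X[i]) for i in held if y[i] < 0)
        if neg:
            q = min(int((1.0 - tau) * len(neg)), len(neg) - 1)
            thresholds.append(neg[q])
    return statistics.mean(thresholds)
```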
This method combines the advantages of out-of-bootstrap and soft-thresholding techniques: accurate error estimation and reduced variance with little sacrifice in bias, and thus fits the setting of thresholding near the risk area. 4 Theoretical Guarantees We now develop a theoretical guarantee for the scoring function, which bounds the probability that a positive instance is ranked below $\tau$ proportion of the negative instances. To this end, we first define $h(\bm{x},f)$, the probability for a negative instance to be ranked above $\bm{x}$ under $f$, i.e. $h(\bm{x},f)=\mathbb{E}_{\bm{x}^{-}\sim\mathbb{P}^{-}}[\mathbb{I}(f(\bm{x})\leq f(\bm{x}^{-}))]$, and then measure the quality of $f$ by $P(f,\tau)=\mathbb{P}_{\bm{x}^{+}\sim\mathbb{P}^{+}}(h(\bm{x}^{+},f)\geq\tau)$, the probability of giving a positive instance a lower score than $\tau$ percent of the negative instances. The following theorem bounds $P(f,\tau)$ by the empirical loss $L_{\bar{k}}$. Theorem 5. Given training data $\mathcal{S}$ consisting of $m$ independent instances from distribution $\mathbb{P}^{+}$ and $n$ independent instances from distribution $\mathbb{P}^{-}$, let $f^{*}$ be the optimal solution of problem (6). Assume $m\geq 12$ and $n\gg s$. Then, for proper $R$ and any $k\leq n$, with probability at least $1-2e^{-s}$, $$P(f^{*},\tau)\leq L_{\bar{k}}+O\left(\sqrt{(s+\log m)/m}\right),$$ (18) where $\tau=k/n+O\left(\sqrt{\log m/n}\right)$ and $L_{\bar{k}}=\frac{1}{m}\sum_{i=1}^{m}l\left(f^{*}(\bm{x}_{i}^{+})-\frac{1}{k}\sum_{j=1}^{k}f^{*}(\bm{x}_{[j]}^{-})\right)$. Theorem 5 implies that if $L_{\bar{k}}$ is upper bounded by $O(\log(m)/m)$, the probability of ranking a positive sample below $\tau$ percent of the negative samples is also bounded accordingly. As $m$ approaches infinity, $P(f^{*},\tau)$ approaches 0, which means that in that case we can almost ensure that, by thresholding at a suitable point, the true-positive rate gets close to 1.
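The quantities $h(\bm{x},f)$ and $P(f,\tau)$ defined above can be estimated from finite samples by their plug-in versions (a sketch of the definitions, not code from the paper):

```python
def empirical_P(scores_pos, scores_neg, tau):
    """Plug-in estimate of P(f, tau): the fraction of positive scores that
    are ranked at or below at least a tau fraction of the negative scores."""
    n = len(scores_neg)

    def h(s):  # plug-in h(x, f): fraction of negatives scoring at least s
        return sum(1 for t in scores_neg if s <= t) / n

    return sum(1 for s in scores_pos if h(s) >= tau) / len(scores_pos)
```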
Moreover, we observe that $m$ and $n$ play different roles in this bound. For instance, it is well known that the largest absolute value of $n$ Gaussian random instances grows only with $\log(n)$. Thus we believe that the growth of $n$ only slightly affects both the largest and the centroid of the top-proportion scores of the negative samples. This leads to the conclusion that increasing $n$ only slightly raises $L_{\bar{k}}$, but significantly reduces the margin between the target $\tau$ and $k/n$. On the other hand, increasing $m$ reduces the upper bound on $P$, thus increasing the chance of finding positive instances at the top. In sum, $n$ and $m$ control $\tau$ and $P$ respectively. 5 Experiment Results 5.1 Effectiveness of the Linear-time Projection We first demonstrate the effectiveness of our projection algorithm. Following the settings of (Liu and Ye, 2009), we randomly sample 1000 samples from the normal distribution ${\cal N}(0,1)$ and solve the projection problem. The comparison method is ibis (Liu and Ye, 2009), an improved bisection algorithm which also makes use of convexity and monotonicity. All experiments are run on an Intel Core i5 processor. As shown in Fig.2, thanks to the efficient reduction of the constant factor, our method outperforms ibis, saving almost $75\%$ of the running time in the limit case. We also solve the projection problem proposed in (Lapin et al., 2015) using a simplified version of our method, and compare it with the method presented in (Lapin et al., 2015) (PTkC), whose complexity is $O(n\log(n)+kn)$. As one can observe from Fig.2(b), our method is linear in complexity with respect to $n$ and does not suffer from the growth of $k$. In the limit case (both large $k$ and $n$), it is more than three orders of magnitude faster than the competitors. 5.2 Ranking Performance Next, we validate the ranking performance of our $\tau$-FPL method, i.e.
scoring and sorting the test samples, and then evaluating the proportion of positive samples ranked above $1-\tau$ proportion of the negative samples. Considering ranking performance independently avoids the practical problem of mismatching the constraint in (2) on the test set, and always offers the optimal threshold. Specifically, we choose (3) as the evaluation and validation criterion. Compared methods include: cost-sensitive SVM (CS-SVM) (Osuna et al., 1997), which has been shown to be a lower-bound approximation of (3); TopPush ranking (Li et al., 2014), which focuses on optimizing the absolute top of the ranking list and is a special case of our model ($\tau=0$); and $\text{SVM}_{\text{tight}}^{\text{pAUC}}$ (Narasimhan and Agarwal, 2013a), a more general method designed for optimizing arbitrary partial-AUC. We test two versions of our algorithm: $\tau$-Rank and $2\tau$-Rank, which correspond to different choices of $\tau$ in the learning scheme. Intuitively, enlarging $\tau$ in the training phase can be seen as a top-down approximation, from an upper bound towards the original objective (2). On the other hand, the reason for choosing $2\tau$ is that, roughly speaking, the average score of the top $2\tau$ proportion of negative samples may be close to the score of the $\tau n$-th largest negative sample. Settings. We evaluate the performance on public benchmark datasets from different domains and of various sizes333https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary. For small-scale datasets ($\leq 10,000$ instances), 30 stratified hold-out tests are carried out, with $2/3$ of the data as the training set and $1/3$ as the test set. For large datasets, we instead run 10 rounds. In each round, hyper-parameters are chosen by 5-fold cross-validation over a grid, and the search scope is extended if the optimum lies on the boundary. Results. Table 2 reports the experimental results. We note that in most cases our proposed method outperforms the other peer methods.
This confirms the theoretical analysis that our methods better exploit the capacity of the model. TopPush is highly competitive in the case of extremely small $\tau$, but gradually loses its advantage as $\tau$ increases. The algorithm of $\text{SVM}_{\text{tight}}^{\text{pAUC}}$ is based on cutting-plane methods with an exponential number of constraints; similar techniques are also used in many other ranking or structured prediction methods, e.g. Structured SVM (Tsochantaridis et al., 2005). The time complexity of this kind of method is $O((m+n)\log(m+n))$, and we found that even for thousands of training samples it is hard to finish the experiments in the allowed time. 5.3 Overall Classification Accuracy In this section we compare the performance of different models by jointly learning the scoring function and the threshold in the training phase, i.e. outputting a classifier. To evaluate a classifier under the maximum tolerance, we use the Neyman-Pearson score (NP-score) (Scott, 2007). The NP-score is defined as $\frac{1}{\tau}\max\{fpr,\tau\}-tpr$, where $fpr$ and $tpr$ are the false-positive rate and true-positive rate of the classifier respectively, and $\tau$ is the maximum tolerance. This measure punishes classifiers whose false-positive rates exceed $\tau$, and the punishment becomes harsher as $\tau\to 0$. Settings. We use a similar setting for classification as for the ranking experiments, i.e., for small-scale datasets, 30 stratified hold-out tests are carried out; for large datasets, we instead run 10 rounds. Comparison baselines include: Cost-Sensitive Logistic Regression (CS-LR), which uses a surrogate function different from that of CS-SVM; Bias-Shifting Support Vector Machine (BS-SVM), which first trains a standard SVM and then tunes the threshold to meet the specified false-positive rate; and cost-sensitive SVM (CS-SVM).
For a complete comparison, we also construct a CS-SVM with our out-of-bootstrap thresholding (CS-SVM-OOB), to eliminate possible performance gains coming from the different thresholding method and to focus on the training algorithm itself. For all comparison methods, the hyper-parameters are selected by 5-fold cross-validation with grid search, aiming to minimize the NP-score, and the search scope is extended when the optimal value lies on the boundary. For our $\tau$-FPL, in the ranking stage the regularization parameter $R$ is selected to minimize (3), and then the threshold is chosen to minimize the NP-score. We test two variants of our algorithm: $\tau$-FPL and $2\tau$-FPL, corresponding to different choices of $\tau$ in the learning scheme. As mentioned previously, enlarging $\tau$ can be seen as a top-down approximation towards the original objective. Results. The NP-score results are given in Table 3. First, we note that our methods achieve the best performance in most of the tests, compared to the various competing methods. Moreover, it is clear that even when using the same method to select the threshold, the performance of the cost-sensitive methods is still limited. Another observation is that all three algorithms that use out-of-bootstrap thresholding can efficiently control the false-positive rate under the constraint. Moreover, the $\tau$-FPL variants are more stable than the other algorithms, which we believe benefits from the accurate splitting of the positive and negative instances and the stable thresholding technique. 5.4 Scalability We study how $\tau$-FPL scales with the number of training examples using the largest dataset, real-sim. In order to simulate the limit situation, we construct six datasets of different sizes by up-sampling the original dataset. The sampling ratios are $\{1,2,2^{2},...,2^{5}\}$, resulting in six datasets with sizes from 72309 to 2313888.
We run the $\tau$-FPL ranking algorithm on these datasets with different $\tau$ and the optimal $R$ (chosen by cross-validation), and report the corresponding training times. The up-sampling ensures that, for a fixed $\tau$, all six datasets share the same optimal regularization parameter $R$, so the data size is the only variable. Figure 3 shows the log-log plot of the training time of $\tau$-FPL versus the size of the training data, where different lines correspond to different $\tau$. It is clear that the training time of $\tau$-FPL is indeed linearly dependent on the number of training data. This is consistent with our theoretical analysis and also demonstrates the scalability of $\tau$-FPL. 6 Proofs and Technical Details In this section, we give all the detailed proofs missing from the main text, along with ancillary remarks and comments. Notation. In the following, we define $z_{[i]}$ to be the $i$-th largest dimension of a vector $\bm{z}=(z_{1},...,z_{N})\in\mathbb{R}^{N}$, define $\alpha_{[0]}^{0}=\beta_{[0]}^{0}=\infty$, $\alpha_{[n+1]}^{0}=\beta_{[n+1]}^{0}=-\infty$, and define $B_{i}^{j}$ as the range $(\beta_{[j]}^{0},\beta_{[i]}^{0}]$ and $B_{i}$ as the abbreviation of $B_{i}^{i+1}$. 6.1 Proof of Theorem 1: NP-Hardness of $\tau$-$OPT_{L}^{\lambda}$ Our proof is based on a Turing reduction of the following Maximum Agreement Problem (MAP) to $\tau$-$OPT_{L}^{\lambda}$. Maximum Agreement Problem. Define $\mathcal{H^{\prime}}=\{f(\bm{x})=\bm{w}^{\top}\bm{x}-b\mid\bm{w}\in\mathbb{R}^{p},b\in\mathbb{R}\}$. We say that $f\in\mathcal{H^{\prime}}$ and a point $(\bm{x},y)\in\mathbb{R}^{p}\times\{+1,-1\}$ reach an agreement if $yf(\bm{x})>0$. Given a data set $D=\{(\bm{x}_{1},y_{1}),...,(\bm{x}_{N},y_{N})\}$ containing $N$ samples, find an $f\in\mathcal{H^{\prime}}$ with the maximum number of agreements reached on $D$. It is clear that MAP is in fact equivalent to the problem of binary classification with 0-1 loss and the hyperplane hypothesis set.
In general, both solving MAP and approximately maximizing agreements to within some constant factor (418/415) are NP-hard (Ben-David et al., 2003). We now introduce the decision-problem version of MAP (DP-MAP). Decision Problem of MAP. Given a data set $D=\{(\bm{x}_{1},y_{1}),...,(\bm{x}_{N},y_{N})\}$ and any $0\leq k\leq N$, does there exist some $f\in\mathcal{H^{\prime}}$ that reaches at least $k$ agreements on $D$? If yes, output one such $f$. If we have an oracle algorithm $O$ for DP-MAP, we can solve MAP by applying bisection search on $k$ and taking the maximum value of $k$ for which $O$ outputs Yes. The overall number of calls to $O$ is at most $\log(N)$. This naive reduction shows that DP-MAP is also NP-hard; more precisely, it is an NP-complete problem. We now consider solving DP-MAP via $\tau$-$OPT_{L}^{\lambda}$. That is, for fixed $0<\tau<1$, $\lambda>0$ and surrogate $L(\cdot)$, we have an oracle algorithm for $\tau$-$OPT_{L}^{\lambda}$, denoted by $O$. We need to design an algorithm such that, for all $0\leq k\leq N$ and any data set $D$ of $N$ $p$-dimensional points, we can determine whether there exists a hyperplane reaching at least $k$ agreements on $D$ by polynomially many calls to $O$. Case-1: $N-\lfloor\tau N\rfloor<k\leq N$.  In this case, we construct the following dataset $D_{O}$ as input to $O$: $$m=1,\quad n=N+A,\quad d=p+1,$$ $$\bm{x}^{+}=\bm{0}^{d},\quad\bm{x}_{i}^{-}=(-y_{i}\bm{x}_{i}^{\top},-y_{i}),~~i=1,...,N,\quad\bm{x}_{N+j}^{-}=\bm{0}^{d},~~j=1,...,A.$$ Here, $A\in\mathbb{N}$ is the smallest non-negative integer that satisfies $$g(A)\triangleq\lfloor\tau(N+A)\rfloor-A=N-k.$$ (19) We give some properties of $g(A)$. Lemma 4.
(Properties of $g:\mathbb{N}\rightarrow\mathbb{Z}$) 1. $0\leq g(A)-g(A+1)\leq 1$; 2. $g(A)\leq 0$ when $A\geq\frac{\tau}{1-\tau}N$; 3. For any integer $T\in[0,g(0)]$, there exists $A=\mathcal{O}(N)$ such that $g(A)=T$. All of these are easy to verify, so we omit the details. Combining these properties with the fact that $0\leq N-k<g(0)$, we know that there must exist some $A\in[0,\lceil\frac{\tau}{1-\tau}N\rceil]$ satisfying (19), and thus the size of the dataset constructed above is linearly related to $N$ and $p$. We now introduce the following lemma. Lemma 5. There exists a hyperplane reaching at least $k$ agreements iff the minimum value that $O$ finds is less than $L(0)$. Proof. On the one hand, if there exists a hyperplane $f_{0}(\bm{x})=\bm{w}_{0}^{\top}\bm{x}+b_{0}\in\mathcal{H^{\prime}}$ reaching at least $k$ agreements on $D$, we know $|\mathcal{T}|\leq N-k$, where $\mathcal{T}=\{t\mid(\bm{w}_{0}^{\top},b_{0})(y_{t}\bm{x_{t}}^{\top},y_{t})^{\top}\leq 0\}$. Define $\bm{w_{1}}=\frac{\lambda(\bm{w}_{0}^{\top},b_{0})^{\top}}{||(\bm{w}_{0}^{\top},b_{0})^{\top}||}$. Now $\bm{w}_{1}\in\mathcal{H}^{d}$, and at most $N-k$ different values of $t\in\{1,...,N\}$ satisfy $$\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{t}^{-}\leq 0.$$ (20) Note that for any $j=N+1,...,N+A$ we have $\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{j}^{-}=0$; combining these two observations, at most $N-k+A$ different values of $t\in\{1,...,N+A\}$ satisfy (20).
Thus, $$\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[N-k+A+1]}^{-}>0$$ $$\Rightarrow L(\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[N-k+A+1]}^{-})<L(0)$$ $$\Rightarrow L(\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[\lfloor\tau(N+A)\rfloor+1]}^{-})<L(0)$$ $$\Rightarrow\min\limits_{\bm{w}\in\mathcal{H}^{d}}L(\bm{w}^{T}\bm{x}^{+}-\bm{w}^{T}\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})<L(0).$$ The LHS of the last inequality is exactly the minimum value that $O$ outputs. On the other hand, the above argument is easily reversible, which completes the proof of the lemma. ∎ By Lemma 5, we can determine whether there exists a hyperplane reaching at least $k$ agreements by calling $O$ once. If the output minimum value is less than $L(0)$, the hyperplane that $O$ learned corresponds exactly to a hyperplane reaching enough agreements on $D$; otherwise there is no such hyperplane. We thus complete the reduction. Case-2: $0\leq k\leq N-\lfloor\tau N\rfloor$. The dataset used here as input to $O$ is $$m=1,\quad n=N+A,\quad d=p+2,$$ $$\bm{x}^{+}=\bm{0}^{d},\quad\bm{x}_{i}^{-}=(-y_{i}\bm{x}_{i}^{\top},-y_{i},0)^{\top},~~i=1,...,N,\quad\bm{x}_{N+j}^{-}=(\bm{0}^{p+1\top},-1)^{\top},~~j=1,...,A.$$ $A\in\mathbb{N}$ is the smallest non-negative integer that satisfies $$h(A)\triangleq\lfloor\tau(N+A)\rfloor=N-k.$$ (21) Lemma 6. (Properties of $h:\mathbb{N}\rightarrow\mathbb{N^{+}}$) 1. $0\leq h(A+1)-h(A)\leq 1$; 2. $h(A)>N$ when $A\geq\frac{1-\tau}{\tau}N+\frac{1}{\tau}$; 3.
For any integer $T\in[\lfloor\tau N\rfloor,N]$, there exists $A=\mathcal{O}(N)$ such that $h(A)=T$. Combining Lemma 6 with the fact that $\lceil\tau N\rceil\leq N-k\leq N$, we know that the size of the dataset constructed above is linearly related to $N$ and $p$. The claim of Lemma 5 also holds in this case; we give a proof sketch below. Proof. We follow the same definitions of $f_{0}$ and $\mathcal{T}$ as in the proof of Case-1. Define $\bm{w_{1}}=\frac{\lambda(\bm{w}_{0}^{\top},b_{0},1)^{\top}}{||(\bm{w}_{0}^{\top},b_{0},1)^{\top}||}$. Now $\bm{w}_{1}\in\mathcal{H}^{d}$ and we have $$\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{i}^{-}=(\bm{w}_{0}^{\top},b_{0})(y_{i}\bm{x_{i}}^{\top},y_{i})^{\top},~~~\forall i=1,...,N$$ $$\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{j}^{-}=1,~~~\forall j=N+1,...,N+A.$$ Thus, (20) holds for at most $N-k$ different values of $t$ in $\{1,...,N+A\}$. This implies $$\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[N-k+1]}^{-}>0$$ $$\Rightarrow L(\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[N-k+1]}^{-})<L(0)$$ $$\Rightarrow L(\bm{w}_{1}^{T}\bm{x}^{+}-\bm{w}_{1}^{T}\bm{x}_{[\lfloor\tau(N+A)\rfloor+1]}^{-})<L(0)$$ $$\Rightarrow\min\limits_{\bm{w}\in\mathcal{H}^{d}}L(\bm{w}^{T}\bm{x}^{+}-\bm{w}^{T}\bm{x}_{[\lfloor\tau n\rfloor+1]}^{-})<L(0).$$ The above argument can easily be reversed for the other direction; we omit the details. ∎ Combining Case-1 and Case-2 completes the reduction from DP-MAP to $\tau$-$OPT_{L}^{\lambda}$. Since DP-MAP is NP-complete, we conclude that $\tau$-$OPT_{L}^{\lambda}$ must be NP-hard. Remark 1. It is clear that the above reduction cannot be used for $\tau=0$.
Indeed, for a convex surrogate $L(\cdot)$, the objective of $0$-$OPT_{L}^{\lambda}$ is convex and its global minimum can be obtained efficiently; see (Li et al., 2014) for more details. Remark 2. In the problem definition and the above proof we implicitly assume that the minimum of $\tau$-$OPT_{L}^{\lambda}$ can be attained. In the general case, by considering the decision problem of $\tau$-$OPT_{L}^{\lambda}$, one can complete a very similar reduction; we omit the details for simplicity. 6.2 Proof of Proposition 2 $\bar{R}_{0}\leq\bar{R}_{1}$ follows by combining the relationship between the mean and the maximum value with the fact that $L(\cdot)$ is an upper bound of $\mathbb{I}(\cdot)$. We now prove $\bar{R}_{1}\leq\bar{R}_{2}$. Let $k=\lfloor\tau n\rfloor+1$. For any feasible $b$ we have $$kL(0)\geq k>\tau n\geq\sum_{j=1}^{n}L(b-f(\bm{x}_{j}^{-}))~\text{(constraint on FPR)}$$ $$\geq\sum_{j=1}^{k}L(b-f(\bm{x}_{[j]}^{-}))~\text{(nonnegativity of }L\text{)}$$ $$\geq kL\left(\frac{1}{k}\sum_{j=1}^{k}(b-f(\bm{x}_{[j]}^{-}))\right)~\text{(Jensen's inequality)}$$ $$=kL\left(b-\frac{1}{k}\sum_{j=1}^{k}f(\bm{x}_{[j]}^{-})\right),$$ which, by the monotonicity of $L$, implies $$b>\frac{1}{k}\sum_{j=1}^{k}f(\bm{x}_{[j]}^{-}).$$ Thus $$\bar{R}_{2}=\min_{b\in\mathbb{R}}\frac{1}{m}\sum_{i=1}^{m}L(f(\bm{x}_{i}^{+})-b)\geq\frac{1}{m}\sum\nolimits_{i=1}^{m}L\left(f(\bm{x}_{i}^{+})-\frac{1}{\lfloor\tau n\rfloor+1}\sum_{j=1}^{\lfloor\tau n\rfloor+1}f(\bm{x}_{[j]}^{-})\right)=\bar{R}_{1}.$$ 6.3 Proof of Theorem 2 Since the truncated quadratic loss is non-increasing and differentiable, it can be rewritten in its convex conjugate form, that is, $$l(z)=\max\limits_{\alpha\leq 0}\ \{\alpha z-l_{*}(\alpha)\},$$ (22) where
$l_{*}(\alpha)$ is the convex conjugate of $l$. Based on this, we can rewrite problem (6) as $$\displaystyle\min\limits_{\bm{w}}\max_{\alpha\leq 0}\sum_{i=1}^{m}\alpha_{i}(\bm{w}^{\top}\bm{x}_{i}^{+}-\frac{1}{k}\sum_{j=1}^{k}\bm{w}^{\top}\bm{x}_{[j]}^{-})-\sum_{i=1}^{m}l_{*}(\alpha_{i})+\frac{mR}{2}||\bm{w}||^{2}$$ where $\bm{\alpha}=(\alpha_{1},...,\alpha_{m})^{\top}$ is the dual variable. On the other hand, it is easy to verify that, for $\bm{t}=(t_{1},...,t_{n})^{\top}\in\mathbb{R}^{n}$, $$\sum_{i=1}^{k}t_{[i]}=\max\limits_{\bm{p}\in\Omega}\ \bm{p}^{\top}\bm{t}$$ with $\Omega=\{\bm{p}\mid\bm{0}\leq\bm{p}\leq\bm{1},\bm{1_{n}}^{\top}\bm{p}=k\}$. By substituting this into (6.3), the problem becomes $$\displaystyle\min\limits_{\bm{w}}\max\limits_{\bm{\alpha}\leq\bm{0}^{m},\bm{p}\in\Omega}\sum_{i=1}^{m}\alpha_{i}(\bm{w}^{\top}\bm{x}_{i}^{+}-\frac{1}{k}\sum_{j=1}^{n}p_{j}\bm{w}^{\top}\bm{x}_{j}^{-})-\sum_{i=1}^{m}l_{*}(\alpha_{i})+\frac{mR}{2}||\bm{w}||^{2}$$ Now, defining $\beta_{j}=\frac{1}{k}p_{j}\sum_{i=1}^{m}\alpha_{i}$, the problem above becomes $$\displaystyle\min\limits_{\bm{w}}\max\limits_{\bm{\alpha}\leq\bm{0}^{m},\bm{\beta}\leq\bm{0}^{n}}$$ $$\displaystyle\sum_{i=1}^{m}\alpha_{i}\bm{w}^{\top}\bm{x}_{i}^{+}-\sum_{j=1}^{n}\beta_{j}\bm{w}^{\top}\bm{x}_{j}^{-}-\sum_{i=1}^{m}l_{*}(\alpha_{i})+\frac{mR}{2}||\bm{w}||^{2}$$ $$\displaystyle s.t.$$ $$\displaystyle\sum_{i=1}^{m}\alpha_{i}=\sum_{j=1}^{n}\beta_{j},\beta_{j}\geq\frac{1}{k}\sum_{i=1}^{m}\alpha_{i}$$ Note that this change of variables keeps the two problems equivalent. Since the objective above is convex in $\bm{w}$ and jointly concave in $\bm{\alpha}$ and $\bm{\beta}$, and its feasible domain is convex, the strong max-min property (Boyd and Vandenberghe, 2004) holds, so the min and max can be swapped. 
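The variational form of the top-$k$ sum used above, $\sum_{i=1}^{k}t_{[i]}=\max_{\bm{p}\in\Omega}\bm{p}^{\top}\bm{t}$, is easy to check numerically. A minimal sketch (hypothetical random data, numpy only): the indicator of the $k$ largest entries is feasible and attains the top-$k$ sum, and convex combinations of such indicators (which stay in $\Omega$) never exceed it.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 12, 4
t = rng.normal(size=n)

top_k_sum = np.sort(t)[::-1][:k].sum()   # sum of the k largest entries

# p = indicator of the k largest entries is a vertex of Omega: 0<=p<=1, 1^T p = k
p_star = np.zeros(n)
p_star[np.argsort(t)[::-1][:k]] = 1.0
assert np.isclose(p_star @ t, top_k_sum)

# convex combinations of k-subset indicators stay in Omega and do no better
for _ in range(200):
    idx = rng.choice(n, size=k, replace=False)
    q = np.zeros(n)
    q[idx] = 1.0                          # another vertex of Omega
    lam = rng.uniform()
    p = lam * p_star + (1 - lam) * q      # still satisfies 0<=p<=1, 1^T p = k
    assert abs(p.sum() - k) < 1e-12
    assert p @ t <= top_k_sum + 1e-9
```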
After swapping, we first consider the inner minimization subproblem over $\bm{w}$, that is, $$\min\limits_{\bm{w}}\sum_{i=1}^{m}\alpha_{i}\bm{w}^{\top}\bm{x}_{i}^{+}-\sum_{j=1}^{n}\beta_{j}\bm{w}^{\top}\bm{x}_{j}^{-}+\frac{mR}{2}||\bm{w}||^{2}$$ where we omit the terms that do not depend on $\bm{w}$. This is an unconstrained quadratic programming problem, whose solution is $\bm{w}^{*}=-\frac{1}{mR}(\bm{\alpha}^{\top}\bm{X}^{+}-\bm{\beta}^{\top}\bm{X}^{-})^{\top}$, and the minimum value is $$-\frac{1}{2mR}||\bm{\alpha}^{\top}\bm{X}^{+}-\bm{\beta}^{\top}\bm{X}^{-}||^{2}$$ (23) Then, we consider the maximization over $\bm{\alpha}$ and $\bm{\beta}$. By replacing them with $-\bm{\alpha}$ and $-\bm{\beta}$, we obtain the conclusion of Theorem 2. ∎ 6.4 Proof of Theorem 3 Let $\lambda,\mu,u_{i}\geq 0,v_{j}\geq 0,\omega_{j}\geq 0$ be dual variables, and treat the common sum $C=\sum_{i=1}^{m}\alpha_{i}=\sum_{j=1}^{n}\beta_{j}$ as an additional primal variable. Then the Lagrangian function of (9) can be written as $$\displaystyle\mathcal{L}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}||\bm{\alpha}-\bm{\alpha}_{0}||^{2}+\frac{1}{2}||\bm{\beta}-\bm{\beta}_{0}||^{2}+\lambda(\sum_{i=1}^{m}\alpha_{i}-C)+$$ $$\displaystyle\mu(\sum_{j=1}^{n}\beta_{j}-C)-\sum_{i=1}^{m}u_{i}\alpha_{i}-\sum_{j=1}^{n}v_{j}\beta_{j}+\sum_{j=1}^{n}\omega_{j}(\beta_{j}-\frac{C}{k}).$$ The KKT conditions are $$\displaystyle\frac{\partial\mathcal{L}}{\partial\alpha_{i}^{*}}$$ $$\displaystyle=$$ $$\displaystyle\alpha_{i}^{*}-\alpha_{i}^{0}+\lambda-u_{i}=0$$ (24) $$\displaystyle\frac{\partial\mathcal{L}}{\partial\beta_{j}^{*}}$$ $$\displaystyle=$$ $$\displaystyle\beta_{j}^{*}-\beta_{j}^{0}+\mu-v_{j}+\omega_{j}=0$$ (25) $$\displaystyle\frac{\partial\mathcal{L}}{\partial C}$$ $$\displaystyle=$$ $$\displaystyle-\lambda-\mu-\frac{1}{k}\sum_{j=1}^{n}\omega_{j}=0$$ (26) $$\displaystyle 0$$ $$\displaystyle=$$ $$\displaystyle u_{i}\alpha_{i}^{*}$$ (27) $$\displaystyle 0$$ $$\displaystyle=$$ $$\displaystyle v_{j}\beta_{j}^{*}$$ 
(28) $$\displaystyle 0$$ $$\displaystyle=$$ $$\displaystyle\omega_{j}(\beta_{j}^{*}-\frac{C}{k})$$ (29) $$\displaystyle C$$ $$\displaystyle=$$ $$\displaystyle\sum_{i=1}^{m}\alpha_{i}^{*}$$ (30) $$\displaystyle C$$ $$\displaystyle=$$ $$\displaystyle\sum_{j=1}^{n}\beta_{j}^{*}$$ (31) $$\displaystyle\bm{0}$$ $$\displaystyle\leq$$ $$\displaystyle\bm{\alpha}^{*},\bm{\beta}^{*},\bm{u},\bm{v},\bm{\omega}$$ (32) Consider $\beta_{j}^{*}$. By (25) and (28), if $v_{j}=0$, then $\beta_{j}^{*}=\beta_{j}^{0}-\mu-\omega_{j}\geq 0$; otherwise $v_{j}>0$, and $0=\beta_{j}^{*}>\beta_{j}^{0}-\mu-\omega_{j}$. This implies that $\beta_{j}^{*}=[\beta_{j}^{0}-\mu-\omega_{j}]_{+}$. A similar analysis shows that $\alpha_{i}^{*}=[\alpha_{i}^{0}-\lambda]_{+}$. Further, by (29), if $\omega_{j}=0$, then $\beta_{j}^{*}=[\beta_{j}^{0}-\mu]_{+}\leq\frac{C}{k}$; otherwise $\omega_{j}>0$, so $\frac{C}{k}=\beta_{j}^{*}<[\beta_{j}^{0}-\mu]_{+}$. Thus $\beta_{j}^{*}=\min\{[\beta_{j}^{0}-\mu]_{+},\frac{C}{k}\}$. Substituting the expressions for $\beta_{j}^{*}$ and $\alpha_{i}^{*}$ into (30) and (31), we see that constraints (10) and (11) hold together with the closed-form solutions for $\bm{\alpha}$ and $\bm{\beta}$. Now we verify (12). Case 1. First consider $C>0$. If $\omega_{j}=0$, then $\frac{C}{k}\geq\beta_{j}^{*}=\beta_{j}^{0}-\mu$; otherwise $\omega_{j}>0$, so $0<\frac{C}{k}=\beta_{j}^{*}=[\beta_{j}^{0}-\mu-\omega_{j}]_{+}$ and thus $\beta_{j}^{0}-\mu-\omega_{j}=\frac{C}{k}$. In summary, $\omega_{j}=[\beta_{j}^{0}-\mu-\frac{C}{k}]_{+}$, and thus (12) holds by (26). Case 2. Now suppose $C=0$. In this case, by (30) and (31), we have $\bm{\alpha}^{*}=\bm{\beta}^{*}=\bm{0}$. 
A straightforward case analysis reduces the KKT conditions above to $$\displaystyle\lambda$$ $$\displaystyle\geq$$ $$\displaystyle\alpha_{[1]}^{0}$$ (33) $$\displaystyle 0$$ $$\displaystyle\geq$$ $$\displaystyle\lambda+\mu+\frac{1}{k}\sum_{j=1}^{n}[\beta_{j}^{0}-\mu]_{+}$$ (34) Note that there is no upper-bound constraint on $\lambda$. Thus, if $\lambda$ and $\mu$ satisfy the simplified KKT conditions, then by choosing a large enough $\lambda^{\prime}\geq\lambda$, both (34) and (12) hold, and the optimal solution is still the zero vector. This completes the proof of necessity. Finally, since the KKT conditions are necessary and sufficient for a convex problem and the transformations above are reversible, the proof of sufficiency is complete. ∎ 6.5 Proof of Lemma 1 First suppose $(\bm{\alpha}^{*},\bm{\beta}^{*})=(\bm{0}^{m},\bm{0}^{n})$. Denote the corresponding dual variables by $C^{*},\lambda^{*}$ and $\mu^{*}$. First, we have $$\bm{\alpha}^{*}=\bm{0}^{m}\Leftrightarrow\lambda^{*}\geq\alpha_{[1]}^{0}$$ (35) Moreover, $(\bm{\alpha}^{*},\bm{\beta}^{*})=(\bm{0}^{m},\bm{0}^{n})$ implies $C^{*}=0$, and equality (11) holds automatically for arbitrary $\mu\in\mathbb{R}$. Thus the only constraint on $\mu^{*}$ is (12), i.e. $$k\lambda^{*}+k\mu^{*}+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu^{*}]_{+}=0.$$ (36) Consider $f(\mu)=k\mu+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu]_{+}$. It is easy to verify that $f(\mu)$ is continuous and piecewise-linear on $\mathbb{R}$. Moreover, if $\mu\in B_{t}$, we can write $f(\mu)$ as $$\displaystyle f(\mu)=\sum_{i=1}^{t}\beta_{[i]}^{0}+(k-t)\mu$$ (37) Thus, $f(\mu)$ is strictly decreasing on $B_{k+1}^{n+1}$, strictly increasing on $B_{0}^{k}$, and the minimum is attained on $B_{k}$, where its value is $\sum_{i=1}^{k}\beta_{[i]}^{0}$. 
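The piecewise-linear structure of $f(\mu)=k\mu+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu]_{+}$ and its minimum value $\sum_{i=1}^{k}\beta_{[i]}^{0}$, attained at $\mu=\beta_{[k]}^{0}$, can be verified directly. A small numpy sketch with hypothetical data (a grid scan plus the exact evaluation at the claimed minimizer):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 20, 6
beta0 = rng.normal(size=n)
f = lambda mu: k * mu + np.maximum(beta0 - mu, 0.0).sum()

beta_sorted = np.sort(beta0)[::-1]       # beta_[1] >= ... >= beta_[n]
mus = np.linspace(beta0.min() - 1, beta0.max() + 1, 200001)
vals = np.array([f(mu) for mu in mus])

# the minimum equals the sum of the k largest beta's ...
assert np.isclose(vals.min(), beta_sorted[:k].sum(), atol=1e-3)
# ... and is attained at mu = beta_[k] (inside the segment B_k)
assert np.isclose(f(beta_sorted[k - 1]), beta_sorted[:k].sum())
```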
Combining this conclusion with (35) and (36), we have $$\displaystyle k\alpha_{[1]}^{0}+\sum_{j=1}^{k}\beta_{[j]}^{0}$$ $$\displaystyle=$$ $$\displaystyle\min\limits_{\lambda\geq\alpha_{[1]}^{0},\,\mu}\{k\lambda+k\mu+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu]_{+}\}$$ $$\displaystyle\leq$$ $$\displaystyle k\lambda^{*}+k\mu^{*}+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu^{*}]_{+}$$ $$\displaystyle=$$ $$\displaystyle 0.$$ This proves the necessity. On the other hand, if $k\alpha_{[1]}^{0}+\sum_{j=1}^{k}\beta_{[j]}^{0}\leq 0$, then by setting $C=0,\mu=\beta_{[k]}^{0}$ and $\lambda=-\frac{1}{k}\sum_{j=1}^{k}\beta_{[j]}^{0}$, one can check that all the optimality conditions in Theorem 3 are satisfied, with corresponding $(\bm{\alpha}^{*},\bm{\beta}^{*})=(\bm{0},\bm{0})$. This completes the proof.$\qed$ 6.6 Redefining $\mu(C)$ As mentioned in the footnote, we need to redefine the function $\mu(C)$ to ensure that it is single-valued. We begin with the following lemma. Lemma 7. Consider (11). Denote $\ C_{0}=k(\beta_{[k]}^{0}-\beta_{[k+1]}^{0})$. 1. If $\ C>C_{0}$, then there exists a unique $\mu$ satisfying (11). 2. If $\ 0<C\leq C_{0}$, then any $\mu\in[\beta_{[k+1]}^{0},\beta_{[k]}^{0}-\frac{C}{k}]$ satisfies (11). Proof. For a fixed $C>0$, consider $$g(\mu)=\sum_{j=1}^{n}\min\{[\beta_{j}^{0}-\mu]_{+},\frac{C}{k}\}-C$$ (38) Clearly $g$ is continuous and decreasing, with range $[-C,\frac{(n-k)C}{k}]$; since $n>k$, the upper endpoint is positive, so by the intermediate value theorem $g=0$ must have a root. Another fact is that $g$ is piecewise-linear with at most $2n$ breakpoints: $\{\beta_{i}^{0},\beta_{i}^{0}-\frac{C}{k}\}_{i=1}^{n}$. Thus if the root of $g$ is not unique for some $C$, all of these roots must lie on a continuous segment of slope 0; conversely, if some root lies on a segment of nonzero slope, no other root exists. Let $\mu^{\prime}$ be a root of $g$; we first show that $\mu^{\prime}\in(\beta_{[k+1]}^{0}-\frac{C}{k},\beta_{[k]}^{0})$. 
Notice that $$\begin{split}&\displaystyle g(\beta_{[k]}^{0})=\sum_{j=k}^{n}0+\sum_{j=1}^{k-1}\min\{[\beta_{[j]}^{0}-\beta_{[k]}^{0}]_{+},\frac{C}{k}\}-C\leq\sum_{j=1}^{k-1}\frac{C}{k}-C=-\frac{C}{k}<0,\\ &\displaystyle g(\beta_{[k+1]}^{0}-\frac{C}{k})\geq\sum_{j=1}^{k+1}\min\{[\beta_{[j]}^{0}-\beta_{[k+1]}^{0}+\frac{C}{k}]_{+},\frac{C}{k}\}-C\geq\sum_{j=1}^{k+1}\frac{C}{k}-C=\frac{C}{k}>0.\end{split}$$ Thus the claim above holds by the intermediate value theorem and the monotonicity of $g$. Suppose $\mu^{\prime}\in B_{i}$ and $\mu^{\prime}+\frac{C}{k}\in B_{j}$; then $j\leq k\leq i$, and $g$ can be written as $$0=g(\mu^{\prime})=\sum_{t=j+1}^{i}\beta_{[t]}^{0}-(i-j)\mu^{\prime}-(1-\frac{j}{k})C$$ (39) Now we can prove the claims in Lemma 7. Case 1. If $C>C_{0}$, we know that $\beta_{[k+1]}^{0}>\beta_{[k]}^{0}-\frac{C}{k}$, so $B_{k}$ and $B_{k}-\frac{C}{k}$ are disjoint. This means $i\neq j$: if $i=j$, then since $j\leq k\leq i$ we would have $i=k=j$, and $\mu^{\prime}$ would belong to $B_{k}$ and $B_{k}-\frac{C}{k}$ simultaneously, a contradiction. By (39) the segment on which $\mu^{\prime}$ lies then has nonzero slope, so $\mu^{\prime}$ is the unique root of $g$. Case 2. If $C\leq C_{0}$, we know that $\beta_{[k+1]}^{0}\leq\beta_{[k]}^{0}-\frac{C}{k}<\beta_{[k]}^{0}$, and thus for every $\mu^{\prime}\in[\beta_{[k+1]}^{0},\beta_{[k]}^{0}-\frac{C}{k}]$ we have $i=k=j$ and (39) is an identity. Hence the root set is exactly the interval $[\beta_{[k+1]}^{0},\beta_{[k]}^{0}-\frac{C}{k}]$. This completes the proof. ∎ Remark. Note that in fact, in the case $C>C_{0}$, we can obtain a stronger bound on $\mu^{\prime}$: $\mu^{\prime}\in(\beta_{[k]}^{0}-\frac{C}{k},\beta_{[k+1]}^{0})$, and thus $j<k<i$. 
This is based on the fact that $$\displaystyle g(\beta_{[k+1]}^{0})$$ $$\displaystyle=$$ $$\displaystyle\sum_{j=k+1}^{n}0+(\beta_{[k]}^{0}-\beta_{[k+1]}^{0})+\sum_{j=1}^{k-1}\min\{[\beta_{[j]}^{0}-\beta_{[k+1]}^{0}]_{+},\frac{C}{k}\}-C$$ $$\displaystyle<$$ $$\displaystyle\frac{C}{k}+\sum_{j=1}^{k-1}\frac{C}{k}-C=0,$$ $$\displaystyle g(\beta_{[k]}^{0}-\frac{C}{k})$$ $$\displaystyle\geq$$ $$\displaystyle\min\{[\beta_{[k+1]}^{0}-\beta_{[k]}^{0}+\frac{C}{k}]_{+},\frac{C}{k}\}+\sum_{j=1}^{k}\min\{[\beta_{[j]}^{0}-\beta_{[k]}^{0}+\frac{C}{k}]_{+},\frac{C}{k}\}-C$$ $$\displaystyle>$$ $$\displaystyle 0+\sum_{j=1}^{k}\frac{C}{k}-C=0.\qed$$ Based on Lemma 7, we redefine $\mu(C)$ as follows: $$\displaystyle\mu(C)$$ $$\displaystyle=$$ $$\displaystyle\left\{\begin{array}[]{ll}\beta_{[k+1]}^{0}&C\in(0,C_{0})\\ \\ \mu\ satisfying\ (\ref{def mu})&C\in[C_{0},+\infty)\end{array}\right.$$ (40) This function is single-valued, and the discussion below is based on this new formulation. 6.7 Proof of Lemma 2 When $C^{*}>0$, it is obvious that $\lambda^{*}<\alpha_{[1]}^{0}$. Now we consider the claimed lower bound. On the one hand, if $\mu^{*}+\frac{C^{*}}{k}>\beta_{[1]}^{0}$, then for all $j\leq n$, $\beta_{j}^{0}\leq\beta_{[1]}^{0}<\mu^{*}+\frac{C^{*}}{k}$. Thus $$\displaystyle 0=f(C^{*})=k\lambda^{*}+k\mu^{*}+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu^{*}-\frac{C^{*}}{k}]_{+}=k(\lambda^{*}+\mu^{*})$$ and then $$\displaystyle 0<C^{*}=\sum_{j=1}^{n}\min\{[\beta_{j}^{0}-\mu^{*}]_{+},\frac{C^{*}}{k}\}=\sum_{j=1}^{n}[\beta_{j}^{0}+\lambda^{*}]_{+}$$ The last equality implies that $\lambda^{*}>-\beta_{[1]}^{0}$. On the other hand, consider the case $\mu^{*}+\frac{C^{*}}{k}\leq\beta_{[1]}^{0}$. 
According to the proof of Lemma 7, we know that $\mu^{*}+\frac{C^{*}}{k}>\beta_{[k+1]}^{0}$, thus $$\displaystyle 0=f(C^{*})$$ $$\displaystyle=$$ $$\displaystyle k\lambda^{*}+k\mu^{*}+\sum_{j=1}^{n}[\beta_{j}^{0}-\mu^{*}-\frac{C^{*}}{k}]_{+}$$ $$\displaystyle=$$ $$\displaystyle k\lambda^{*}+k\mu^{*}+\sum_{j=1}^{k}[\beta_{[j]}^{0}-\mu^{*}-\frac{C^{*}}{k}]_{+}$$ $$\displaystyle\leq$$ $$\displaystyle k\lambda^{*}+k\mu^{*}+k[\beta_{[1]}^{0}-\mu^{*}-\frac{C^{*}}{k}]_{+}$$ $$\displaystyle=$$ $$\displaystyle k\lambda^{*}+k\mu^{*}+k(\beta_{[1]}^{0}-\mu^{*}-\frac{C^{*}}{k})$$ $$\displaystyle=$$ $$\displaystyle k\lambda^{*}+k\beta_{[1]}^{0}-C^{*}$$ $$\displaystyle<$$ $$\displaystyle k\lambda^{*}+k\beta_{[1]}^{0}$$ This also means $\lambda^{*}>-\beta_{[1]}^{0}$. ∎ 6.8 Proof of Lemma 3 Our proof is based on a detailed analysis of each function’s sub-gradient. The correctness of claim 1 is verified in (Liu and Ye, 2009), so we focus only on claims 2, 3 and 4. Due to space limitations, we only detail the key points. Consider $\mu(C)$. First, according to Lemma 7 and the redefinition (40), $\mu(C)$ is well defined on $(0,+\infty)$. We claim that $\mu(C)$ is continuous: it is not difficult to check that $\mu(C)$ is piecewise-linear and continuous at each of its breakpoints, hence continuous. To verify that $\mu(C)$ is non-increasing in $C$, it suffices to show that the slope of every segment of $\mu$ is non-positive. As in the proof of Lemma 7, suppose that $\mu(C)\in B_{i}$ and $\mu(C)+\frac{C}{k}\in B_{j}$; then $j\leq k\leq i$, and we obtain a linear relation between $C$ and $\mu$: $$\displaystyle(i-j)\mu=\sum_{t=j+1}^{i}\beta_{[t]}^{0}-(1-\frac{j}{k})C$$ Thus, if $C>C_{0}$, by the remark after Lemma 7 we have $i>k>j$, and the slope of $\mu$ is $-\frac{k-j}{k(i-j)}<0$. Otherwise, for $C\leq C_{0}$, the definition of $\mu$ gives slope 0. 
In conclusion, $\mu(C)$ is strictly decreasing on $[C_{0},+\infty)$ and non-increasing on $(0,+\infty)$. A similar analysis shows that $\delta(C)$ is also piecewise-linear with at most $O(n)$ breakpoints, and the slope of each segment is $\frac{i-k}{k(i-j)}$ (for $i\neq j$), which is strictly positive; in the case $C\leq C_{0}$, the slope is $\frac{1}{k}>0$. This leads to the conclusion that $\delta$ is strictly increasing on $(0,+\infty)$. Finally, consider $f(\lambda)$. Because both $\mu$ and $-\tau$ are decreasing in $\lambda$, it is clear that $f$ is strictly decreasing on $(-\infty,+\infty)$. Now we prove the convexity of $f(C)$. We show that both $\lambda(C)$ and $T(C)=k\mu+\sum_{j=1}^{n}[\beta_{j}^{0}-\delta]_{+}$ are convex functions. The convexity of $\lambda$ is guaranteed in (Liu and Ye, 2009), so we only discuss the convexity of $T(C)$. If $C\geq C_{0}$, reusing the definitions of $i$ and $j$ above and defining $x=i-k>0$, $y=k-j>0$, one can verify that the sub-gradient of $f(C)$ is $$\displaystyle f^{\prime}=\frac{xy}{k(x+y)}-1=\frac{1}{\frac{k}{x}+\frac{k}{y}}-1>-1$$ Since $\mu$ is decreasing in $C$ and $\delta=\mu+\frac{C}{k}$ is increasing in $C$, both $x$ and $y$ are non-decreasing in $C$. Thus $f^{\prime}$ is increasing in $C$, which means that $f$ is convex on $[C_{0},+\infty)$. On the other hand, if $C\leq C_{0}$, one can easily check that $T(C)=-C$. In this case $f^{\prime}=-1$, which is no larger than the sub-gradient of $f$ in the case $C>C_{0}$. Thus $f^{\prime}$ is increasing on $(0,+\infty)$, and $f$ is convex on $(0,+\infty)$. 
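The monotonicity claims above can be checked numerically by solving (11) with bisection, since $g$ from (38) is continuous and decreasing. A small numpy sketch with hypothetical sorted $\beta^{0}$, restricted to the regime $C>C_{0}$ where Lemma 7 guarantees a unique root:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 15, 5
beta0 = np.sort(rng.normal(size=n))[::-1]   # beta_[1] >= ... >= beta_[n]
C0 = k * (beta0[k - 1] - beta0[k])          # threshold C_0 from Lemma 7

def mu_of(C):
    """Root of g(mu) = sum_j min([beta_j^0 - mu]_+, C/k) - C, i.e. eq. (11)."""
    g = lambda mu: np.minimum(np.maximum(beta0 - mu, 0.0), C / k).sum() - C
    lo, hi = beta0[-1] - C / k - 1.0, beta0[0] + 1.0   # g(lo) > 0 > g(hi)
    for _ in range(200):                               # g is decreasing
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

Cs = np.linspace(C0 + 0.1, C0 + 5.0, 50)
mus = np.array([mu_of(C) for C in Cs])
deltas = mus + Cs / k

assert np.all(np.diff(mus) < 0)       # mu(C) strictly decreasing on [C_0, inf)
assert np.all(np.diff(deltas) > 0)    # delta(C) strictly increasing
# each root lies in (beta_[k]-C/k, beta_[k+1]), matching the remark of Lemma 7
assert np.all(beta0[k - 1] - Cs / k < mus) and np.all(mus < beta0[k])
```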
∎ 6.9 Proof of Theorem 5 For convenience of analysis, we consider the constrained version of the optimization problem (6), that is, $$\min\limits_{w\in W}L_{\bar{k}}=\frac{1}{m}\sum_{i=1}^{m}l(\bm{w}^{\top}\bm{x}_{i}^{+}-\frac{1}{k}\sum_{j=1}^{k}\bm{w}^{\top}\bm{x}_{[j]}^{-})$$ (41) where $W=\{\bm{w}\in\mathbb{R}^{d}\mid||\bm{w}||\leq\rho\}$ is the domain and $\rho>0$ specifies the size of the domain, playing a role similar to that of the regularization parameter $R$. First, we denote by $G$ the Lipschitz constant of the truncated quadratic loss $l(z)$ on the interval $[-2\rho,2\rho]$, and define the following two functions based on $l(z)$, i.e. $$\displaystyle h_{l}(\bm{x},\bm{w})$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}_{\bm{x}^{-}\sim P^{-}}[l(\bm{w}^{\top}\bm{x}-\bm{w}^{\top}\bm{x}^{-})],$$ (42) $$\displaystyle P_{l}(\bm{w},\tau)$$ $$\displaystyle=$$ $$\displaystyle\mathbb{P}_{\bm{x}^{+}\sim P^{+}}(h_{l}(\bm{x}^{+},\bm{w})\geq\tau)$$ (43) The lemma below relates the empirical counterpart of $P_{l}$ to the loss $L_{\bar{k}}$. Lemma 8. With probability at least $1-e^{-s}$, for any $\bm{w}\in W$, we have $$\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}(h_{l}(\bm{x}_{i}^{+},\bm{w})\geq\delta)\leq L_{\bar{k}},\\ $$ (44) where $$\displaystyle\delta=\frac{4G(\rho+1)}{\sqrt{n}}+\frac{5\rho(s+\log(m))}{3n}+2G\rho\sqrt{\frac{2(s+\log(m))}{n}}+\frac{2G\rho(k-1)}{n}.$$ Proof. 
For any $\bm{w}\in W$, we define two instance sets by splitting $\mathcal{S}_{+}$, that is, $$\displaystyle A(\bm{w})$$ $$\displaystyle=$$ $$\displaystyle\{\bm{x}_{i}^{+}\mid\bm{w}^{\top}\bm{x}_{i}^{+}>\frac{1}{k}\sum_{j=1}^{k}\bm{w}^{\top}\bm{x}_{[j]}^{-}+1\}$$ $$\displaystyle B(\bm{w})$$ $$\displaystyle=$$ $$\displaystyle\{\bm{x}_{i}^{+}\mid\bm{w}^{\top}\bm{x}_{i}^{+}\leq\frac{1}{k}\sum_{j=1}^{k}\bm{w}^{\top}\bm{x}_{[j]}^{-}+1\}$$ For $\bm{x}_{i}^{+}\in A(\bm{w})$, we define $$||P-P_{n}||_{W}=\mathop{sup}\limits_{||\bm{w}||\leq\rho}|h_{l}(\bm{x}_{i}^{+},\bm{w})-\frac{1}{n}\sum_{j=1}^{n}l(\bm{w}^{\top}\bm{x}_{i}^{+}-\bm{w}^{\top}\bm{x}_{j}^{-})|$$ Using Talagrand’s inequality, and in particular its variant (specifically, the Bousquet bound) with improved constants derived in (Bousquet, 2002), we have, with probability at least $1-e^{-s}$, $$\displaystyle||P-P_{n}||_{W}\leq\mathbb{E}||P-P_{n}||_{W}+\frac{2s\rho}{3n}+\sqrt{\frac{2s}{n}(2\mathbb{E}||P-P_{n}||_{W}+\sigma_{P}^{2}(W))}.$$ We now bound each term on the right-hand side of (6.9). First, we bound $\mathbb{E}||P-P_{n}||_{W}$ as $$\displaystyle\mathbb{E}||P-P_{n}||_{W}$$ $$\displaystyle=$$ $$\displaystyle\frac{2}{n}\mathbb{E}[\mathop{sup}\limits_{||\bm{w}||\leq\rho}\sum_{j=1}^{n}\sigma_{j}l(\bm{w}^{\top}(\bm{x}_{i}^{+}-\bm{x}_{j}^{-}))]$$ $$\displaystyle\leq$$ $$\displaystyle\frac{4G}{n}\mathbb{E}[\mathop{sup}\limits_{||\bm{w}||\leq\rho}\sum_{j=1}^{n}\sigma_{j}(\bm{w}^{\top}(\bm{x}_{i}^{+}-\bm{x}_{j}^{-}))]$$ $$\displaystyle\leq$$ $$\displaystyle\frac{4G\rho}{\sqrt{n}}$$ where the $\sigma_{j}$’s are Rademacher random variables; the first inequality uses the contraction property of Rademacher complexity, and the last follows from the Cauchy-Schwarz inequality and Jensen’s inequality. 
Next, we bound $\sigma_{P}^{2}(W)$, that is, $$\sigma_{P}^{2}(W)=\mathop{sup}\limits_{||\bm{w}||\leq\rho}h_{l}^{2}(\bm{x},\bm{w})\leq 4G^{2}\rho^{2}$$ By putting these bounds into (6.9), we have $$\displaystyle||P-P_{n}||_{W}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{4G\rho}{\sqrt{n}}+\frac{2s\rho}{3n}+\sqrt{\frac{2s}{n}(4G^{2}\rho^{2}+\frac{8G\rho}{\sqrt{n}})}$$ $$\displaystyle\leq$$ $$\displaystyle\frac{4G(\rho+1)}{\sqrt{n}}+\frac{5s\rho}{3n}+2G\rho\sqrt{\frac{2s}{n}}$$ Notice that for any $\bm{x}_{i}^{+}\in A(\bm{w})$, at most $k-1$ negative instances have a higher score than it; we thus have $$\sum_{j=1}^{n}l(\bm{w}^{\top}\bm{x}_{i}^{+}-\bm{w}^{\top}\bm{x}_{j}^{-})\leq 2G\rho(k-1)$$ Consequently, by the definition of $||P-P_{n}||_{W}$ we have, with probability $1-e^{-s}$, $$\displaystyle|h_{l}(\bm{x}_{i}^{+},\bm{w})|\leq||P-P_{n}||_{W}+\frac{1}{n}\sum_{j=1}^{n}l(\bm{w}^{\top}\bm{x}_{i}^{+}-\bm{w}^{\top}\bm{x}_{j}^{-})$$ $$\displaystyle\leq\frac{4G(\rho+1)}{\sqrt{n}}+\frac{5s\rho}{3n}+2G\rho\sqrt{\frac{2s}{n}}+2G\rho\frac{k-1}{n}$$ Taking the union bound over all $\bm{x}_{i}^{+}$’s, we thus have, with probability $1-e^{-s}$, $$\sum_{\bm{x}_{i}^{+}\in A(\bm{w})}\mathbb{I}(h_{l}(\bm{x}_{i}^{+},\bm{w})\geq\delta)=0$$ (45) where $\delta$ is given in (8). Hence, we can complete the proof via $|B(\bm{w})|\leq mL_{\bar{k}}$. ∎ Based on Lemma 8, we are now in a position to prove Theorem 5. Let $S(W,\epsilon)$ be a proper $\epsilon$-net of $W$ and $N(\rho,\epsilon)$ the corresponding covering number. 
According to a standard result, we have $$\log N(\rho,\epsilon)\leq d\log(\frac{9\rho}{\epsilon}).$$ (46) By a concentration inequality and the union bound over $\bm{w}^{\prime}\in S(W,\epsilon)$, we have, with probability at least $1-e^{-s}$, $$\displaystyle\mathop{sup}\limits_{\bm{w}^{\prime}\in S(W,\epsilon)}P_{l}(\bm{w}^{\prime},\delta)-\frac{1}{m}\sum_{i=1}^{m}\mathbb{I}(h_{l}(\bm{x}_{i}^{+},\bm{w}^{\prime})\geq\delta)\leq\sqrt{\frac{2(s+d\log(9\rho/\epsilon))}{m}}$$ Let $\bm{d}=\bm{x}^{+}-\bm{x}^{-}$ and $\epsilon=\frac{1}{2\sqrt{m}}$. For $\bm{w}^{*}\in W$, there exists $\bm{w}^{\prime}\in S(W,\epsilon)$ such that $||\bm{w}^{\prime}-\bm{w}^{*}||\leq\epsilon$, and it holds that $$\displaystyle\mathbb{I}(\bm{w}^{*\top}\bm{d}\leq 0)=\mathbb{I}(\bm{w^{\prime}}^{\top}\bm{d}\leq(\bm{w}^{\prime}-\bm{w}^{*})^{\top}\bm{d})\leq\mathbb{I}(\bm{w}^{\prime\top}\bm{d}\leq\frac{1}{\sqrt{m}})\leq 2l(\bm{w}^{\prime\top}\bm{d}).$$ where the last step is based on the fact that $l(\cdot)$ is decreasing and $l(1/\sqrt{m})\geq 1/2$ for $m\geq 12$. We thus have $h_{b}(\bm{x}^{+},\bm{w}^{*})\leq 2h_{l}(\bm{x}^{+},\bm{w}^{\prime})$ and therefore $P_{b}(\bm{w}^{*},\delta)\leq P_{l}(\bm{w}^{\prime},\delta/2)$. As a consequence, from (6.9), Lemma 8 and the fact that $$\displaystyle L_{\bar{k}}(\bm{w}^{\prime})\leq L_{\bar{k}}(\bm{w}^{*})+\frac{G\rho}{\sqrt{m}}$$ we have, with probability at least $1-2e^{-s}$, $$\displaystyle P_{b}(\bm{w}^{*},\delta)$$ $$\displaystyle\leq$$ $$\displaystyle L_{\bar{k}}(\bm{w}^{*})+\frac{G\rho}{\sqrt{m}}+\sqrt{\frac{2s+2d\log(9\rho)+d\log m}{m}},$$ where $\delta$ is as defined in (8); the conclusion follows by hiding constants.∎ 7 Conclusion In this paper, we focus on learning a binary classifier under a specified tolerance $\tau$. To this end, we have proposed a novel ranking method which directly optimizes the probability of ranking positive samples above $1-\tau$ percent of the negative samples. 
The ranking optimization is then efficiently solved using projected gradient method with the proposed linear time projection. Moreover, an out-of-bootstrap thresholding is applied to transform the learned ranking model into a classifier with a low false-positive rate. We demonstrate the superiority of our method using both theoretical analysis and extensive experiments on several benchmark datasets. Acknowledgments This research was mainly done when the first author was an intern at iDST of Alibaba. This work is supported by the National Natural Science Foundation of China (NSFC) (61702186, 61672236, 61602176, 61672231), the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Information (U1609220), the Key Program of Shanghai Science and Technology Commission (15JC1401700) and the Joint Research Grant Proposal for Overseas Chinese Scholars (61628203). References Ben-David et al. (2003) Shai Ben-David, Nadav Eiron, and Philip M Long. On the difficulty of approximately maximizing agreements. Journal of Computer and System Sciences, 66(3):496–514, 2003. Ben-Hur et al. (2001) Asa Ben-Hur, David Horn, Hava T Siegelmann, and Vladimir Vapnik. Support vector clustering. JMLR, 2(Dec):125–137, 2001. Bousquet (2002) Olivier Bousquet. A bennett concentration inequality and its application to suprema of empirical processes. Comptes Rendus Mathematique, 334(6):495–500, 2002. Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004. Collobert et al. (2006) Ronan Collobert, Fabian Sinz, Jason Weston, and Léon Bottou. Trading convexity for scalability. In ICML, pages 201–208, 2006. Davenport et al. (2006) Mark A Davenport, Richard G Baraniuk, and Clayton D Scott. Controlling false alarms with support vector machines. In ICASSP, volume 5, pages V–V, 2006. Davenport et al. (2010) Mark A Davenport, Richard G Baraniuk, and Clayton D Scott. 
Tuning support vector machines for minimax and neyman-pearson classification. TPAMI, 32(10):1888–1898, 2010. Drucker et al. (1999) Harris Drucker, Donghui Wu, and Vladimir N Vapnik. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048–1054, 1999. Gasso et al. (2011) Gilles Gasso, Aristidis Pappaioannou, Marina Spivak, and Léon Bottou. Batch and online learning algorithms for nonconvex neyman-pearson classification. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):28, 2011. Huang et al. (2010) Haiyan Huang, Chun-Chi Liu, and Xianghong Jasmine Zhou. Bayesian approach to transforming public gene expression repositories into disease diagnosis databases. PNAS, 107(15):6823–6828, 2010. Joachims (1996) Thorsten Joachims. A probabilistic analysis of the rocchio algorithm with tfidf for text categorization. Technical report, DTIC Document, 1996. Kiwiel (2005) Krzysztof C Kiwiel. On floyd and rivest’s select algorithm. Theoretical Computer Science, 347(1-2):214–238, 2005. Kiwiel (2007) Krzysztof C Kiwiel. On linear-time algorithms for the continuous quadratic knapsack problem. Journal of Optimization Theory and Applications, 134(3):549–554, 2007. Lapin et al. (2015) Maksim Lapin, Matthias Hein, and Bernt Schiele. Top-k multiclass SVM. In NIPS, pages 325–333, 2015. Lehmann and Romano (2006) Erich L Lehmann and Joseph P Romano. Testing statistical hypotheses. Springer Science & Business Media, 2006. Li et al. (2014) Nan Li, Rong Jin, and Zhi-Hua Zhou. Top rank optimization in linear time. In NIPS, pages 1502–1510, 2014. Liu and Ye (2009) Jun Liu and Jieping Ye. Efficient euclidean projections in linear time. In ICML, pages 657–664, 2009. Liu and Zhou (2010) Xu-Ying Liu and Zhi-Hua Zhou. Learning with cost intervals. In KDD, pages 403–412, 2010. Mahdavi et al. (2013) Mehrdad Mahdavi, Tianbao Yang, and Rong Jin. Stochastic convex optimization with multiple objectives. In NIPS, pages 1115–1123. 2013. 
Masnadi-Shirazi and Vasconcelos (2007) Hamed Masnadi-Shirazi and Nuno Vasconcelos. Asymmetric boosting. In ICML, pages 609–619, 2007. Masnadi-Shirazi and Vasconcelos (2011) Hamed Masnadi-Shirazi and Nuno Vasconcelos. Cost-sensitive boosting. TPAMI, 33(2):294–309, 2011. Mozer et al. (2002) Michael C Mozer, Robert Dodier, Michael D Colagrosso, César Guerra-Salcedo, and Richard Wolniewicz. Prodding the roc curve: Constrained optimization of classifier performance. In NIPS, pages 1409–1415, 2002. Narasimhan and Agarwal (2013a) Harikrishna Narasimhan and Shivani Agarwal. A structural svm based approach for optimizing partial auc. In ICML, 2013a. Narasimhan and Agarwal (2013b) Harikrishna Narasimhan and Shivani Agarwal. On the relationship between binary classification, bipartite ranking, and binary class probability estimation. In NIPS, pages 2913–2921, 2013b. Nesterov (2003) Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2003. Osuna et al. (1997) Edgar Osuna, Robert Freund, and Federico Girosi. Support vector machines: Training and applications. Technical Report AIM-1602, 1997. Patriksson (2008) Michael Patriksson. A survey on the continuous nonlinear resource allocation problem. European Journal of Operational Research, 185(1):1–46, 2008. Rigollet and Tong (2011) Philippe Rigollet and Xin Tong. Neyman-pearson classification, convexity and stochastic constraints. JMLR, 12(Oct):2831–2855, 2011. Scheirer et al. (2013) Walter J Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E Boult. Toward open set recognition. TPAMI, 35(7):1757–1772, 2013. Scott (2007) Clayton Scott. Performance measures for neyman–pearson classification. IEEE Transactions on Information Theory (TIT), 53(8):2852–2863, 2007. Scott and Nowak (2005) Clayton Scott and Robert Nowak. A neyman-pearson approach to statistical learning. IEEE Transactions on Information Theory (TIT), 51(11):3806–3819, 2005. Tong (2013) Xin Tong. 
A plug-in approach to neyman-pearson classification. JMLR, 14(Oct):3011–3040, 2013. Tsochantaridis et al. (2005) Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. JMLR, 6(Sep):1453–1484, 2005. Wu et al. (2008) Shan-Hung Wu, Keng-Pei Lin, Chung-Min Chen, and Ming-Syan Chen. Asymmetric support vector machines: low false-positive learning under the user tolerance. In KDD, pages 749–757, 2008. Yu et al. (2012) Yao-liang Yu, Özlem Aslan, and Dale Schuurmans. A polynomial-time form of robust regression. In NIPS, pages 2483–2491, 2012. Zhou and Zhou (2016) Yu-Hang Zhou and Zhi-Hua Zhou. Large margin distribution learning with cost interval and unlabeled data. TKDE, 28(7):1749–1763, 2016.
A characterization of the Khavinson-Shapiro conjecture via Fischer operators Hermann Render School of Mathematics and Statistics, University College Dublin, Belfield, Dublin 4, Ireland. Email: hermann.render@ucd.ie Abstract The Khavinson-Shapiro conjecture states that ellipsoids are the only bounded domains in euclidean space satisfying the following property (KS): the solution of the Dirichlet problem for polynomial data is polynomial. In this paper we show that property (KS) for a domain $\Omega$ is equivalent to the surjectivity of a Fischer operator associated to the domain $\Omega.$ 1 Introduction Footnote 1: 2010 Mathematics Subject Classification 31B05; 35J05. Keywords and phrases: Dirichlet problem, harmonic extension, Khavinson-Shapiro conjecture. In the 19th century ellipsoidal harmonics were used to prove that for any polynomial $p$ of degree $\leq m$ there exists a harmonic polynomial $h$ of degree $\leq m$ such that $h\left(\xi\right)=p\left(\xi\right)$ for all $\xi\in\partial E,$ where $\partial E$ is the boundary of an ellipsoid $E$ in the euclidean space $\mathbb{R}^{d}.$ It follows that an ellipsoid satisfies the following property, defined for an arbitrary open subset $\Omega$ of $\mathbb{R}^{d}$: (KS) For any polynomial $p$ with real coefficients there exists a harmonic polynomial $h$ with real coefficients such that $h\left(\xi\right)=p\left(\xi\right)$ for all $\xi\in\partial\Omega.$ The Khavinson-Shapiro conjecture [12] states that ellipsoids are the only bounded domains $\Omega$ in $\mathbb{R}^{d}$ with property (KS). Obviously a domain $\Omega$ has property (KS) if and only if the Dirichlet problem for polynomial data (restricted to the boundary) has polynomial solutions; for the Dirichlet problem we refer to [2] and [7]. 
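As a simple illustration of property (KS) (our own example, not taken from the text): for the unit disk $\Omega=\{(x,y)\in\mathbb{R}^{2}:x^{2}+y^{2}<1\}$ and the data $p(x,y)=x^{2}$, the polynomial $$h(x,y)=x^{2}-\tfrac{1}{2}\left(x^{2}+y^{2}-1\right)=\tfrac{1}{2}\left(x^{2}-y^{2}+1\right)$$ is harmonic, since $\Delta h=1-1=0$, and it agrees with $p$ on the boundary circle because $x^{2}+y^{2}-1$ vanishes there. 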
The Khavinson-Shapiro conjecture has been confirmed for large classes of domains, but it is still unproven in its full generality; we refer the interested reader to the expositions [8], [17], [14], [10], [15], [11], and for further ramifications to [13], originating from the work [16]. In this paper we want to characterize property (KS) by using Fischer operators. In our context we shall mean by a Fischer operator (Footnote 2: More generally one can define a Fischer operator by $q\longmapsto P\left(D\right)\left(\psi q\right)$, where $P\left(D\right)$ is a linear partial differential operator with constant real coefficients. Fischer’s Theorem in [6] states that the Fischer operator is a bijection whenever $\psi\left(x\right)$ is a homogeneous polynomial equal to the polynomial $P\left(x\right).$) an operator of the form $$F_{\psi}\left(q\right):=\Delta\left(\psi q\right)\hbox{ for }q\in\mathbb{R}\left[x\right]$$ where $\Delta$ is the Laplace operator $\frac{\partial^{2}}{\partial x_{1}^{2}}+\cdots+\frac{\partial^{2}}{\partial x_{d}^{2}}$ and $\psi$ is a fixed element of $\mathbb{R}\left[x\right]$, the set of all polynomials in $d$ variables with real coefficients. Fischer operators often allow elementary and short proofs of mathematical statements which usually require hard and deep analysis, see [20], [11]. For example, the statement that ellipsoids have property (KS) can be proven in a few lines using Fischer operators and elementary results from linear algebra, see [4], [5], [3]; for further generalizations see [1], [12]. In order to formulate our main result we need some technical definitions. 
The zero-set of a polynomial $f\in\mathbb{R}\left[x\right]$ is denoted by $Z\left(f\right)=\left\{x\in\mathbb{R}^{d}:f\left(x\right)=0\right\}.$ We say that a subset $Z$ of $\mathbb{R}^{d}$ is an admissible common zero set if there exist non-constant irreducible polynomials $f,g\in\mathbb{R}\left[x\right]$ such that (i) $f\neq\lambda g$ for all $\lambda\in\mathbb{R}$ and (ii) $Z=Z\left(f\right)\cap Z\left(g\right).$ For dimension $d=2$ it is well known that an admissible common zero set is finite, see [19, p. 2]. For arbitrary dimension $d$ it is intuitively clear that an admissible common zero set has “dimension” $\leq d-2$ at each point. We say that an open set $\Omega$ in $\mathbb{R}^{d}$ is admissible if for any $x\in\partial\Omega$, any open neighborhood $V$ of $x$, and any finite family of admissible common zero sets $Z_{1},...,Z_{r}$ the set $$\left[\partial\Omega\cap V\right]\diagdown\bigcup_{j=1}^{r}Z_{j}$$ is non-empty. For dimension $d=2$ it is easy to see that an open set $\Omega$ is admissible if each point $x\in\partial\Omega$ is not isolated in $\partial\Omega$. For arbitrary dimension it seems to be difficult to formulate a precise topological condition, but it is intuitively clear that a domain $\Omega$ is admissible if each point in the boundary $\partial\Omega$ has a neighborhood of dimension $d-1.$ The following is now the main result of this paper: Theorem 1 Let $\Omega$ be an open admissible subset of $\mathbb{R}^{d}$. Then property (KS) holds for $\Omega$ if and only if there exists a non-constant polynomial $\psi\in\mathbb{R}\left[x\right]$ such that (i) $\partial\Omega\subset\psi^{-1}\left\{0\right\}$ and (ii) the Fischer operator $F_{\psi}:$ $\mathbb{R}\left[x\right]\rightarrow\mathbb{R}\left[x\right]$ defined by $F_{\psi}\left(q\right):=\Delta\left(\psi q\right)$ for $q\in\mathbb{R}\left[x\right]$ is surjective.
An immediate consequence of Theorem 1 is that the Khavinson-Shapiro conjecture is true for all admissible bounded domains if the following purely algebraic conjecture of M. Chamberland and D. Siegel, formulated in [5], is true: (CS) The surjectivity of the Fischer operator $F_{\psi}:$ $\mathbb{R}\left[x\right]\rightarrow\mathbb{R}\left[x\right]$ implies that the degree of $\psi$ is $\leq 2.$ We refer to [15] and [18] for more details on conjecture (CS) and related results. It should be emphasized that the polynomial $\psi$ in conjecture (CS) has real coefficients. In [9] it is shown for dimension $d=3$ that for any non-constant polynomial $\varphi\left(z\right)=a_{0}+a_{1}z+\cdots+a_{n}z^{n}$ in the complex variable $z$ the operator $F_{\psi}:\mathbb{C}\left[x\right]\rightarrow\mathbb{C}\left[x\right]$ defined by $$F_{\psi}\left(q\right)=\Delta\left[\left(x_{3}-\varphi\left(x_{1}+ix_{2}\right)\right)^{2}q\left(x_{1},x_{2},x_{3}\right)\right]$$ for $q\in\mathbb{C}\left[x\right]$ is surjective, where $\mathbb{C}\left[x\right]$ is the set of all polynomials with complex coefficients. 2 Proof of Theorem 1 From [15] we cite the following result, which is known as the Fischer decomposition of a polynomial. Proposition 2 Suppose $\psi$ is a polynomial. Then the operator $F_{\psi}\left(q\right):=\Delta\left(\psi q\right)$ is surjective if and only if every polynomial $f$ can be decomposed as $f=\psi q_{f}+h_{f}$, where $q_{f}$ is a polynomial and $h_{f}$ is a harmonic polynomial. Proof of Theorem 1: The sufficiency is easy: by assumption, there exists $\psi\in\mathbb{R}\left[x\right]$ with $\partial\Omega\subset\psi^{-1}\left\{0\right\}$ such that $F_{\psi}$ is surjective. According to Proposition 2 there exist for each polynomial $f$ a polynomial $q$ and a harmonic polynomial $u$ such that $f=\psi q+u$. For $\xi\in\partial\Omega\subset\psi^{-1}\left\{0\right\}$ it follows that $f\left(\xi\right)=u\left(\xi\right)$.
Thus $u$ is a polynomial solution to the Dirichlet problem of the domain $\Omega$ and property (KS) is satisfied. Now assume that (KS) holds. Then there exists a harmonic polynomial $u\in\mathbb{R}\left[x\right]$ such that $\left|\xi\right|^{2}=u\left(\xi\right)$ for all $\xi\in\partial\Omega.$ Define the polynomial $Q\left(x\right)=\left|x\right|^{2}-u\left(x\right).$ Then $$\partial\Omega\subset Q^{-1}\left(0\right)$$ (1) and $Q$ is a non-constant polynomial of degree $\geq 2$ since $\Delta\left(Q\right)\neq 0.$ We factorize $Q\left(x\right)$ into irreducible factors, so $$Q\left(x\right)=\left|x\right|^{2}-u\left(x\right)=f_{1}^{m_{1}}\cdots f_{r}^{m_{r}},$$ where $f_{k}$ is not a scalar multiple of $f_{l}$ for $k\neq l,$ and $m_{k}\geq 1$ is the multiplicity of $f_{k}.$ It follows that $$\partial\Omega\subset\bigcup_{k=1}^{r}f_{k}^{-1}\left(0\right).$$ (2) Then $Z\left(f_{k}\right)\cap Z\left(f_{l}\right)$ is an admissible common zero set for $k\neq l.$ Let $Z$ be the (finite) union of the sets $Z\left(f_{k}\right)\cap Z\left(f_{l}\right)$ with $k\neq l.$ As $\Omega$ is admissible there exists $x\in\partial\Omega\setminus Z,$ and now (2) implies that $$I:=\left\{k\in\left\{1,\ldots,r\right\}:\exists x\in\partial\Omega\setminus Z\hbox{ with }f_{k}\left(x\right)=0\right\}$$ is non-empty. Let us define $$\psi:=\prod_{k\in I}f_{k}.$$ We want to show that $F_{\psi}$ is surjective.
By Proposition 2 it suffices to show that for any polynomial $f$ there is a harmonic polynomial $u$ and a polynomial $q$ such that $f=\psi q+u.$ By property (KS) there exists a harmonic polynomial $u$ such that $f\left(\xi\right)=u\left(\xi\right)$ for all $\xi\in\partial\Omega.$ Define $g\left(x\right)=f\left(x\right)-u\left(x\right),$ so $g$ vanishes on $\partial\Omega.$ If we can show that each $f_{k}$ with $k\in I$ divides $g$ then, by the irreducibility of $f_{k}$ and the condition that $f_{k}\neq\lambda f_{j}$ for $k\neq j,$ we infer that $\psi=\prod\limits_{k\in I}f_{k}$ divides $g,$ say $g=\psi q$ for some polynomial $q.$ Then $f-u=g=q\psi$ and we are done. Let $k\in I$ be fixed and $g$ as above. Then there exists $x\in\partial\Omega\setminus Z$ with $f_{k}\left(x\right)=0.$ Then $f_{l}\left(x\right)\neq 0$ for all $l\neq k$ since otherwise $x$ would be in $Z.$ By continuity there is an open neighborhood $V$ of $x$ such that $f_{l}\left(y\right)\neq 0$ for all $y\in V$ and $l\neq k.$ Then we conclude from (2) that $$\partial\Omega\cap V\subset f_{k}^{-1}\left(0\right).$$ (3) Let us write $g=g_{1}^{n_{1}}\cdots g_{s}^{n_{s}},$ where $g_{1},\ldots,g_{s}$ are irreducible polynomials such that $g_{j}\neq\lambda g_{l}$ for $j\neq l.$ If $f_{k}=\lambda g_{j}$ for some $j\in\left\{1,\ldots,s\right\}$ we see that $f_{k}$ divides $g.$ Assume that this is not the case and define $Z_{k}$ as the (finite) union of the admissible sets $Z\left(g_{j}\right)\cap Z\left(f_{k}\right)$ for $j=1,\ldots,s.$ Since $\Omega$ is admissible there exists $y\in\partial\Omega\cap V\diagdown\left(Z\cup Z_{k}\right)$.
The inclusion (3) shows that $f_{k}\left(y\right)=0,$ and since $g$ vanishes on $\partial\Omega$ there exists $j\in\left\{1,\ldots,s\right\}$ such that $g_{j}\left(y\right)=0.$ Hence $y\in Z\left(g_{j}\right)\cap Z\left(f_{k}\right)\subset Z_{k}.$ Now we obtain a contradiction since $y\in\partial\Omega\cap V\diagdown\left(Z\cup Z_{k}\right).$ It remains to prove that $\partial\Omega$ is contained in $\psi^{-1}\left(0\right).$ If $j\in\left\{1,\ldots,r\right\}\setminus I$ then it follows from the definition of $I$ that $f_{j}\left(x\right)\neq 0$ for all $x\in\partial\Omega\setminus Z.$ This fact and (2) imply that $$\partial\Omega\setminus Z\subset\bigcup_{k\in I}f_{k}^{-1}\left(0\right)=:F.$$ (4) Let $x\in\partial\Omega.$ Since $\Omega$ is admissible there exists for any ball $V$ with center $x$ and radius $1/m$ an element $x_{m}\in(\partial\Omega\cap V)\diagdown Z.$ Then (4) shows that $x_{m}\in F.$ Since $x_{m}$ converges to $x$ and $F$ is closed we infer that $x\in F.$ Thus $\partial\Omega\subset F.$ References [1] D. H. Armitage, The Dirichlet problem when the boundary function is entire, J. Math. Anal. Appl. 291 (2004), 565–577. [2] D. H. Armitage, S. J. Gardiner, Classical Potential Theory, Springer, London 2001. [3] S. Axler, P. Gorkin, K. Voss, The Dirichlet problem on quadratic surfaces, Math. Comp. 73 (2003), 637–651. [4] J. A. Baker, The Dirichlet problem for ellipsoids, Amer. Math. Monthly, 106 (1999), 829–834. [5] M. Chamberland, D. Siegel, Polynomial solutions to Dirichlet problems, Proc. Amer. Math. Soc. 129 (2001), 211–217. [6] E. Fischer, Über die Differentiationsprozesse der Algebra, J. für Math. (Crelle Journal) 148 (1917), 1–78. [7] S.J. Gardiner, The Dirichlet problem with non-compact boundary, Math. Z. 213 (1993), 163–170. [8] L. Hansen, H.S. Shapiro, Functional Equations and Harmonic Extensions, Complex Var. 24 (1994), 121–129. [9] D. Khavinson, Singularities of harmonic functions in $\mathbb{C}^{n}$, Proc. Symp. Pure Applied Math., Amer. Math.
Soc., Providence, RI, 52 (1991), Part 3, 207–217. [10] D. Khavinson, E. Lundberg, The search for singularities of solutions to the Dirichlet problem: recent developments, Hilbert spaces of analytic functions, 121–132, CRM Proc. Lecture Notes, 51, Amer. Math. Soc., Providence, RI, 2010. [11] D. Khavinson, E. Lundberg, A tale of ellipsoids in Potential Theory, Notices Amer. Math. Soc. 61 (2014), 148–156. [12] D. Khavinson, H. S. Shapiro, Dirichlet’s Problem when the data is an entire function, Bull. London Math. Soc. 24 (1992), 456–468. [13] D. Khavinson, N. Stylianopoulos, Recurrence relations for orthogonal polynomials and algebraicity of solutions of the Dirichlet problem, “Around the Research of Vladimir Maz’ya II, Partial Differential Equations”, 219–228, Springer, 2010. [14] E. Lundberg, Dirichlet’s problem and complex lightning bolts, Comp. Meth. Funct. Theory, 9 (2009), 111–125. [15] E. Lundberg, H. Render, The Khavinson-Shapiro conjecture and polynomial decompositions, J. Math. Anal. Appl. (2011), 506–513. [16] M. Putinar, N. Stylianopoulos, Finite-term relations for planar orthogonal polynomials, Complex Anal. Oper. Theory 1 (2007), 447–456. [17] H. Render, Real Bargmann spaces, Fischer decompositions and sets of uniqueness for polyharmonic functions, Duke Math. J. 142 (2008), 313–352. [18] H. Render, The Khavinson-Shapiro conjecture for domains with a boundary consisting of algebraic hypersurfaces, submitted. [19] I.R. Shafarevich, Basic Algebraic Geometry, Springer 1994, 2nd edition. [20] H.S. Shapiro, An algebraic theorem of E. Fischer and the Holomorphic Goursat Problem, Bull. London Math. Soc. 21 (1989), 513–537.
Universally Optimal Noisy Quantum Walks on Complex Networks Filippo Caruso LENS and Dipartimento di Fisica e Astronomia, Università di Firenze, Via Nello Carrara 1, I-50019 Sesto Fiorentino, Italy QSTAR, Largo Enrico Fermi 2, I-50125 Firenze, Italy filippo.caruso@lens.unifi.it Abstract Transport properties play a crucial role in several fields of science, such as biology, chemistry, sociology, information science, and physics. The behavior of many dynamical processes running over complex networks is known to be closely related to the geometry of the underlying topology, but this connection becomes even harder to understand when quantum effects come into play. Here, we exploit the formalism of quantum stochastic walks to investigate the capability to quickly and robustly transmit energy (or information) between two distant points in very large complex structures, remarkably assisted by external noise and by quantum features such as coherence. An optimal mixing of classical and quantum transport is, very surprisingly, universal for a large class of complex networks. This universal optimality also turns out to be extremely robust with respect to geometry changes. These results might pave the way for designing optimal bio-inspired geometries of efficient transport nanostructures that can be used for solar energy and also for quantum information and communication technologies. PACS: 03.65.Yz, 05.60.Gg, 03.67.-a 1 Introduction The static properties and dynamical performances of complex network structures have been extensively studied in classical statistical physics [1]. In recent years there has been growing interest in fully understanding how these studies can be generalized when dealing with quantum mechanical systems, e.g. with microscopic and fragile quantum coherence affecting macroscopic and robust transport behavior.
Very recently, it has been found that quantum effects play a crucial role in the remarkably efficient energy transport phenomena in light-harvesting complexes [2, 3, 4, 5, 6, 7, 8], with a fundamental contribution also from the external noisy environment [9, 10, 11, 12, 13]. Indeed, an electronic exciton seems to behave as a quantum walker over these protein structures, exploiting quantum superposition and interference to enhance its capability to travel from an antenna complex, where light is absorbed, to the reaction center, where this energy is converted into a more available chemical form. More generally, quantum walks [14] have recently become more and more popular, since they have potential applications not only in energy transport [15] but also, for instance, in quantum information theory [16, 17, 18, 19, 20, 21, 22], where they may lead to quantum algorithms with polynomial as well as exponential speedups [23], e.g. the Grover search algorithm [24], universal models for quantum computation [25], state transfer in spin and harmonic networks [26, 27, 28, 29], noise-assisted quantum communication [29], and also very recent proposals for Google page ranking [30, 31]. The generalization of classical random walks to the quantum domain has led to different variants: mainly, discrete-time quantum walks [14], based on the additional action of a quantum ‘coin’, and continuous-time quantum walks, obtained, basically, by mapping the classical transfer matrix into a system Hamiltonian [17]; notice, however, that these two types of walks can be connected by a precise limiting procedure [32]. On top of that, this growing interest has been especially stimulated by the increasing number of fascinating and challenging experiments demonstrating the basic features of quantum walks.
They have been implemented, for instance, with NMR [33, 34], trapped ions [35, 36], neutral atoms [37], and several photonic schemes such as waveguide structures [38], bulk optics [39, 40], fiber loop configurations [41, 42, 43], and miniaturized integrated waveguide circuits [44, 45, 46]. Most of these experiments involved a single walker moving on a line and, only very recently, optical quantum walks on a square lattice have been implemented with laser pulses [42] and single photons [43]. Two-walker experiments on a line have been reported in Refs. [44, 45, 46], motivated also by the fact that the introduction of multiple walkers allows one to map a quantum walk on a line into higher-dimensional lattices [47]. Therefore, given these first experimental attempts at higher-dimensional quantum walks, it is timely and interesting to investigate in more detail the transport features of quantum walkers over large complex networks. Furthermore, motivated by our previous results on noise-enhanced transfer of energy and information over light-harvesting complexes and communication networks [12, 29], the mixing of quantum features with classical transport behaviour is worth understanding more deeply when varying the underlying (high-dimensional) network topology. A recent study comparing quantum and classical energy transfer, based on noisy cellular automata, can be found in Ref. [48]; for a review on continuous-time quantum walks on complex networks see also Ref. [15]. Concerning the role of decoherence in the mixing time of both discrete- and continuous-time quantum walks, especially for chains, cycles and hypercubes, see the review in Ref. [21]. Moreover, in the context of light-harvesting phenomena, the role of geometry has also been investigated in terms of structure optimization [49], in the presence of disorder [50], and to propose design principles for biomimetic structures [51]. The outline of the paper is the following.
In Sec. 2.1 we recall the standard formalism of complex networks, which are completely defined by their so-called adjacency matrices and the associated spectral properties [52]. In particular, one can define statistical measures of distances between nodes in terms of shortest path lengths and their maximum (the graph diameter), and local properties such as the clustering coefficient, which measures the connectivity of each vertex. In Sec. 2.2, we describe the formalism of quantum stochastic walks, in terms of well-defined master equations, generalizing classical (random) and quantum walks by including all possible transition elements from a vertex as given by the connectivity of the graph (adjacency matrix) [53]. This formalism also includes, as a special case, a model showing classical and quantum walk behaviours as extreme cases and the classical-quantum transition in the intermediate regimes. Our figure of merit for the transport efficiency of each graph is also defined. After these introductory sections, we investigate this model for a family of large complex networks, comparing the corresponding transport performances – see Sec. 3. In particular, we find that the same mixing of classical and quantum behaviours leads to the optimal transfer efficiency for all studied graphs. This universal optimality is shown to be very robust when one modifies or deletes some links of the structure. Finally, some conclusions and outlook are discussed in Sec. 4. 2 The Model Here we briefly introduce the main notions of complex networks, together with some well-defined geometric measures, and then the formalism of quantum stochastic walks that has been considered in this paper in order to implement the transport dynamics over these graphs.
2.1 Network topology: definitions and measures The structure of a complex network or graph $G$ can be described by a pair $G=(V,E)$, with $V(G)$ being a non-empty and finite set whose elements are called vertices (or nodes) and $E(G)$ being a non-empty set of unordered pairs of vertices, called edges (or links). Let us denote with $N$ the number of nodes of the graph $G$, i.e. the number of elements in $V(G)$, and with $L$ the number of links or elements of $E(G)$. A graph is completely defined by its adjacency matrix $A$ as follows: $$[A]_{i,j}:=\left\{\begin{array}{ll}1&\mbox{if }\{V_{i},V_{j}\}\in E(G)\\ 0&\mbox{if }\{V_{i},V_{j}\}\notin E(G).\end{array}\right.$$ (1) Moreover, if $\{V_{i},V_{j}\}\in E(G)$, the vertices $V_{i}$ and $V_{j}$ are said to be adjacent or neighbors. The number of vertices adjacent to the vertex $V_{i}$ is denoted by $d_{i}$ and is called the degree (or connectivity) of $V_{i}$, i.e. $d_{i}=\sum_{j=1}^{N}A_{ij}$. Notice that a graph is said to be regular if each of its nodes has the same degree, i.e. $d_{i}=d$ for any $i$. In the following, we will consider only connected graphs, where there always exists a path connecting any two nodes, since unconnected subgraphs or isolated vertices do not play any role in the transport dynamics occurring over the rest of the network. Moreover, we restrict to graphs without loops, i.e. without edges of the form $\{V_{i},V_{i}\}$, so that $A_{ii}=0$ for any $i$. Although there are several measures of network topology in the literature [1], here we focus on the ones that are most relevant to our study. First of all, the number of walks of length $k$ from node $i$ to node $j$ equals $(A^{k})_{ij}$. Then, the shortest path length ${\cal L}_{ij}$ between $i$ and $j$ is the minimum number of steps (geodesic length) to go from node $i$ to node $j$, i.e.
$${\cal L}_{ij}=\min\left\{k:\ A^{k}_{ij}>0\ \text{and}\ A^{m}_{ij}=0\ \text{for}\ m<k\right\}\;.$$ (2) The so-called characteristic path length $\cal{L}$ is hence defined as the average of the shortest paths between all possible pairs of nodes, i.e. ${\cal L}=1/\bar{L}\sum_{ij}{\cal L}_{ij}$ with $\bar{L}=N(N-1)/2$ being the total number of node pairs. Furthermore, the largest ${\cal L}_{ij}$ is called the diameter $D$ of the graph, that is, the largest distance (or longest shortest path) between any two vertices of the graph. It corresponds to the lowest integer $k$ such that, for every pair of nodes $i$ and $j$, one has $(A^{m})_{ij}\neq 0$ for some $m\leq k$: $$D=\max_{i,j}{\cal L}_{ij}\;.$$ (3) Note that for any graph $1\leq D\leq N-1$. Furthermore, this quantity is closely related to the spectral properties of $A$. A well-known theorem in the spectral theory of complex networks states, indeed, that the number of distinct eigenvalues $\pi$ of the adjacency matrix $A$ is at least equal to $D+1$, i.e. $\pi\geq D+1$ [52]. On the one hand, the network with the smallest diameter is the complete or fully connected (FC) graph, where one has $D=1$, since all pairs of nodes are connected by a link. Indeed, the latter is the only graph whose adjacency matrix $A$ has only two distinct eigenvalues ($N-1$ with degeneracy $1$ and $-1$ with degeneracy $N-1$; note that the trace of $A$, and hence the sum of its eigenvalues, is always $0$), and the bound above between $D$ and $\pi$ is tight. On the other hand, for a given number of vertices $N$, the linear chain is the topology with the largest diameter ($D=N-1$). Finally, another measure of the graph connectivity, known as the clustering coefficient $\cal C$, quantifies how well the neighborhood of a node is connected [54]. Given a node $i$, let us define $G_{i}$ as its neighborhood, i.e. the graph represented by the set of neighbors of the vertex $i$ and the relative interconnecting links.
Then ${\cal C}_{i}$, the local clustering coefficient of the node $i$, is defined as ${\cal C}_{i}=2e_{i}/(d_{i}(d_{i}-1))$, with $e_{i}$ being the number of links in $G_{i}$ and $d_{i}$ the degree of the node $i$, i.e. it is the ratio of the number of links in $G_{i}$ ($e_{i}$) to its maximum possible number ($d_{i}(d_{i}-1)/2$). The (global) clustering coefficient $\cal C$ of the graph $G$ is then given by the average of ${\cal C}_{i}$ over all sites $i$, i.e. $${\cal C}=\frac{1}{N}\sum_{i=1}^{N}\frac{2e_{i}}{d_{i}(d_{i}-1)}\;,$$ (4) and it always lies in the range $[0,1]$. 2.2 Transport dynamics: quantum stochastic walks Once we have introduced the topological structure of our model, we need to specify the corresponding dynamics. To start with, let us recall that a random walk is usually defined as a time-discrete process where at each step the walker jumps between two connected nodes of the graph $G$ with some probability described by the transition matrix $T=\{T_{ij}\}$ [55]. Usually, one has $T_{ij}=A_{ij}/{d_{i}}$ for classical random walks; this matrix is closely related to the random-walk normalized Laplacian. More specifically, the Laplacian matrix $L$ is $L=A-D$, where $D$ is the diagonal matrix of the vertex degrees, i.e. $D=\{d_{i}\}$, while $T=D^{-1}A$. Given the occupation probability distribution $\vec{q}_{t}\equiv\{q_{i}^{t}\}$ of the walker over the nodes $V_{i}$ at a time $t$, the distribution at time $t+1$ is simply given by $\vec{q}_{t+1}=T\vec{q}_{t}$. The time-continuum version of such classical random walk (CRW) dynamics is then $$\frac{d}{dt}{\vec{q}}=(T-\mathbb{1})\vec{q}\;.$$ (5) In both cases, the system ends up in a (unique) stationary distribution $\bar{q}_{i}=d_{i}/\sum_{j}d_{j}$, i.e. the degree divided by twice the number of links, which is the left eigenvector of $T$ associated to the eigenvalue $1$.
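A small numerical check of the stationary distribution just stated (our sketch, not from the paper): power-iterating the left action of $T=D^{-1}A$ on a small non-bipartite graph converges to weights proportional to the degrees.

```python
# Sketch (ours): stationary distribution of the classical random walk
# T = D^{-1} A is q_i = d_i / sum_j d_j, the degree over twice the link count.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)   # 4 nodes, 4 links, contains a triangle
d = A.sum(axis=1)                      # degrees: [2, 2, 3, 1]
T = A / d[:, None]                     # row-stochastic transition matrix

q = np.full(4, 0.25)                   # start from the uniform distribution
for _ in range(500):
    q = q @ T                          # left action: q_{t+1} = q_t T

assert np.allclose(q, d / d.sum())     # converges to [0.25, 0.25, 0.375, 0.125]
```

The triangle makes the graph non-bipartite, so the power iteration converges; on a bipartite graph the discrete-time walk would oscillate instead.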
The convergence rate towards the steady state $\bar{q}$, also known as the mixing rate $\tau_{mix}$, is proportional to the so-called spectral gap of the graph $G$, that is, the difference between the absolute values of the two largest eigenvalues of $A$ ($\lambda_{1}$, $\lambda_{2}$), i.e. $\tau_{mix}\propto(\lambda_{1}-\lambda_{2})$ [52]. In other words, the larger the spectral gap is, the faster the walker converges to $\bar{q}$. Note that one always has $\lambda_{1}-\lambda_{2}\leq N$ and the bound is approached for the FC graph. It is well known that well-connected graphs have small diameters and large spectral gaps, implying that the mixing rate $\tau_{mix}$ is larger (fast classical random walks) for graphs with smaller $D$. There exist several bounds showing that the mixing rate $\tau_{mix}$ monotonically decreases with $D$, apart from some constants. In order to study the transport properties of such networks including also quantum coherence effects, we use the general framework of continuous-time quantum stochastic walks (QSW) [53], based on the Kossakowski-Lindblad master equation, which allows us to interpolate between classical random walks (CRW) and quantum walks (QW) [20]. More specifically, the evolution of our initial state, represented by the density operator $\rho$, follows: $$\frac{d\rho}{dt}=-(1-p)i[H,\rho]+p\sum_{ij}\left(L_{ij}\rho L^{\dagger}_{ij}-\frac{1}{2}\{L^{\dagger}_{ij}L_{ij},\rho\}\right)\;,$$ (6) where $H$ is the Hamiltonian describing the quantum coherent dynamics, while the operators $L_{ij}$ are responsible for the irreversibility. Note the parameter $p\in[0,1]$, which quantifies the interplay between the coherent (unitary) dynamics and the incoherent (irreversible) one. In order to recover the usual CRW limit for $p=1$, we assume $L_{ij}=T_{ij}|i\rangle\langle j|$ with $|i\rangle$ being the site basis, recovering Eq. (5) for the diagonal elements of $\rho$, i.e. $\vec{q}_{t}\equiv\{\rho_{ii}(t)\}$.
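A minimal numerical sketch of the master equation (6) on a 4-site chain, with an irreversible trapping term on the last site of the kind introduced below to define the transfer efficiency. This is our illustration, not the paper's code; in particular we take jump operators $L_{ij}=\sqrt{T_{ji}}\,|i\rangle\langle j|$, a normalization under which the $p=1$ diagonal dynamics reduces exactly to a classical rate equation (conventions differ across the quantum-stochastic-walk literature).

```python
# Sketch (ours): QSW interpolation of Eq. (6) on a 4-site chain with a sink
# on site N, integrated with fixed-step RK4. Jump operators are taken as
# L_ij = sqrt(T_ji)|i><j| (an assumed normalization, see lead-in).
import numpy as np

N, Gamma, p = 4, 0.5, 0.1
A = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)  # chain adjacency
T = A / A.sum(axis=1)[:, None]          # classical transition matrix D^{-1} A
P = np.zeros((N, N)); P[-1, -1] = 1.0   # projector on the trapped site N

def drho(rho):
    q = np.real(np.diag(rho))
    unitary = -1j * (1 - p) * (A @ rho - rho @ A)   # H = A, coherent part
    dissip = np.diag(T.T @ q) - rho                 # gains on diag, dephasing off-diag
    sink = -Gamma * (P @ rho + rho @ P)             # irreversible trapping
    return unitary + p * dissip + sink

rho = np.zeros((N, N), complex); rho[0, 0] = 1.0    # walker starts on site 1
dt, eff = 0.01, 0.0
for _ in range(8000):                               # evolve up to t = 80
    k1 = drho(rho); k2 = drho(rho + 0.5 * dt * k1)
    k3 = drho(rho + 0.5 * dt * k2); k4 = drho(rho + dt * k3)
    rho += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    eff += 2 * Gamma * np.real(rho[-1, -1]) * dt    # accumulate the efficiency

assert 0.5 < eff < 1.0                              # most population is trapped
# Population balance: what left the chain went into the trap.
assert np.isclose(np.trace(rho).real + eff, 1.0, atol=1e-3)
```

Sweeping `p` over $[0,1]$ in this sketch is the kind of scan that produces the ${\cal E}(p)$ curves discussed in Sec. 3, here on a toy scale.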
On the other hand, $p=0$ corresponds to the pure QW master equation, where we choose $H=A$. This corresponds to a simplified model of a system (for instance, a light-harvesting complex) where all the energies and couplings are the same (homogeneous graphs) [56]. For simplicity, we neglect the presence of losses and we restrict to the case where only one excitation is present in the network. Note that this formalism has also been used in a different context to propose a more efficient quantum navigation algorithm to rank elements of large (Google page) networks [30, 31]. To study the transfer efficiency of the QSW over the complex network $G$, we add to the right-hand side of Eq. (6) another Lindblad term $L_{N+1}=\Gamma\,[\,2|N+1\rangle\langle N|\,\rho\,|N\rangle\langle N+1|-\{|N\rangle\langle N|,\,\rho\}\,]$, with $\Gamma$ being the irreversible transfer rate from the site $N$ of the graph into some external node $N+1$ (trapping site) where the energy is continuously and irreversibly stored. Hence, the transfer efficiency of the graph will be measured by [10, 12, 13, 29] $${\cal E}(p,t)=2\Gamma\int_{0}^{t}\rho_{NN}(p,t^{\prime})\mathrm{d}t^{\prime}\;.$$ (7) Our figure of merit will then be $${\cal E}(p)\equiv{\cal E}(p,\bar{t})\;\text{with}\;\bar{t}:\;\exists\bar{p}\in[0,1]\;\text{with}\;{\cal E}(\bar{p},\bar{t})\simeq 1\;,$$ (8) usually corresponding to taking $\bar{t}\gg N||A_{ij}||^{-1}$, with $||A_{ij}||$ being the strength of the coupling rates, i.e. considering the trapping site population after the initial transient behavior. 3 Results In the following we will investigate the model described above for a large class of complex networks, including up to thousands of nodes, comparing the relative transfer efficiency ${\cal E}(p)$. 3.1 Regular graphs: chains and square lattices The simplest geometry to start with is a linear chain connecting node $1$ to node $N$ ($D=N-1$). As shown in Fig.
1, when the excitation is initially on site $1$ and in the absence of disorder (homogeneous chain), the quantum limit ($p=0$) provides the optimal transport efficiency, in agreement with previous results in Refs. [10, 12]. However, if the excitation is initially localized on the central site of the chain, the efficiency becomes optimal at the value $p\sim 0.1$. This also holds when we average ${\cal E}(p)$ over all possible choices of the initially excited site. A $d\times d$ regular square lattice is a grid graph whose $N=d^{2}$ vertices lie on a square grid ($D=2d-2$) and are linked only by nearest-neighbor edges – see Fig. 2. It is a regular graph since each vertex has degree $4$ (excluding the boundary nodes). This network is typically characterized by high values of $\cal L$ but small clustering coefficients $\cal C$. As for linear chains, the optimal transport occurs for an intermediate mixing of quantum and classical effects, particularly for $p\sim 0.1$ – see Fig. 3. Here and in the following, the decreasing behavior of ${\cal E}(p)$ for $p$ larger than the optimal value is intuitively expected because of quantum Zeno effects suppressing the transport dynamics. 3.2 Small-World topologies Here we consider another topology, known as the small-world (SW) network, that has been extensively investigated in the literature [1] as describing a plethora of realistic complex networks, ranging from the world-wide web to protein structures. In particular, a small-world network is a graph which interpolates between a regular square lattice and a random graph (Sec. 3.3), i.e. one in which the distance between any two vertices is of the order of that for a random graph but, at the same time, the concept of neighborhood is preserved, as for regular lattices. In other words, this graph is like a square lattice with the introduction of a few long-range edges creating short-cuts between distant nodes [57] – see Fig. 4. It can be constructed following the method proposed in Ref.
[57], but here we follow a slightly different algorithm allowing us to keep the degree of each vertex fixed, as in Ref. [58]. We start from a regular square lattice and randomly choose a vertex $V_{i_{1}}$ and, at random, an edge $V_{i_{1}}-V_{i_{2}}$ that connects vertex $V_{i_{1}}$ to one of its nearest neighbors $V_{i_{2}}$. With probability $r$ this edge is rewired and with probability $1-r$ it is left in place. If the edge has been rewired, (a) we choose at random a second vertex $V_{j_{1}}$ and one of its edges, e.g. the edge $V_{j_{1}}-V_{j_{2}}$ connecting $V_{j_{1}}$ to $V_{j_{2}}$, and (b) we replace the pair of edges $V_{i_{1}}-V_{i_{2}}$ and $V_{j_{1}}-V_{j_{2}}$ with the pair $V_{i_{1}}-V_{j_{2}}$ and $V_{j_{1}}-V_{i_{2}}$, as in the inset of Fig. 5. This process is repeated by moving over the entire lattice, considering each vertex in turn, until one lap is completed. In this way the limit case $r=1$ is a random graph with fixed degree equal to $4$. In the intermediate cases $0<r<1$ an increasing number of long-range edges appears in the graph. In other words, the introduction of a few long-range edges creates short-cuts that connect vertices that would otherwise be much farther apart. More precisely, the characteristic path length $\cal L$ of the rewired graph decreases with increasing $r$ [1]. Therefore, we can use the term “small-world” to refer to a rewired lattice (with fixed degree) with the minimum number of rewired edges such that the characteristic path length $\cal L$ is almost as small as that for the corresponding random graph, while the clustering coefficient $\cal C$ is still as high as for a regular lattice, i.e. much larger than $\cal C$ for random networks [57]. As shown also in Ref. [58], this is already obtained for very small values of $r$ ($r\simeq 0.01$), well before the random graph limit ($r=1$).
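The degree-preserving rewiring step described above can be sketched as follows (our implementation of the double-edge swap idea, not the authors' code; the function name is ours). Each accepted swap replaces edges $(i_{1},i_{2})$ and $(j_{1},j_{2})$ by $(i_{1},j_{2})$ and $(j_{1},i_{2})$, which leaves every vertex degree unchanged.

```python
# Sketch (ours): degree-preserving rewiring with probability r per edge,
# via double-edge swaps on a graph stored as a set of frozenset pairs.
import random

def rewire(edges, r, seed=0):
    rng = random.Random(seed)
    edges = set(edges)
    for e in list(edges):                      # one "lap" over the lattice
        if e not in edges or rng.random() >= r:
            continue                           # edge already swapped, or kept
        i1, i2 = tuple(e)
        disjoint = [x for x in edges if not (x & e)]
        if not disjoint:
            continue
        f = rng.choice(disjoint)               # second, vertex-disjoint edge
        j1, j2 = tuple(f)
        new1, new2 = frozenset({i1, j2}), frozenset({j1, i2})
        if new1 in edges or new2 in edges:
            continue                           # avoid creating multi-edges
        edges -= {e, f}; edges |= {new1, new2}
    return edges

# 8-cycle: every vertex has degree 2 both before and after rewiring.
cycle = {frozenset({i, (i + 1) % 8}) for i in range(8)}
rewired = rewire(cycle, r=0.3)
assert len(rewired) == 8
assert all(sum(v in e for e in rewired) == 2 for v in range(8))
```

Choosing a vertex-disjoint partner edge and rejecting swaps that would duplicate an existing edge keeps the graph simple while fixing the degree sequence, which is the point of the variant in Ref. [58].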
Notice that this model undergoes a ‘genuine’ continuous phase transition as the density of shortcuts tends to zero, with a characteristic length diverging as $r^{-1}$ [1]. We now study the QSW dynamics over this geometry when varying the mixing $p$ of classical and quantum behavior. Again, as shown in Fig. 5, we find the optimality of $p\sim 0.1$ for different values of the rewiring probability $r$. See also Ref. [59] for other studies of quantum transport on small-world networks. Furthermore, we analyze the connection, if any, between the value of $p$ for QSWs on a square lattice and the value of $r$ for CRWs on a rewired small-world structure (SW-CRW) – see Fig. 6. In other words, we extract the effective value of the rewiring probability $r_{e}$ providing the same transport efficiency as a given value of $p$. It turns out that these two quantities $p$ and $r_{e}$ are related and, in particular, follow a power-law behavior (inset of Fig. 6). The more coherent the lattice-QSW dynamics, the larger the number of long-range links required in the SW-CRW model in order to achieve the same transferred energy into the sink. A comparison of the corresponding time evolutions ${\cal E}(p,t)$ of the transfer efficiency for a given pair of values $r_{e}$ and $p$ is shown in Fig. 7. Therefore, not only is ${\cal E}(p)$ the same for the mapped $p$ and $r_{e}$, but the full time evolutions ${\cal E}(p,t)$ also follow the same behavior. Moreover, let us point out that there is a range of high transfer efficiencies, obtained for small $p$ (i.e., closer to the quantum limit), that no CRW can achieve no matter how many long-range links may be added. Examples of kinetic models of energy transport with nonlocal links were studied in Ref. [60]. Finally, we focus on the robustness of the transport efficiency optimality with respect to rewiring or deleting links.
In the context of complex network theory, this is called static robustness and usually refers to the resilience of real graphs (such as electric networks, the WWW, social networks, etc.) to external attacks or random failures [1]. As shown in Figs. 8 and 9, the optimality of $p\sim 0.1$ is very robust against an increasing number of rewired or deleted links. This geometric robustness is especially relevant when dealing with real physical systems behaving as noisy quantum walkers on imperfect structures. 3.3 Random and Scale-Free graphs Another important class of complex networks is represented by random graphs (RG) and scale-free (SF) networks [1]. The former are defined as structures with a Poissonian distribution of the node degree, and can also be constructed as the limiting case ($r\rightarrow 1$) of the small-world networks studied above – see the example in the left panel of Fig. 10. They are also characterized by small values of both $\cal L$ and $\cal C$. A scale-free network is, instead, a graph with a power-law distribution $p(d)\sim d^{-\gamma}$ of the vertex degree $d$. It displays a small characteristic path length $\cal L$, as for small-world networks and random graphs, but differs from them in having a power-law degree distribution. This graph can be constructed in the following way. Using the preferential attachment growing procedure introduced by Barabási and Albert [61], one starts from $v+1$ all-to-all connected vertices and at each time step adds a new vertex with $v$ edges. These $v$ edges point to old vertices with probability $q_{i}=\frac{d_{i}}{\sum_{j}d_{j}}$, where $d_{i}$ is the degree of vertex $V_{i}$, as defined in Sec. 2.1. This procedure allows one to select the exponent $\gamma$ of the power-law degree scaling, with $\gamma=3$ in the thermodynamic limit (i.e., $N\longrightarrow\infty$) – see the example in the right panel of Fig. 10.
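The preferential attachment rule $q_{i}=d_{i}/\sum_{j}d_{j}$ can be implemented with a "stub list" in which each vertex appears once per unit of degree, so that uniform sampling from the list reproduces the degree-proportional probability. The sketch below is a minimal illustration; the stub-list trick is a standard implementation device and is our addition, not something specified in the text.

```python
import random

def barabasi_albert(N, v, seed=0):
    """Grow a BA graph: start from v+1 fully connected vertices, then each
    new vertex attaches v edges with probability proportional to degree."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(v + 1) for j in range(i + 1, v + 1)}
    # each vertex appears in 'stubs' once per unit of degree, so uniform
    # sampling from it implements q_i = d_i / sum_j d_j
    stubs = [w for (i, j) in edges for w in (i, j)]
    for new in range(v + 1, N):
        targets = set()
        while len(targets) < v:          # v distinct existing vertices
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.add((t, new))
            stubs += [t, new]
    return edges

E = barabasi_albert(200, 3)
```

Growing the graph this way yields the heavy-tailed degree distribution described above, with a few highly connected hubs among the oldest vertices.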
The $p$-dependence of the transfer efficiency ${\cal E}(p)$ for both graphs is shown in Fig. 11. Again, and interestingly enough, the optimal transport efficiency is reached for $p\sim 0.1$. 3.4 Other Complex Networks Here we generalize the analysis above to a wider class of large complex networks. First of all, we find that, as expected, the transport efficiency ${\cal E}(p)$ of CRW dynamics depends linearly on the graph diameter $D$ introduced in Sec. 2.1 – see Fig. 12. In other words, the larger $D$ is, the lower the transport efficiency ${\cal E}(p)$ on the corresponding network. However, it turns out that the latter can be enhanced by adding some quantum coherence ($p<1$) to the dynamics, as found above. Indeed, for a very large class of complex networks (including random graphs, small-worlds, scale-free networks, rings, stars, dendrimers, $K$-ary trees, etc. [1]) with $D>3$, we find that the optimal transport efficiency is almost always achieved when $$p_{opt}\sim 0.1\;.$$ (9) The main exception is represented by FC graphs ($D=1$), which are optimal for $p=1$, i.e. for CRWs – see the inset of Fig. 12. This can be intuitively explained by the fact that on graphs with very small $D$ (and hence a high mixing rate $\tau_{mix}$), such as FC networks, CRWs propagate extremely quickly, while they are very slow on a linear chain (the largest $D$, and hence the smallest $\tau_{mix}$). Conversely, an essentially opposite behaviour is observed for QWs, i.e. larger $D$ usually implies faster transport. The physical intuition behind this is that for large $D$ one has a higher number $\pi$ of distinct eigenstates of the adjacency matrix $A$ (since $\pi\geq D+1$), and hence fewer energy-trapped (or localized) states in the dynamics – see the concept of invariant subspaces introduced in Ref. [12]. On the other hand, an FC graph ($D=1$ and $\pi=2$) is the worst geometry for QWs; indeed, because of destructive interference effects, the transfer efficiency ${\cal E}(p=0)$ cannot be larger than $1/(N-1)$ [12].
However, by means of a universal mixing ($p\sim 0.1$) of, loosely speaking, $90\%$ quantum walk and $10\%$ classical random walk, the energy transport becomes optimal and robust, irrespective of the particular underlying geometry of the complex network. 4 Conclusions and Outlook In this paper we have extensively investigated the transport properties of quantum stochastic walks over a wide family of large complex networks. More specifically, we numerically calculate and compare the transport performance of these graphs via the transfer efficiency into a trapping site, motivated also by the structure of light-harvesting complexes, where the interplay of quantum coherence and environmental noise has recently been shown to play a fundamental role in explaining the remarkably efficient and fast exciton energy transfer. We find that, roughly speaking, a mixture of $90\%$ quantum dynamics and $10\%$ classical dynamics, i.e. $p\sim 0.1$, leads to the optimal transport efficiency for a large class of complex networks. On top of that, this optimum is not only universal but also robust with respect to geometric changes such as rewiring or deleting links. However, there are some exceptions, represented by very well connected graphs (e.g., FCs and graphs with $D\ll N$), where instead classical random walks, i.e. $p=1$, provide the optimal transport into the trapping site. This might be a consequence of the fact that for very well connected graphs there are many trapped states due to destructive interference (large invariant subspaces almost covering the full Hilbert space [12]), which are fully destroyed only in the classical limit. The transition behaviour from $p=1$ to $p\sim 0.1$ as the network is made less and less connected ($D\gg 1$), and a deeper analysis of this universal and robust optimality, also based on a possible analytical derivation by means of Lieb-Robinson bounds [62, 63], will be investigated in a forthcoming paper.
Finally, our results could be tested on already experimentally available benchmark platforms such as cold atoms in optical lattices, artificial light-harvesting structures, and photonics-based architectures, hence also inspiring the design of optimally efficient transport nanostructures for novel solar-energy and quantum information technologies. Acknowledgments This work has been supported by EU FP7 Marie–Curie Programme (Career Integration Grant) and by MIUR–FIRB grant (Project No. RBFR10M3SB). We acknowledge QSTAR for computational resources, based also on GPU-CUDA programming on NVIDIA Tesla C2075 GPU computing processors.   References [1] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, D.-U. Hwang, Phys. Rep. 424, 175 (2006). [2] G. S. Engel, T. R. Calhoun, E. L. Read, T.-K. Ahn, T. Mancal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming, Nature 446, 782 (2007). [3] H. Lee, Y.-C. Cheng, and G. R. Fleming, Science 316, 1462 (2007). [4] E. Collini, C. Y. Wong, K. E. Wilk, P. M. G. Curmi, P. Brumer, and G. D. Scholes, Nature 463, 644 (2010). [5] G. Panitchayangkoon, D. Hayes, K. A. Fransted, J. R. Caram, E. Harel, J. Wen, R. E. Blankenship, and G. S. Engel, Proc. Natl. Acad. Sci. USA 107, 12766 (2010). [6] R. Hildner, D. Brinks, J. B. Nieder, R. J. Cogdell, N. F. van Hulst, Science 340, 1448 (2013). [7] M. Mohseni, Y. Omar, G. S. Engel, and M.B. Plenio eds., Quantum effects in biology (Cambridge University Press, Cambridge, 2013). [8] S.F. Huelga and M.B. Plenio, Contemp. Phys. 54, 181–207 (2013). [9] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, J. Chem. Phys. 129, 174106 (2008). [10] M.B. Plenio and S.F. Huelga, New J. Phys. 10, 113019 (2008). [11] A. Olaya-Castro, C.F. Lee, F. Fassioli Olsen, and N.F. Johnson, Phys. Rev. B 78, 085115 (2008). [12] F. Caruso, A.W. Chin, A. Datta, S.F. Huelga, and M.B. Plenio, J. Chem. Phys. 131, 105106 (2009); Phys. Rev. A 81, 062346 (2010). [13] A.W. Chin, A. Datta, F. Caruso, S.F. Huelga, and M.B. Plenio, New J.
Phys. 12, 065002 (2010). [14] Y. Aharonov, L. Davidovich and N. Zagury, Phys. Rev. A 48, 1687 (1993). [15] O. Mülken and A. Blumen, Phys. Rep. 502, 37-87 (2011). [16] M. Santha, Quantum walk based search algorithms, in Theory and Applications of Models of Computation, Lecture Notes in Computer Science, edited by M. Agrawal, D.Z. Du, Z.H. Duan, A.S. Li (Springer, Berlin, 2008), Vol. 4978, p. 31. [17] E. Farhi and S. Gutmann, Phys. Rev. A 58, 915-928 (1998). [18] A. Childs, E. Farhi, and S. Gutmann, Quant. Info. Proc. 1, 35 (2002). [19] A. Childs, Phys. Rev. Lett. 102, 180501 (2009). [20] J. Kempe, Contemp. Phys. 44, 307 (2003). [21] V. Kendon, Math. Struct. in Comp. Sci. 17, 1169–1220 (2007). [22] S.E. Venegas-Andraca, Quantum Inf. Process. 11, 1015 (2012). [23] A. Ambainis, Int. J. Quantum Inform. 01, 507 (2003). [24] L. K. Grover, Phys. Rev. Lett. 79, 325 (1997). [25] A. M. Childs, D. Gosset, and Z. Webb, Science 339, 791 (2013). [26] S. Bose, Phys. Rev. Lett. 91, 207901 (2003). [27] M. Christandl, N. Datta, A. Ekert, and A. J. Landahl, Phys. Rev. Lett. 92, 187902 (2004). [28] M.B. Plenio, J. Hartley and J. Eisert, New J. Phys. 6, 36 (2004). [29] F. Caruso, S.F. Huelga, and M.B. Plenio, Phys. Rev. Lett. 105, 190501 (2010). [30] E. Sánchez-Burillo, J. Duch, J. Gómez-Gardeñes, and D. Zueco, Sci. Rep. 2, 605 (2012). [31] G. D. Paparo and M.A. Martin-Delgado, Sci. Rep. 2, 444 (2012); G.D. Paparo, M. Müller, F. Comellas, and M.A. Martin-Delgado, Sci. Rep. 3, 2773 (2013). [32] F.W. Strauch, Phys. Rev. A 74, 030301(R) (2006). [33] J. Du, H. Li, X. Xu, M. Shi, J. Wu, X. Zhou, and R. Han, Phys. Rev. A 67, 042316 (2003). [34] C. A. Ryan, M. Laforest, J. C. Boileau, and R. Laflamme, Phys. Rev. A 72, 062317 (2005). [35] H. Schmitz, R. Matjeschk, C. Schneider, J. Glueckert, M. Enderlein, T. Huber, and T. Schaetz, Phys. Rev. Lett. 103, 090504 (2009). [36] F. Zähringer, G. Kirchmair, R. Gerritsma, E. Solano, R. Blatt, and C. F. Roos, Phys. Rev. Lett. 104, 100503 (2010). [37] M.
Karski, L. Förster, J.-M. Choi, A. Steffen, W. Alt, D. Meschede, and A. Widera, Science 325, 174 (2009). [38] H.B. Perets et al., Phys. Rev. Lett. 100, 170506 (2008). [39] M. A. Broome, A. Fedrizzi, B. P. Lanyon, I. Kassal, A. Aspuru-Guzik and A. G. White, Phys. Rev. Lett. 104, 153602 (2010). [40] T. Kitagawa et al., Nature Commun. 3, 882 (2012). [41] A. Schreiber, K. N. Cassemiro, V. Potocek, A. Gabris, P. J. Mosley, E. Andersson, I. Jex, and C. Silberhorn, Phys. Rev. Lett. 104, 050502 (2010). [42] A. Schreiber et al., Science 336, 55–58 (2012). [43] Y.-C. Jeong, C. Di Franco, H.-T. Lim, M. Kim, and Y.-H. Kim, Nat. Commun. 4, (2013). [44] A. Peruzzo et al., Science 329, 1500–1503 (2010). [45] J.O. Owens et al., New J. Phys. 13, 075003 (2011). [46] L. Sansoni et al., Phys. Rev. Lett. 108, 010502 (2012). [47] P. P. Rohde, A. Schreiber, M. Stefanak, I. Jex, A. Gilchrist, and C. Silberhorn, New J. Phys. 13, 013001 (2011). [48] M. Avalle and A. Serafini, Eprint arXiv:1311.0403 (2013). [49] E. Harel, J. Chem. Phys. 136, 174104 (2012). [50] M. Mohseni, A. Shabani, S. Lloyd, Y. Omar, H. Rabitz, J. Chem. Phys. 138, 204309 (2013). [51] M. Sarovar and K.B. Whaley, New J. Phys. 15, 013030 (2013). [52] P. Van Mieghem, Graph Spectra for Complex Networks (Cambridge University Press, 2011). [53] J.D. Whitfield, C.A. Rodríguez-Rosario, and A. Aspuru-Guzik, Phys. Rev. E 81, 022323 (2010). [54] D.J. Watts, S.H. Strogatz, Nature 393, 440 (1998). [55] G.H. Weiss, Aspects and Applications of the Random Walk, North-Holland, Amsterdam, 1994. [56] A more realistic model including static disorder in both energy and coupling terms, with also specific distance-dependent couplings referred to particular physical systems, will be analyzed in a forthcoming paper. [57] D.J. Watts, S.H. Strogatz, Nature 393, 440 (1998); D.J. Watts, Small Worlds (Princeton Univ. Press, Princeton, New Jersey, 1999). [58] F. Caruso, A. Pluchino, V. Latora, S. Vinciguerra, and A. Rapisarda, Phys. Rev.
E 75, 055101(R) (2007); F. Caruso, V. Latora, A. Pluchino, A. Rapisarda, and B. Tadić, Eur. Phys. J. B 50, 243–247 (2006). [59] O. Mülken, V. Pernice, and A. Blumen, Phys. Rev. E 76, 051125 (2007). [60] J. Cao and R.J. Silbey, J. Phys. Chem. A 113 (50), 13825–13838 (2009). [61] A. L. Barabási and R. Albert, Science 286, 509 (1999). [62] M. Kliesch, C. Gogolin, and J. Eisert, invited book chapter in [L. D. Site and V. Bach, eds., Many-Electron Approaches in Physics, Chemistry and Mathematics: A Multidisciplinary View (Springer)], Eprint arXiv:1306.0716 (2013). [63] M.J. Kastoryano and J. Eisert, J. Math. Phys. 54, 102201 (2013).
New H-band Galaxy Number Counts: A Large Local Hole in the Galaxy Distribution? W.J. Frith, N. Metcalfe & T. Shanks Dept. of Physics, Univ. of Durham, South Road, Durham DH1 3LE, UK E-mail: w.j.frith@durham.ac.uk (Accepted 2005. Received 2005; in original form 2005) Abstract We examine $H$-band number counts determined using new photometry over two fields with a combined solid angle of 0.30 deg${}^{2}$ to $H\approx 19$, as well as bright data ($H\leq 14$) from the 2 Micron All Sky Survey (2MASS). First, we examine the bright number counts from 2MASS extracted for the $\approx$4000 deg${}^{2}$ APM survey area situated around the southern galactic pole. We find a deficiency of $\approx$25 per cent at $H=13$ with respect to homogeneous predictions, in line with previous results in the $B$-band and $K_{s}$-band. In addition we examine the bright counts extracted for $|b|>$20${}^{\circ}$ (covering $\approx$27$\,$000 deg${}^{2}$); we find a relatively constant deficit in the counts of $\approx$15-20 per cent to $H=14$. We investigate various possible causes for these results; namely, errors in the model normalisation, unexpected luminosity evolution (at low and high redshifts), errors in the photometry, incompleteness and large-scale structure. In order to address the issue of the model normalisation, we examine the number counts determined for the new faint photometry presented in this work and also for faint data ($H\lesssim 20$) covering 0.39 deg${}^{2}$ from the Las Campanas Infra-Red Survey (LCIRS). In each case a zeropoint is chosen to match that of the 2MASS photometry at bright magnitudes, using several hundred matched point sources. We find a large offset between 2MASS and the LCIRS data of 0.28$\pm$0.01 magnitudes.
Applying a consistent zeropoint, the faint data, covering a combined solid angle of 0.69 deg${}^{2}$, is in good agreement with the homogeneous prediction used previously, with a best fit normalisation a factor of 1.095${}_{-0.034}^{+0.035}$ higher. We examine possible effects arising from unexpected galaxy evolution and photometric errors and find no evidence for a significant contribution from either. However, incompleteness in the 2MASS catalogue ($<10$ per cent) and in the faint data (likely to be at the few per cent level) may have a significant contribution. Addressing the contribution from large-scale structure, we estimate the cosmic variance in the bright counts over the APM survey area and for $|b|>$20${}^{\circ}$ expected in a $\Lambda$CDM cosmology using 27 mock 2MASS catalogues constructed from the $\Lambda$CDM Hubble Volume simulation. Accounting for the model normalisation uncertainty and taking an upper limit for the effect arising from incompleteness, the APM survey area bright counts are in line with a rare fluctuation in the local galaxy distribution of $\approx 2.5\sigma$. However, the $|b|>$20${}^{\circ}$ counts represent a $4.0\sigma$ fluctuation, and imply a local hole which extends over the entire local galaxy distribution and is at odds with $\Lambda$CDM. The increase in faint near infrared data from the UK Infrared Deep Sky Survey (UKIDSS) should help to resolve this issue. keywords: galaxies: photometry - cosmology: observations - large-scale structure of the Universe - infrared: galaxies 1 Introduction A recurring problem arising from the study of bright galaxy number counts has been the measured deficiency of galaxies around the southern galactic pole.
This was first examined in detail by Shanks (1990) and subsequently by the APM galaxy survey (Maddox et al., 1990a), which observed a large deficit in the number counts ($\approx$50 per cent at $B=$16, $\approx$30 per cent at $B=$17) over a $\approx$4000 deg${}^{2}$ solid angle. If this anomaly were due solely to features in the galaxy distribution, it would be at odds with recent measurements of the variance of local galaxy density fluctuations (e.g. Hawkins et al., 2003; Frith et al., 2005b; Cole et al., 2005) and the expected linear growth of density inhomogeneities at large scales. Maddox et al. (1990b) examined possible causes of this deficiency. From redshift survey results over the APM survey area (Loveday et al., 1992), it was argued that a weak local under-density contributed to the observed deficiency at the $\lesssim 10$ per cent level at $B\approx$17. Instead, Maddox et al. (1990b) suggested that strong low redshift galaxy evolution was the dominant contribution. This phenomenon has also been suggested as a possible explanation for large deficiencies in the Sloan Digital Sky Survey (SDSS; Loveday, 2004), although models without such strong low redshift evolution provide predictions consistent with observed number redshift distributions (e.g. Broadhurst et al., 1988; Colless et al., 1990; Hawkins et al., 2003). In contrast, Shanks (1990) argued that evolution could not account for the observed slope and that large-scale structure was the principal cause of the deficiency in the counts. However, another possible contribution to the low counts might be errors in the APM photometry. Comparing the photographic APM photometry with $B$-band CCD data, Metcalfe et al. (1995) detected a small residual scale error in the APM survey zeropoints for $B\gtrsim 17$.
Correcting for this offset, the counts were now in good agreement with homogeneous predictions at faint magnitudes ($B\gtrsim 17.5$); however, the problematic deficiency at brighter magnitudes remained. More recently, Busswell et al. (2004) used $B$-band CCD data over $\approx$337 deg${}^{2}$ within the APM survey area to provide the most accurate comparison to date with a sample of the APM survey photometry. The photometric zeropoint of this CCD data was in excellent agreement with the Millennium Galaxy Catalogue (Driver, 2003) and the Sloan Digital Sky Survey Early Data Release (Yasuda et al., 2001). However, a comparison with the APM photometry suggested a large offset of 0.31 magnitudes for $B<$17.35. Applying this to the APM survey counts, a deficiency of $\approx$25 per cent remained at $B=$16; Busswell et al. (2004) determined that such a deficiency in the local galaxy distribution would still be at odds with a $\Lambda$CDM form to the galaxy correlation function and power spectrum at large scales. In order to examine this issue independently, bright number counts have also been examined in the near infrared (Frith et al., 2003, 2004, 2005a). These wavelengths are particularly useful for such analysis as the number count predictions are fairly insensitive to the evolutionary model or the assumed cosmology at bright magnitudes (see Fig. 1); current observations are in remarkable agreement with predictions in the $K$-band to $K\approx$23, for example (McCracken et al., 2000). In particular, Frith et al. (2005a) examined $K_{s}$-band number counts selected from the 2 Micron All Sky Survey (2MASS; Jarrett, 2004). First, the counts over the APM survey area were determined; a similar deficiency was observed to the APM survey counts (with the zeropoint offset determined by Busswell et al. (2004) applied), with a $\approx$25 per cent deficit at $K_{s}=$12 compared to the no evolution model of Metcalfe et al. (2001).
Using a $\Lambda$CDM form for the angular correlation function at large scales and assuming the observed counts were solely due to features in the local galaxy distribution, the observed counts represented a $5\sigma$ fluctuation. However, this result was complicated by the fact that the 2MASS $K_{s}$-band number counts for almost the entire survey ($|b|>$20${}^{\circ}$, covering $\approx$27$\,$000 deg${}^{2}$) were also low, with a constant deficiency of $\approx$20 per cent between $K_{s}=$10 and $K_{s}=$13.5. Did this surprising result perhaps indicate that the $K_{s}$-band Metcalfe et al. (2001) model normalisation was too high? Or, as suggested previously, could low redshift luminosity evolution significantly affect the bright counts? These issues were also addressed by Frith et al. (2005a): First, the Metcalfe et al. (2001) model was compared with faint $K$-band data collated from the literature. Fitting in the magnitude range 14$<K<$18, it was found that the best fit model normalisation was slightly too high, although not significantly (this magnitude range was used so as to avoid fluctuations in the counts arising from large-scale structure at bright magnitudes and significant effects from galaxy evolution at the faint end). Accounting for the normalisation uncertainty (of $\pm$6 per cent), the observed deficiency in the $K_{s}$-band counts over the APM survey area still represented a $\approx 3\sigma$ fluctuation. Second, the issue of low redshift luminosity evolution was also addressed: 2MASS galaxies below $K_{s}=13.5$ were matched with the Northern and Southern areas of the 2dF Galaxy Redshift Survey (2dFGRS; Colless et al., 2001). The resulting $n(z)$, covering $>1000$ deg${}^{2}$ in total, was consistent with the no evolution model of Metcalfe et al. (2001). In addition, these $K_{s}$-band redshift distributions were used to form predictions for the number counts over the Northern and Southern 2dFGRS areas respectively.
This was done by multiplying the luminosity function parameter $\phi^{*}$ (which governs the model normalisation) used in the Metcalfe et al. (2001) model by the relative density observed in the $K_{s}$-band $n(z)$ as a function of redshift. These ‘variable $\phi^{*}$ models’ were then compared with 2MASS counts extracted for the 2dFGRS areas in order to determine whether the observed counts were consistent with being due solely to features in the local galaxy distribution; the variable $\phi^{*}$ models were in good agreement with the number counts, indicating that low redshift luminosity evolution is unlikely to have a significant impact on the observed deficiency in the counts, in the $K_{s}$-band at least. In this paper we aim to address the issue of low, bright number counts in the near infrared $H$-band. In particular, we wish to address a drawback of the $K_{s}$-band analysis of Frith et al. (2005a): the issue of the number count model normalisation; while the $K_{s}$-band model used was compared with faint data and found to be in good agreement, the level to which systematic effects, arising perhaps via zeropoint offsets between the bright and faint data or cosmic variance in the faint data, might affect the conclusions was uncertain. We address this issue in the $H$-band using new faint data covering 0.3 deg${}^{2}$ to $H=18$, calibrated to match the 2MASS zeropoint. In section 2, we first verify that the $H$-band data provide number counts over the APM survey area which are consistent with the previous results in the $B$ and $K_{s}$-bands (Busswell et al., 2004; Frith et al., 2005a), and that the form of the counts is not significantly affected by low redshift luminosity evolution, through comparisons with the variable $\phi^{*}$ models described above. In section 3, we provide details of the data reduction of the new faint $H$-band photometry. The associated counts are presented in section 4.
In section 5 we discuss possible systematics affecting the bright number counts, including the model normalisation and incompleteness. The conclusions follow in section 6. 2 Bright $H$-band counts from 2MASS We wish to examine the form of bright number counts in the $H$-band in order to verify that the counts over the APM survey area ($\approx$4000 deg${}^{2}$ around the southern galactic pole) are comparable to those measured previously in the optical $B$-band and near infrared $K_{s}$-band (Busswell et al., 2004; Frith et al., 2005a). The near infrared has the advantage of being sensitive to the underlying stellar mass and is much less affected by recent star formation history than optical wavelengths. For this reason, number count predictions in the near infrared are insensitive to the evolutionary model at bright magnitudes. In Fig. 1 we show faint $H$-band data collated from the literature along with bright counts extracted from 2MASS over $\approx$27$\,$000 deg${}^{2}$. The 2MASS magnitudes are determined via the 2MASS $H$-band extrapolated magnitude; this form of magnitude estimator has previously been shown to be an excellent estimate of the total flux in the $K_{s}$-band (Frith et al., 2005b, c) through comparison with the total magnitude estimates of Jones et al. (2004) and the $K$-band photometry of Loveday (2000). Throughout this paper we use 2MASS $H$-band counts determined via this magnitude estimator. We also show two models in Fig. 1 corresponding to homogeneous predictions for no evolution and for pure luminosity evolution. These are constructed from the $H$-band luminosity function parameters listed in Metcalfe et al. (2005) and the $K+E$-corrections of Bruzual & Charlot (1993). At bright magnitudes the two are indistinguishable; only at $H\gtrsim 18$ do the model predictions begin to separate. The faint data is in good agreement with both the no evolution and pure luminosity evolution predictions to $H\approx$26.
Before examining the $H$-band counts over the APM survey area, we first verify that the bright counts are consistent with relatively insignificant levels of low redshift luminosity evolution, in the manner carried out by Frith et al. (2005a) for the $K_{s}$-band counts. In the upper panels of Fig. 2 we show $H$-band $n(z)$ to the 2MASS limiting magnitude of $H=14$, determined through matched 2MASS and 2dFGRS galaxies over the 2dFGRS Northern (left-hand panels) and Southern (right-hand panels) declination strips (see Frith et al. (2005a) for further details of the matching technique). The solid lines indicate the expected homogeneous distribution constructed from the pure luminosity evolution predictions of Metcalfe et al. (2005) (there is no discernible difference between this and the no evolution prediction). In the lower panels we divide through by this prediction; these panels show the relative density as a function of redshift. The observed $n(z)$ are consistent with the expected trends, with relatively homogeneous distributions beyond $z=0.1$ (1 per cent and 8 per cent over-dense in the North and South respectively for $0.1\leq z\leq 0.2$). For this reason, Fig. 2 suggests that the level of luminosity evolution is relatively insignificant at low redshifts in the $H$-band; strong luminosity evolution would produce an extended tail in the predicted $n(z)$ which is not observed in the data. As a further check against strong low redshift luminosity evolution, we can use the observed $n(z)$ to predict the expected $H$-band number counts over the 2dFGRS declination strips. This technique is described in detail in Frith et al. (2003, 2005a). To recap, we use the observed density (Fig. 2, lower panels) to vary the luminosity function normalisation ($\phi^{*}$) used in the Metcalfe et al. (2005) model as a function of redshift (for $z\leq 0.2$). We show these ‘variable $\phi^{*}$ models’ along with the 2MASS $H$-band counts extracted for the 2dFGRS strips in Fig. 3.
In each case, the upper panels indicate the number counts on a logarithmic scale; in the lower panels we divide through by the homogeneous prediction. In both the Northern and Southern 2dFGRS areas, the counts are in good agreement with the expected trend, defined by the corresponding variable $\phi^{*}$ model. This indicates that real features in the local galaxy distribution are the dominant factor in the form of the observed $H$-band number counts, and that strong low redshift luminosity evolution is unlikely to have a significant role in any under-density observed in the APM survey area. We are now in a position to examine the number counts over the APM survey area. In Fig. 4 we show counts extracted for the $\approx$4000 deg${}^{2}$ field along with the homogeneous model and the Northern and Southern 2dFGRS variable $\phi^{*}$ models shown in Fig. 3. The form of the counts is in good agreement with the $B$ (Busswell et al., 2004) and $K_{s}$-band (Frith et al., 2005a) bright number counts measured over the APM survey area, with a deficiency of $\approx$25 per cent below $H=13$. In addition, the form of the counts is similar to that of the counts extracted from the 2dFGRS Southern declination strip and the corresponding variable $\phi^{*}$ model (this is also observed in the $B$ and $K_{s}$-bands); this perhaps indicates that the form of the local galaxy distribution in the $\approx$600 deg${}^{2}$ 2dFGRS Southern declination strip is similar to that of the much larger APM survey area, with an under-density of $\approx$25 per cent to $z=$0.1. However, the 2MASS $H$-band counts over almost the entire survey ($|b|>$20${}^{\circ}$, $\approx$27$\,$000 deg${}^{2}$) are also deficient (as are the $K_{s}$-band counts), with a relatively constant deficit of $\approx$15-20 per cent to $H=14$ (Fig. 3, right-hand panels).
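The variable $\phi^{*}$ construction exploits the fact that $\phi^{*}$ enters the predicted counts linearly, so rescaling it by the observed relative density $\delta(z)$ is equivalent to rescaling each redshift shell's contribution to $dN/dm$. A toy sketch of this idea under a low-redshift Euclidean approximation ($d=cz/H_{0}$, no $K+E$ corrections) and with illustrative Schechter parameters, not the Metcalfe et al. (2005) $H$-band values:

```python
import math

# Illustrative Schechter parameters (assumptions, for the sketch only)
MSTAR, ALPHA, PHISTAR = -23.5, -1.0, 5e-3   # mag, slope, Mpc^-3
C_H0 = 3e5 / 70.0                            # Hubble distance in Mpc (h = 0.7 assumed)

def schechter(M):
    """Schechter luminosity function in magnitudes, phi(M) dM."""
    x = 10 ** (0.4 * (MSTAR - M))
    return 0.4 * math.log(10) * PHISTAR * x ** (ALPHA + 1) * math.exp(-x)

def counts(m, delta=lambda z: 1.0, zmax=0.3, nz=300):
    """dN/dm per steradian at apparent magnitude m.

    A 'variable phi*' model simply multiplies the integrand by the
    observed relative density delta(z); delta = 1 recovers the
    homogeneous prediction."""
    dz = zmax / nz
    total = 0.0
    for k in range(nz):
        z = (k + 0.5) * dz
        d = C_H0 * z                               # distance in Mpc
        M = m - 5 * math.log10(d * 1e6 / 10.0)     # absolute magnitude
        total += delta(z) * schechter(M) * d * d * C_H0 * dz
    return total

hom = counts(13.0)
under = counts(13.0, delta=lambda z: 0.75 if z < 0.1 else 1.0)
assert under < hom  # a 25 per cent under-density at z < 0.1 lowers the bright counts
```

Because the bulk of the bright counts in this toy model comes from $z<0.1$, the rescaled prediction sits well below the homogeneous one, mimicking the behaviour of the variable $\phi^{*}$ models in Fig. 3.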
The low $|b|>$20${}^{\circ}$ counts raise the question as to whether systematic effects are significant, or whether these counts are due to real features in the local galaxy distribution, as suggested by the agreement between the variable $\phi^{*}$ models and the corresponding counts in Fig. 3. If the latter is true, then the size of the local hole would not only be much larger than previously suggested but would also represent an even more significant departure from the form of clustering at large scales expected in a $\Lambda$CDM cosmology. In the following two sections we use new faint $H$-band photometry to address a possible source of systematic error: the model normalisation. Other possible causes for the low counts are discussed in section 5. 3 New faint $H$-band data 3.1 Observations & data reduction Our data were taken during a three-night observing run in September 2004 at the f/3.5 prime focus of the Calar Alto 3.5m telescope in the Sierra de Los Filabres in Andalucia, southern Spain. The $\Omega$-2000 infra-red camera contains a $2048\times 2048$ pixel HAWAII-2 Rockwell detector array, with 18.5$\mu$m pixels, giving a scale of 0.45”/pixel at the prime focus. All observations were taken with the $H$-band filter. Poor weather meant that only just over one night’s worth of data were usable, and even then conditions were not photometric. Our primary objective was to image the William Herschel Deep Field as deeply as possible (the results of which are presented in a forthcoming paper), but time was available at the start of each night to image several ’random’ fields for 15 minutes each. These were composed of individual 3 second exposures, stacked in batches of 10 before readout. A dithering pattern on the sky with a shift of up to $\pm$25” around the nominal centre was adopted.
Data reduction was complicated by the fact that both the dome and twilight sky flat fields appeared to have a complicated out-of-focus pattern of the optical train imprinted upon them (probably an image of the top end of the telescope). This appeared (in reverse) in the science data if these frames were used for flat-fielding. We therefore constructed a master flat field by medianing together all the science frames from a particular night. This was then used to flat-field all the data. Then, individual running medians were constructed from batches of 10 or so temporally adjacent frames, and these were subtracted from each frame to produce a flat, background-subtracted image. These were then aligned and stacked together (with sigma clipping to remove hot pixels). 3.2 Calibration Photometric calibration of the $H$-band images is obtained through comparison with the 2MASS point source catalogue. Fig. 5 shows the 2MASS magnitudes compared with our data for 393 matched point sources over the Calar Alto field and the William Herschel Deep Field. The zeropoint of our data is chosen to match that of the 2MASS objects and is accurate to $\pm$0.01 magnitudes. The large datapoints and errorbars indicate the mean offset and $rms$ dispersion as a function of magnitude. When comparing this data to the 2MASS number counts at bright magnitudes it is important to note that the 2MASS point source catalogue includes a maximum bias in the photometric zeropoint of $<$2 per cent around the sky (see the 2MASS website). 3.3 Star/Galaxy separation We use the SExtractor software to separate objects below $H=18$; for this magnitude limit, the associated CLASS_STAR parameter provides a reliable indicator of stars and galaxies. We identify 30.0 per cent as galaxies (CLASS_STAR$<$0.1) and 58.9 per cent as stars (CLASS_STAR$>$0.9), leaving 11.1 per cent unclassified.
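The zeropoint calibration of section 3.2 reduces to computing the mean magnitude offset over matched point sources, with the standard error of the mean testing the quoted $\pm$0.01 mag accuracy. A minimal sketch, with invented magnitudes standing in for the real 393 matches:

```python
import math

def zeropoint_offset(m_2mass, m_instr):
    """Mean offset, rms dispersion and standard error between matched
    point-source magnitudes.  Adding the returned offset to the
    instrumental magnitudes ties them to the 2MASS zeropoint."""
    diffs = [a - b for a, b in zip(m_2mass, m_instr)]
    n = len(diffs)
    mean = sum(diffs) / n
    rms = math.sqrt(sum((d - mean) ** 2 for d in diffs) / n)
    return mean, rms, rms / math.sqrt(n)

# Toy matched catalogue (illustrative values only, not the real data):
twomass = [12.10, 12.85, 13.40, 14.02, 14.55]
instr = [11.92, 12.66, 13.21, 13.85, 14.34]
zp, rms, err = zeropoint_offset(twomass, instr)
```

With several hundred matches, even an rms dispersion of a few hundredths of a magnitude gives a standard error at the $\pm$0.01 mag level quoted above.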
4 Faint $H$-band counts 4.1 Comparison with the LCIRS Before determining number counts for the new $H$-band data described in section 3, we first examine the photometry of the Las Campanas Infra-Red Survey (LCIRS; Chen et al., 2002). The published data covers 847 arcmin${}^{2}$ in the Hubble Deep Field South (HDFS) and 561 arcmin${}^{2}$ in the Chandra Deep Field South (CDFS); the combined solid angle (0.39 deg${}^{2}$) represents the largest $H$-band dataset for $14\lesssim H\lesssim 20$. The associated number counts are $\approx$15 per cent below the homogeneous Metcalfe et al. (2005) predictions at $H=18$ (see Fig. 1). This is significant, as if the model normalisation were altered to fit, the deficiency in the 2MASS counts at bright magnitudes (Fig. 3) would become much less severe. However, various other surveys show higher counts, although over much smaller solid angles. With the LCIRS data in particular therefore, it is vital to ensure that the photometric zeropoint is consistent with the 2MASS data at bright magnitudes. In Fig. 6 we compare the LCIRS and 2MASS $H$-band photometry for 438 point sources matched over the HDFS and CDFS fields. There appears to be a large offset which is approximately constant for $H>$12. Using point sources matched at all magnitudes, we determine a mean offset of -0.28$\pm$0.01 magnitudes; this is robust to changes in the magnitude range and is consistent over both the HDFS and CDFS fields. 4.2 New $H$-band counts In Fig. 7 we show counts determined for the new $H$-band data described in section 3, the 0.27 deg${}^{2}$ CA field and the 0.06 deg${}^{2}$ WHDF (see also table 1). Both sets of counts are in excellent agreement with the pure luminosity evolution and no evolution homogeneous predictions of Metcalfe et al. (2005). In addition we show LCIRS counts determined in the 0.24 deg${}^{2}$ HDFS and 0.16 deg${}^{2}$ CDFS, applying the 0.28 magnitude zeropoint offset determined with respect to 2MASS in section 4.1.
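The mean zeropoint offset above is determined from matched point sources; a minimal sketch of such a comparison is given below (the single pass of sigma clipping is an assumption, not necessarily the procedure used in the paper):

```python
import numpy as np

def zeropoint_offset(mag_survey, mag_2mass):
    """Mean magnitude offset between matched point sources, with the
    standard error on the mean; one pass of 3-sigma clipping rejects
    gross mismatches or blends."""
    d = np.asarray(mag_survey) - np.asarray(mag_2mass)
    keep = np.abs(d - np.median(d)) < 3 * np.std(d)
    d = d[keep]
    return d.mean(), d.std(ddof=1) / np.sqrt(len(d))
```

Applied to a few hundred matched sources, this recovers an offset and an error of order 0.01 mag, as quoted in the text.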
The associated counts are also in excellent agreement with the Metcalfe et al. (2005) models at all magnitudes. In Fig. 8, we show counts determined from our data and the LCIRS combined, with a consistent zeropoint applied as in Fig. 7. We estimate the uncertainty arising from cosmic variance using field-to-field errors, weighted by the solid angle of each field. These combined counts are in good agreement with the Metcalfe et al. (2005) models, particularly at fainter magnitudes where the dispersion in the counts arising from cosmic variance appears to be small. We perform least squares fits between these counts and the pure luminosity evolution model; in the magnitude range 14$<H<$18 we find a best fit normalisation of 1.095${}_{-0.034}^{+0.035}$, where 1.0 corresponds to the Metcalfe et al. (2005) normalisation shown in Fig. 8. Varying the fitting range does slightly alter the result; in the range 16$<H<$18, for example, we find a best fit normalisation of 1.061${}_{-0.033}^{+0.048}$. 5 Discussion In the previous sections, bright $H$-band number counts from 2MASS were determined over the APM survey area ($\approx 4000$ deg${}^{2}$) and over almost the entire sky at $|b|>$20${}^{\circ}$ (66 per cent of the sky, $\approx$27$\,$000 deg${}^{2}$), along with faint counts to $H=18$ over a combined solid angle of 0.69 deg${}^{2}$ applying a zeropoint consistent with 2MASS. The bright $H$-band number counts over the APM survey area are extremely low ($\approx 25$ per cent at $H=13$) with respect to homogeneous predictions, and reproduce the form of the bright counts observed in the optical $B$-band (Busswell et al., 2004) and the near infrared $K_{s}$-band (Frith et al., 2005a). Previous work has suggested that, if due solely to local large-scale structure, these low counts would be at odds with the form of clustering expected in a $\Lambda$CDM cosmology. In addition, the bright $H$-band $|b|>$20${}^{\circ}$ counts were also found to be low.
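The best-fit normalisation quoted in section 4.2 follows from a standard weighted least-squares fit of a single multiplicative factor to the binned counts; a minimal sketch, where the per-bin uncertainties stand in for the field-to-field errors:

```python
import numpy as np

def fit_normalisation(counts, model, sigma):
    """Best-fit multiplicative normalisation A minimising
    chi^2 = sum((counts - A*model)^2 / sigma^2), with its 1-sigma error.

    counts, model, sigma : per-magnitude-bin galaxy counts, model
    predictions and count uncertainties (e.g. field-to-field errors)."""
    counts, model, sigma = map(np.asarray, (counts, model, sigma))
    w = 1.0 / sigma**2
    # Analytic minimiser of the linear chi^2 in A.
    A = np.sum(w * counts * model) / np.sum(w * model**2)
    A_err = 1.0 / np.sqrt(np.sum(w * model**2))
    return A, A_err
```

The quoted asymmetric errors come from the full chi-square profile; the symmetric Gaussian error above is a simplification.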
In the following section, various possible causes for these low counts are examined. 5.1 Model normalisation The normalisation of number count models may be determined by fixing the predicted to the observed number of galaxies at faint magnitudes. The magnitude range at which this is done should be bright enough to avoid large uncertainties in the evolutionary model while faint enough that large fluctuations in the counts arising from cosmic variance are expected to be small. Near infrared wavelengths are expected to be insensitive to luminosity evolution at bright magnitudes, making the $H$-band particularly useful for such analysis. Of vital importance when determining the model normalisation is that, when making comparisons between faint and bright counts, the zeropoints are consistent; an offset of a few tenths of a magnitude between the two, for example, would be enough to remove the observed anomaly in the bright counts over the APM survey area. Applying the 2MASS zeropoint to the faint $H$-band data presented in this work and the LCIRS data (Chen et al., 2002), covering a combined solid angle of 0.69 deg${}^{2}$, it is clear that a discrepancy between the bright and faint counts exists; the model normalisation used previously, relative to which the counts below $H=14$ are low over the APM survey area (and for $|b|>$20${}^{\circ}$), provides good agreement with the faint data. In fact, fixing the model to the faint counts implies a slightly higher normalisation. This agreement, as indicated by the errorbars in Fig. 8, suggests that the discrepancy between the bright and faint counts is not due to cosmic variance in the faint data. To remove the observed deficit in the APM survey area counts below $H=14$ by renormalising the model requires a deviation from the faint counts of 7.0$\sigma$, using the best fit normalisation of 1.095${}_{-0.034}^{+0.035}$ (determined for $14<H<18$).
Similarly, renormalising to the $|b|>$20${}^{\circ}$ counts would require a deviation of 7.2$\sigma$ from the faint data. In addition, the model normalisation may also be scrutinised through comparison with redshift distributions. Fig. 2 shows the Metcalfe et al. (2005) pure luminosity evolution model compared with the $H$-band $n(z)$ determined through a match between 2MASS and the 2dFGRS Northern and Southern declination strips. The model predictions appear to be consistent with the observations, with relatively homogeneous distributions beyond $z=0.1$ (1 per cent and 8 per cent over-dense in the North and South respectively). Lowering the model normalisation to fit the bright 2MASS number counts would compromise this agreement and imply large over-densities beyond $z=0.1$ (19 per cent and 27 per cent in the North and South respectively). 5.2 Galaxy evolution A change in amplitude, therefore, cannot easily account for the discrepancy in the number counts at bright magnitudes. However, could an unexpected change in the slope of the number count model contribute? In section 2, we examined the consistency of the number counts at bright magnitudes with the underlying redshift distribution, assuming a model with insignificant levels of luminosity evolution at low redshift. The predictions derived from the observed $n(z)$ were in good agreement with the observed number counts, indicating that luminosity evolution at low redshift is unlikely to have a significant impact on the form of the counts at bright magnitudes. This is supported by the consistency of the pure luminosity evolution model with the observed redshift distributions (Fig. 2); strong low redshift luminosity evolution produces a tail in the $n(z)$ which would imply large deficiencies at high redshift. Could unexpectedly high levels of luminosity evolution at higher redshifts affect our interpretation of the bright counts?
If the slope of the homogeneous prediction were to increase significantly above $H\approx 14$ from the evolutionary models considered in this paper, then the model normalisation could effectively be lowered into agreement with the bright counts. The problem with this is that the number counts beyond $H\approx 14$ are consistent with low levels of luminosity evolution to extremely faint magnitudes ($H\approx 26$). Models with significantly higher levels of luminosity evolution above $H\approx 14$ would therefore compromise this agreement. Therefore, it appears that relatively low levels of luminosity evolution are consistent with number count observations to high redshifts. Also, recent evidence from the COMBO-17 survey, examining the evolution of early-type galaxies using nearly 5000 objects to $z\approx 1$ (Bell et al., 2004), suggests that density evolution will also not contribute; $\phi^{*}$ appears to decrease with redshift, indicating that the number of objects on the red sequence increases with time, and so acts contrary to the low counts observed at bright magnitudes. This picture is supported by the K20 survey (Cimatti et al., 2002), which includes redshifts for 480 galaxies to a mean depth of ${\bar{z}}\approx 0.7$ and a magnitude limit of $K_{s}=20$ with high completeness. The resulting redshift distribution is consistent with low levels of luminosity and density evolution (Metcalfe et al., 2005). In summary, significant levels of evolution are not expected in passive or star forming pure luminosity evolution models, although they could occur through dynamical evolution. However, the pure luminosity evolution models of Metcalfe et al. (2005) fit the observed $H<14$ $n(z)$ at $z>0.1$; it is at lower redshifts that there are fluctuations.
In addition, these models continue to fit the observed $n(z)$ at very high redshift and the number counts to extremely faint magnitudes ($K\approx 23$), suggesting that there is little need for evolution at $z\approx 1$, far less $z\lesssim 0.1$. Some combination of dynamical and luminosity evolution might be able to account for these observations; however it would require fine-tuning in order to fit both the steep counts at bright magnitudes and the unevolved $n(z)$ at low and high redshifts. 5.3 Photometry issues & completeness The number counts shown in Figs. 7 and 8 show bright and faint counts with a consistent zeropoint applied. Photometry comparisons have been made using several hundred point sources matched at bright magnitudes. In order to check that the applied zeropoints are consistent with the galaxy samples, we also compare the 2MASS photometry with 24 matched galaxies in the CA field and WHDF and 16 in the LCIRS samples; we find that the mean offsets are $-0.01\pm$0.04 and $-0.32\pm$0.06, consistent with the zeropoints determined via the 2MASS point sources. The comparisons with the 2MASS point source catalogue (Figs. 5 and 6) also indicate that there is no evidence of scale error in either of the faint samples to $H\approx 16$. Could the discrepancy between the bright and faint counts arise from an under-estimation of the total flux of the galaxies? Recall that we make no correction to total magnitude for the faint data presented in this work; however, under-estimating the total flux in the faint data would only increase the observed deficit in the counts at bright magnitudes, if the model normalisation is adjusted to fit the faint counts. The good agreement between the point source and galaxy zeropoints suggests that the estimate for the total galaxy flux is comparable in the bright and faint data. At bright magnitudes, the 2MASS extrapolated $H$-band magnitudes are used.
In the $K_{s}$-band, this magnitude estimator has been shown to be an excellent estimate of the total flux, through comparisons with the total $K_{s}$-band magnitude estimator of Jones et al. (2004) and the $K$-band photometry of Loveday (2000). Another possible contribution to the low counts could be high levels of incompleteness in the 2MASS survey. As with the possible systematic effects described previously, it is differing levels of completeness in the faint and bright data which would be important. The 2MASS literature quotes the extended source catalogue completeness as $>90$ per cent (see the 2MASS website for example). Independently, Bell et al. (2003) suggest that the level of completeness is high ($\approx$99 per cent), determined via comparisons with the SDSS Early Data Release spectroscopic data and the 2dFGRS. The faint data presented in this work and the LCIRS data are likely to suffer less from incompleteness, as we cut well below the magnitude limit, are subject to lower levels of stellar confusion and suffer less from low resolution effects. Incompleteness in 2MASS will therefore affect the observed deficit in the bright counts at the $<10$ per cent level, although the effect is likely to be at the low end of this constraint due to incompleteness in the faint catalogues and suggestions that the 2MASS extended source catalogue is fairly complete. 5.4 Large-scale structure It appears therefore, that the observed deficiency in the bright counts might be significantly affected by incompleteness in the 2MASS extended source catalogue. However, the level to which other systematic effects, such as the model normalisation, luminosity evolution and photometry issues, contribute appears to be small.
The question then is: accounting for these various sources of error or uncertainty, are the deficiencies in the bright $H$-band counts over the APM survey area and for $|b|>20^{\circ}$ still at odds with the expected fluctuations in the counts arising from local large-scale structure in a $\Lambda$CDM cosmology, as suggested in previous work (Busswell et al., 2004; Frith et al., 2005a)? We determine the expected fluctuations in the bright number counts due to cosmic variance via $\Lambda$CDM mock 2MASS catalogues; these are described in detail in Frith et al. (2005a). To recap, we apply the 2MASS selection function to 27 virtually independent volumes of $r=500$$\,h^{-1}\,$Mpc formed from the 3000${}^{3}h^{-3}$Mpc${}^{3}$ $\Lambda$CDM Hubble Volume simulation. This simulation has input parameters of $\Omega_{m}=0.3$, $\Omega_{b}=0.04$, $h=0.7$ and $\sigma_{8}=0.9$ (Jenkins et al., 1998). The mean number density of the counts at the magnitude limit is set to that of the observed 2MASS density. We are now in a position to estimate the significance of the observed bright $H$-band counts. We use the 1$\sigma$ fluctuation in the counts expected in a $\Lambda$CDM cosmology (determined using the 2MASS mocks described above), which for the APM survey area is 7.63 per cent (for $H<13$) and 4.79 per cent (for $H<14$), and for $|b|>20^{\circ}$ is 3.25 per cent (for $H<13$) and 1.90 per cent (for $H<14$). In addition we also take into account the uncertainty in the model normalisation; we use the best fit normalisation of the Metcalfe et al. (2005) pure luminosity evolution model (a factor of 1.095 above the Metcalfe et al. (2005) model) and add the uncertainty of $\pm$3.1 per cent derived from the faint $H$-band counts (presented in Fig. 8) in quadrature.
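The significance estimate described above amounts to dividing the observed deficit by the cosmic variance and normalisation uncertainty added in quadrature; schematically (the $\approx$25 per cent deficit used here is indicative only, and any completeness correction is ignored):

```python
import numpy as np

def deficit_significance(deficit_pct, cosmic_var_pct, norm_err_pct):
    """Significance (in sigma) of an observed count deficit, with the
    expected LCDM cosmic variance and the model-normalisation
    uncertainty added in quadrature. All inputs in per cent."""
    total = np.hypot(cosmic_var_pct, norm_err_pct)
    return deficit_pct / total

# Illustrative numbers from the text: a ~25 per cent deficit at H<13
# over the APM area, 7.63 per cent cosmic variance, and the 3.1 per
# cent normalisation uncertainty.
sigma_apm = deficit_significance(25.0, 7.63, 3.1)
```

The quoted table-2 values additionally fold in the assumed difference in completeness between the bright and faint samples.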
Regarding the possible effect arising from survey incompleteness, we first assume that the level of incompleteness is comparable in the faint and bright data; the resulting significances for the APM survey area and $|b|>20^{\circ}$ bright counts are shown in column 3 of table 2. This represents an upper limit on the significance, since we have effectively assumed that there is no difference in the incompleteness between the bright and faint datasets. In column 4 of table 2, we assume that there is a difference in the completeness levels in the faint and bright data of 10 per cent. This represents a lower limit on the significance (assuming that there are no further significant systematic effects), since we assume that the completeness of the 2MASS extended source catalogue is 90 per cent (the lower limit) and that there is no incompleteness in the faint data. Therefore, assuming a $\Lambda$CDM cosmology, it appears that the observed counts over the APM survey area might be in line with a rare fluctuation in the local galaxy distribution. However, the counts over 66 per cent of the sky ($|b|>20^{\circ}$) suggest a deficiency that is at odds with $\Lambda$CDM, even accounting for a 10 per cent incompleteness effect and the measured uncertainty in the best fit model normalisation. 6 Conclusions We have presented new $H$-band photometry over two fields with a combined solid angle of 0.30 deg${}^{2}$ to $H\approx$19. The zeropoint is chosen to match that of the 2MASS photometry at the bright end and is accurate to $\pm$0.01 magnitudes. In addition we have examined the faint $H$-band data of the LCIRS (Chen et al., 2002) which covers two fields with a combined solid angle of 0.39 deg${}^{2}$ to $H\approx$20. The zeropoint of this data appears to be offset from the 2MASS photometry by 0.28$\pm$0.01 magnitudes.
Applying a consistent zeropoint, the faint counts determined from the new data presented in this work and the LCIRS are in good agreement with the pure luminosity evolution model of Metcalfe et al. (2005), with a best fit normalisation a factor of 1.095${}_{-0.034}^{+0.035}$ higher. In contrast, the bright $H$-band counts extracted from 2MASS over the $\approx$4000 deg${}^{2}$ APM survey area around the southern galactic pole are low with respect to this model, corroborating previous results over this area in the optical $B$-band and near infrared $K_{s}$-band (Busswell et al., 2004; Frith et al., 2005a). In addition, the counts extracted for almost the entire survey, covering 66 per cent of the sky, are also low, with a deficit of $15-20$ per cent to $H=14$. Importantly, this discrepancy does not appear to be due to zeropoint differences between the faint and bright data or uncertainty in the model normalisation set by the faint counts. We have investigated various possible sources of systematic error which might affect this result: The counts are consistent with low levels of luminosity and density evolution, as predicted by the pure luminosity evolution model of Metcalfe et al. (2005), to extremely faint magnitudes (see Fig. 1). Also, the photometry appears to be consistent between the faint and bright galaxy data using a zeropoint applied via comparisons between point sources. However, differing incompleteness in the bright and faint galaxy samples might have a significant impact; completeness in the 2MASS extended source catalogue is $>90$ per cent. Finally, we determined the expected cosmic variance in the bright number counts from $\Lambda$CDM mock 2MASS catalogues. Allowing for the model normalisation uncertainty determined from the faint counts, and using an upper limit on the incompleteness in the 2MASS galaxy sample, the deficiency in the counts over the APM survey area represents a rare ($\approx$1 in 100) fluctuation in a $\Lambda$CDM cosmology.
However, the low $H$-band counts for $|b|>20^{\circ}$ suggest that this deficiency might extend over the entire local galaxy distribution; allowing for incompleteness and the model normalisation uncertainty as before, this would represent a 4$\sigma$ fluctuation ($<$1 in 10$\,$000) in the local galaxy distribution, and would therefore be at odds with the form of clustering expected in a $\Lambda$CDM cosmology on large scales. The increase in faint near infrared data from the UK Infrared Deep Sky Survey (UKIDSS) should help to resolve this issue. Acknowledgements The new data presented in this paper were based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). This publication also makes use of data products from the 2 Micron All-Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Centre/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. We thank Peter Draper for his help with Sextractor. We also thank the wonderful Phil Outram and John Lucey for useful discussion and Nicholas Ross for assistance with the Calar Alto field selection. References Bell et al. (2003) Bell, E.F., McIntosh, D.H., Katz, N. & Weinberg, M.D. 2003, ApJS, 149, 289 Bell et al. (2004) Bell, E.F. et al. 2004, ApJ, 608, 752 Broadhurst et al. (1988) Broadhurst, T.J., Ellis, R.S. & Shanks, T. 1988, MNRAS, 235, 827 Bruzual & Charlot (1993) Bruzual, A.G. & Charlot, S. 1993, ApJ, 405, 538 Busswell et al. (2004) Busswell, G.S., Shanks, T., Outram, P.J., Frith, W.J., Metcalfe, N. & Fong, R. 2004, MNRAS, 354, 991 Cimatti et al. (2002) Cimatti, A. et al. 2002, A&A, 391, L68 Colless et al. (1990) Colless, M.M., Ellis, R.S., Taylor, K. & Hook, R.N. 1990, MNRAS, 244, 408 Colless et al. (2001) Colless, M. et al.
2001, astro-ph/0306581 Chen et al. (2002) Chen, H.-W. et al. 2002, ApJ, 570, 54 Cole et al. (2005) Cole, S.M. et al. 2005, accepted by MNRAS, astro-ph/0501174 Driver (2003) Driver, S. 2003, IAUS, 216, 97 Frith et al. (2003) Frith, W.J., Busswell, G.S., Fong, R., Metcalfe, N. & Shanks, T. 2003, MNRAS, 345, 1049 Frith et al. (2004) Frith, W.J., Outram, P.J. & Shanks, T. 2004, ASP Conf. Proc., Volume 329, 49 Frith et al. (2005a) Frith, W.J., Shanks, T. & Outram, P.J. 2005a, MNRAS, 361, 701 Frith et al. (2005b) Frith, W.J., Outram, P.J. & Shanks, T. 2005b, submitted to MNRAS, astro-ph/0507215 Frith et al. (2005c) Frith, W.J., Outram, P.J. & Shanks, T. 2005c, submitted to MNRAS, astro-ph/0507704 Hawkins et al. (2003) Hawkins, E. et al. 2003, MNRAS, 346, 78 Jarrett (2004) Jarrett, T.H. 2004, astro-ph/0405069 Jenkins et al. (1998) Jenkins, A. et al. 1998, ApJ, 499, 20 Jones et al. (2004) Jones, D.H. et al. 2004, MNRAS, 355, 747 Loveday et al. (1992) Loveday, J., Peterson, B.A., Efstathiou, G. & Maddox, S.J. 1992, ApJ, 390, 338 Loveday (2000) Loveday, J. 2000, MNRAS, 312, 517 Loveday (2004) Loveday, J. 2004, MNRAS, 347, 601L Maddox et al. (1990a) Maddox, S.J., Sutherland, W.J., Efstathiou, G. & Loveday, J. 1990a, MNRAS, 243, 692 Maddox et al. (1990b) Maddox, S.J., Sutherland, W.J., Efstathiou, G., Loveday, J. & Peterson, B.A. 1990b, MNRAS, 247, 1 Martini (2001) Martini, P. 2001, AJ, 121, 598 McCracken et al. (2000) McCracken, H.J., Metcalfe, N., Shanks, T., Campos, A., Gardner, J.P. & Fong, R. 2000, MNRAS, 311, 707 Metcalfe et al. (1995) Metcalfe, N., Fong, R. & Shanks, T. 1995, MNRAS, 274, 769 Metcalfe et al. (2001) Metcalfe, N., Shanks, T., Campos, A., McCracken, H.J. & Fong, R. 2001, MNRAS, 323, 795 Metcalfe et al. (2005) Metcalfe, N., Shanks, T., Weilbacher, P.M., McCracken, H.J., Campos, A., Fong, R. & Thompson, D. 2005, in prep. Moy et al. (2003) Moy, E., Barmby, P., Rigopoulou, D., Huang, J.-S., Willner, S.P. & Fazio, G.G. 2003, A&A, 403, 493 Shanks (1990) Shanks, T.
1990, IAUS, 139, 269 Teplitz et al. (1998) Teplitz, H.I., Malkan, M. & McLean, I.S. 1998, ApJ, 506, 519 Thompson et al. (1999) Thompson, R., Storrie-Lombardi, L. & Weymann, R. 1999, AJ, 117, 17 Yan et al. (1998) Yan, L., McCarthy, P., Storrie-Lombardi, L. & Weymann, R. 1998, ApJ, 503, L19 Yasuda et al. (2001) Yasuda, N. et al. 2001, AJ, 122, 1104
Effect of Gravitational Waves on the Inhomogeneity of the Universe with Numerical Relativity Ke Wang (wangke@itp.ac.cn) National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Beijing 100012, China (November 20, 2020) Abstract We numerically integrate Einstein’s equations for a spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) background spacetime with a static scalar perturbation and evolving primordial tensor perturbations using the Einstein Toolkit. We find that although the primordial tensor perturbation does not play an important role in the evolution of the overdensity produced by the scalar perturbation, there is an obvious imprint left by the primordial tensor perturbation on the distribution of the fractional density perturbation in the nonlinear region. This imprint may be a possible probe of a gravitational wave background in the future. I Introduction Inflation Starobinsky:1980te ; Guth:1980zm predicts that there is a stochastic gravitational wave (GW) background. Therefore, it is possible to test the inflationary scenario experimentally through the detection of such a GW background. So far, the B-mode polarization of the cosmic microwave background (CMB) is the most promising probe of this GW background Seljak:1996gy ; Kamionkowski:1996zd ; Kamionkowski:2015yta . In the future, 21cm HI emission from the dark ages will be a complementary and even more sensitive probe of this GW background Book:2011dz ; Masui:2010cz . Furthermore, there are some less competitive probes of this GW background, including weak lensing shear Dodelson:2003bv ; Dodelson:2010qu and other large-scale structure observables Jeong:2012nu ; Schmidt:2012nw . The goal of this paper is to study the effect of gravitational waves on the inhomogeneity of the Universe with numerical relativity, thereby proposing a possible probe of a GW background.
Although the cosmological principle states that the Universe is homogeneous and isotropic on large scales and can be described by a Friedmann-Lemaître-Robertson-Walker (FLRW) model, it is inhomogeneous and anisotropic on scales smaller than $\sim 80h^{-1}\textrm{Mpc}$ Yadav:2010cc ; Scrimgeour:2012wt today. These inhomogeneities can induce nonlinear general relativistic effects which may be detected by forthcoming cosmological surveys Amendola:2016saw ; Maartens:2015mra ; Ivezic:2008fe . Moreover, these nonlinear general relativistic effects on small scales may be accompanied by unexpected nonperturbative behavior on larger scales. This “backreaction” has been proposed as the cause of the recent accelerated cosmic expansion by Buchert:1999er ; Kolb:2004am ; Buchert:2007ik ; Rasanen:2011ki ; Clarkson:2011zq ; Buchert:2011sx ; Buchert:2015iva ; Green:2015bma . Usually, the linear perturbation theory of General Relativity (GR) is used on large scales and Newtonian N-body simulations (or Newtonian gravity) provide a very good approximation on small scales. To study the nonlinear general relativistic effects, one can perform general relativistic N-body simulations as in Adamek:2013wja . Undoubtedly, the direct numerical integration of Einstein’s equations is the only way to study the Universe on all scales without systematic errors or approximations. The first cosmological work that is fully non-linear, fully relativistic and does not impose symmetries or dimensional reductions was done by Giblin:2015vwq ; Mertens:2015ttp using CosmoGRaPH. For cosmological purposes, soon afterward, Bentivegna:2015flc turned to the widely used Einstein Toolkit Loffler:2011ay to integrate Einstein’s equations. Macpherson:2016ict also studied inhomogeneous cosmology with the Einstein Toolkit by developing a new thorn, FLRWSolver.
Here, our work is based on the thorn CFLRWSolver (a C language counterpart of FLRWSolver), which takes the tensor perturbation into consideration, and a newly developed thorn, CFLRWAnalysis, which studies the effect of the tensor perturbation on the inhomogeneity of the universe. This paper is organized as follows. In Sec. II, we give the evolution equations of the FLRW background spacetime and of the scalar and tensor perturbations, obtained by solving the zero-order and first-order Einstein equations, together with their solutions. In Sec. III, we give the initial conditions of the system with small perturbations needed by the thorn CFLRWSolver. In Sec. IV, we analyse the results of the simulations provided by the thorn CFLRWAnalysis. At last, a brief summary and discussion are included in Sec. V. In this paper, we adopt the following conventions: Greek indices run over {0, 1, 2, 3}, Latin indices run over {1, 2, 3}, repeated indices imply summation, and we work in a geometric unit system with $G=c=1$. II Cosmological Perturbations For a spatially flat FLRW background spacetime, the line element is $$ds^{2}=a^{2}(\eta)[-d\eta^{2}+\delta_{ij}dx^{i}dx^{j}],$$ (1) where $\eta$ is the conformal time, $a$ is the scale factor and $\delta_{ij}$ is the Kronecker delta. In the conformal Newtonian gauge, the line element that includes both the scalar and tensor perturbations to the metric is $$ds^{2}=a^{2}(\eta)[-(1+2\Psi)d\eta^{2}+(1-2\Phi)\delta_{ij}dx^{i}dx^{j}+h_{ij}dx^{i}dx^{j}],$$ (2) where $\Psi$ is the Newtonian potential, $\Phi$ is the spatial curvature perturbation and $h_{ij}$ is a divergenceless, traceless and symmetric tensor.
And for a perfect fluid without the anisotropic stress tensor, its energy-momentum tensor with density $\rho=\rho_{0}+\rho_{1}$, isotropic pressure $P=P_{0}+P_{1}$ and 4-velocity $u^{\mu}=a^{-1}[1-\Psi,v^{1}_{1},v^{2}_{1},v^{3}_{1}]$ is $$T_{\mu\nu}=(\rho+P)u_{\mu}u_{\nu}+Pg_{\mu\nu}.$$ (3) The Einstein equations relate the spacetime curvature to the energy-momentum tensor as $$G_{\mu\nu}=8\pi T_{\mu\nu}.$$ (4) The zero-order Einstein equations give the Friedmann constraint and evolution equation for the FLRW background spacetime $$\mathcal{H}^{2}=\frac{8\pi}{3}a^{2}\rho_{0},\qquad\mathcal{H}^{\prime}=-\frac{4\pi}{3}a^{2}(\rho_{0}+3P_{0}),$$ (5) where a prime represents a derivative with respect to the conformal time. According to Macpherson:2016ict , the dust ($P\ll\rho$) solution to (5) is $$a=a_{\mathrm{init}}\xi^{2},\qquad\rho_{0}=\rho_{0,\mathrm{init}}\xi^{-6},\qquad\xi=1+\sqrt{\frac{2\pi\rho_{0}^{*}}{3a_{\mathrm{init}}}}\,\eta,$$ (6) where $a_{\mathrm{init}}$ and $\rho_{0,\mathrm{init}}$ are the values of $a$ and $\rho_{0}$ at $\eta=0$ respectively, $\xi$ is the scaled conformal time and $\rho_{0}^{*}=\rho_{0}a^{3}$ is the conserved comoving density.
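As a consistency check, the dust solution (6) can be recovered by integrating the Friedmann constraint numerically: $\mathcal{H}^{2}=(8\pi/3)a^{2}\rho_{0}$ with $\rho_{0}=\rho_{0}^{*}/a^{3}$ gives $da/d\eta=\sqrt{8\pi\rho_{0}^{*}a/3}$. A short sketch in geometric units (the parameter values are illustrative):

```python
import numpy as np

# Background dust FLRW in conformal time:
#   H^2 = (8*pi/3) a^2 rho0,  rho0 = rho0_star / a^3
#   => da/deta = sqrt(8*pi*rho0_star*a/3).
# Integrate with RK4 and compare against the analytic solution (6),
# a = a_init*xi^2 with xi = 1 + sqrt(2*pi*rho0_star/(3*a_init))*eta.
a_init = 1.0
rho0_init = 1e-6                   # illustrative value
rho0_star = rho0_init * a_init**3  # conserved comoving density

def dadeta(a):
    return np.sqrt(8 * np.pi * rho0_star * a / 3)

eta_end, n = 200.0, 2000
deta = eta_end / n
a = a_init
for _ in range(n):                 # classical RK4 steps
    k1 = dadeta(a)
    k2 = dadeta(a + 0.5 * deta * k1)
    k3 = dadeta(a + 0.5 * deta * k2)
    k4 = dadeta(a + deta * k3)
    a += deta * (k1 + 2 * k2 + 2 * k3 + k4) / 6

xi = 1 + np.sqrt(2 * np.pi * rho0_star / (3 * a_init)) * eta_end
a_analytic = a_init * xi**2
```

The numerical and analytic scale factors agree to high accuracy, confirming that (6) solves (5) for dust.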
From the first-order perturbed Einstein equations, we derive equations describing scalar metric perturbations as Macpherson:2016ict ; Malik:2008im $$\nabla^{2}\Phi-3\mathcal{H}(\Phi^{\prime}+\mathcal{H}\Psi)=4\pi a^{2}\rho_{1},\qquad\mathcal{H}\partial_{i}\Psi+\partial_{i}\Phi^{\prime}=-4\pi a^{2}(\rho_{0}+P_{0})\delta_{ij}v^{j}_{1},\qquad\Phi=\Psi,\qquad\Phi^{\prime\prime}+3\mathcal{H}\Phi^{\prime}+(2\mathcal{H}^{\prime}+\mathcal{H}^{2})\Phi=4\pi a^{2}P_{1}.$$ (7) Macpherson:2016ict also gives the dust ($P\ll\rho$) solution to (7) for the growing mode as $$\Phi=f(x^{i}),\qquad\frac{\rho_{1}}{\rho_{0}}=C_{1}\xi^{2}\nabla^{2}f(x^{i})-2f(x^{i}),\qquad v_{1}^{i}=C_{2}\xi\partial^{i}f(x^{i}),$$ (8) where $f(x^{i})$ is an arbitrary function of space, $C_{1}=\frac{a_{\mathrm{init}}}{4\pi\rho_{0}^{*}}$ and $C_{2}=-\sqrt{\frac{a_{\mathrm{init}}}{6\pi\rho_{0}^{*}}}$.
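The growing-mode solution (8) can be checked against the first equation of (7): with $\Phi$ static and $\Phi=\Psi$, the constraint reduces to $\nabla^{2}\Phi-3\mathcal{H}^{2}\Phi=4\pi a^{2}\rho_{0}\,(\rho_{1}/\rho_{0})$. A minimal numerical sketch for a single sine mode (parameter values are illustrative):

```python
import numpy as np

# Check the growing-mode dust solution (8) against the first equation
# of (7) for a single mode f(x) = Phi0*sin(2*pi*x/l). With Phi static
# (Phi' = 0), it reduces to lap(Phi) - 3*H^2*Phi = 4*pi*a^2*rho0*delta.
a_init, rho0_init, Phi0, l = 1.0, 1e-6, 1e-4, 500.0
rho0_star = rho0_init * a_init**3
C1 = a_init / (4 * np.pi * rho0_star)
kk = 2 * np.pi / l
x = np.linspace(0.0, l, 101)
f = Phi0 * np.sin(kk * x)
lap_f = -kk**2 * f                  # analytic Laplacian of the sine mode

max_resid = 0.0
for xi in (1.0, 2.0, 5.0):          # several values of the scaled conformal time
    a = a_init * xi**2
    rho0 = rho0_star / a**3
    H2 = 8 * np.pi / 3 * a**2 * rho0     # conformal Hubble parameter squared
    delta = C1 * xi**2 * lap_f - 2 * f   # rho1/rho0 from the growing mode
    lhs = lap_f - 3 * H2 * f
    rhs = 4 * np.pi * a**2 * rho0 * delta
    max_resid = max(max_resid, float(np.max(np.abs(lhs - rhs))))
```

The residual vanishes identically because $4\pi a^{2}\rho_{0}C_{1}\xi^{2}=1$ and $3\mathcal{H}^{2}=8\pi a^{2}\rho_{0}$ for this background.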
From the spatial part of the first-order perturbed Einstein equations, we have a wave equation $$h_{ij}^{\prime\prime}+2\mathcal{H}h_{ij}^{\prime}-\nabla^{2}h_{ij}=0.$$ (9) One can expand the tensor perturbation in plane waves $$h_{ij}(\vec{x},\eta)=\int\frac{d^{3}k}{(2\pi)^{3}}h^{s}_{k}(\eta)\varepsilon^{s}_{ij}e^{i\vec{k}\cdot\vec{x}},$$ (10) where $\varepsilon^{s}_{ij}$ with $s=\times,+$ are transverse and traceless polarization tensors and each $h^{s}_{k}(\eta)$ evolves independently and satisfies $${h^{s}_{k}}^{\prime\prime}+2\mathcal{H}{h^{s}_{k}}^{\prime}+k^{2}h^{s}_{k}=0.$$ (11) According to Wang:1995kb , for modes inside the horizon during the matter dominated era, the exact solution is $$h_{k}^{s}(\eta+\eta_{0})=3h_{k}^{s}(0)\frac{\sin[k(\eta+\eta_{0})]-k(\eta+\eta_{0})\cos[k(\eta+\eta_{0})]}{[k(\eta+\eta_{0})]^{3}},$$ (12) where $\eta+\eta_{0}$ is the comoving size of the horizon. III Initial Conditions To integrate the Einstein equations, the Einstein Toolkit uses the metric in the $(3+1)$ formalism $$ds^{2}=-\alpha^{2}dt^{2}+\gamma_{ij}(dx^{i}+\beta^{i}dt)(dx^{j}+\beta^{j}dt),$$ (13) where $\alpha$ is the lapse function, $\beta^{i}$ is the shift vector and $\gamma_{ij}$ is the spatial metric, which evolves according to the extrinsic curvature $K_{ij}$ as $$(\partial_{t}-\mathcal{L}_{\vec{\beta}})\gamma_{ij}=-2\alpha K_{ij}.$$ (14) Now we will use CFLRWSolver to initialize an almost-FLRW Universe with small perturbations. First, we relate the grid functions for the basic spacetime variables in the thorn ADMBase to the variables in (2): $$\gamma_{ij}=a^{2}[(1-2\Phi)\delta_{ij}+h_{ij}],\qquad K_{ij}=\frac{2a^{\prime}[(1-2\Phi)\delta_{ij}+h_{ij}]-2a\Phi^{\prime}\delta_{ij}+ah_{ij}^{\prime}}{-2\sqrt{1+2\Psi}},$$ (15) where we have set $dt=\frac{a\sqrt{1+2\Psi}}{\alpha}d\eta$ and $\beta^{i}=0$.
There is still freedom in the choice of $\alpha$, which does not change the physical solution but strongly affects the computational efficiency. Here we choose the harmonic slicing $$\partial_{t}\alpha=-\frac{1}{4}\alpha^{2}K,$$ (16) which describes the evolution of $\alpha$. Then, we relate the basic variables and grid functions for the hydrodynamics evolution in the thorn HydroBase to the variables in (3): $$\rho=\rho_{0}+\rho_{1},$$ (17) $$P=P_{0}+P_{1},$$ $$v^{i}=\frac{v_{1}^{i}}{a},$$ $$\Gamma=\left(1-\gamma_{ij}\frac{v_{1}^{i}}{a}\frac{v_{1}^{j}}{a}\right)^{-\frac{1}{2}}.$$ According to (6), (8) and (12), we set the initial conditions for a dust system with periodic boundary conditions at $t=\eta=0$ as $$\Phi=\Phi_{0}\sum_{i=1}^{3}\sin\left(\frac{2\pi x^{i}}{l}\right),$$ (18) $$\alpha=\sqrt{1+2\Phi},$$ $$h_{ij}=3h_{\frac{2\pi}{L}}^{s}(0)\frac{L^{3}\sin(\frac{2\pi\eta_{0}}{L})-2\pi L^{2}\eta_{0}\cos(\frac{2\pi\eta_{0}}{L})}{(2\pi\eta_{0})^{3}}\cos\left[\frac{2\pi(z+125)}{L}\right]\varepsilon^{s}_{ij},$$ $$h_{ij}^{\prime}=3h_{\frac{2\pi}{L}}^{s}(0)\frac{[(2\pi\eta_{0})^{4}L^{2}-3(2\pi\eta_{0})^{2}L^{4}]\sin(\frac{2\pi\eta_{0}}{L})+3(2\pi\eta_{0}L)^{3}\cos(\frac{2\pi\eta_{0}}{L})}{(2\pi\eta_{0})^{6}}\frac{2\pi}{L}\cos\left[\frac{2\pi(z+125)}{L}\right]\varepsilon^{s}_{ij},$$ $$\gamma_{ij}=a_{\mathrm{init}}^{2}(\delta_{ij}-2\Phi\delta_{ij}+h_{ij}),$$ $$K_{ij}=\frac{\sqrt{\frac{32\pi\rho_{0,\mathrm{init}}}{3}}\,a_{\mathrm{init}}^{2}(\delta_{ij}-2\Phi\delta_{ij}+h_{ij})+a_{\mathrm{init}}h_{ij}^{\prime}}{-2\alpha},$$ $$\rho=\rho_{0,\mathrm{init}}-\rho_{0,\mathrm{init}}\left[\left(\frac{2\pi}{l}\right)^{2}C_{1}+2\right]\Phi,$$ $$v^{i}=\frac{1}{a_{\mathrm{init}}}\frac{2\pi}{l}C_{2}\Phi_{0}\cos\left(\frac{2\pi x^{i}}{l}\right),$$ $$\Gamma=\left(1-\gamma_{ij}v^{i}v^{j}\right)^{-\frac{1}{2}},$$ where $l=500$ is the half length of one side of our simulation box, with $x^{i}$ in $[-375,625]$, $\Phi_{0}$ is the amplitude of the static scalar perturbation, and $h_{\frac{2\pi}{L}}^{s}(0)$ is the amplitude of the monochromatic primordial tensor perturbation with wave number $k=\frac{2\pi}{L}$ before horizon crossing. For simplicity, we set the initial scale factor $a_{\mathrm{init}}=1$. We set $L<4000$ and $\rho_{0,\mathrm{init}}=10^{-6}$ so that $\frac{2\pi\eta_{0}}{L}>1$, which means the tensor perturbation has already crossed inside the horizon, $\eta_{0}\simeq 2\sqrt{\frac{3}{8\pi\rho_{0,\mathrm{init}}}}$, at the beginning of the simulation. To keep the linear approximation valid while saving as much computational time as possible, we set $\Phi_{0}=10^{-4}$, which corresponds to a density contrast of about $10^{-3}$. We run three simulations at resolution $100^{3}$, with $h_{\frac{2\pi}{L}}^{s}(0)=0$, $h_{\frac{2\pi}{500}}^{+}(0)=10^{-3}$ and $h_{\frac{2\pi}{500}}^{\times}(0)=10^{-3}$, to study the effect of the tensor perturbation on the inhomogeneity of the universe. IV Results We can analyse the effect of the tensor perturbation directly by comparing, across the three simulations, the outputs of several variables derived from the basic ones in ADMBase and HydroBase. Fig. 1 shows the evolution of $\frac{h^{\times}(\eta_{0}+\eta)}{h^{\times}(0)}$ at the point $(0,0,375)$ of our cubic domain in the three simulations.
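The scalar sector of the initial data in Eq. (18) can be assembled on a coarse grid in a few lines of numpy. This is a hypothetical sketch with our own array names (not the actual CFLRWSolver grid functions), taking $\rho_{0}^{*}\approx\rho_{0,\mathrm{init}}$ in $C_{1}$, $C_{2}$ and ignoring the tensor contribution:

```python
import numpy as np

# Sketch of the scalar-sector initial data of Eq. (18) on a coarse grid
# (array names are ours, not the CFLRWSolver grid functions).
l, N = 500.0, 16                       # half box side l; full side 2l = 1000
Phi0, rho0, a0 = 1e-4, 1e-6, 1.0
x = -375.0 + np.arange(N) * (2*l / N)  # coordinates in [-375, 625)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
k = 2*np.pi / l
C1 = a0 / (4*np.pi*rho0)               # taking rho0* = rho0_init
C2 = -np.sqrt(a0 / (6*np.pi*rho0))

Phi   = Phi0 * (np.sin(k*X) + np.sin(k*Y) + np.sin(k*Z))
alpha = np.sqrt(1 + 2*Phi)
rho   = rho0 * (1 - (k**2 * C1 + 2) * Phi)
v     = [k*C2*Phi0*np.cos(k*c)/a0 for c in (X, Y, Z)]
# Lorentz factor with the conformally flat spatial metric a^2 (1-2*Phi) delta_ij
Gamma = (1 - a0**2*(1 - 2*Phi) * sum(vi**2 for vi in v)) ** -0.5
```

With these parameters the density contrast peaks at a few times $10^{-3}$ and $\Gamma-1\lesssim 10^{-7}$, consistent with the linear regime described in the text.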
We can see that the scalar perturbations can produce tensor perturbations through nonlinear effects (red solid curve), as pointed out in Baumann:2007zm , and that the evolution of the primordial tensor perturbation (black solid curve) follows the exact solution $\frac{j_{1}[k(\eta_{0}+\eta)]}{k(\eta_{0}+\eta)}$ (green solid curve), where $j_{1}(z)=\frac{\sin z-z\cos z}{z^{2}}$ is the spherical Bessel function of order one. Fig. 2 shows the evolution of $\frac{\rho_{1}}{\rho_{0}}$ at the point $(375,375,375)$ of our simulation box without primordial tensor perturbations (left), and the primordial tensor perturbation’s contribution to $\frac{\rho_{1}}{\rho_{0}}$ at the same point (right). We can see that the simulation result (black solid curve) deviates from the linear analytic solution (red solid curve) due to nonlinear effects. Moreover, even though the primordial tensor perturbation dies off quickly after horizon crossing, as shown in Fig. 1, its contribution to $\frac{\rho_{1}}{\rho_{0}}$ grows quickly in nonlinear regions. Fig. 3 shows the distribution of $\frac{\rho_{1}}{\rho_{0}}$ on the x-y plane at $z=375$ at the beginning $(\eta=0)$ and end $(\eta=1200)$ of the simulation without primordial tensor perturbations. We can see that the maxima of $\frac{\rho_{1}}{\rho_{0}}$ remain fixed at the points $(-125,-125,375)$, $(375,-125,375)$, $(375,375,375)$ and $(-125,375,375)$; that is, there is almost no interaction between the perturbation peaks. The left panel of Fig. 4 shows the contribution to $\frac{\rho_{1}}{\rho_{0}}$ of a primordial tensor perturbation with just the $h^{\times}$ component, while the right panel shows the contribution of a primordial tensor perturbation with only $h^{+}$, at the end of our simulation. We can see that the contributions in both cases are too small to modify the right panel of Fig. 3.
However, primordial tensor perturbations with different components do leave a characteristic imprint on the distribution of $\frac{\rho_{1}}{\rho_{0}}$: the tensor perturbation with $h^{\times}$ enhances the overdensity, while the tensor perturbation with $h^{+}$ enhances the overdensity along the lines $(-125,y,375)$ and $(375,y,375)$ but suppresses it along the lines $(x,-125,375)$ and $(x,375,375)$. V Summary and discussion We have performed three simulations using Einstein Toolkit in this paper. The first gives the evolution of the overdensity $\frac{\rho_{1}}{\rho_{0}}$ in a spatially flat FLRW background spacetime with a static scalar perturbation. In the next two, we added an evolving primordial tensor perturbation with only the $h^{\times}$ or $h^{+}$ component to the spacetime and found that these two components leave characteristic imprints on the distribution of the fractional density perturbation in the nonlinear region. These imprints may serve as a possible probe of a GW background in the future. In this paper, we set the initial conditions by solving the perturbed Einstein equations with $\Phi_{0}=10^{-4}$, so a natural question is whether these initial data satisfy the Hamiltonian and momentum constraints. Fig. 5 shows the evolution of the maximum of the Hamiltonian constraint and of the momentum constraint; we can see that our initial conditions are reasonable. Acknowledgments We would like to thank Qing-Guo Huang and You-Jun Lu for their helpful discussions and advice on this paper. This work is partly supported by the National Natural Science Foundation of China under grant No. 11690024 and the Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB 23040100). References (1) A. A. Starobinsky, Phys. Lett.  91B, 99 (1980). doi:10.1016/0370-2693(80)90670-X (2) A. H. Guth, Phys. Rev. D 23, 347 (1981). doi:10.1103/PhysRevD.23.347 (3) U. Seljak and M. Zaldarriaga, Phys. Rev. Lett. 
78, 2054 (1997) doi:10.1103/PhysRevLett.78.2054 [astro-ph/9609169]. (4) M. Kamionkowski, A. Kosowsky and A. Stebbins, Phys. Rev. Lett.  78, 2058 (1997) doi:10.1103/PhysRevLett.78.2058 [astro-ph/9609132]. (5) M. Kamionkowski and E. D. Kovetz, Ann. Rev. Astron. Astrophys.  54, 227 (2016) doi:10.1146/annurev-astro-081915-023433 [arXiv:1510.06042 [astro-ph.CO]]. (6) L. Book, M. Kamionkowski and F. Schmidt, Phys. Rev. Lett.  108, 211301 (2012) doi:10.1103/PhysRevLett.108.211301 [arXiv:1112.0567 [astro-ph.CO]]. (7) K. W. Masui and U. L. Pen, Phys. Rev. Lett.  105, 161302 (2010) doi:10.1103/PhysRevLett.105.161302 [arXiv:1006.4181 [astro-ph.CO]]. (8) S. Dodelson, E. Rozo and A. Stebbins, Phys. Rev. Lett.  91, 021301 (2003) doi:10.1103/PhysRevLett.91.021301 [astro-ph/0301177]. (9) S. Dodelson, Phys. Rev. D 82, 023522 (2010) doi:10.1103/PhysRevD.82.023522 [arXiv:1001.5012 [astro-ph.CO]]. (10) D. Jeong and F. Schmidt, Phys. Rev. D 86, 083512 (2012) doi:10.1103/PhysRevD.86.083512 [arXiv:1205.1512 [astro-ph.CO]]. (11) F. Schmidt and D. Jeong, Phys. Rev. D 86, 083513 (2012) doi:10.1103/PhysRevD.86.083513 [arXiv:1205.1514 [astro-ph.CO]]. (12) J. K. Yadav, J. S. Bagla and N. Khandai, Mon. Not. Roy. Astron. Soc.  405, 2009 (2010) doi:10.1111/j.1365-2966.2010.16612.x [arXiv:1001.0617 [astro-ph.CO]]. (13) M. Scrimgeour et al., Mon. Not. Roy. Astron. Soc.  425, 116 (2012) doi:10.1111/j.1365-2966.2012.21402.x [arXiv:1205.6812 [astro-ph.CO]]. (14) L. Amendola et al., arXiv:1606.00180 [astro-ph.CO]. (15) R. Maartens et al. [SKA Cosmology SWG Collaboration], PoS AASKA 14, 016 (2015) [arXiv:1501.04076 [astro-ph.CO]]. (16) Z. Ivezic et al. [LSST Collaboration], arXiv:0805.2366 [astro-ph]. (17) T. Buchert, Gen. Rel. Grav.  32, 105 (2000) doi:10.1023/A:1001800617177 [gr-qc/9906015]. (18) E. W. Kolb, S. Matarrese, A. Notari and A. Riotto, Phys. Rev. D 71, 023524 (2005) doi:10.1103/PhysRevD.71.023524 [hep-ph/0409038]. (19) T. Buchert, Gen. Rel. Grav.  
40, 467 (2008) doi:10.1007/s10714-007-0554-8 [arXiv:0707.2153 [gr-qc]]. (20) S. Rasanen, Class. Quant. Grav.  28, 164008 (2011) doi:10.1088/0264-9381/28/16/164008 [arXiv:1102.0408 [astro-ph.CO]]. (21) C. Clarkson, G. Ellis, J. Larena and O. Umeh, Rept. Prog. Phys.  74, 112901 (2011) doi:10.1088/0034-4885/74/11/112901 [arXiv:1109.2314 [astro-ph.CO]]. (22) T. Buchert and S. Rasanen, Ann. Rev. Nucl. Part. Sci.  62, 57 (2012) doi:10.1146/annurev.nucl.012809.104435 [arXiv:1112.5335 [astro-ph.CO]]. (23) T. Buchert et al., Class. Quant. Grav.  32, 215021 (2015) doi:10.1088/0264-9381/32/21/215021 [arXiv:1505.07800 [gr-qc]]. (24) S. R. Green and R. M. Wald, arXiv:1506.06452 [gr-qc]. (25) J. Adamek, D. Daverio, R. Durrer and M. Kunz, Phys. Rev. D 88 (2013) no.10, 103527 doi:10.1103/PhysRevD.88.103527 [arXiv:1308.6524 [astro-ph.CO]]. (26) J. T. Giblin, J. B. Mertens and G. D. Starkman, Phys. Rev. Lett.  116, no. 25, 251301 (2016) doi:10.1103/PhysRevLett.116.251301 [arXiv:1511.01105 [gr-qc]]. (27) J. B. Mertens, J. T. Giblin and G. D. Starkman, Phys. Rev. D 93, no. 12, 124059 (2016) doi:10.1103/PhysRevD.93.124059 [arXiv:1511.01106 [gr-qc]]. (28) E. Bentivegna and M. Bruni, Phys. Rev. Lett.  116, no. 25, 251302 (2016) doi:10.1103/PhysRevLett.116.251302 [arXiv:1511.05124 [gr-qc]]. (29) F. Loffler et al., Class. Quant. Grav.  29, 115001 (2012) doi:10.1088/0264-9381/29/11/115001 [arXiv:1111.3344 [gr-qc]]. (30) H. J. Macpherson, P. D. Lasky and D. J. Price, Phys. Rev. D 95, no. 6, 064028 (2017) doi:10.1103/PhysRevD.95.064028 [arXiv:1611.05447 [astro-ph.CO]]. (31) K. A. Malik and D. Wands, Phys. Rept.  475, 1 (2009) doi:10.1016/j.physrep.2009.03.001 [arXiv:0809.4944 [astro-ph]]. (32) Y. Wang, Phys. Rev. D 53, 639 (1996) doi:10.1103/PhysRevD.53.639 [astro-ph/9501116]. (33) D. Baumann, P. J. Steinhardt, K. Takahashi and K. Ichiki, Phys. Rev. D 76, 084019 (2007) doi:10.1103/PhysRevD.76.084019 [hep-th/0703290].
Stretching dependence of the vibration modes of a single-molecule Pt-H${}_{2}$-Pt bridge D. Djukic Kamerlingh Onnes Laboratorium, Universiteit Leiden, Postbus 9504, NL - 2300 RA Leiden, The Netherlands    K. S. Thygesen Center for Atomic-scale Materials Physics, Department of Physics, Technical University of Denmark, DK - 2800 Kgs. Lyngby, Denmark    C. Untiedt Kamerlingh Onnes Laboratorium, Universiteit Leiden, Postbus 9504, NL - 2300 RA Leiden, The Netherlands    R. H. M. Smit Kamerlingh Onnes Laboratorium, Universiteit Leiden, Postbus 9504, NL - 2300 RA Leiden, The Netherlands    K. W. Jacobsen Center for Atomic-scale Materials Physics, Department of Physics, Technical University of Denmark, DK - 2800 Kgs. Lyngby, Denmark    J. M. van Ruitenbeek ruitenbeek@physics.leidenuniv.nl Kamerlingh Onnes Laboratorium, Universiteit Leiden, Postbus 9504, NL - 2300 RA Leiden, The Netherlands (December 7, 2020) Abstract A conducting bridge of a single hydrogen molecule between Pt electrodes is formed in a break-junction experiment. It has a conductance near the quantum unit, $G_{0}=2e^{2}/h$, carried by a single channel. Using point contact spectroscopy, three vibration modes are observed and their variation upon isotope substitution is obtained. The stretching dependence of each of the modes allows them to be classified uniquely as longitudinal or transverse modes. The interpretation of the experiment in terms of a $\text{Pt-H}_{2}\text{-Pt}$ bridge is verified by Density Functional Theory calculations of the stability, vibrational modes, and conductance of the structure. pacs: 73.63.Rt, 63.22.+m, 73.23.-b, 85.65.+h Present address: Dpto. de Física Aplicada, Universidad de Alicante, E-03690 Alicante, Spain Present address: Dpto. de Física de la Materia Condensada - C3, Universidad Autónoma de Madrid, 28049 Madrid, Spain There is beauty and power in the idea of constructing electronic devices using individual organic molecules as active elements.
Although the concept was proposed as early as 1974 Aviram and Ratner (1974) , only recently have experiments aimed at contacting individual organic molecules been reported Reed et al. (1997); Kergueris et al. (1999); Reichert et al. (2002); Park et al. (2002); Cui et al. (2002); Bumm et al. (1996); Liang et al. (2002); Kubatkin et al. (2003); Xu and Tao (2003); Kervennic et al. , and devices are being tested Luo et al. (2002); Collier et al. (1999). The first results raised high expectations, but problems quickly showed up, such as large discrepancies between the current-voltage characteristics obtained by different experimental groups, and large discrepancies between experiment and theory. The main tools that have been applied in contacting single molecules are STM (or conducting-tip AFM) and break-junction devices. Often it is difficult to show that the characteristics are due to the presence of a molecule, or that only a single molecule has been contacted. There has been important progress in the analysis and reproducibility of some experiments Reichert et al. (2002); Park et al. (2002); Liang et al. (2002); Kubatkin et al. (2003); Xu and Tao (2003); Kervennic et al. , but in comparing the data with theory many uncertainties remain regarding the configuration of the organic molecule and the nature of the molecule-metal interface. The organic molecules selected for these studies are usually composed of several hydrocarbon rings and are anchored to gold metal leads by sulphur groups. In view of the difficulties connected with these larger molecules, it seems natural to step back and focus on even simpler systems. Here we concentrate on the simplest molecule, $\text{H}_{2}$, anchored between platinum metal leads using mechanically controllable break junctions. The first experiments on this system Smit et al. (2002) showed that the conductance of a single hydrogen molecule between Pt leads is slightly below $1G_{0}$, where $G_{0}=2e^{2}/h$ is the conductance quantum.
A vibration mode near 65 meV was observed and interpreted as the longitudinal center-of-mass (CM) mode of the molecule. These results have inspired new calculations on this problem using Density Functional Theory (DFT) methods Cuevas et al. (2003); García et al. (2004). Cuevas et al. Cuevas et al. (2003) find a conductance around $0.9G_{0}$, in agreement with the first DFT calculations presented in Smit et al. (2002). In contrast, García et al. García et al. (2004) obtain a conductance of only $(0.2-0.5)~{}G_{0}$ for the in-line configuration of the hydrogen molecule. Instead they propose an alternative configuration with hydrogen atoms sitting above and below a Pt-Pt atomic contact. In this Letter we combine new experimental results with DFT calculations to show that the configuration proposed in Smit et al. (2002) is correct, yet the observed vibration mode was incorrectly attributed. In contrast, the present experiment resolves three vibration modes that can be classified as longitudinal or transverse modes based on the observed shifts with stretching of the contacts. The comparison with the calculations is nearly quantitative, and the large number of experimentally observed parameters (the number of vibration modes, their stretching dependence and isotope shifts, the conductance and the number of conductance channels) puts stringent constraints on any possible interpretation. The measurements have been performed using the mechanically controllable break junction technique Muller et al. (1992); Agraït (1997). A small notch is cut at the middle of a Pt wire to fix the breaking point. The wires used are $100~{}\mu$m in diameter, about 1 cm long, and have a purity of 99.9999%. The wire is glued on top of a bending beam and mounted in a three-point bending configuration inside a vacuum chamber. Once under vacuum and cooled to 4.2 K, the wire is broken by mechanical bending of the substrate.
Clean fracture surfaces are exposed and remain clean for days in the cryogenic vacuum. The bending can be relaxed to form atomic-sized contacts between the wire ends, using a piezo element for fine adjustment. After admitting a small amount ($\sim 3~{}\mu$mol) of molecular H${}_{2}$ (99.999%) into the sample chamber and waiting some time for the gas to diffuse to the cold end of the insert, a sudden change is observed in the conductance of the last contact before breaking. The typical value of $(1.6\pm 0.4)G_{0}$ for a single-atom Pt contact is replaced by a frequently observed plateau near $1G_{0}$ that has been attributed in Smit et al. (2002) to the formation of a Pt-H${}_{2}$-Pt bridge. By increasing the bias voltage above 300 mV we recover the pure Pt conductance. But as soon as the bias voltage is decreased, the H${}_{2}$-induced plateaus at $1G_{0}$ reappear. We interpret this as desorption of hydrogen due to Joule heating of the contacts. For biases below 100 mV, the Pt-H${}_{2}$-Pt bridge can be stable for hours. At the $1G_{0}$-conductance plateaus we take differential conductance ($dI/dV$) spectra in order to determine the inelastic scattering energies. By repeatedly breaking the contacts, joining them again into a large contact, and pulling until arriving at a plateau near $1G_{0}$, we obtain a large data set for many independent contacts. The experiments were repeated in more than 5 independent experimental runs, and for the isotopes HD (96%) and D${}_{2}$ (99.7%). Fig. 1 shows a spectrum taken for D${}_{2}$, showing a sharp drop in the differential conductance by 1–2% symmetrically at $\pm 50$ meV. Such signals are characteristic of point contact spectroscopy Khotkevich and Yanson (1995), which was first applied to single-atom contacts by Agraït et al. (2002).
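The shape of such a spectrum can be caricatured in a few lines. All numbers below are illustrative (a 1.5% conductance drop at 50 meV, an ad hoc thermal smearing width); this is a toy model of a symmetric vibration-induced step, not a fit to the data:

```python
import numpy as np

# Toy point-contact spectrum: the conductance (in units of G0) drops by a
# fraction dG when |eV| exceeds the vibration energy hw, symmetrically in V.
def dIdV(V_mV, hw_mV=50.0, dG=0.015, T_K=4.2):
    kT = 0.0862 * T_K                              # kB in meV/K
    width = 1.8 * kT                               # illustrative smearing width
    step = 1.0 / (1.0 + np.exp(-(np.abs(V_mV) - hw_mV) / width))
    return 1.0 - dG * step

V = np.linspace(-100.0, 100.0, 2001)               # bias in mV
G = dIdV(V)                                        # symmetric drop at +/-50 mV
```

The corresponding $d^{2}I/dV^{2}$ then shows the dip/peak pair at $\pm\hbar\omega/e$ described in the text.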
The principle of this spectroscopy is simple: when the difference in chemical potential between left- and right-moving electrons, $eV$, exceeds the energy of a vibration mode, $\hbar\omega$, back-scattering associated with the emission of a vibration becomes possible, giving rise to a drop in the conductance. This can be seen as a dip (peak) in the second derivative $d^{2}I/dV^{2}$ at positive (negative) voltages, as in Fig. 1. Some contacts can be stretched over a considerable distance, in which case we observe an increase of the vibration mode energy with stretching. This observation suffices to invalidate the original interpretation Smit et al. (2002) of this mode as the longitudinal CM mode. Indeed, our DFT calculations show that the stretching mainly affects the Pt-H bond which is elongated and weakened resulting in a drop in the frequency of the $\text{H}_{2}$ longitudinal CM mode. An increase can be obtained only for a transverse mode which, like a guitar string, obtains a higher pitch at higher string tension due to the increased restoring force. On many occasions we observe two modes in the $dI/dV$ spectra, see the inset of Fig. 1. The relative amplitude of the two modes varies; some spectra show only the lower mode, some only the higher one. All frequencies observed in a large number of experiments are collected in the histograms shown in Fig. 2. With a much larger data set compared to Smit et al. (2002) we are now able to resolve two peaks in the distribution for H${}_{2}$ corresponding to the two modes seen in the inset of Fig. 1. The peaks are expected to shift with the mass $m$ of the isotopes as $\omega\propto\sqrt{1/m}$. This agrees with the observations, as shown by the scaled position of the hydrogen peaks marked by arrows above the distributions for D${}_{2}$ and HD. 
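The expected arrow positions in Fig. 2 follow from this mass scaling. Using the two H${}_{2}$ peak values reported later in the text (54 and 72 meV) and the D${}_{2}$ high-frequency mode (86–92 meV), the naive $\omega\propto\sqrt{1/m}$ arithmetic is:

```python
import math

# Naive isotope scaling omega ~ 1/sqrt(m) applied to the H2 peaks at
# 54 and 72 meV (masses in atomic mass units, results in meV).
mass = {"H2": 2.0, "HD": 3.0, "D2": 4.0}
peaks_H2 = [54.0, 72.0]
scaled = {iso: [round(w * math.sqrt(mass["H2"] / m), 1) for w in peaks_H2]
          for iso, m in mass.items()}
# scaled["D2"] -> [38.2, 50.9]; scaled["HD"] -> [44.1, 58.8]

# Re-scaling the D2 high-frequency mode (~89 meV) back to H2 units:
high_H2 = round(89.0 * math.sqrt(mass["D2"] / mass["H2"]))   # ~126 meV
```

As discussed later in the text, the measured HD peaks sit near the properly re-scaled 42 and 66 meV rather than these naive values, because for HD the transverse translation and hindered-rotation modes couple and the simple $\sqrt{3/2}$ factor does not hold.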
Note that the distribution for HD proves that the vibration modes belong to a molecule and not an atom, since the latter would have produced a mixture of the H${}_{2}$ and D${}_{2}$ distributions. In the case of D${}_{2}$ we observe a third peak in the distribution at 86–92 meV. For the other isotopes this mode falls outside our experimentally accessible window of about $\pm 100$ meV, above which the contacts are destabilized by the large current. Fig. 3 shows the dependence of this mode upon stretching of the junction. In contrast to the two low-frequency modes this mode shifts down with stretching, suggesting that this could be the longitudinal CM mode that was previously attributed to the low-frequency modes Smit et al. (2002). In order to test the interpretation of the experiment in terms of a Pt-$\text{H}_{2}$-Pt bridge we have performed extensive DFT calculations using the plane wave based pseudopotential code Dacapo Bahn and Jacobsen (2002); Hammer et al. (1999). The molecular contact is described in a supercell containing the hydrogen atoms and two 4-atom Pt pyramids attached to a Pt(111) slab containing four atomic layers, see inset of Fig. 4 compdetails . In the total energy calculations both the hydrogen atoms and the Pt pyramids were relaxed while the remaining Pt atoms were held fixed in the bulk structure. The vibration frequencies were obtained by diagonalizing the Hessian matrix for the two hydrogen atoms. The Hessian matrix is defined by $\partial^{2}E_{0}/(\partial\tilde{u}_{n\alpha}\partial\tilde{u}_{m\beta})$, where $E_{0}$ is the ground state potential energy surface and $\tilde{u}_{n\alpha}$ is the displacement of atom $n$ in direction $\alpha$ multiplied by the mass factor $\sqrt{M_{n}}$. In calculating the vibration modes all Pt atoms were kept fixed which is justified by the large difference in mass between H and Pt. 
The conductance is calculated from the Meir-Wingreen formula meirwingreen using a basis of partly occupied Wannier functions powannier , representing the leads as bulk Pt crystals. In order to simulate the stretching process of the experiment we have calculated contacts for various lengths of the supercell. The bridge configuration is stable over a large distance range with the binding energy of the $\text{H}_{2}$ molecule varying from $-0.92$ eV to $-0.47$ eV, relative to gas phase $\text{H}_{2}$ and a broken Pt contact, over the range of stretching considered here. The H-H bond length stays close to 1.0 Å during the first stages of the stretching upon which it retracts and approaches the value of the free molecule. The hydrogen thus retains its molecular form and the elongation mainly affects the weaker Pt-H bond. For smaller electrode separations a structure with two hydrogen atoms adsorbed on the side of a Pt-Pt atomic contact becomes the preferred geometry, as also found by García et al.  García et al. (2004). However, we find that the latter structure has a conductance of 1.5 $G_{0}$, well above 1 $G_{0}$. Moreover, this structure has at least three conduction channels with significant transmission, which excludes it as a candidate structure based on the analysis of conductance fluctuations in Smit et al. (2002) and Csonka et al. (2004), which find a single channel only. In view of the activity of the Pt surface towards catalyzing hydrogen dissociation one would have anticipated a preference for junctions of hydrogen in atomic form. However, we find that the bonding energy of H compared to that of H${}_{2}$ strongly depends on the metal coordination number of the Pt atom. For metal coordination numbers smaller than 7 bonding to molecular hydrogen is favored, the bond being strongest for fivefold coordinated Pt. The calculations identify the six vibrational modes of the hydrogen molecule. 
For moderate stretching two pairwise degenerate modes are lowest in frequency. The lowest one corresponds to translation of the molecule transverse to the transport direction while the other one is a hindered rotation mode. The two modes are characterized by increasing frequencies as a function of stretch of the contact. At higher energies we find the two longitudinal modes: first the CM mode and then the internal vibration of the molecule. These two modes become softer during stretching up to Pt-H bond lengths of about 1.9 Å. Beyond this point the Pt-H bond begins to break and the internal vibration mode approaches the one of the free molecule. The variation of the frequencies of the lowest lying hydrogen modes with stretching is thus in qualitative agreement with the experiments, a strong indication that the suggested structure is indeed correct. The agreement is even semiquantitative: If we focus on displacements in the range 1.7 - 2.0 Å (see Fig. 4) the calculated conductance does not deviate significantly from the experimentally determined value close to $1G_{0}$. In this regime the three lowest calculated frequencies are in the range 30-42 meV, 64-92 meV, and 123-169 meV. The two lowest modes can be directly compared with the experimentally determined peaks at 54 and 72 meV observed for H${}_{2}$, while a mass re-scaling of the D${}_{2}$ result for the highest mode gives approximately 126 meV. The second peak in the HD distribution in Fig. 2 is somewhat above the position obtained by scaling the $\text{H}_{2}$ peak by $\sqrt{3/2}$. The transverse translation mode and the hindered rotation mode are decoupled when the two atoms of the molecule have the same mass. In the case of HD they couple and the simple factor does not hold. 
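The mode coupling for unequal masses can be illustrated with a toy one-dimensional model of the two longitudinal modes: two atoms between rigid "Pt" walls, with wall springs $k_{\mathrm{PtH}}$ and an internal spring $k_{\mathrm{HH}}$ (the spring constants below are illustrative, not the DFT values). Diagonalizing the mass-weighted Hessian, as done for the real structure, gives decoupled CM and internal modes for equal masses, while for HD the lowest frequency deviates from the naive $\sqrt{2/3}$ scaling:

```python
import numpy as np

# Toy 1D model: wall --kPtH-- m1 --kHH-- m2 --kPtH-- wall (illustrative).
def modes(m1, m2, kPtH=1.0, kHH=4.0):
    H = np.array([[kPtH + kHH, -kHH],
                  [-kHH,       kPtH + kHH]])   # Hessian d2E/(du_i du_j)
    s = np.array([m1, m2]) ** -0.5
    D = H * np.outer(s, s)                     # mass-weighted Hessian
    return np.sqrt(np.linalg.eigvalsh(D))      # frequencies, ascending

w_H2 = modes(1.0, 1.0)  # decoupled: sqrt(kPtH/m) (CM), sqrt((kPtH+2kHH)/m)
w_HD = modes(1.0, 2.0)  # coupled: lowest mode != w_H2[0] * sqrt(2/3)
```

For equal masses the eigenvectors are the pure in-phase (CM) and out-of-phase (internal) displacements; with $m_{1}\neq m_{2}$ the modes mix, which is why the HD frequencies require the full diagonalization rather than a single mass factor.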
Having identified the character of these modes, a proper re-scaling of the experimentally determined H${}_{2}$ frequencies (54 and 72 meV) to the case of HD leads to the frequencies 42 and 66 meV, in very good agreement with the peaks observed for HD. Even though there is good agreement with the calculated signs of the frequency shifts with stretching for the various modes, there is a clear discrepancy in magnitudes. Considering, e.g., the high-lying mode for D${}_{2}$, the measured shift of the mode is of the order of 15 meV/Å, which is almost an order of magnitude smaller than the calculated variation of around 130 meV/Å. However, experimentally the distances are controlled quite far from the molecular junction, and the elastic response of the electrode regions has to be taken into account. Simulations of atomic chain formation in gold chainformation during contact breaking show that most of the deformation happens not in the atomic chains but in the nearby electrodes. A similar effect for the Pt-H${}_{2}$-Pt system will significantly reduce the stretch of the molecular bridge compared to the displacement of the macroscopic electrodes. The observation of the three vibration modes and their stretching dependence provides a solid basis for the interpretation. The fourth mode, the internal vibration, could possibly be observed using the isotope tritium. The hydrogen molecule junction can serve as a benchmark system for molecular electronics calculations. The experiments should be gradually expanded towards more complicated systems, and we have already obtained preliminary results for CO and C${}_{2}$H${}_{2}$ between Pt leads. We thank M. Suty for assistance in the experiments and M. van Hemert for many informative discussions. This work was supported by the Dutch “Stichting FOM”, the Danish Center for Scientific Computing through Grant No. HDW-1101-05, the Spanish MCyT under contract MAT-2003-08109-C02-01, and the ESF through the EUROCORES SONS programme.
References Aviram and Ratner (1974) A. Aviram and M. A. Ratner, Chem. Phys. Lett. 29, 277 (1974). Reed et al. (1997) M. A. Reed, C. Zhou, C. J. Muller, T. P. Burgin and J. M. Tour, Science 278, 252 (1997). Kergueris et al. (1999) C. Kergueris, J. P. Bourgoin, S. Palacin, D. Esteve, C. Urbina, M. Magoga and C. Joachim, Phys. Rev. B 59, 12505 (1999). Reichert et al. (2002) J. Reichert, R. Ochs, D. Beckmann, H. B. Weber, M. Mayor and H. von Löhneysen, Phys. Rev. Lett. 88, 176804 (2002). Park et al. (2002) J. Park, A. N. Pasupathy, J. I. Goldsmith, C. Chang, Y. Yaish, J. R. Petta, M. Rinkoski, J. P. Sethna, H. D. Abruña, P. L. McEuen and D. C. Ralph, Nature (London) 417, 722 (2002). Cui et al. (2002) X. D. Cui, A. Primak, X. Zarate, J. Tomfohr, O. F. Sankey, A. L. Moore, T. A. Moore, D. Gust, L. A. Nagahara and S. M. Lindsay, J. Phys. Chem. B 106, 8609 (2002). Bumm et al. (1996) L. A. Bumm, J. J. Arnold, M. T. Cygan, T. D. Dunbar, T. P. Burgin, L. Jones, II, D. L. Allara, J. M. Tour and P. S. Weiss, Science 271, 1705 (1996). Liang et al. (2002) W. Liang, M. P. Shores, M. Bockrath, J. R. Long and H. Park, Nature (London) 417, 725 (2002). Kubatkin et al. (2003) S. Kubatkin, A. Danilov, M. Hjort, J. Cornil, J. L. Brédas, N. Stuhr-Hansen, P. Hedegård and T. Bjørnholm, Nature (London) 425, 698 (2003). Xu and Tao (2003) B. Xu and N. J. Tao, Science 301, 1221 (2003). (11) Y. V. Kervennic, J. M. Thijssen, D. Vanmaekelbergh, C. A. van Walree, L. W. Jenneskens and H. S. J. van der Zant, unpublished. Luo et al. (2002) Y. Luo, C. P. Collier, J. O. Jeppesen, K. A. Nielsen, E. DeIonno, G. Ho, J. Perkins, H.-R. Tseng, T. Yamamoto, J. F. Stoddart and J. R. Heath, ChemPhysChem 3, 519 (2002). Collier et al. (1999) C. P. Collier, E. W. Wong, M. Belohradský, F. M. Raymo, J. F. Stoddart, P. J. Kuekes, R. S. Williams and J. R. Heath, Science 285, 391 (1999). Smit et al. (2002) R. H. M. Smit, Y. Noat, C. Untiedt, N. D. Lang, M. C. van Hemert and J. M. 
van Ruitenbeek, Nature (London) 419, 906 (2002). Cuevas et al. (2003) J. C. Cuevas, J. Heurich, F. Pauly, W. Wenzel and G. Schön, Nanotechnology 14, R29 (2003). García et al. (2004) Y. García, J. J. Palacios, E. SanFabián, J. A. Vergés, A. J. Pérez-Jiménez and E. Louis, Phys. Rev. B 69, 041402(R) (2004). Muller et al. (1992) C. J. Muller, J. M. van Ruitenbeek and L. J. de Jongh, Physica C 191, 485 (1992). Agraït (1997) N. Agraït, A. L. Yeyati and J. M. van Ruitenbeek, Phys. Rep. 377, 81 (2003). Kozub and Kulik (1986) V. I. Kozub and I. O. Kulik, Sov. Phys. JETP 64, 1332 (1986) [Zh. Eksp. Teor. Fiz. 91, 2243-2251 (1986)]. Khotkevich and Yanson (1995) A. V. Khotkevich and I. K. Yanson, Atlas of Point Contact Spectra of Electron-Phonon Interactions in Metals (Kluwer Academic Publishers, Dordrecht, 1995). Agraït et al. (2002) N. Agraït, C. Untiedt, G. Rubio-Bollinger and S. Vieira, Phys. Rev. Lett. 88, 216803 (2002). Bahn and Jacobsen (2002) S. R. Bahn and K. W. Jacobsen, Comp. Sci. Eng. 4, 56 (2002); the Dacapo code can be downloaded at http://www.fysik.dtu.dk/campos. Hammer et al. (1999) B. Hammer, L. B. Hansen and J. K. Nørskov, Phys. Rev. B 59, 7413 (1999). (24) The plane wave expansion is cut off at 25 Ry. We use ultrasoft pseudopotentials Vanderbilt (1990) and the exchange-correlation functional PW91 Perdew et al. (1992). A (1,4,4) Monkhorst-Pack grid has been used to sample the Brillouin zone. Vanderbilt (1990) D. Vanderbilt, Phys. Rev. B 41, R7892 (1990). Perdew et al. (1992) J. P. Perdew, J. A. Chevary, S. H. Vosko, K. A. Jackson, M. R. Pederson, D. J. Singh and C. Fiolhais, Phys. Rev. B 46, 6671 (1992). (27) Y. Meir and N. S. Wingreen, Phys. Rev. Lett. 68, 2512 (1992). (28) K. S. Thygesen and K. W. Jacobsen, to be published. Csonka et al. (2004) Sz. Csonka, A. Halbritter, G. Mihály, O. I. Shklyarevskii, S. Speller and H. van Kempen, Phys. Rev. Lett. 93, 016802 (2004). (30) G. Rubio-Bollinger, S. R. Bahn, N. Agraït, K. W. Jacobsen and S. 
Vieira, Phys. Rev. Lett. 87, 026101 (2001).
On spiral minimal surfaces A. V. Kiselev Department of Higher Mathematics, Ivanovo State Power University, Rabfakovskaya str. 34, Ivanovo, 153003 Russia. (A. K.): Department of Physics, Middle East Technical University, 06531 Ankara, Turkey. arthemy@newton.physics.metu.edu.tr  and  V. I. Varlamov (Date: February 20, 2006) Abstract. A class of spiral minimal surfaces in ${\mathbb{E}}^{3}$ is constructed using a symmetry reduction. The new surfaces are invariant with respect to the composition of rotation and dilatation. The solutions are obtained in closed form and their asymptotic behaviour is described. Key words and phrases: Minimal surfaces, symmetry reductions, invariant solutions, phase portrait 2000 Mathematics Subject Classification: 35C20, 49Q05, 53A10, 85A15. ISPUmath-4/2005 Introduction In this paper we construct two-dimensional minimal surfaces $\varSigma\subset{{\mathbb{E}}}^{3}$ that are invariant with respect to the composition of the dilatation centered at the origin and a rotation of space around the origin. We present the solutions in closed form through the Legendre transformation, and we describe their asymptotic behaviour using a non-parametric representation. Also, we construct a special solution of the problem. The paper is organized as follows. In Sec. 1 we formulate the problem of a symmetry reduction for the minimal surface equation, and we express its general solution in parametric form using the Legendre transformation. In Sec. 2 we convert the reduction problem to the auxiliary Riccati equation and classify its solutions. Then in Sec. 3 we describe the phase portrait of a cubic-nonlinear ODE [1] whose solutions determine the profiles of the minimal surfaces on some cylinder in ${\mathbb{E}}^{3}$. Finally, in Sec. 4 we construct two classes of spiral minimal surfaces and indicate their asymptotic approximations. The shape of the solutions in the first class resembles that of spiral galaxies. 
The second class of solutions is composed of helicoidal spiral surfaces whose helix steps grow exponentially. 1. The symmetry reduction problem Let $\xi$, $\eta$, and $z$ be the Cartesian coordinates in space ${\mathbb{E}}^{3}$. Consider the surfaces which are locally defined by graphs of functions, $\varSigma=\{z=\chi(\xi$, $\eta)\mid(\xi$, $\eta)\in{\mathcal{V}}\subset{{\mathbb{R}}}^{2}\}$. A two-dimensional surface $\varSigma$ is minimal (that is, the minimum of area is realized by $\varSigma$) if the function $\chi$ satisfies the minimal surface equation $$(1+\chi_{\eta}^{2})\,\chi_{\xi\xi}-2\chi_{\xi}\chi_{\eta}\chi_{\xi\eta}+(1+\chi_{\xi}^{2})\,\chi_{\eta\eta}=0.$$ (1) In the sequel, we investigate the properties of Eq. (1) and structures related to it up to its discrete symmetry $\xi\leftrightarrow\eta$, $z\mapsto-z$. The Lie algebra of classical point symmetries for Eq. (1) is generated [1] by three translations along the respective coordinate axes, three rotations $\Psi_{j}$ of the coordinate planes around the origin, and the dilatation $\Lambda$ centered at the origin. Let $\Psi_{1}$ be the (infinitesimal) rotation of the plane $0\xi\eta$, and consider the generator $\phi=\Psi_{1}+\Lambda$ of a symmetry of Eq. (1). In [1], the problem of symmetry reduction for Eq. (1) by the composition $\phi$ of rotation and dilatation was posed, although no attempt was made there to find even one solution. By construction, each solution of the symmetry reduction problem is a minimal surface invariant with respect to the rotation of the plane $0\xi\eta$ and, simultaneously, the dilatation of the entire space ${{\mathbb{E}}}^{3}$. 1.1. Solutions via the Legendre transform The general solution of the reduction problem for Eq. (1) by its symmetry $\phi$ is obtained [4] in parametric form using the Legendre transformation. Now we describe a two-parametric family of the $\phi$-invariant minimal surfaces. 
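As an independent sanity check of the minimal surface equation (1), one can verify symbolically that a classical minimal surface satisfies it; the helicoid $z=\arctan(\eta/\xi)$, which is not among the surfaces studied in this paper, is a convenient test case. A minimal sympy sketch (illustrative only):

```python
import sympy as sp

xi, eta = sp.symbols('xi eta', positive=True)
chi = sp.atan(eta/xi)  # the classical helicoid z = arctan(eta/xi)

# left-hand side of the minimal surface equation (1)
lhs = ((1 + sp.diff(chi, eta)**2)*sp.diff(chi, xi, 2)
       - 2*sp.diff(chi, xi)*sp.diff(chi, eta)*sp.diff(chi, xi, eta)
       + (1 + sp.diff(chi, xi)**2)*sp.diff(chi, eta, 2))
print(sp.simplify(lhs))  # 0
```

The same check applied to a non-minimal graph, e.g. $z=\xi^{2}+\eta^{2}$, yields a nonzero left-hand side.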
However, we claim that the reduction provides a special solution that is not incorporated in this family. The present paper is essentially devoted to the description of asymptotic properties of the two classes of these surfaces. First we recall that the generating section of the symmetry at hand is $$\phi=\chi-(\xi-\eta)\chi_{\xi}-(\xi+\eta)\chi_{\eta},$$ (2) see [2, 6] for details. The $\phi$-invariance condition for $\chi(\xi,\eta)$ is $\phi=0$. Next, consider the Legendre transformation $${\mathcal{L}}=\{w=\xi\chi_{\xi}+\eta\chi_{\eta}-\chi,\quad p=\chi_{\xi},\quad q=\chi_{\eta}\}.$$ (3) Now we apply the Legendre transformation $\mathcal{L}$ to the system composed of Eq. (1) and the equation $\phi=0$. Then from (1) we obtain the linear elliptic equation $$\displaystyle(1+p^{2})\,w_{pp}+2pq\,w_{pq}+(1+q^{2})\,w_{qq}=0,$$ (4a) and from the invariance condition $$\phi=0$$ we get the equation $$\displaystyle w+q\,w_{p}-p\,w_{q}=0.$$ (4b) Let $(\varrho$, $\vartheta)$ be the polar coordinates on the plane $0pq$ such that $p=\varrho\,\cos\vartheta$ and $q=\varrho\,\sin\vartheta$. Then Eq. (4b) acquires the form $w-\tfrac{\partial}{\partial\vartheta}w=0$, whence it follows that $w=\omega(\varrho)\cdot\exp(\vartheta)$. Substituting $w(\varrho$, $\vartheta)$ in (4a), we obtain the equation $$\varrho^{2}\cdot(1+\varrho^{2})\,\omega^{\prime\prime}(\varrho)+\varrho\,\omega^{\prime}(\varrho)+\omega(\varrho)=0.$$ (5) Its complex-valued solutions are $$\omega_{\pm}={\bigl{(}2+\varrho^{2}\bigr{)}}^{\frac{1}{2}}\,\exp\Bigl{(}\pm\arctan{\bigl{(}-(1+\varrho^{2})\bigr{)}}^{-\frac{1}{2}}\pm\operatorname{arctanh}{\bigl{(}-(1+\varrho^{2})\bigr{)}}^{\frac{1}{2}}\Bigr{)}.$$ (6) Their real and imaginary parts define a basis of real solutions for Eq. (5). The inverse Legendre transform is ${\mathcal{L}}^{-1}=\{\xi=w_{p}$, $\eta=w_{q}$, $\chi=pw_{p}+qw_{q}-w\}$. 
Rewriting $\mathcal{L}^{-1}$ in the polar coordinates $\varrho$ and $\vartheta$, we finally get $$\xi=\Bigl{[}\frac{\partial\omega}{\partial\varrho}\cos\vartheta-\frac{\omega}{\varrho}\sin\vartheta\Bigr{]}\cdot\exp\vartheta,\ \eta=\Bigl{[}\frac{\partial\omega}{\partial\varrho}\sin\vartheta+\frac{\omega}{\varrho}\cos\vartheta\Bigr{]}\cdot\exp\vartheta,\ \chi=\Bigl{[}\varrho\cdot\frac{\partial\omega}{\partial\varrho}-\omega\Bigr{]}\cdot\exp\vartheta.$$ (7) Formulas (7) provide a parametric representation of the generic $\phi$-invariant minimal surfaces. Special minimal surfaces are defined by special solutions of Eq. (5) different from family (6). Remark 1. Representation (7) of the minimal surfaces based on the linearizing Legendre transformation, see Eq. (3), is not the only way to describe open arcwise connected minimal surfaces $\varSigma=\{\vec{\xi}=\vec{\xi}(p,q)\}\subset{\mathbb{E}}^{3}$ in parametric form (e.g., we have $p=\chi_{\xi}$ and $q=\chi_{\eta}$ in formulas (4)). An alternative is given through the Enneper–Weierstrass representation [5] $$\displaystyle\varSigma=\bigl{\{}\vec{\xi}(\zeta)\mid\vec{\xi}={\vec{\xi}}_{0}+\text{Re}\,\int_{0}^{\zeta}\vec{\Phi}(\lambda)\,d\lambda,\ \zeta\in{{\mathbb{Z}}}\subset{{\mathbb{C}}},\ {\vec{\xi}}_{0}\in{{\mathbb{E}}}^{3}\bigr{\}},$$ (8a) here $$\vec{\Phi}(\zeta)=\{\Phi_{1}(\zeta)$$, $$\Phi_{2}(\zeta)$$, $$\Phi_{3}(\zeta)\}$$ is a triple of complex analytic functions that satisfy the constraint $$\displaystyle\Phi_{1}^{2}+\Phi_{2}^{2}+\Phi_{3}^{2}=0.$$ (8b) We see that $\text{Re}\,\zeta$ and $\text{Im}\,\zeta$ are the new parameters on the minimal surface $\varSigma$. It would be of interest to obtain the Weierstrass representation (8) for the $\phi$-invariant minimal surfaces (7). 1.2. Reduction of Eq. (1) to an ODE Let us return to the system composed of Eq. (1) and the constraint $\phi=0$. This symmetry reduction leads to one scalar second order ODE with a cubic nonlinearity. 
Namely, we have Proposition 1 ([1]). Let $(\rho$, $\theta)$ be the polar coordinates on the plane $0\xi\eta$ such that $(z$, $\rho$, $\theta)$ are the cylindric coordinates in space. Consider a minimal surface $\varSigma\subset{{\mathbb{E}}}^{3}$ invariant w.r.t. symmetry (2) of Eq. (1). Then $\varSigma$ is defined by the formula $$z=\rho\cdot h(\theta-\ln\rho),$$ where the function $h(q)$ of $q=\theta-\ln\rho$ satisfies the equation $$h^{\prime\prime}\cdot(h^{2}+2)+h-2h^{\prime}-(h^{\prime})^{3}+(h-h^{\prime})^{3}=0.$$ (9) Corollary 2. Assume that $\varSigma$ is a minimal surface invariant w.r.t. composition $\phi$ of rotation and dilatation, see (2). Let $\Pi=\smash{\bigl{\{}z=h(q){\bigr{|}}_{\rho=1}\bigr{\}}}$ be the profile defined by $\varSigma$ on the cylinder $Q=\{\rho=1\}\subset{{\mathbb{E}}}^{3}$, here $h$ is a solution of Eq. (9). Then the surface $\varSigma$ is extended from $Q$ along the intersections of the logarithmic spirals $\rho={\mathop{\rm const}\nolimits}\cdot\exp(\theta)$, $\theta\in{\mathbb{R}}$, and the cones $z=\rho\cdot h(q)$. The $\phi$-invariant surface $\varSigma$ has a singular point at the origin. The substitution $h=x$, $h^{\prime}=y(x)$ maps Eq. (9) to the cubic-nonlinear equation $$(x^{2}+2)y\frac{dy}{dx}={2y-x+y^{3}-(x-y)^{3}};$$ (10) equation (10) defines the phase portrait of Eq. (9). Remark 2. Consider the family of equations $$(x^{2}+2)yy^{\prime}=-x+\alpha y+y^{3}+(y-x)^{3}$$ (11) that contains (10) as a particular case when $\alpha=2$. The change of variables $\smash{\frac{1}{u(x)}}=(x^{2}+2)y(x)$ in (11) leads to the Abel equation $u^{\prime}=f_{3}(x)u^{3}+f_{2}(x)u^{2}+f_{1}(x)u+f_{0}(x)$, which is integrable in quadratures [7] if $\alpha=3$. For $\alpha=2$ we have $f_{0}=\smash{-\frac{2}{(x^{2}+2)^{2}}}$, $f_{1}=\smash{\frac{x}{x^{2}+2}}$, $f_{2}=-(3x^{2}+2)$, and $f_{3}=x(x^{2}+1)(x^{2}+2)$. 
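The coefficients $f_{i}$ quoted in Remark 2 can be re-derived mechanically: substituting $y=1/((x^{2}+2)u)$ into (11) with $\alpha=2$ and solving for $u^{\prime}$ reproduces the Abel equation. An illustrative sympy sketch (not part of the paper):

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
y = 1/((x**2 + 2)*u)  # the change of variables 1/u = (x^2+2) y
alpha = 2

# equation (11): (x^2+2) y y' = -x + alpha*y + y^3 + (y-x)^3
eq = sp.Eq((x**2 + 2)*y*sp.diff(y, x), -x + alpha*y + y**3 + (y - x)**3)
up = sp.solve(eq, sp.diff(u, x))[0]

# coefficients of the Abel equation u' = f3 u^3 + f2 u^2 + f1 u + f0
f3 = x*(x**2 + 1)*(x**2 + 2)
f2 = -(3*x**2 + 2)
f1 = x/(x**2 + 2)
f0 = -2/(x**2 + 2)**2
v = sp.symbols('v')
residual = sp.cancel(up.subs(u, v) - (f3*v**3 + f2*v**2 + f1*v + f0))
print(residual)  # 0
```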
The Abel equation can be expressed in the normal form $\eta^{\prime}=\eta^{3}+H(x(\xi))$, where $\eta=\eta(\xi)$, the prime denotes the derivative w.r.t. $\xi$, and the function $H(x)$ is constructed as follows. Let us make another change of coordinates, $u(x)=\omega(x)\eta(\xi)-\frac{f_{2}}{3f_{3}}$, where $\omega(x)=\smash{\exp(\int(f_{1}-\frac{f_{2}^{2}}{3f_{3}})dx)}=\smash{\frac{(x% ^{2}+2)^{1/6}}{x^{2/3}(x^{2}+1)^{5/6}}}$ and $\xi=\xi(x)=\int\frac{(x^{2}+2)^{4/3}}{x^{1/3}(x^{2}+1)^{2/3}}dx$. The function $x(\xi)$ is defined by inverting the expression for $\xi(x)$. Then the function $H(x)$ is obtained from the relation $$f_{3}\cdot\omega^{3}\cdot H(x)=f_{0}+{{\left(\frac{f_{2}}{3f_{3}}\right)}^{% \prime}_{x}}-{\frac{f_{1}f_{2}}{3f_{3}}+\frac{2f_{2}^{2}}{27f_{3}^{2}}},$$ whence we finally get $$H(x)=\left(\frac{x^{2}+1}{x^{2}+2}\right)^{3/2}\cdot\frac{-20}{27x^{2}(x^{2}+1% )^{2}(x^{2}+2)^{2}};$$ (12) note that the inverse function $x(\xi)$ must be substituted for $x$ in (12). Hence we conclude that the coordinate transformation $u(x)=\frac{(x^{2}+2)^{1/6}}{x^{2/3}(x^{2}+1)^{5/6}}\cdot\eta(\xi)+\frac{3x^{2}% +2}{3x(x^{2}+2)(x^{2}+1)}$ maps the Abel equation to its normal form $\eta^{\prime}=\eta^{3}+H(x(\xi))$. 1.3. Exact solutions of the limit analogue of Eq. (10) In this subsection we consider the limit analogue of system (10) whose right-hand side contains the terms that provide maximal contribution as $R=\sqrt{x^{2}+y^{2}}\to\infty$ on $0xy$. To this end, we note that for large $R$ the right-hand side of Eq. (10) is equivalent to its “cubic” component $\bigl{[}{y^{3}-(x-y)^{3}}\bigr{]}\cdot{y^{-1}\cdot x^{-2}}$. Let us find exact solutions of the equation $$\frac{dy}{dx}=\frac{y^{3}-(x-y)^{3}}{yx^{2}}.$$ (13) Using the substitution $s=\frac{y}{x}$ in (13), we obtain the equation $$x\,\frac{ds}{dx}=\frac{2s^{3}+3s-4s^{2}-1}{s}.$$ Note that $2s^{3}-4s^{2}+3s-1=(s-1)(2s^{2}-2s+1)$. 
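The reduction of (13) by the homogeneous substitution $s=y/x$, together with the factorization of the cubic, can be checked symbolically; the following sympy sketch is illustrative only:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.Function('s')(x)
y = x*s  # homogeneous substitution s = y/x in equation (13)

# equation (13): dy/dx = (y^3 - (x-y)^3)/(y x^2)
eq13 = sp.Eq(sp.diff(y, x), (y**3 - (x - y)**3)/(y*x**2))
ds = sp.solve(eq13, sp.diff(s, x))[0]

v = sp.symbols('v')
# x ds/dx should equal (2 s^3 + 3 s - 4 s^2 - 1)/s
residual = sp.cancel(x*ds.subs(s, v) - (2*v**3 + 3*v - 4*v**2 - 1)/v)
assert residual == 0
# and the cubic factors as claimed
assert sp.expand((v - 1)*(2*v**2 - 2*v + 1) - (2*v**3 - 4*v**2 + 3*v - 1)) == 0
print('reduction of (13) and the factorization verified')
```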
Therefore we get $$\int\frac{s\,ds}{(s-1)(2s^{2}-2s+1)}=\frac{1}{2}\ln\frac{(s-1)^{2}}{2s^{2}-2s+1}=\ln\bigl{(}\sqrt{\delta}\,|x|\bigr{)},$$ (14) whence we finally obtain the integral $\frac{(s-1)^{2}}{2s^{2}-2s+1}=\delta x^{2}$, here $\delta\geq 0$. Suppose $\delta=0$, then we have $s=1$. Thus we get the solution $y=x$ of the approximating equation (13). In Proposition 12 below we prove that the diagonal is the asymptote for a solution of Eq. (10). If $\delta>0$, then we solve the quadratic equation w.r.t. $s$ and obtain $$s=\frac{1-\delta x^{2}\pm\sqrt{\delta}\cdot|x|\cdot\sqrt{1-\delta x^{2}}}{1-2\delta x^{2}}.$$ Hence we conclude that $$\displaystyle\bar{y}_{1}$$ $$\displaystyle=\frac{x\cdot\sqrt{1-\delta x^{2}}}{1-2\delta x^{2}}(\sqrt{1-\delta x^{2}}+\sqrt{\delta}\cdot|x|),$$ (15a) $$\displaystyle\bar{y}_{2}$$ $$\displaystyle=\frac{x\cdot\sqrt{1-\delta x^{2}}}{\sqrt{1-\delta x^{2}}+\sqrt{\delta}\cdot|x|}.$$ (15b) Solutions (15) are defined for $|x|<\frac{1}{\sqrt{\delta}}$, $|x|\neq\frac{1}{\sqrt{2\delta}}$. Suppose $x>0$, then we conclude that $$\lim_{x\to\frac{1}{\sqrt{2\delta}}-0}\bar{y}_{1}(x)=+\infty.$$ Finally, consider solution (15a) with the Cauchy data $(x_{0},y_{0})$, $y_{0}\neq x_{0}$, located on a large circle. Then the above reasoning yields that $$\sqrt{\delta}=\frac{|\frac{y_{0}}{x_{0}}-1|}{\sqrt{2y_{0}^{2}-2y_{0}x_{0}+x_{0}^{2}}}.$$ We claim that the behaviour of solutions of the initial equation (10) is correlated with the solutions of the limit equation (13), see Remark 8 on p. 8. 2. Solutions of the auxiliary Riccati equation In this section, we start to investigate the phase portrait of Eq. (9). We show that the phase curves are described by solutions of the auxiliary Riccati equation (22). Using the topological method of Ważewski [3], we conclude that Eq. (22) has one unstable solution and a large class of stable solutions. On the phase plane for Eq. 
(9) these solutions are represented by a pair of centrally symmetric separatrices and by trajectories that are repelled from the separatrices and have vertical asymptotes at infinity, respectively. This will be discussed in the next section. Consider the autonomous system associated with (10), $$\frac{dx}{dt}=2y+yx^{2},\quad\frac{dy}{dt}=2y-x+y^{3}-(x-y)^{3}.$$ (16) Integral curves of Eq. (10) correspond to integral trajectories of system (16). Consider the linear change of variables $x=2(z_{1}-z_{2})$, $y=2z_{1}$. Then system (16) is transformed to $$\frac{dz_{1}}{dt}=z_{1}+z_{2}+4(z_{1}^{3}+z_{2}^{3}),\quad\frac{dz_{2}}{dt}=-z_{1}+z_{2}+4z_{2}(2z_{1}^{2}-z_{1}z_{2}+z_{2}^{2}).$$ (17) The linear approximation for (17) has an unstable focus at the origin; indeed, the eigenvalues are $\lambda_{1,2}=1\pm i$. By Lyapunov's theorem, the nonlinear systems (16) and (17) exhibit analogous behaviour near the origin. Next, we use the substitution $z_{1}=r\cos\varphi$, $z_{2}=r\sin\varphi$ in (17), whence after some transformations we obtain the triangular system $$\displaystyle\dot{r}$$ $$\displaystyle=r+4r^{3},$$ (18a) $$\displaystyle\dot{\varphi}$$ $$\displaystyle=-1+4r^{2}\sin\varphi\,(\cos\varphi-\sin\varphi).$$ (18b) For any $r_{0}>0$ the Cauchy problem for Eq. (18a) has the solution $$r(t,r_{0})=\frac{r_{0}\exp(t)}{\sqrt{1+4r_{0}^{2}\bigl{(}1-\exp(2t)\bigr{)}}}$$ (19) whenever $r(0,r_{0})=r_{0}$. Assume that $r=r_{0}>0$ at $t=0$ for a solution $\{r(t)$, $\varphi(t)\}$. From (19) it follows that any solution of system (18) reaches infinity on the plane $0xy$ at the finite time $t^{*}=\smash{\frac{1}{2}\ln(1+\frac{1}{4r_{0}^{2}})}$, and we see that $t^{*}\to\infty$ if $r_{0}\to+0$. Obviously, we have $\dot{\varphi}\to-1$ for $r\to+0$, that is, the trajectories in a small neighbourhood of the origin are spirals that unroll clockwise. These arguments describe the behaviour of the trajectories of systems (16) and (17). Proposition 3. 
The trajectories ${z_{1}}(t)$, $z_{2}(t)$ are transversal to any circle centered at the origin. Therefore the phase curves $\{x(t)$, $y(t)\}$ for Eq. (9) are transversal to any centrally symmetric ellipse that corresponds to a circle on the plane $0z_{1}z_{2}$ (the major axes of these ellipses are located along the diagonal $y=x$). Hence we deduce that system (16) has no cycles and equation (9) has no periodic solutions except zero, which corresponds to the stationary point on the phase plane. The unique stationary point $(0,0)$ of system (16) is an unstable focus. The spiral-type phase curves for Eq. (9) unroll clockwise around this point. Any trajectory of system (16) that does not coincide with the origin reaches infinity in finite time. Remark 3. The equations in system (16) do not change under the involution $(x,y)$ $\leftrightarrow(-x,-y)$. Hence for any trajectory of systems (16–18) its image under the central symmetry is also a trajectory. Further on, we investigate properties of the phase curves $\{x(t)$, $y(t)\}$ up to this symmetry (or, equivalently, up to the transformation $\varphi\mapsto\varphi+\pi$ of the angular coordinate $\varphi$). 2.1. In this subsection we study the behaviour of solutions of system (18) as $r\to+\infty$. Let us divide Eq. (18b) by Eq. (18a). Thus we obtain $$\frac{d\varphi}{dr}=\frac{-1+4r^{2}\sin\varphi(\cos\varphi-\sin\varphi)}{r+4r^{3}}.$$ (20) Substituting $\tau=4r^{2}$ in (20), we get the equation $$\frac{d\varphi}{d\tau}=\frac{-1+\tau\sin\varphi(\cos\varphi-\sin\varphi)}{2\tau(1+\tau)}.$$ (21) After some trigonometric simplifications in the r.h.s. of Eq. (21) we arrive at $$\frac{d\varphi}{d\tau}=\frac{(\sqrt{2}-1)\tau-2-2\sqrt{2}\tau\sin^{2}(\varphi-\frac{\pi}{8})}{4\tau(\tau+1)}.$$ (21$${}^{\prime}$$) Next, divide both sides in Eq. (21${}^{\prime}$) by $\cos^{2}(\varphi-\frac{\pi}{8})$ assuming that $\varphi\neq-\frac{3\pi}{8}+\pi n$, $n\in{{\mathbb{Z}}}$. Also, we put $u=\tan(\varphi-\frac{\pi}{8})$ by definition. 
Hence we finally obtain the Riccati equation $$\displaystyle\frac{du}{d\tau}=b(\tau)-a(\tau)u^{2},$$ (22a) where $$\displaystyle a(\tau)=\frac{(\sqrt{2}+1)\tau+2}{4\tau(\tau+1)},\quad b(\tau)=\frac{(\sqrt{2}-1)\tau-2}{4\tau(\tau+1)}.$$ (22b) The Riccati equation (22) can be further transformed [10] to the Schrödinger equation with the potential $b/a$ and zero energy. 2.2. The Ważewski theorem First let us recall some definitions [3]. Definition 1. Let $\Omega^{0}\subset\Omega$ be an open subset of a domain $\Omega$. Consider the Cauchy problem $$\displaystyle y^{\prime}=f(t,y),$$ (23a) $$\displaystyle y(t_{0})=y_{0}.$$ (23b) Suppose $y=y(t)$ is a solution of (23). A point $(t_{0},y_{0})\in\Omega\cap\partial\Omega^{0}$ is called an exit point with respect to equation (23a) and the domain $\Omega^{0}$ if for any solution $y(t)$ satisfying (23b) there is a constant $\varepsilon>0$ such that $(t,y(t))\in\Omega^{0}$ for all $t$ in the interval $t_{0}-\varepsilon\leq t<t_{0}$. An exit point $(t_{0},y_{0})$ for the domain $\Omega^{0}$ is a strict exit point if $(t,y(t))\notin\overline{\Omega^{0}}$ whenever $t_{0}<t\leq t_{0}+\varepsilon$ for some $\varepsilon>0$. We denote by $\Omega^{0}_{e}$ the set of all exit points for the domain $\Omega^{0}$, and let $\Omega^{0}_{se}$ denote the set of all strict exit points. Now we analyze the behaviour of solutions of Eq. (22) as $\tau\to+\infty$. First we note that $\tan\frac{\pi}{8}=\sqrt{2}-1$; hence we put $u_{1}=\sqrt{2}-1$, $u_{2}=1-\sqrt{2}$. The above notation corresponds to the angles $\varphi_{1}=\frac{\pi}{4}$ and $\varphi_{2}=0$, respectively. Also, we set $\tau_{0}=2(\sqrt{2}+1)+\delta_{0}$, where $\delta_{0}>0$ is arbitrary. 
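The passage from (21) to the Riccati equation (22) is a routine chain-rule computation; it can be spot-checked numerically, together with the identity $\tan\frac{\pi}{8}=\sqrt{2}-1$ and the sign change of $b(\tau)$ at $\tau=2(\sqrt{2}+1)$. An illustrative sketch (the sample values of $\tau$ and $\varphi$ are arbitrary):

```python
import math

SQ2 = math.sqrt(2)

def dphi_dtau(tau, phi):
    # right-hand side of equation (21)
    return (-1 + tau*math.sin(phi)*(math.cos(phi) - math.sin(phi)))/(2*tau*(1 + tau))

def riccati_rhs(tau, u):
    # b(tau) - a(tau) u^2 from (22)
    a = ((SQ2 + 1)*tau + 2)/(4*tau*(tau + 1))
    b = ((SQ2 - 1)*tau - 2)/(4*tau*(tau + 1))
    return b - a*u*u

# u1 = tan(pi/8) = sqrt(2) - 1, and b vanishes at tau = 2(sqrt(2)+1)
assert abs(math.tan(math.pi/8) - (SQ2 - 1)) < 1e-15
assert abs(riccati_rhs(2*(SQ2 + 1), 0.0)) < 1e-15

for tau, phi in [(10.0, 0.3), (25.0, -0.7), (100.0, 1.0)]:
    u = math.tan(phi - math.pi/8)
    # chain rule: du/dtau = sec^2(phi - pi/8) * dphi/dtau
    lhs = dphi_dtau(tau, phi)/math.cos(phi - math.pi/8)**2
    assert abs(lhs - riccati_rhs(tau, u)) < 1e-12
print('substitution u = tan(phi - pi/8) turns (21) into the Riccati equation (22)')
```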
Further, we introduce two closed domains $D_{1}$ and $D_{2}$ in the right half-plane $\tau>0$ of the coordinate plane $(\tau;u)$: we let $$\displaystyle D_{1}$$ $$\displaystyle=\{(\tau;u)|\tau\geq\tau_{0},\ 0\leq u\leq u_{1}\},$$ (24) $$\displaystyle D_{2}$$ $$\displaystyle=\{(\tau;u)|\tau\geq\tau_{0},\ u_{2}\leq u\leq 0\}.$$ Next, we note that the equality $\frac{du}{d\tau}=0$ is valid on the curves $u(\tau)=\pm\sqrt{{b(\tau)}\bigr{/}{a(\tau)}}$. Therefore we define the third domain $D_{3}\subset D_{1}\cup D_{2}$ through $$D_{3}=\left\{(\tau,u)|\tau>2(\sqrt{2}+1),\ -\sqrt{{b(\tau)}\bigr{/}{a(\tau)}}<u<\sqrt{{b(\tau)}\bigr{/}{a(\tau)}}\right\}.$$ It is easy to check that the inequality $\frac{du}{d\tau}>0$ holds in the domain $D_{3}$. By definition, put $f(u,\tau)=b(\tau)-a(\tau)u^{2}$. Let us describe the direction field for Eq. (22) on the lines $u=u_{1}$, $u=0$, and $u=u_{2}$. We have $f(u_{1},\tau)=f(u_{2},\tau)=-\frac{1+(\sqrt{2}-1)^{2}}{2\tau(\tau+1)}<0$ whenever $\tau\geq\tau_{0}$, and we also have $f(0,\tau)=b(\tau)>0$ under the same assumption. This argument shows that all points on the lines $u=0$ and $u=u_{1}$ are strict entry points with respect to $D_{1}$. Simultaneously, the lines $u=0$ and $u=u_{2}$ are composed of strict exit points w.r.t. the closed domain $D_{2}$. The Ważewski theorem and Example 1 below are borrowed from [3]. Using them, we prove the existence of solutions of Eq. (22) that do not leave the domains $D_{1}$ and $D_{2}$. Theorem 4 ([3]). Let $f(t,y)$ be a continuous function on an open set $\Omega$ of points $(t$, $y)$, and assume that solutions of system (23a) are uniquely defined by the initial condition (23b). Also, let $\Omega^{0}\subset\Omega$ be an open subset such that $\Omega^{0}_{e}=\Omega^{0}_{se}$. Further let $S\subset\Omega^{0}$ be a nonempty subset such that • the intersection $S\cap\Omega^{0}_{e}$ is a retract of $\Omega^{0}_{e}$, but • the intersection $S\cap\Omega^{0}_{e}$ is not a retract of $S$. 
Then there is at least one point $(t_{0},y_{0})\in S\cap\Omega^{0}$ such that the graph of a solution $y(t)$ of the Cauchy problem (23) is contained in $\Omega^{0}$ on its maximal right interval of definition. Example 1 ([3]). Suppose that $y$ is real and the function $f(t,y)$ in system (23a) is continuous on the set $\Omega$ which coincides with the whole plane $(t,y)$. Let $\Omega^{0}$ be the strip $|y|<b,\ -\infty<t<\infty$. The boundary of $\Omega^{0}$ is contained in $\Omega$ and consists of the two lines $y=\pm b$. Assume $f(t,b)>0$ and $f(t,-b)<0$ such that $\Omega^{0}_{e}=\Omega^{0}_{se}=\partial\Omega^{0}\cap\Omega$. Next, let $S$ be the segment $S=\{(t,y)\mid t=0,\ |y|\leq b\}$. Then $S\cap\Omega^{0}_{e}$ consists of the two points $(0,\pm b)$; the intersection is a retract of the set $\Omega^{0}_{e}$ but is not a retract of $S$. Theorem 4 yields the existence of a point $(0,y_{0})$, $|y_{0}|<b$, such that there is a solution of the Cauchy problem for system (23a) with the initial condition $y(0)=y_{0}$. This solution satisfies the inequality $|y(t)|<b$ for all $t\geq 0$. From Example 1 that illustrates Theorem 4 we obtain Corollary 5. There is a solution $u=\psi_{1}(\tau)$ of the Riccati equation (22) that does not leave the domain $D_{1}$, which is defined in (24), for all $\tau\geq\tau_{0}$. Analogously, there is a solution $u=\psi_{2}(\tau)$ in $D_{2}$ that does not leave $D_{2}$. Remark 4. The Ważewski theorem guarantees the existence of a solution $\psi_{1}$ in $D_{1}$. We claim that there are infinitely many solutions of this class in our case. The asymptotic expansions of these solutions as $r\to\infty$ are specified in (32). The convergence of the expansions can be rigorously proved, see Remark 5 on p. 5. Also, we claim that the solution $\psi_{2}$ in the domain $D_{2}$ is unique and unstable. Now we calculate the limits of solutions of the Riccati equation (22) as $\tau\to\infty$. Lemma 6. Let $u(\tau)$ be a solution of Eq. 
(22) which is greater than $u=\psi_{1}(\tau)$ for all $\tau\geq\tau_{0}$. Then we have $$\lim_{\tau\to\infty}(u(\tau)-\psi_{1}(\tau))=0.$$ (25) Proof. Assume the converse, $\inf\limits_{\tau\geq\tau_{0}}(u(\tau)-\psi_{1}(\tau))=\Delta_{0}$, where $\Delta_{0}>0$. Then the difference $w(\tau)=u(\tau)-\psi_{1}(\tau)$ of two solutions for Eq. (22) satisfies the equation $$\frac{dw}{d\tau}=-a(\tau)\bigl{(}u(\tau)+\psi_{1}(\tau)\bigr{)}\cdot w.$$ (26) Integrating Eq. (26), we obtain $$w(\tau,\tau_{0})=w(\tau_{0})\cdot\exp\Bigl{(}-\int_{\tau_{0}}^{\tau}a(s)(u(s)+\psi_{1}(s))\,ds\Bigr{)}.$$ (27) Further recall that $$u(s)+\psi_{1}(s)\geq u(s)-\psi_{1}(s)\geq\Delta_{0}$$ whenever $s\geq\tau_{0}$ and $a(s)>0$. Therefore, $$\displaystyle-\int_{\tau_{0}}^{\tau}a(s)(u(s)+\psi_{1}(s))ds\leq-\Delta_{0}\int_{\tau_{0}}^{\tau}a(s)ds=-\frac{\Delta_{0}}{4}\left((\sqrt{2}-1)\ln\frac{\tau+1}{\tau_{0}+1}+2\ln\frac{\tau}{\tau_{0}}\right).$$ Consequently, we have $\lim\limits_{\tau\to+\infty}w(\tau,\tau_{0})=0$. This conclusion contradicts the assumption ${\Delta_{0}>0}$. ∎ Similarly we prove that a solution $u(\tau)$ tends to $u=\psi_{1}(\tau)$ as $\tau\to+\infty$ if $u(\tau)$ enters the domain $D_{1}$ and there it is located under the graph $u=\psi_{1}(\tau)$. Lemma 7. The following equality holds: $$\lim_{\tau\to+\infty}\psi_{1}(\tau)=\lim_{\tau\to+\infty}\sqrt{{b(\tau)}\bigr{/}{a(\tau)}}=\sqrt{2}-1=u_{1}.$$ Proof. Suppose the graph of $\psi_{1}(\tau)$ lies in $D_{1}\cap D_{3}$ for all $\tau>\tau_{0}$. Then the solution $\psi_{1}(\tau)$ grows and is bounded. Therefore there is the limit $$\lim_{\tau\to+\infty}\psi_{1}(\tau)=d_{0}.$$ Next, let $u=u(\tau)$ be a solution that enters the domain $D_{3}\cap D_{1}$ through the line $u=u_{1}$ at a large $\tau_{1}$ and remains in it for all $\tau>\tau_{1}$. If $d_{0}<\sqrt{2}-1$, then the limit of the solution $u(\tau)$ is $d_{1}>d_{0}$, but the limits $d_{0}$ and $d_{1}$ must coincide by (25). Consequently, $d_{0}=d_{1}=\sqrt{2}-1$. ∎ Lemma 8. 
The solution $u=\psi_{2}(\tau)$ remains in the domain $D_{2}\setminus D_{3}$ for all $\tau\geq\tau_{0}$, and its limit at infinity is $$\lim_{\tau\to+\infty}\psi_{2}(\tau)=1-\sqrt{2}.$$ Proof. If the solution $\psi_{2}(\tau)$ enters the domain $D_{2}\cap D_{3}$ at some $\tau_{1}>\tau_{0}$ and remains there, then it grows and is bounded. Consequently, there is the limit $\lim\limits_{\tau\to+\infty}\psi_{2}(\tau)=d_{2}$, where $d_{2}\in(1-\sqrt{2};0]$. Hence for a large $\tau_{2}$ and $\tau\geq\tau_{2}$ we obtain $$\psi_{1}(\tau)+\psi_{2}(\tau)\geq\frac{\sqrt{2}-1+d_{2}}{2}.$$ By definition, put $w(\tau)=\psi_{1}(\tau)-\psi_{2}(\tau)$. Then from Eq. (22) it follows that $$\frac{dw}{d\tau}=-a(\tau)(\psi_{1}(\tau)+\psi_{2}(\tau))w,$$ whence we deduce $$\displaystyle w(\tau)=w(\tau_{2})\cdot\exp\left(-\int_{\tau_{2}}^{\tau}a(s)(\psi_{1}(s)+\psi_{2}(s))ds\right)\leq\\ \displaystyle\leq w(\tau_{2})\exp\left(-\frac{\sqrt{2}-1+d_{2}}{2}\cdot\int_{\tau_{2}}^{\tau}a(s)ds\right).$$ (28) Yet we see that the r.h.s. in (28) tends to $0$ as $\tau\to+\infty$. Therefore the l.h.s. in (28) must also tend to zero, which is impossible. Hence the graph $\psi_{2}(\tau)$ is contained in $D_{2}\setminus D_{3}$ for large $\tau$, that is, the solution $\psi_{2}$ decreases and is bounded from below. This argument shows that $$\lim_{\tau\to+\infty}\psi_{2}(\tau)=\lim_{\tau\to+\infty}\left(-\sqrt{{b(\tau)}\bigr{/}{a(\tau)}}\right)=-(\sqrt{2}-1)=1-\sqrt{2}.$$ This completes the proof. ∎ Analogously, suppose the graph of a solution enters the domain $D_{3}$ and is located above the graph $u=\psi_{2}(\tau)$. Then it tends to the solution $u=\psi_{1}(\tau)$. The proof is straightforward. A solution which is less than ${u=\psi_{2}(\tau)}$ reaches $-\infty$ in finite time. Using Eqs. (26) and (27), it can be proved that the solution $u=\psi_{2}(\tau)$ in the domain $D_{2}\setminus D_{3}$ is unique. The proof is by reductio ad absurdum. 2.3. 
Asymptotic expansions of the solutions $\psi(\tau)$ In this subsection we use the method of undetermined coefficients and obtain the asymptotic expansion in $\tfrac{1}{\tau}$ for the solution $\psi_{1}^{*}(\tau)$ of Eq. (22) that tends from above to $u_{1}=\tan\frac{\pi}{8}$ as $\tau=4r^{2}\to+\infty$. Also, we get the expansion for $\psi_{2}(\tau)$ that tends to $u_{2}=-\tan\frac{\pi}{8}$ at infinity. We emphasize that the expansion for $\psi_{1}^{*}$ provides a solution different from $\psi_{1}(\tau)$, which exists by Corollary 5. Indeed, the new function $\psi_{1}^{*}$ tends to the limit $u_{1}=\sqrt{2}-1$ monotonically decreasing. Proposition 9. The solutions $\psi^{*}_{1}(\tau)$ and $\psi_{2}(\tau)$ admit the following asymptotic expansions: $$\displaystyle\psi^{*}_{1}(\tau)$$ $$\displaystyle=\sqrt{2}-1+\frac{2\sqrt{2}(\sqrt{2}-1)}{\tau}+O\bigl{(}\frac{1}{\tau^{2}}\bigr{)},$$ (29a) $$\displaystyle\psi_{2}(\tau)$$ $$\displaystyle=1-\sqrt{2}+\frac{2\sqrt{2}(\sqrt{2}-1)}{3\tau}+O\bigl{(}\frac{1}{\tau^{2}}\bigr{)}.$$ (29b) The inequalities $\psi^{*}_{1}(\tau)>\sqrt{2}-1$ and $\psi_{2}(\tau)>1-\sqrt{2}$ hold for large $\tau$. Remark 5. Using the method of majorant series [8, 9], we prove that bounded solutions of the Riccati equation (22) correspond to expansions (29). These solutions are real analytic in a neighbourhood of infinity, and the radius of this neighbourhood can be estimated. 2.4. The general case: $u=\psi(r)$. Now we construct the asymptotic expansions for all solutions of (22) that tend to $\pm(\sqrt{2}-1)$. This time we use the expansions in $r$ but not in $\tau=4r^{2}$. Let us re-write the Riccati equation (22) in the form $$\frac{du}{dr}=\frac{2(\sqrt{2}-1)r^{2}-1}{r(4r^{2}+1)}-\frac{2(\sqrt{2}+1)r^{2}+1}{r(4r^{2}+1)}\cdot u^{2}.$$ (30) Its right-hand side is real analytic in $\frac{1}{r}$ if $r>\frac{1}{2}$. Suppose $r\geq 1$. 
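Before passing to the expansions in $r$, the truncations in (29) can be spot-checked against Eq. (22): with the stated $1/\tau$ coefficients, the defect of each truncated series is $O(1/\tau^{3})$, one order better than for a generic coefficient. An illustrative sympy sketch (not part of the paper):

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
a = ((sp.sqrt(2) + 1)*tau + 2)/(4*tau*(tau + 1))
b = ((sp.sqrt(2) - 1)*tau - 2)/(4*tau*(tau + 1))

# truncations of the expansions (29a) and (29b)
psi1 = sp.sqrt(2) - 1 + 2*sp.sqrt(2)*(sp.sqrt(2) - 1)/tau
psi2 = 1 - sp.sqrt(2) + 2*sp.sqrt(2)*(sp.sqrt(2) - 1)/(3*tau)

for u in (psi1, psi2):
    residual = sp.diff(u, tau) - (b - a*u**2)
    # the 1/tau coefficients of (29) cancel the O(1/tau^2) part of the defect
    assert sp.limit(residual*tau**2, tau, sp.oo) == 0
print('expansions (29) verified through order 1/tau')
```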
Let us use the expansion $$u(r)=w_{0}+\frac{w_{1}}{r}+\frac{w_{2}}{r^{2}}+O\bigl{(}\frac{1}{r^{3}}\bigr{)}.$$ (31) Substituting (31) for $u$ in (30), we get $$\displaystyle-\frac{w_{1}}{r^{2}}-\frac{2w_{2}}{r^{3}}-\ldots=\frac{\sqrt{2}-1}{2r}\cdot\bigl{(}1-\frac{1}{4r^{2}}+\frac{1}{16r^{4}}-\ldots\bigr{)}-\\ \displaystyle{}-\frac{1}{4r^{3}}\cdot\bigl{(}1-\frac{1}{4r^{2}}+\frac{1}{16r^{4}}-\ldots\bigr{)}-\bigl{(}w_{0}^{2}+\frac{2w_{0}w_{1}}{r}+\frac{w_{1}^{2}+2w_{0}w_{2}}{r^{2}}+\ldots\bigr{)}\cdot\\ \displaystyle{}\cdot\Bigl{(}\frac{\sqrt{2}+1}{2r}\cdot\bigl{(}1-\frac{1}{4r^{2}}+\frac{1}{16r^{4}}-\ldots\bigr{)}+\frac{1}{4r^{3}}\cdot\bigl{(}1-\frac{1}{4r^{2}}+\frac{1}{16r^{4}}-\ldots\bigr{)}\Bigr{)}.$$ Equating the coefficients of $1/r$ and $1/r^{2}$, we obtain $$\sqrt{2}-1-(\sqrt{2}+1)\cdot w_{0}^{2}=0,\quad w_{1}=(\sqrt{2}+1)\cdot w_{0}w_{1}.$$ The former equation has two roots, $w_{0,1}=\sqrt{2}-1$ and $w_{0,2}=1-\sqrt{2}$. First we let $w_{0}=w_{0,1}$; then the root of the second equation is an arbitrary real number! In this case, we use the notation $w_{1}=C$. Secondly, suppose $w_{0}=w_{0,2}$; then a unique root of the second equation is $w_{1}=0$. Now we equate the coefficients of $\frac{1}{r^{3}}$, whence we get $$-2w_{2}=-\frac{\sqrt{2}-1}{8}-\frac{1}{4}-\frac{w_{0}^{2}}{4}-\frac{\sqrt{2}+1}{2}\cdot(w_{1}^{2}+2w_{0}w_{2})+\frac{(\sqrt{2}+1)w_{0}^{2}}{8}.$$ If $w_{0}=\sqrt{2}-1$, then $w_{2}=\frac{\sqrt{2}(\sqrt{2}-1)}{2}+\frac{\sqrt{2}+1}{2}\cdot C^{2}$. Alternatively, if $w_{0}=1-\sqrt{2}$, then $w_{2}=\frac{\sqrt{2}(\sqrt{2}-1)}{6}$. This implies that for $C=0$ we obtain the two asymptotic expansions for the solutions $\psi_{1}^{*}(\tau)$ and $\psi_{2}(\tau)$, which depend on even powers of $r$ and which were previously found in (29). Now we see that all other expansions involve $r=\frac{\sqrt{\tau}}{2}$ explicitly. By definition, put $D=\tfrac{1}{2}\bigl{[}\sqrt{2}(\sqrt{2}-1)+(\sqrt{2}+1)\cdot C^{2}\bigr{]}$. 
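The value of $D$ (and the freedom in $C$) can be cross-checked symbolically: with $w_{0}=\sqrt{2}-1$, $w_{1}=C$ arbitrary and $w_{2}=D$, the residual of the truncated series in Eq. (30) is $O(1/r^{4})$ for every real $C$. An illustrative sympy sketch (not part of the paper):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
C = sp.symbols('C', real=True)
sqrt2 = sp.sqrt(2)

D = (sqrt2*(sqrt2 - 1) + (sqrt2 + 1)*C**2)/2
u = sqrt2 - 1 + C/r + D/r**2  # truncated expansion with arbitrary C

# right-hand side of equation (30)
rhs = ((2*(sqrt2 - 1)*r**2 - 1) - (2*(sqrt2 + 1)*r**2 + 1)*u**2)/(r*(4*r**2 + 1))
residual = sp.diff(u, r) - rhs
# the orders 1/r, 1/r^2 and 1/r^3 all cancel for every real C
assert sp.limit(residual*r**3, r, sp.oo) == 0
print('coefficients w0, w1 = C, w2 verified against (30)')
```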
Then for any $u(r)$ such that $\lim_{r\to+\infty}u(r)=u_{1}$ we finally have $$u(r,C)=\sqrt{2}-1+\frac{C}{r}+\frac{D}{r^{2}}+O\bigl{(}\frac{1}{r^{3}}\bigr{)},\qquad C\in{\mathbb{R}}.$$ (32) An analogue of Remark 5 is also true for (32): using the method of majorant series [8], one readily proves the convergence of expansion (32) for large $r$ to analytic solutions of equation (30), and it is also possible to estimate the radius of convergence. Finally, we formulate the assertion about the behaviour of solutions of the Riccati equation (22). Theorem 10. If $\tau\in(0,2(\sqrt{2}+1))$, then all solutions of Eq. (22) decrease monotonically. Suppose $\tau\geq\tau_{0}$. Then there is the unstable solution $\psi_{2}(\tau)$ that tends from above to the limit $u_{2}=-\tan\frac{\pi}{8}$ as $\tau\to\infty$; the asymptotic expansion for $\psi_{2}$ is given in (29b). In the domain $D_{1}$, see (24), there are infinitely many growing solutions $\psi_{1}(r)$ that tend to $u_{1}=\tan\frac{\pi}{8}$ as $r\to\infty$. These solutions correspond to $C<0$ in expansion (32). All solutions which are located between $\psi_{2}(\tau)$ and $\psi_{1}(r)$ tend to $u_{1}$ as $\tau=4r^{2}\to\infty$. All solutions which are located under $\psi_{2}$ are repelled from it; they decrease and reach $-\infty$ in finite time. The solution $\psi_{1}^{*}(\tau)$, which is located above the domain $D_{1}$, decreases and tends to $u_{1}$ as $\tau\to\infty$; its expansion is described by (29a) (or by formula (32) with $C=0$). There are infinitely many other solutions $\psi_{1}(r)$ which are greater than $\psi_{1}^{*}(\tau)$. These solutions also tend to $u_{1}$ as $\tau\to\infty$, and their expansions at infinity are given through (32) with $C>0$. 3. The phase plane $0xy$ In this section we describe the behaviour of phase curves for Eq. (10). Inverting the transformations introduced on p. 2 and preserving the subscripts of respective functions, we pass from solutions $\psi(\tau)$ of Eq. 
(22) to solutions $\{x(t)$, $y(t)\}$ of system (16), and next we obtain solutions $y(x)$ of Eq. (10). For example, the solution $\psi_{1}^{*}(\tau)$ is transformed to $y_{1}^{*}(x)$, and the separatrix $y_{2}$ is assigned to the unstable solution $\psi_{2}$. The exposition follows the time $t$ in (16): first we consider a neighbourhood of the origin, next we analyze the behaviour of the trajectories at a finite distance from the origin, and finally we describe their asymptotic expansions at infinity ($x^{2}+y^{2}\to\infty$). 3.1. From Eq. (18a) it follows that system (16) has an unstable focus at the origin. In a small neighbourhood of this point, all trajectories unroll clockwise. By Proposition 3, each trajectory is transversal to the ellipses centered at the origin (we recall that these ellipses correspond to the circles $r=\mathop{\rm const}\nolimits$ on the plane $0z_{1}z_{2}$); the trajectories reach infinity in finite time. Proposition 11. All extrema of the phase curves for system (16) are located on the straight line $y=x/2$. Proof. Solving the equation $P_{3}(x,y)=(y-x)^{3}+y^{3}+2y-x=0$ with respect to $y$, we obtain the real root $x/2$. The quotient $P_{3}(x,y)/(y-x/2)=2y^{2}-2xy+2x^{2}+2$ has no real roots in $y$ at any $x$. ∎ Consider the phase trajectories $\{x(t)$, $y(t)\}$ of Eq. (10) in a neighbourhood of the axis $0x$. The parts of the trajectories near the points $(x,0)$ describe the graphs of solutions $h(q)$ of Eq. (9) near their extrema. The trajectories are approximated by the circles of radius $\varrho(x)=|x|\cdot\frac{x^{2}+1}{x^{2}+2}$ centered at the points $(x_{0}$, $0)$, where $x_{0}=\frac{|x|}{x^{2}+2}$. Obviously, we have $x_{0}\to 0$ and $\varrho(x)\to|x|$ as $|x|\to\infty$. Remark 6 (On inflection points). 
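The division step in the proof of Proposition 11 is easy to confirm mechanically; the following sympy fragment (an illustration of ours, not part of the paper) checks both the quotient and the absence of real roots:

```python
# Check (illustration only): the factorization used in Proposition 11.
import sympy as sp

x, y = sp.symbols('x y', real=True)
P3 = (y - x)**3 + y**3 + 2*y - x

# y = x/2 is a real root of P3 ...
assert sp.expand(P3.subs(y, x/2)) == 0

# ... and the quotient P3/(y - x/2) equals 2y^2 - 2xy + 2x^2 + 2
quotient = sp.cancel(P3/(y - x/2))
assert sp.expand(quotient - (2*y**2 - 2*x*y + 2*x**2 + 2)) == 0

# As a quadratic in y, the quotient has discriminant -12x^2 - 16 < 0,
# hence no real roots for y at any x.
assert sp.expand(sp.discriminant(quotient, y) + 12*x**2 + 16) == 0
```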
In the first and second quadrants of $0xy$ (and, owing to the central symmetry of the phase portrait, in the third and fourth quadrants, respectively) there is the curve $\mathcal{I}$ that consists of the inflection points of the phase curves. The curve $\mathcal{I}$ is composed of two components $\mathcal{I}_{1,2}$ that join in the first quadrant, making a cusp; the point of their intersection is the nearest (w.r.t. the Euclidean metric on $0z_{1}z_{2}$) inflection point located on the spirals that unroll from the origin. The first component $\mathcal{I}_{1}$ consists of the inflection points where the trajectories start to be repelled from the separatrix $y_{2}(x)$ (the unstable solution $y_{2}$ tends from above to the ray $y=x$, see Proposition 12 below) and turn towards their local maxima on the ray $y=x/2$. At infinity ($x^{2}+y^{2}\to\infty$), the component $\mathcal{I}_{1}$ approaches the diagonal $y=x$ from below. The second component $\mathcal{I}_{2}$ goes to the left from its intersection with $\mathcal{I}_{1}$ and contains the inflection point of the separatrix. Then the curve $\mathcal{I}_{2}$ enters the second quadrant. There, it describes the moments when the phase curves, having crossed the $0x$ axis at $x<0$, are repelled from the separatrix and turn upward, possessing vertical asymptotes at infinity. 3.2. Having reached the inflection point, the trajectories of system (16) approach infinity following one of the three schemes below. The expansions for the solutions $\psi_{1}(\tau)$, $\psi_{1}^{*}(r)$, and $\psi_{2}(\tau)$ of the Riccati equation (22), which were obtained in the previous section, determine the asymptotes for the phase curves of all three types. Proposition 12. The diagonal ${y=x}$ is the asymptote of the separatrix $y_{2}(x)$ in a neighbourhood of infinity; the curve $y=y_{2}(x)$ approaches the diagonal from above. Proof. 
Using expansion (29b) for the solution $\psi_{2}(\tau)$ as $\tau=4r^{2}\to\infty$, and taking into account the transformation from Eq. (16) to Eq. (17), we obtain $$\displaystyle y-x=2z_{2}=2r\sin\varphi=2r\sin\bigl((\varphi-\tfrac{\pi}{8})+\tfrac{\pi}{8}\bigr)=\\ \displaystyle=2r\cos\tfrac{\pi}{8}\cos(\varphi-\tfrac{\pi}{8})\cdot\bigl(\tan(\varphi-\tfrac{\pi}{8})+\tan\tfrac{\pi}{8}\bigr)=2r\cos\tfrac{\pi}{8}\bigl(\psi_{2}(\tau)+\sqrt{2}-1\bigr)\big/\sqrt{1+\psi_{2}^{2}(\tau)}=\\ \displaystyle=2r\cos\tfrac{\pi}{8}\Bigl(\frac{2\sqrt{2}(\sqrt{2}-1)}{12r^{2}}+O\bigl(\frac{1}{r^{4}}\bigr)\Bigr)\big/\sqrt{1+\psi_{2}^{2}(\tau)}\xrightarrow{r\to\infty}+0.$$ This argument concludes the proof. ∎ The separatrix divides the trajectories in the first (and, by the central symmetry, in the third) quadrant into two classes. The curves $y_{1}^{*}$ of the first type are repelled from the separatrix and go vertically upward, not leaving the quadrant. Other trajectories dive under the separatrix and reach their extrema on the ray $y=x/2$. Then they intersect the axis $0x$, whence they either turn down and have a vertical asymptote at $-\infty$ or reach the third quadrant. Then the whole situation is reproduced up to the central symmetry. Now we give a rigorous proof of these properties. Proposition 13. The coordinate axis $x=0$ is the vertical asymptote for the solution $y_{1}^{*}$, which approaches it from the left. Proof. 
Using expansion (29a) and inverting the transformation $0xy\mapsto 0z_{1}z_{2}$, we let $r\to\infty$ and hence obtain $$\displaystyle x=2(z_{1}-z_{2})=2r(\cos\varphi-\sin\varphi)=\\ \displaystyle=-2\sqrt{2}r\sin(\varphi-\tfrac{\pi}{4})=-2\sqrt{2}r\sin\bigl((\varphi-\tfrac{\pi}{8})-\tfrac{\pi}{8}\bigr)=\\ \displaystyle=-2\sqrt{2}\cos\tfrac{\pi}{8}\cdot r\bigl(\tan(\varphi-\tfrac{\pi}{8})-\tan\tfrac{\pi}{8}\bigr)\cos(\varphi-\tfrac{\pi}{8})=\\ \displaystyle=-2\sqrt{2}\cos\tfrac{\pi}{8}\cdot r\bigl(\psi_{1}(\tau)-\sqrt{2}+1\bigr)\big/\sqrt{1+\psi_{1}^{2}(\tau)}=\\ \displaystyle=-2\sqrt{2}\cos\tfrac{\pi}{8}\cdot r\Bigl(\frac{2\sqrt{2}(\sqrt{2}-1)}{4r^{2}}+O\bigl(\frac{1}{r^{4}}\bigr)\Bigr)\big/\sqrt{1+\psi_{1}^{2}(\tau)}\to-0.$$ (33) According to (18a), the function $r$ grows infinitely along the trajectories. Consequently, the function $R(t)=\sqrt{x^{2}(t)+y^{2}(t)}$ also tends to $+\infty$ along the solutions $\{x(t)$, $y(t)\}$. By (33), the coordinate $x$ tends to zero from the left, therefore the axis $x=0$ is the vertical asymptote of the trajectory $y_{1}^{*}(x)$. ∎ Let us analyze the asymptotic behaviour of the trajectories $y_{1}(x)$, which correspond to the solutions $\psi_{1}(r)$ of the Riccati equation (22). Using expansion (32) for an arbitrary solution $\psi_{1}(r)$, we describe the asymptotes on the plane $0xy$. We recall that the choice $C>0$ in formula (32) corresponds to solutions located above the graph $\psi_{1}^{*}(\tau)$. The expansion of the function $\psi_{1}^{*}$ itself is given at $C=0$, and the asymptote of its image on the plane $0xy$ was obtained in Proposition 13. If $C<0$, then the solutions $\psi_{1}(r)$ are located between $\psi_{1}^{*}(\tau)$ and $\psi_{2}(\tau)$. Consider the phase plane $0xy$. The solutions of the class $y_{1}$ grow infinitely in the second quadrant if $C>0$. 
Suppose $C<0$; then the representatives of this class go to infinity between the axis $0y$ and the diagonal $y=x$ in the first quadrant; recall that the diagonal is the asymptote of the separatrix $y_{2}$. Using an appropriate modification of the proof of Proposition 13, we obtain the estimate $$\displaystyle x=-\frac{2\sqrt{2}r\cos\frac{\pi}{8}}{\sqrt{1+u^{2}(r,C)}}\bigl(u(r,C)-\tan\tfrac{\pi}{8}\bigr)=-\frac{2\sqrt{2}r\cos\frac{\pi}{8}}{\sqrt{1+u^{2}(r,C)}}\left(\frac{C}{r}+\frac{D}{r^{2}}+O\bigl(\frac{1}{r^{3}}\bigr)\right),$$ where $D=\tfrac{1}{2}\,[2-\sqrt{2}+(\sqrt{2}+1)\cdot C^{2}]$ and expansion (32) is substituted for $u(r$, $C)$. Hence we finally obtain Proposition 14. All solutions of the class $y_{1}$ have vertical asymptotes, $$\lim_{r\to\infty}x(r,C)=-2\sqrt{2}C\cos^{2}\tfrac{\pi}{8}=-(\sqrt{2}+1)C.$$ The graphs of solutions contained in the upper half-plane approach these asymptotes from the left. Remark 7. Without loss of generality, let $C<0$. Then the vertical straight line $x=-(\sqrt{2}+1)\cdot C$, which is located in the right half-plane, is the vertical asymptote for two trajectories of system (16). If a solution $y_{1}$ is greater than the separatrix $y_{2}$, then it tends to the asymptote from the left. Another trajectory dives under the separatrix and approaches the same asymptote from the right in the fourth quadrant. The constants $C>0$ describe centrally symmetric curves in the third and second quadrants, respectively. Remark 8. From Propositions 12–14 it follows that for large $R$ solutions of Eq. (10) are approximated by the exact solutions of limit equation (13). The constants $C$ in (32) and $\delta$ in (14) are related by the formula $(\sqrt{2}+1)\cdot C=\pm 1/\sqrt{2\delta}$. 4. The spiral minimal surfaces By construction, solutions $h(q)$ of Eq. 
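The numerical constant in Proposition 14 rests on the identity $2\sqrt{2}\cos^{2}\frac{\pi}{8}=\sqrt{2}+1$ together with $\tan\frac{\pi}{8}=\sqrt{2}-1$; a quick numerical check (ours, for illustration) confirms the prefactor of the estimate:

```python
# Numerical check (illustration only) of the constants in Proposition 14.
import math

s = math.sqrt(2)

# tan(pi/8) = sqrt(2) - 1, the limit value u_1 of expansion (32)
u1 = math.tan(math.pi/8)
assert abs(u1 - (s - 1)) < 1e-12

# 2*sqrt(2)*cos^2(pi/8) = sqrt(2) + 1
assert abs(2*s*math.cos(math.pi/8)**2 - (s + 1)) < 1e-12

# The prefactor 2*sqrt(2)*cos(pi/8)/sqrt(1+u1^2) in the estimate for x
# also equals sqrt(2)+1, whence x(r, C) -> -(sqrt(2)+1)*C as r -> infinity.
pref = 2*s*math.cos(math.pi/8)/math.sqrt(1 + u1**2)
assert abs(pref - (s + 1)) < 1e-12
```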
(9) determine the profiles $\Pi=\{z=h(q)\mid\rho=1$, $q\in{\mathbb{R}}\}$ of the minimal surfaces $\varSigma\subset{\mathbb{E}}^{3}$ on the cylinder $Q=\{\rho=1\}$; we thus have $\Pi=\varSigma\cap Q$. We further recall that a minimal surface in ${\mathbb{E}}^{3}$ is extended from the profile $\Pi$ in agreement with Corollary 2. Yet not every surface in ${\mathbb{E}}^{3}$ assigned to a solution of (9) is minimal. Let us study this aspect in more detail. 4.1. The selection rule In this subsection we formulate a rule that determines which components of the phase trajectories for Eq. (9) provide minimal surfaces. Using this rule, we conclude that the graphs $\{z=\chi(\xi$, $\eta)\}$ of solutions $\chi=\rho\cdot h(\theta-\ln\rho)$ attach nontrivially to each other. Let us recall a well-known property of the minimal surfaces [5]. A smooth two-dimensional surface in space is minimal iff at any point its mean curvature $H$ vanishes. Hence each point of the surface is a saddle (of course, we assume that the surface at hand is not a plane). Further on, we denote by $\frac{\partial}{\partial\theta}$ the respective coordinate vector attached to a point of the cylinder $Q$. We see that the logarithmic spirals $q=\theta-\ln\rho=\mathop{\rm const}\nolimits$ are convex, and at any point of the surface the nonzero curvature vector has a positive projection onto $\frac{\partial}{\partial\theta}$. Therefore we require that the curvature vectors at all points of the profiles $\Pi=\{z=h(q)$, $\rho=1\}\subset Q$ have negative projections onto $\frac{\partial}{\partial\theta}$. The above condition upon solutions $h(q)$ of Eq. (9) can be reformulated as follows: $h^{\prime}\cdot h^{\prime\prime}>0$. This inequality is the rule for selecting the components of the phase curves on the plane $0xy$. Namely, consider a neighbourhood $\Omega_{\Gamma}$ of a point $(x,y)$ on a phase curve $\Gamma$. 
The minimal surface $\varSigma_{\Gamma}$ is assigned to this component of the curve if the coordinates $x$ and $y$ satisfy the inequality $$y^{3}\cdot(y^{3}-(x-y)^{3}+2y-x)>0.$$ Hence we obtain $y>x/2$ for $y>0$ and $y<x/2$ whenever $y<0$, see the proof of Proposition 11. If not all points of a phase curve satisfy this inequality (in fact, while it stays near the focus, each curve has to pass through the domains where the inequality fails), then several components of the profiles $\Pi$ and several minimal surfaces $\varSigma$ are assigned to this curve. A continuous motion along the phase trajectory $\Gamma$ thus corresponds to different (possibly distant from each other) components of the profile $\Pi$. In Sec. 3 we described two classes of trajectories on the plane $0xy$: the curves $y_{1}(x)$ with vertical asymptotes and the separatrix $y_{2}(x)$ with the diagonal asymptote $y=x$. Clearly, a small neighbourhood of the focus at the origin corresponds to oscillations of any profile, and some parts of the oscillating curves $z=h(q)$ are prohibited by the selection rule. At infinity, the solutions of the first type describe the finite-size fragments of the profiles $\Pi_{1}=\{z=h(q)\}$; these fragments have a vertical tangent at a finite value of the function $h$. Conversely, when the separatrix approaches the diagonal asymptote, it defines the exponentially growing profile $\Pi_{2}$ that turns infinitely many times around the cylinder $Q$. Example 2 ([4]). Let us choose the constant $C<0$ such that $y_{x}^{*}(-(\sqrt{2}+1)C)=0$. 
Consider the following two components of the phase curves $\Gamma=\{y=y_{1}(x$, $C)\}$: $$\displaystyle\Gamma_{-\infty}$$ $$\displaystyle=\{(x,y)\mid-(\sqrt{2}+1)C\geq x(t)>0,\ 0\geq y=y_{1}^{*}(x)\xrightarrow{x\to+0}-\infty\},$$ $$\displaystyle\Gamma_{+\infty}$$ $$\displaystyle=\{(x,y)\mid 0\leq x(t)<-(\sqrt{2}+1)C,\ y=y_{1}(x)\xrightarrow[x\to-(\sqrt{2}+1)C]{}+\infty\}.$$ Also, let us use the focus $(0,0)$, which corresponds to the trivial solution $h\equiv 0$ and hence amounts to the plane $z=0$ in ${{\mathbb{E}}}^{3}$. Using these curves, we construct the closed profiles $\Pi_{1}(C)\subset Q$, see Fig. 1. [Fig. 1: the closed profile $\Pi$ in the $(q,h)$ plane, with the components $\pm\sigma_{1}$ and $\pm\sigma_{1}^{*}$ joining the points $R$ and $S$ on the axis $h=0$. Fig. 2: the attachment of the components $\sigma_{1}$, $\tilde{\sigma}_{1}$, and $\sigma_{2}$.] By Corollary 2, the points $R$ and $S$ correspond to the logarithmic spirals on the plane $z=0$ in ${\mathbb{E}}^{3}$. 
Shifting the point $R$ and the components $\pm\sigma_{1}$ towards $S$, we make the length $|RS|$ of the profile $\Pi$ comparable with $2\pi$. Attaching the required number of profiles one after another on the cylinder $Q$, we place the edge $R$ of each profile on the spiral $S$ of the previous profile. Hence the resulting minimal surface becomes self-supporting. Example 3. Consider the following parts of the phase curves: a solution $y_{1}(x)$ as it tends to $+\infty$, another solution $\tilde{y}_{1}(x)$ of this class located below the separatrix before the extremum on $y=x/2$, and the separatrix $y_{2}(x)$ itself. Then we obtain the closed profile with finite cross-section in the upper half-plane $h>0$. Indeed, we attach the corresponding curves $\sigma_{1}$, $\tilde{\sigma}_{1}$, and $\sigma_{2}$ as shown in Fig. 2. 4.2. The profiles $\Pi=\{z=h_{1,2}(q)\}$ and their approximations Now we obtain the asymptotic estimates for the profiles $\Pi_{1}$. In this subsection we assume that the absolute value of the derivative $h^{\prime}\equiv\tfrac{dh}{dq}$ (and, possibly, of the function $h$ itself) is large. Let the constant $C$ in (32) be arbitrary and put $$D=\tfrac{1}{2}\bigl[2-\sqrt{2}+C^{2}\cdot(\sqrt{2}+1)\bigr]$$ as before. Again, we use expansion (32) as $r\to\infty$. Then we get the equivalence ($\sim$) $$\displaystyle h=x=2(z_{1}-z_{2})=2r(\cos\varphi-\sin\varphi)=\\ \displaystyle=\frac{2\sqrt{2}\cos\frac{\pi}{8}\cdot r(\sqrt{2}-1-u(r,C))}{\sqrt{1+u^{2}(r,C)}}\sim-(\sqrt{2}+1)\left(C+\frac{D}{r}\right),$$ which holds up to $O(\tfrac{1}{r^{2}})$. The derivative $h^{\prime}$ is expressed through a solution of Eq. 
(22) in the following way: $$\displaystyle h^{\prime}=y=2r\cos\varphi=2r\cos\tfrac{\pi}{8}\cos\bigl(\varphi-\tfrac{\pi}{8}\bigr)\left(1-\tan\tfrac{\pi}{8}\tan\bigl(\varphi-\tfrac{\pi}{8}\bigr)\right)=\\ \displaystyle=\frac{2r\cos\frac{\pi}{8}(1-(\sqrt{2}-1)u(r,C))}{\sqrt{1+u^{2}(r,C)}}.$$ As $r\to\infty$, the above formula is equivalent ($\sim$) to $$\displaystyle 2r\cos^{2}\tfrac{\pi}{8}\left(1-(\sqrt{2}-1)^{2}-(\sqrt{2}-1)\Bigl(\frac{C}{r}+\frac{D}{r^{2}}\Bigr)\right)=\\ \displaystyle=r\Bigl(1+\frac{\sqrt{2}}{2}\Bigr)(2\sqrt{2}-2)-\Bigl(1+\frac{\sqrt{2}}{2}\Bigr)(\sqrt{2}-1)\Bigl(C+\frac{D}{r}\Bigr)=\\ \displaystyle=\sqrt{2}r-\frac{\sqrt{2}}{2}\Bigl(C+\frac{D}{r}\Bigr).$$ This reasoning shows that for large $r$ we have $$h\sim-(\sqrt{2}+1)\left(C+\frac{D}{r}\right),\quad h^{\prime}\sim\sqrt{2}\,r-\frac{\sqrt{2}}{2}\left(C+\frac{D}{r}\right).$$ Eliminating $r$ from these formulas, we obtain the differential equation $$h^{\prime}=\frac{\sqrt{2}}{2}\frac{h}{\sqrt{2}+1}-\frac{\sqrt{2}D}{C+\frac{\displaystyle h}{\displaystyle 1+\sqrt{2}}}.$$ By definition, put $g=\frac{h}{\sqrt{2}+1}$. Then we get $$(\sqrt{2}+1)g^{\prime}=\frac{\sqrt{2}}{2}g-\frac{\sqrt{2}D}{C+g}.$$ (34) Let us consider two cases: $C=0$ and $C\neq 0$. If $C=0$, then we see that $$\frac{(\sqrt{2}+1)2g\,dg}{\sqrt{2}(g^{2}-(2-\sqrt{2}))}=dq\quad\Longrightarrow\quad\frac{\sqrt{2}+1}{\sqrt{2}}\ln|g^{2}-(2-\sqrt{2})|=q+A,$$ where $A=\mathop{\rm const}\nolimits$. Resolving this equality w.r.t. $g$, we obtain $$g=\pm\sqrt{2-\sqrt{2}+\exp(\frac{\sqrt{2}}{\sqrt{2}+1}q+A)}.$$ Now let $C\neq 0$; then we express $g(q)$ from Eq. (34). Therefore we get ${h=(\sqrt{2}+1)\,g(q)}$ in the form of an integral. Proposition 15. Suppose $|h^{\prime}|\to\infty$. If $C=0$, then the asymptotic behaviour of the solution $h_{1}^{*}(q)$ of Eq. 
(9), which is assigned to the phase curve $y_{1}^{*}(x)$, is described by the formula $$h_{1}^{*}(q)\sim\pm\sqrt{2+\sqrt{2}+(\sqrt{2}+1)^{2}\exp(\sqrt{2}(\sqrt{2}-1)q+A)},$$ where $A$ is an arbitrary constant. If $C\neq 0$, then the approximation for $h_{1}(q)$ is given implicitly through the integral for Eq. (34), $$\displaystyle{\bigl|(\sqrt{2}-1)h+\tfrac{1}{2}C-B\bigr|}^{2B+C}\cdot{\bigl|(\sqrt{2}-1)h+\tfrac{1}{2}C+B\bigr|}^{2B-C}{}=A\cdot\exp\bigl(2(2-\sqrt{2})\,Bq\bigr),$$ where $A>0$, $B=\sqrt{\tfrac{1}{4}C^{2}+2D}$, $C\in{\mathbb{R}}$, and $D=\tfrac{1}{2}\bigl[2-\sqrt{2}+C^{2}\cdot(\sqrt{2}+1)\bigr]$. Finally, we use Proposition 12 and describe the profile $\Pi_{2}$ assigned to the separatrix $y_{2}$. The asymptote $y=x$ yields the equivalence $h^{\prime}\sim h$ as $|h|$, $|h^{\prime}|\to\infty$. Consequently, the profile $\Pi_{2}$ is approximated by solutions of the differential equation $h^{\prime}=h$. Proposition 16. Let $h$ be large; then the function $h_{2}(q)$ grows exponentially in $q\in{\mathbb{R}}$: $h_{2}\sim A\cdot\exp(q)$, where $A=\mathop{\rm const}\nolimits$. 4.3. Approximations of the minimal surfaces From Propositions 15 and 16 we obtain the following assertion. Theorem 17. Let $\rho$, $\theta$ be the polar coordinates on the plane $0\xi\eta$ and put $q=\theta-\ln\rho$, $h=z\bigr|_{\rho=1}$. (1) For large values of $h^{\prime}(q)$, there is a $\phi$-invariant minimal surface $\varSigma_{1}^{*}=\{z=\rho\cdot h_{1}^{*}(\theta-\ln\rho)\}$, which is approximated by the graph of the function $$z=\rho\cdot\sqrt{2+\sqrt{2}+\frac{(\sqrt{2}+1)^{2}\exp(\sqrt{2}(\sqrt{2}-1)\theta+A)}{\rho^{\sqrt{2}(\sqrt{2}-1)}}},\quad A=\mathop{\rm const}\nolimits.$$ (35) Also, there is a class of the $\phi$-invariant minimal surfaces $\varSigma_{1}$ that correspond to the asymptotic formulas for $h_{1}(q)$ in Proposition 15. This class contains $\varSigma_{1}^{*}$ as a particular case. 
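As a consistency check (ours, not in the paper), one can verify symbolically that the $C=0$ branch of Proposition 15 indeed solves Eq. (34) with $D=(2-\sqrt{2})/2$, and that the constant under the square root transforms correctly when passing from $g$ to $h=(\sqrt{2}+1)g$:

```python
# Consistency check (illustration only) for Proposition 15, case C = 0.
import sympy as sp

q, A = sp.symbols('q A', real=True)
s = sp.sqrt(2)
D = (2 - s)/2                                   # value of D at C = 0

# Candidate solution g(q) of Eq. (34) with C = 0
g = sp.sqrt(2 - s + sp.exp(s/(s + 1)*q + A))

# Eq. (34) at C = 0 reads (sqrt(2)+1) g' = (sqrt(2)/2) g - sqrt(2) D / g
residual = (s + 1)*sp.diff(g, q) - (s/2*g - s*D/g)
assert sp.simplify(residual) == 0

# Passing to h = (sqrt(2)+1) g: (sqrt(2)+1)^2 * (2-sqrt(2)) = 2+sqrt(2),
# the constant appearing under the root in the formula for h_1^*(q).
assert sp.expand((s + 1)**2*(2 - s) - (2 + s)) == 0
```

Note also that the exponent $\sqrt{2}/(\sqrt{2}+1)$ equals $\sqrt{2}(\sqrt{2}-1)$, which is the rate written in the $h_{1}^{*}$ formula.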
(2) Suppose that the absolute values $|h_{2}|$ are sufficiently large that the phase curve $h_{2}^{\prime}(h_{2})$ is close to the diagonal $h^{\prime}=h$. Then there is the $\phi$-invariant minimal surface $\varSigma_{2}=\{z=\rho\cdot h_{2}(\theta-\ln\rho)\}$ whose approximation is given by the graph of the function $$z=A\,\rho\cdot\exp(\theta-\ln\rho)=A\cdot\exp(\theta),\quad A=\mathop{\rm const}\nolimits.$$ (36) The constant $A\in{\mathbb{R}}$ determines the rotation of the surfaces around the axis $0z$. The choice $A<0$ in (36) defines the reflection symmetry $z\mapsto-z$. Both surfaces $\varSigma_{1}^{*}$, $\varSigma_{2}\subset{{\mathbb{E}}}^{3}$ have a singular point at the origin. Finally, let us plot the graph of a spiral minimal surface determined by the profiles $\Pi$ of Fig. 1, see Example 2. We conjecture that the spiral minimal surfaces may be relevant in astrophysics owing to their visual similarity with spiral galaxy-like objects. We also indicate their application in fluid dynamics: these surfaces can realize the minimum of the free energy $\mathcal{F}$ of a vortex. Acknowledgements The authors thank R. Vitolo and D. V. Pelinovskiǐ for helpful discussions. A part of this research was carried out while A. K. was visiting the University of Lecce. A. K. was partially supported by University of Lecce grant no. 650 CP/D. References [1] Bîlă N., Lie groups applications to minimal surfaces PDE. Diff. Geom. Dynam. Systems 1 (1999) no. 1, 1–9. [2] Bocharov A. V., Chetverikov V. N., Duzhin S. V., et al., Symmetries and Conservation Laws for Differential Equations of Mathematical Physics. Amer. Math. Soc., Providence, RI, 1999. I. Krasil’shchik and A. Vinogradov (eds.) [3] Hartman Ph., Ordinary Differential Equations. John Wiley & Sons, NY etc., 1964. [4] Kiselev A. V., On a symmetry reduction of the minimal surface equation, Proc. XXVII Conference of Young Scientists, Moscow (2005), pp. 67–71. [5] Nitsche J. C. 
C., Vorlesungen über Minimalflächen. Berlin: Springer-Verlag, 1974. [6] Olver P. J., Applications of Lie Groups to Differential Equations, 2nd ed., Springer-Verlag, NY, 1993. [7] Scalizzi P., Soluzione di alcune equazioni del tipo di Abel, Atti Accademia dei Lincei 5 (1917) no. 26, 60–64. [8] Varlamov V. I., The method of majorant series for a class of analytic almost periodic systems, Differentsial’nye Uravneniya 15 (1979) no. 4, 579–588. [9] Varlamov V. I., On bounded solutions of the Riccati equation and absence of conjugate points on the semi-axis, Bulletin of ISPU (2004) no. 3, 56–60 (in Russian). [10] Zelikin M. I., Homogeneous Spaces and the Riccati Equation in the Calculus of Variations. Moscow, Factorial Publ., 1998.
The 2012 Interferometric Imaging Beauty Contest Fabien Baron$^{a}$, William D. Cotton$^{b}$, Peter R. Lawson$^{c}$, Steve T. Ridgway$^{d}$, Alicia Aarnio$^{a}$, John D. Monnier$^{a}$, Karl-Heinz Hofmann$^{e}$, Dieter Schertl$^{e}$, Gerd Weigelt$^{e}$, Éric Thiébaut$^{f}$, Férréol Soulez$^{f}$, David Mary$^{g}$, Florentin Millour$^{g}$, Martin Vannier$^{g}$, John Young$^{h}$, Nicholas M. Elias II$^{i}$, Henrique R. Schmitt$^{i}$, Sridharan Rengaswamy$^{j}$ $^{a}$Univ. of Michigan, 941 Dennison Building, 500 Church Street, Ann Arbor, MI 48109, USA; $^{b}$National Radio Astronomy Obs., 520 Edgemont Road, Charlottesville, VA 22903, USA; $^{c}$Jet Propulsion Lab., California Institute of Technology, Pasadena, CA 91109, USA; $^{d}$National Optical Astronomy Observatory, Tucson, AZ 85726-6732, USA; $^{e}$Max-Planck Institute for Radio Astronomy, 69 Auf dem Hügel, Bonn, Germany; $^{f}$CRAL, Observatoire de Lyon, 9 av. Charles Andre, F-69561 Saint Genis Laval Cedex, France; $^{g}$Laboratoire Lagrange, Univ. de Nice, CNRS, Observatoire de la Côte d’Azur, Nice, France; $^{h}$University of Cambridge, JJ Thompson Avenue, Cambridge, CB3 0HE, UK; $^{i}$National Radio Astronomy Obs., Array Operation Center, Socorro, NM 87801-0387, USA; $^{j}$European Southern Obs., Casilla 19001, Santiago 19, Chile. Abstract We present the results of the fifth Interferometric Imaging Beauty Contest. The contest consists of the blind imaging of test data sets derived from model sources and distributed in the OIFITS format. Two scenarios of imaging with CHARA/MIRC-6T were offered for reconstruction: imaging a T Tauri disc and imaging a spotted red supergiant. 
There were eight different teams competing this time: Monnier with the software package MACIM; Hofmann, Schertl and Weigelt with IRS; Thiébaut and Soulez with MiRA; Young with BSMEM; Mary and Vannier with MIROIRS; Millour and Vannier with independent BSMEM and MiRA entries; Rengaswamy with an original method; and Elias with the radio-astronomy package CASA. The contest model images, the data delivered to the contestants and the rules are described, as well as the results of the image reconstruction obtained by each method. These results are discussed, as well as the strengths and limitations of each algorithm. keywords: Astronomical software, closure phase, aperture synthesis, imaging, optical, infrared, interferometry Further author information: (Send correspondence to F. B.) F.B.: E-mail: fbaron@umich.edu, Telephone: 1 734 615 4714 1 INTRODUCTION The IAU Interferometry Beauty Contest is a competition aimed at encouraging the development of new algorithms in the field of interferometric imaging, by showcasing the current performance of image reconstruction packages. The contest is conducted by the Working Group on Image Reconstruction of IAU Commission 54. The principle of the contest is as follows. One or several science cases are first selected by the organizers; then realistic models of the science targets are used to generate synthetic images. These “truth” images are then turned into data sets, by simulating the acquisition of interferometric observables by a typical interferometer. Finally, the contestants attempt to reconstruct images from the data sets without knowledge of the original truth images beyond the nature of the target. The reconstruction closest to the truth image is then declared the winner. The previous contests took place in 2004[1], 2006[2], 2008[3] and 2010[4]; thus the 2012 Interferometry Beauty Contest described here is the fifth contest. 
The contest results were announced on July 5th during the 2012 SPIE Astronomical Telescopes and Instrumentation conference in Amsterdam. 2 CONTEST MODEL, DATA AND GUIDELINES 2.1 Original model images In this 2012 edition of the Interferometry Beauty Contest, the focus was to assess the potential of current reconstruction packages on very resolved objects, under realistic observing scenarios. The organizers identified two science cases for which the reconstruction performance is deemed critical for interpretation: imaging Young Stellar Objects and imaging spotted stars. Consequently, two models were generated for the contest: a T Tauri “star + disc” system nicknamed Alp Fak, and a red supergiant with bright spots named Bet Fak. The Alp Fak image was modeled by Alicia Aarnio at the University of Michigan, using the TORUS[5] 3D radiative transfer code written by Tim Harries at the University of Exeter. The parameters for the T Tauri simulation were loosely based on those of v1295 Aql. The scaling of the image was set to be slightly larger than the extent of the disc. The outer radius of the disc was about $200$ AU, with the inner radius at $40$ AU. The central star, of radius $3R_{\odot}$, was offset by about $1.3$ AU in the $(X,Y)$ coordinates of the image, but left in the midplane of the disk ($Z=0$). The mass accretion rate was chosen to be low, as was the magnetospheric temperature, so that their effects on the image are negligible. The resulting image was then rotated by $63.5^{\circ}$ to obtain the final truth image shown in Figure 1 (left). Our main expectations from reconstructions of this object were: the detection of the central source, a correct global orientation of the target, as well as smooth flux and sharp transitions at the right locations. The Bet Fak image is (to our knowledge) the first image in the Beauty Contest that was partially derived from real data. 
This original data came from 2011 observations of the red supergiant AZ Cyg, which presents clear asymmetric features that may be spots or convection cells. Several images were reconstructed from the original data, using complex combinations of regularizers (total variation, $\ell_{1}\ell_{2}$ regularization and a limb-darkened disk prior of $3.9$ mas), and keeping the reduced $\chi^{2}$ below $1.0$. A particularly “good-looking” image was picked amongst these to be the truth image of Bet Fak, shown in Figure 1 (right). Our main expectations from reconstructions of this object were: a smooth circumference without artefacts (knowing that contestants would most likely use priors), and approximate locations for the bright spots/convection cells. 2.2 Data set generation: $(u,v)$ coverage and signal-to-noise Both data sets were simulated as if they were acquired by the MIRC-6T[6] instrument (see Che et al., 2012 in these proceedings), installed on the CHARA Array[7] atop Mt. Wilson, CA, USA. The H-band low spectral resolution mode of MIRC-6T was chosen for the simulations. In this mode the star light is spectrally dispersed onto eight spectral channels. Because very few software packages are able to handle multi-spectral reconstructions, we chose not to introduce any spectral dependency in the data. For a similar reason, no temporal dependencies were assumed in the data beyond aperture synthesis due to Earth’s rotation. The $(u,v)$ coverage of the contest data is presented in Figure 2. With MIRC-6T, the instantaneous “snapshot” Fourier coverage provided is $15$ baselines – i.e. $30$ $(u,v)$ points – as well as $10$ closure phases. For Alp Fak, the complete $(u,v)$ coverage was chosen to correspond to $5$ hours of observation, with snapshots acquired every $15$ minutes. For Bet Fak, the $(u,v)$ coverage was directly copied from the original AZ Cyg data. Compared to the previous contest in 2010, which assumed the use of $10$ VLTI stations, the $(u,v)$ coverage is much sparser. 
In particular, there is a hole at low frequencies due to the absence of CHARA baselines below $30$ m. On the other hand, the use of six CHARA stations simultaneously (instead of e.g. only three with VLTI/AMBER) makes it possible to recover twice the amount of phase information. Once the Fourier sampling was determined, the complex visibilities were computed via Discrete Fourier Transform from the model images. The contest data consisted of the conventional interferometric observables, i.e. power spectra (squared visibilities) and bispectra (triple amplitudes + closure phases). These were computed directly from the complex visibilities, and modified using realistic noise. For Alp Fak our current best empirical noise model for MIRC-6T was applied: typically a few percent errors on power spectra, $1^{\circ}$ to $5^{\circ}$ on closure phases for short baselines, and $10$ to $80^{\circ}$ on longer ones. For Bet Fak the signal-to-noise was directly copied from the original AZ Cyg data, thus reflecting the actual MIRC-6T noise. As shown in Figure 2, visibility amplitude values are very low for both objects. In the case of Alp Fak, out of $1320$ power spectra, only $162$ are greater than $0.01$. The noisy data were then packaged into OIFITS[8] data files, and these were validated with the JMMC online validation tool (http://www.jmmc.fr/oival). Note that in addition to Alp Fak and Bet Fak, synthetic data of a binary star with very high signal-to-noise and excellent $(u,v)$ coverage were provided. These data were solely meant for contestants to check whether their software was able to reconstruct a simple model, and to help them determine the orientation of their reconstructions with respect to the default contest convention (North up, East left). The binary separation was $4.0$ mas, with a principal axis of $40^{\circ}$ East of North (from the bright star to the faint one) and a flux ratio of $5.0$. The uniform disk sizes were $1.0$ mas for the primary and $0.75$ mas for the secondary. 
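The simulation chain just described (model image, DFT, then power spectra and bispectra) can be sketched in a few lines. The code below is our illustration of the principle with made-up $(u,v)$ points, not the organizers' actual pipeline; it also shows why closure phases constrain only relative positions: they are invariant under a global shift of the image.

```python
# Sketch (assumed setup, not the contest's simulation code): computing
# interferometric observables from a model image by a direct Fourier
# transform, as described in the text.
import numpy as np

def complex_vis(image, x_mas, y_mas, u, v):
    """Normalized DFT of a pixel grid at spatial frequency (u, v) in rad^-1."""
    mas = np.pi / 180 / 3600 / 1000          # one milliarcsecond in radians
    phase = -2j * np.pi * (u * x_mas[None, :] * mas + v * y_mas[:, None] * mas)
    return np.sum(image * np.exp(phase)) / np.sum(image)

# Toy 64x64 image with a 0.15 mas pixel scale (the contest convention).
n, scale = 64, 0.15
coords = (np.arange(n) - n // 2) * scale
img = np.zeros((n, n)); img[30, 34] = 1.0; img[36, 28] = 0.2

# One closure triangle: the (u, v) vectors must sum to zero around the loop.
uv = [(2.1e7, 0.3e7), (-0.8e7, 1.5e7), (-1.3e7, -1.8e7)]
V = [complex_vis(img, coords, coords, u, v) for u, v in uv]
power_spectra = [abs(Vi)**2 for Vi in V]       # squared visibilities
bispectrum = V[0] * V[1] * V[2]
closure_phase = np.angle(bispectrum)

# Closure phase is invariant under a global (integer-pixel) shift of the
# image: the linear phase terms cancel around the closed triangle.
img_shifted = np.roll(img, (3, -2), axis=(0, 1))
V2 = [complex_vis(img_shifted, coords, coords, u, v) for u, v in uv]
cp2 = np.angle(V2[0] * V2[1] * V2[2])
assert abs(np.angle(np.exp(1j * (closure_phase - cp2)))) < 1e-10
```

Because the visibilities are normalized by the total flux, the power spectra lie in $[0,1]$, matching the convention of the contest data.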
Note that all these data sets (Alp Fak, Bet Fak, and the test binary) are available on the Interferometry Beauty Contest website (http://olbin.jpl.nasa.gov/iau/2012/Contest12.html). 2.3 Contest guidelines In the past Beauty Contests, the contestants were free to choose the field of view and pixel scale of their submissions. While this certainly made the contests more challenging (in 2010, this freedom allowed the organizers to “hide” a point source far from the main target), it also raised the concern that submissions had to be rescaled/convolved to the truth image pixellation. In particular, any super-resolution achieved by the reconstruction algorithms was likely to be destroyed by such a procedure. Therefore, in this 2012 edition of the Beauty Contest, both the pixel scale and the image size were explicitly recommended. For both targets, the suggested pixel scale was $0.15$ mas, with an advised field of view of $64\times 64$ pixels. 3 CONTEST SUBMISSIONS The 2012 Interferometric Imaging Beauty Contest enjoyed a record participation, with eight teams in competition (compared to 4-5 for its previous editions). In the following sections (3.1 to 3.8), the contestants present the procedure they used for reconstruction, as well as the features they believe to be real in their images. The reconstructions submitted by the contestants can be found in Figure 3 for Alp Fak and Figure 4 for Bet Fak. When more than one entry per team was submitted for the same software, the best image was chosen for display. Also, for the first time since the creation of the Beauty Contest, one team (Millour & Vannier) submitted reconstructions with two packages they do not actually develop (BSMEM and MiRA). Their approach was thus that of non-expert but experienced users; it was particularly interesting in that it allowed an assessment of the similarities (or lack thereof) between reconstructions arising from different user choices. 
3.1 MiRA by Thiébaut and Soulez (Observatoire de Lyon) For both data sets, Alp Fak and Bet Fak, due to the lack of short baselines at CHARA, the $(u,v)$ coverage is quite weak at low spatial frequencies. Consequently, the global shape of the reconstructed objects is quite difficult to recover, and strongly depends on the regularization and on the starting solution (as the objective function is not convex, given the type of interferometric data: squared visibilities and closure phases). The first data set is a T Tauri star. As the squared visibilities as a function of baseline do not show lobes, we supposed that the star is not resolved by the interferometer. For that reason, we began the image restoration with a Dirac smoothed to the resolution of CHARA (approx. $0.5$ mas = 3 pixels in the restored image). In accordance with Renard et al. [9] (2011), we chose total variation for the regularization. To avoid getting stuck in local minima, we introduced some perturbations when the algorithm seemed to have converged: e.g. by soft thresholding, or by doing several iterations using squared visibilities only, or squared visibilities and closures only. We consider some kind of global convergence achieved when the reduced $\chi^{2}$ is between $0.9$ and $1.0$ for the combination of squared visibilities and closures, squared visibilities and triple amplitudes, and squared visibilities and bispectra. The reconstructed object presents a square background of $6\times 6$ mas that seems to be the size of the simulation box. At the center of this square background lies the unresolved bright star. This star is surrounded by half of a bright elliptical ring with a major axis of about $3$ mas, oriented at about $\text{PA}=65^{\circ}$ (counted from North to East). The second data set, Bet Fak, is a red supergiant. As the $(u,v)$ coverage is very sparse at low spatial frequencies, we estimated the global shape of the star using a linear limb-darkening model.
We fitted the parameters of this model by minimizing the same data cost function as the one used by MiRA. These parameters (diameter = 4.0 mas, limb-darkening parameter = 0.42) have been confirmed by LITPro[10]. For the contest, we produced two images obtained from the squared visibilities and closure phases (we did not use triple amplitudes for this data set), starting with the limb-darkening model and using either total variation or quadratic regularization. The latter regularization is taken as the total quadratic difference between the image and the initial model. In spite of using different regularizations, the two images are very similar. This is a (weak, because the problem is non-convex) confirmation of the reality of the recovered structures at the stellar surface. The regularization weights were tuned to give a final normalized $\chi^{2}$ of $0.93$ per data sample. 3.2 CASA by Elias (NRAO) CASA[11] is a radio interferometry package. It requires visibility amplitudes and baseline phases, not the squared visibilities and closure phases provided by the Beauty Contest organizers. CASA can operate directly upon uvfits or measurement set (MS) format files, not OIFITS format files. The MS is the native CASA file format. Many radio interferometers employ “fillers” to convert their file formats to MSes. NME2 is in the process of creating an OIFITS to MS filler within CASA now, but it is not yet ready. For the Beauty Contest, the OIFITS format files were converted to uvfits by the OYSTER package[12], and in turn the uvfits files were converted to MSes by CASA. OYSTER also estimated the baseline phases from closure phases before the file format conversion. For an array of six telescopes the number of closure phases is $1/3$ smaller than the number of baseline phases, so the system of equations is degenerate.
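For six telescopes the counting works out to 15 baseline phases but only 10 independent closure phases, hence the degeneracy. A minimal sketch of resolving it with an SVD-based minimum-norm pseudo-inverse (illustrative only; OYSTER's actual implementation may differ, and the simulated phases are arbitrary):

```python
import numpy as np
from itertools import combinations

n_tel = 6
baselines = list(combinations(range(n_tel), 2))        # 15 baseline phases
bl = {b: k for k, b in enumerate(baselines)}
triangles = [(0, i, j) for i, j in combinations(range(1, n_tel), 2)]  # 10 closures

# Design matrix A: closure(i,j,k) = phi_ij + phi_jk - phi_ik
A = np.zeros((len(triangles), len(baselines)))
for t, (i, j, k) in enumerate(triangles):
    A[t, bl[(i, j)]] += 1.0
    A[t, bl[(j, k)]] += 1.0
    A[t, bl[(i, k)]] -= 1.0

# Simulated closure phases from arbitrary true baseline phases (radians).
rng = np.random.default_rng(0)
phi_true = rng.uniform(-0.5, 0.5, len(baselines))
cp = A @ phi_true

# Minimum-norm solution: np.linalg.pinv computes the SVD-based pseudo-inverse,
# zeroing the reciprocals of (near-)zero singular values, i.e. projecting out
# the unconstrained telescope-phase null space.
phi = np.linalg.pinv(A) @ cp
```

The recovered `phi` reproduces the closure phases exactly but is only one of infinitely many consistent solutions, which is why, as noted below, point-source ensembles fare better than extended emission.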
OYSTER determined the singular-value decomposition (SVD) of the design matrix and formed the “minimum-norm” pseudo-inverse (infinite inverted singular values are set to zero). The minimum-norm pseudo-inverse is among the simplest approaches; it works well for ensembles of a few point sources but not as well for extended emission. This technique will be available in the first version of the OIFITS to MS filler. Additional model constraint inputs will eventually become available. This task has the highest priority for the next Beauty Contest, as do other advanced imaging techniques such as full-Stokes optical interferometric polarimetry (OIP). Once the Beauty Contest data were filled into CASA, they were imaged. The main imaging algorithm [13] was an advanced version of CLEAN employing multi-spatial scale (MSS) and multi-frequency synthesis (MFS). Standard CLEAN determines and removes source components in the image plane using a delta function, while MSS employs a configurable extended function. MFS inserts data from all frequencies into a single uv plane before imaging. The frequency dependence for each pixel can be selected. The Beauty Contest data had flat spectra, so only a constant was fitted. Several rounds of iterative CLEANing were interleaved with self-calibration, where deviations between the model and observed visibilities are treated as calibration errors. Many imaging trials were performed with different CLEANing depths. Each trial led to a different final image, which is symptomatic of ill-defined initial phases from the minimum-norm pseudo-inverse. 3.3 IRS by Hofmann and Weigelt (Max Planck Institute for Radio Astronomy) The iterative Image ReconStruction algorithm (IRS) uses the measured bispectrum to reconstruct images. IRS uses the non-linear optimization algorithm ASA-CG detailed in Hager & Zhang [14] (2005) and Hager & Zhang [15] (2006). ASA-CG is a conjugate-gradient-based algorithm.
The advantage of IRS is that it is much faster than the Building Block algorithm (Hofmann & Weigelt [16], 1993). The reconstructions are images of $64\times 64$ pixels with a pixel scale of $0.15$ mas, not convolved with a PSF. For the red supergiant Bet Fak, we reconstructed a disc with bright and dark spots. The intensities outside the disk are probably artefacts. For the T Tauri disc Alp Fak, we see an inclined circumstellar disc with weak extensions above and below it. The long disc axis is approximately (but not exactly) horizontal. 3.4 MIROIRS by David Mary and Martin Vannier (University of Nice) We present here our contribution to the Beauty Contest, using the prototype software MIROIRS (Methods for Image Reconstruction in Optical Interferometry with Regularizations based on Sparse priors), which is still in an early phase. Starting from scratch, our progress so far has led to a preliminary version based on the following very simple principles. From an initial image (obtained using the LITpro model-fitting software), we produce a gradient map of a criterion including the target visibilities and closure phases. In this map, we consider a patch of pixels (of possibly varying size, say, 5 by 5 to 20 by 20 pixels) around the location of the strongest value of the gradient. This patch is used to define the location of the image pixels where the flux should be changed to decrease the criterion. The flux is changed by small increments proportional to the gradient value within the patch, only for gradient values above a predefined threshold. This gives the new image, from which the process is iterated, with possible adjustments of the parameters (size of patch, gradient threshold, increment) to ensure that the cost function decreases from one iteration to the next. Our motivation for participating in the Beauty Contest was originally to inject into the reconstruction algorithm models based on sparse representations.
This is done here in a very crude but fast way. We do not impose any particular synthesis dictionary for the reconstruction. Instead, the synthesis “atoms” are formed by analyzing and thresholding the gradient of the cost function, and they come only as corrections to the initial image, in a greedy manner. Optimization-wise, the adopted descent method is very basic and largely empirical with respect to the tuning of the various parameters involved. The relevance of the reconstructed image at the end of the process also heavily depends on how close the initial image is to the ideal solution. The prototype method is currently written in the Matlab language. It is not at all optimized and thus quite slow, but not prohibitively slow for the present data set and output format. As for our analysis of the presented result for the Bet Fak reconstruction: we are confident that the circular structure is reconstructed with a fairly correct diameter. The few tests we could make, starting from different initial images chosen as perturbed versions of a uniform disk, indicate that the reconstructed surface brightness distribution is relatively stable under perturbations of the initial image. This suggests that the bright and dark spots visible on the surface may be real. We believe, however, that the faint structures around the disk are reconstruction artefacts. Concerning Alp Fak, we also believe that the general shape we obtained is not wrong (a large, elongated diffuse object oriented SW-NE), and we think that some internal structures might look like what we reconstructed. But, here also, the $(u,v)$ coverage and the SNR at mid and high frequencies do not allow us to be very confident about it. 3.5 MACIM by John Monnier (University of Michigan) I used MACIM with a “uniform disk” regularizer, which is the $\ell_{\frac{1}{2}}$ norm of the spatial gradient of the image.
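As a rough illustration (not MACIM's actual code), the regularizer just described can be written with a finite-difference gradient; the `disk` helper is hypothetical:

```python
import numpy as np

def uniform_disk_reg(image):
    """Sketch of the "uniform disk" regularizer described above: the l_(1/2)
    quasi-norm of the spatial gradient, sum_k sqrt(|grad I|_k).
    A flat image scores exactly zero, and sharp edges are preferred."""
    gy, gx = np.gradient(image)
    return float(np.sum(np.sqrt(np.hypot(gx, gy))))

def disk(n, radius, flux=1.0):
    """Hypothetical helper: a centered uniform disk with total flux `flux`."""
    yy, xx = np.mgrid[:n, :n]
    d = (np.hypot(xx - n / 2, yy - n / 2) < radius).astype(float)
    return flux * d / d.sum()

# Same flux, different diameters -> roughly the same penalty (the claimed
# scale-invariance), up to pixelization effects.
r1 = uniform_disk_reg(disk(128, 12))
r2 = uniform_disk_reg(disk(128, 24))
```

The scale-invariance holds because the edge length grows linearly with the radius while the square root of the (flux-normalized) pixel value shrinks as its inverse, so the two factors cancel.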
Fabien Baron and I invented this metric in 2011 – it gives all uniform disks the same regularization if they contain the same flux, irrespective of diameter. Like the total variation regularizer, this regularization prefers sharp boundaries, but it is agnostic about the size of a spot or feature. As with total variation, it will prefer “round” spots over elliptical ones. For the Alp Fak reconstruction, based on inspection of the visibility curves and the expectation of a central unresolved source, I introduced a uniform disk model containing 1.4% of the flux. MACIM included this model component in the imaging process, but the amount of flux in the point did not affect the regularization. This had the effect of creating a “hole” in the disk emission, likely coming from a dust-free inner region in the model. I ran MACIM with a range of regularizer weights and for different amounts of time, creating 3 different possible images. I did not spend more than a few hours on this, and chose the image that showed approximately zero-mean residuals for the short baselines, which are easy to get wrong because there are relatively few of them (i.e., one can iterate too long and get a lower $\chi^{2}$, but the residuals then show overfitting of long baselines and underfitting of short baselines). There are some intriguing asymmetric structures in the disk, but I do not expect much of them to be real. More could be done to symmetrize the images, but I thought it was better to just leave the MACIM result in a rather raw state. For a real dataset, I would assess the fidelity of features by using bootstrap methods or by splitting up my data into independent chunks. Note that for my entry I just used a single pixel in the center to represent the central star contribution, which is not exactly the same model as MACIM used, but quite close. For the Bet Fak reconstructions, I generated candidate limb-darkened disks (with a power law based on Lacour et al.
2008[17], with coefficient $0.26$) between $3.9$ and $4.30$ mas in diameter. I used these as weak priors in MACIM along with the “uniform disk” regularizer. I got similar structures in all images, but found the best agreement with the expected limb-darkening profiles and with the short-baseline residuals using the image reconstruction based on the $4.30$ mas limb-darkening prior. I note that imaging spots of complex geometry is very difficult. I would not publish this without additional independent datasets. I also note that by increasing the regularizer strength I could smooth out most of these structures with only a minor effect on the global $\chi^{2}$, emphasizing the difficulty of this effort. 3.6 Original (unnamed) method by Sridharan Rengaswamy (European Southern Observatory) The method used for reconstruction is the following: • The closure phases (CP) are solved for visibility phases $\phi$ using the singular value decomposition (SVD) method. The phase solution is not unique. The uncertainty in the solution is also estimated. • Complex visibilities are obtained as $\sqrt{V^{2}}\times\exp i\phi$ and weighted according to their signal-to-noise ratio. • The dirty map is obtained as the direct Fourier transform of the 2-d visibility function: it is first multiplied by an apodization function in the image domain, Discrete Fourier transformed, and then multiplied by an apodization function equivalent to the transfer function of a telescope with diameter equal to the maximum baseline. The dirty beam is obtained in a similar manner, replacing the complex visibilities at the measured uv points by unity. • The dirty map is then deconvolved with the dirty beam, using the Maximum Entropy Method (MEM). MEM iterations are stopped when the relative increase in entropy is less than 1% or when the entropy starts to decrease after reaching a maximum. • Images were obtained in each spectral channel, as described in the previous steps.
The final images were registered by cross-correlating the images with the reference image (the longest-wavelength spectral channel image) and then added together to obtain the final image. The submitted images followed the Contest recommendations in terms of size and pixellation. We found that only the morphology of the images (and not their photometry) seems to be reliable. For Alp Fak, the flux (sum of pixel intensities) increases with the spectral channel (higher flux at longer wavelengths), suggesting that the ‘object’ has a disk. Only the central disk-like structure (in log-scale) is reliable. The faint unresolved features in the lower half, at about $5.9$ and $8.7$ mas from the center, are unreliable. For Bet Fak, the flux slowly increases but remains almost constant at longer wavelengths (spectral channels). A bright granule of size $4.95\times 2.5$ mas is clearly visible. There is also another small granule on its left, separated by a dark lane. Other point-like features in the images are not reliable. 3.7 BSMEM by John Young (University of Cambridge) The BSMEM (BiSpectrum Maximum Entropy Method) software was first written in 1992 to demonstrate image reconstruction from optical aperture synthesis data. It has been extensively enhanced and tested since then, although there have been no changes of late. The code used for this year’s contest entry is essentially identical to that employed for the 2010 contest. The algorithm applies a fully Bayesian approach to the inverse problem of finding the most probable image given the evidence, making use of the Maximum Entropy approach to maximize the posterior probability of an image. An important advantage of BSMEM is the automatic Bayesian estimation of the hyperparameter alpha that controls the weighting of the entropic prior relative to the likelihood. BSMEM can also perform a Bayesian estimation of missing triple amplitudes and their associated errors from the power spectrum data.
BSMEM is available free-of-charge to the scientific community on submission of the academic license agreement at http://www.mrao.cam.ac.uk/research/OAS/bsmem.html. BSMEM uses a trust region method with non-linear conjugate gradient steps to minimize the sum of the log-likelihood ($\chi^{2}$) of the data given the image and a regularization term expressed as the Gull-Skilling entropy $\sum_{k}[I_{k}-M_{k}-I_{k}\log(I_{k}/M_{k})]$. The model image $M_{k}$ is usually chosen to be a Gaussian, a uniform disk, or a delta-function centered in the field of view, which conveniently fixes the location of the reconstructed object (the bispectra and power spectra being invariant to translation). This type of starting model also acts as a support constraint by penalizing the presence of flux far from the center of the image. The reconstruction of the T Tauri disk (Alp Fak) used a circular Gaussian default image (with FWHM found by fitting to the short-baseline squared visibility data). For the reconstruction of the supergiant star (Bet Fak) surface, I found it necessary to use additional prior information to constrain the radial distribution of flux. This was obtained by fitting a circular limb-darkened disk (Hestroffer model) to the squared visibility data (elliptical models did not fit significantly better). The image corresponding to the best-fit limb-darkened model was convolved with a 0.3 mas FWHM Gaussian blur, in order to avoid penalizing slight deviations of the disk edge from circular symmetry, before being used as the default image in BSMEM. Following the advice of the contest organizer, a pixel size of 0.15 mas was selected for both contest objects. For the supergiant star (Bet Fak), I am confident the following features are real: the three brightest spots (S, W, and E) in the central region of the stellar surface; the protrusion at the NE edge of the star; the “cut-out” at the W edge of the star.
I am not convinced that the possible companion at position angle $100^{\circ}$ East of North is real. Certainly all of the other features outside of the star are artefacts. For the T Tauri object (Alp Fak), I am confident in the overall shape and orientation of the disk (quasi-rectangular outer contours and elliptical inner contours). At the centre of the image there are possibly two sections of a bright inner disk rim and a central star. 3.8 BSMEM and MiRA by Florentin Millour and Martin Vannier (University of Nice) We tried to place ourselves at a user's point of view, i.e. we wished to use several (ideally most) of the available image reconstruction packages and compare the obtained results. Our plan was therefore to reconstruct images using the available software MiRA and BSMEM. We also tried to use the WISARD software, using complementary low-frequency data derived from model-fitting. However, we noticed some inconsistency in the way WISARD reads these data. This probably stems both from some inconsistency in our home-made OIFITS file and from a lack of robustness of the WISARD conversion routine for the OIFITS format. As this problem could not be tackled in time, we could only obtain flawed reconstructions from WISARD, which we chose not to show here. The presented MiRA and BSMEM images are $64\times 64$ pixels at 0.15 mas per pixel, with coordinates nominally increasing with right ascension and declination. They are centered on the photocenter of the image. Our method consisted of the following: • We first fitted the datasets in the first visibility lobe using the software packages fitOmatic (home-made) and LITpro (developed by the JMMC). • A synthetic OIFITS file containing squared visibilities (and closure phases) was generated from this fit result, for each object, with baselines spread uniformly and randomly within this same first visibility lobe. This synthetic dataset was added to the original dataset.
The idea here was to strongly limit the field of view of the image reconstruction to the effective size of the object. • From this point on, different procedures were used depending on the image reconstruction software: – For MiRA, $300$ images were generated with a random initial guess (either a uniformly random image, a random-width Gaussian, or a random-width uniform disk). These were sorted by increasing $\chi^{2}$, and any image with a $\chi^{2}$ larger than 2 times the minimum one was discarded. All other images were kept and averaged into the result image. This first image run was used as a prior for a second run of $300$ generated images, produced in the same way. The result was averaged and produced the presented image. We estimate that the centering process involved in the averaging decreases the effective resolution of the image to $0.45$ mas. For more information on our procedure, see Millour & Vannier (2012) in the same proceedings. – For BSMEM, we used the default method and reconstruction parameters. We simply adopted for Alp Fak a Gaussian prior with the size of the previously fitted model (3 mas), and for Bet Fak a uniform disk of diameter 3.5 mas. There was no subsequent convolution with a beam, so the nominal resolution is $0.15$ mas (whereas the effective details would show up at a resolution of approximately $1$ mas). Our interpretation of these datasets is the following: • For Alp Fak: the model fitting provides an overall elongated Gaussian shape with major axis 4.2 mas, minor axis 3.1 mas, and position angle 66 degrees. The attempts to reconstruct images (and also the model fitting) indicate that there are other, finer structures, like 2 or 3 “clumps” near the Gaussian center, but we were unable to locate them clearly. Our guess would be one clump in the center (the star) plus 2 clumps on both sides, along the major axis, which would represent a hypothetical inner rim.
• For Bet Fak: the model fitting (and the shape of the visibilities vs. spatial frequency) indicates a circular uniform disk plus 1 or 2 bright spots. Indeed, we are able to get spots in the reconstructed images, but we believe only MiRA is able to locate them correctly, whereas BSMEM tends to produce symmetric images of these spots. 4 COMPARISON METRICS AND CONTEST RESULT The judge of the 2012 Interferometric Imaging Beauty Contest was William Cotton. The scoring of the Beauty Contest data was done through the following (and now standard) procedure: 1. The submissions are aligned using features in the images. This is done by correlating the submissions with the truth image. This also includes image flips if required, as contestants did not necessarily use the Beauty Contest orientation conventions. The Beauty Contest adopts the standard convention for optical interferometry: East to the left, and North up. 2. The submitted images are interpolated onto the pixellation grid of the master images, if needed. For contestants that submitted images at the recommended pixellation of $0.15$ mas, this step was skipped. 3. All images (including the truth images) are normalized to unity in a given box. For Alp Fak this was the rectangle defined by corners $[7,7]$ and $[119,119]$, representing more than $90\%$ of the emission. For Bet Fak this was the box within $14$ pixels of the center, which included essentially all the emission. 4. For each object, the score is computed as $10^{6}$ times the RMS pixel-by-pixel difference between the submission and the truth image in the boxes defined in 3). 5. The scores for Alp Fak and Bet Fak are added. The best scores are the lowest. If more than one image was submitted per object and per team, the one achieving the best score was retained (e.g. Thiébaut and Soulez submitted images of Bet Fak regularized with total variation and with a quadratic regularizer).
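Steps 3-4 above can be condensed into a few lines. This is a sketch under the assumption that the RMS is taken over the pixels inside the normalization box; the judge's exact code may differ.

```python
import numpy as np

def contest_score(submission, truth, box):
    """Sketch of scoring steps 3-4: normalize each (already aligned and
    regridded) image to unit flux inside the boolean mask `box`, then
    return 1e6 times the RMS pixel-by-pixel difference there."""
    s = submission / submission[box].sum()
    t = truth / truth[box].sum()
    return 1e6 * np.sqrt(np.mean((s[box] - t[box]) ** 2))

# A perfect, perfectly aligned submission scores 0; per step 5, the two
# per-target scores are then added, and the lowest total wins.
rng = np.random.default_rng(2)
truth = rng.random((128, 128))
box = np.zeros((128, 128), bool)
box[7:120, 7:120] = True                     # the Alp Fak scoring rectangle
perfect = contest_score(truth, truth, box)
worse = contest_score(rng.random((128, 128)), truth, box)
```

Because both images are renormalized inside the box, the metric is insensitive to overall flux scaling but heavily penalizes misplaced flux, which is why alignment (step 1) is done first.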
The official 2012 Interferometric Imaging Beauty Contest scores are given in Table 1. In light of the results, John Monnier (University of Michigan) was declared the winner of the contest for his MACIM entry and was awarded the contest prize by the jury (see Figure 5). 5 DISCUSSION AND CONCLUSION The general agreement amongst contestants is that both targets were hard to reconstruct. This gave rise to more variance in reconstruction quality than was witnessed during previous contests, but it also demonstrated that decent imaging quality can be obtained on very resolved objects in realistic conditions. Alp Fak was a difficult target, being probably too resolved for any current software to reconstruct it very well. MiRA and IRS obtained the best results, managing to reconstruct a smooth central star. The regularization used by MACIM favored uniform patches of flux, and thus was most probably not suited to recovering the original distribution. Bet Fak was overall reconstructed well in terms of size, but the actual location of the spots was very dependent on the algorithm. Perhaps because stellar surface imaging is a more familiar application, image priors such as limb-darkened disks were used by most reconstructions. As both the $(u,v)$ coverage and signal-to-noise of Bet Fak were derived from real CHARA/MIRC-6T data, these results demonstrate that caution will be needed when reconstructing spots from real data. MiRA achieved the best scores on Alp Fak, but its lower performance on Bet Fak (as well as the current specificities of the Beauty Contest metric, which puts more weight on this target) prevented it from obtaining the best overall score. With record participation and overall convincing reconstructions, most contestants felt that the fifth Beauty Contest was successful at showcasing the diversity and strengths of the current imaging packages in monochromatic mode.
As several packages are planned to add multi-wavelength imaging capabilities in 2012-2013, this contest may indeed be the last one to feature only monochromatic data. As multi-wavelength image reconstruction is both a difficult algorithmic problem and a necessity for new science, the next Beauty Contests should definitely prove exciting. Acknowledgements. Work by Fabien Baron was supported by the National Science Foundation through award AST-0807577 to the University of Michigan. Work by Peter R. Lawson was undertaken at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Work by William D. Cotton was supported by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. References [1] Lawson, P. R., Cotton, W. D., Hummel, C. A., Monnier, J. D., Zhao, M., Young, J. S., Thorsteinsson, H., Meimon, S. C., Mugnier, L., Le Besnerais, G., Thiébaut, E., and Tuthill, P. G., “The 2004 Optical/IR Interferometry Imaging Beauty Contest,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 5491, 886 (2004). [2] Lawson, P. R., Cotton, W. D., Hummel, C. A., Baron, F., Young, J. S., Kraus, S., Hofmann, K.-H., Weigelt, G. P., Ireland, M., Monnier, J. D., Thiébaut, E., Rengaswamy, S., and Chesneau, O., “2006 interferometry imaging beauty contest,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 6268 (2006). [3] Cotton, W., Monnier, J., Baron, F., Hofmann, K.-H., Kraus, S., Weigelt, G., Rengaswamy, S., Thiébaut, E., Lawson, P., Jaffe, W., Hummel, C., Pauls, T., Schmitt, H., Tuthill, P., and Young, J., “2008 imaging beauty contest,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7013 (2008).
[4] Malbet, F., Cotton, W., Duvert, G., and Lawson, P., “The 2010 interferometric imaging beauty contest,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7734, 77342 (2010). [5] Harries, T. J., “An algorithm for Monte Carlo time-dependent radiation transfer,” MNRAS 416, 1500–1508 (Sept. 2011). [6] Monnier, J. D., Anderson, M., Baron, F., Berger, D. H., Che, X., Eckhause, T., Kraus, S., Pedretti, E., Thureau, N., Millan-Gabet, R., Ten Brummelaar, T., Irwin, P., and Zhao, M., “MI-6: Michigan interferometry with six telescopes,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7734 (July 2010). [7] ten Brummelaar, T., McAlister, H., Ridgway, S., Bagnuolo, W. J., Turner, N. H., Sturmann, L., Sturmann, J., Berger, D. H., Ogden, C. E., Cadman, R., Hartkopf, W. I., Hopper, C. H., and Shure, M. A., “First results from the CHARA Array. II. A description of the instrument,” The Astrophysical Journal 628, 453 (July 2005). [8] Pauls, T. A., Young, J. S., Cotton, W. D., and Monnier, J. D., “A Data Exchange Standard for Optical (Visible/IR) Interferometry,” The Publications of the Astronomical Society of the Pacific 117, 1255–1262 (Nov. 2005). [9] Renard, S., Thiébaut, E., and Malbet, F., “Image reconstruction in optical interferometry: benchmarking the regularization,” Astronomy & Astrophysics 533, A64 (Aug. 2011). [10] Tallon-Bosc, I., Tallon, M., Thiébaut, E., Béchet, C., Mella, G., Lafrasse, S., Chesneau, O., Domiciano de Souza, A., Duvert, G., Mourard, D., Petrov, R., and Vannier, M., “LITpro: a model fitting software for optical interferometry,” Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series 7013 (July 2008). [11] Reid, R. I. and CASA Team, “CASA: Common Astronomy Software Applications,” in [American Astronomical Society Meeting Abstracts 215 ], Bulletin of the American Astronomical Society 42, 568 (Jan. 2010). [12] Hummel, C. 
A., “QC and Analysis of MIDI Data Using mymidigui and OYSTER,” in [2007 ESO Instrument Calibration Workshop ], Kaufer, A. and Kerber, F., eds., 471 (2008). [13] Rau, U. and Cornwell, T. J., “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” Astronomy & Astrophysics 532, A71 (July 2011). [14] Hager, W. W. and Zhang, H., “A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search,” SIAM Journal on Optimization 16, 170–192 (Jan. 2005). [15] Hager, W. W. and Zhang, H., “A New Active Set Algorithm for Box Constrained Optimization,” SIAM Journal on Optimization 17, 526–557 (Jan. 2006). [16] Hofmann, K. and Weigelt, G., “Iterative image reconstruction from the bispectrum,” Astronomy & Astrophysics 278, 328–339 (1993). [17] Lacour, S., Meimon, S., Thiébaut, E., Perrin, G., Verhoelst, T., Pedretti, E., Schuller, P., Mugnier, L., Monnier, J., Berger, J., and Others, “The limb-darkened Arcturus; Imaging with the IOTA/IONIC interferometer,” Astronomy & Astrophysics 485, 561–570 (2008).
A MERLIN Study of 6 GHz Excited-state OH & 6.7 GHz Methanol Masers in ON1 J. A. Green${}^{1}$, A. M. S. Richards${}^{1}$, W. H. T. Vlemmings${}^{1,2}$, P. Diamond${}^{1}$ and R. J. Cohen${}^{1}$ ${}^{1}$ University of Manchester, Jodrell Bank Observatory, Macclesfield, Cheshire, SK11 1DL; ${}^{2}$ Argelander Institute for Astronomy, University of Bonn, Auf dem Hügel 71, 53121 Bonn, Germany; E-mail: james.green@postgrad.manchester.ac.uk (R. J. Cohen: deceased 2006) (Accepted 2007 September 3. Received 2007 August 31; in original form 2007 May 31) Abstract MERLIN observations of 6.668-GHz methanol and both 6.031- and 6.035-GHz hydroxyl (OH) emission from the massive star-formation region ON1 are presented. These are the first methanol observations made in full polarization using 5 antennas of MERLIN, giving high resolution and sensitivity to extended emission. Maser features are found to lie at the southern edge of the ultra-compact HII region, following the known distribution of ground-state OH masers. The masers cover a region $\sim$1 arcsec in extent, lying perpendicular to the H${}^{13}$CO${}^{+}$ bipolar outflow. The excited-state OH emission demonstrates consistent polarization angles across the strongest linearly polarized features, which are parallel to the overall distribution. The linear polarizations vary between 10.0 and 18.5 per cent, with an average polarization angle of -60${}^{\circ}$$\pm$28${}^{\circ}$. The strongest 6.668-GHz methanol features provide an upper limit to linear polarization of $\sim$1 per cent. Zeeman splitting of OH indicates magnetic fields between $-$1.1 and $-$5.8 mG, and a tentative methanol magnetic field strength of $-$18 mG is measured.
keywords: masers – polarization – stars: formation – ISM: individual: ON1 1 INTRODUCTION High-mass star-formation regions have been observed to demonstrate maser emission from a wide variety of molecules, including both methanol and hydroxyl (OH). One such region, ON1, is an ultra-compact HII region (UCHII) contained within the Onsala molecular cloud. The region was first observed by Elldér, Rönnäng & Winnberg in 1969 through its OH maser emission, before being observed in CO, H${}_{2}$CO and HCO${}^{+}$ in 1983 by Israel & Wootten and shown to be part of an extended molecular cloud complex. NH${}_{3}$ and H76$\alpha$ recombination line observations led Zheng et al. (1985) to identify ON1 as a rapidly rotating condensation, but Kumar, Tafalla & Bachiller (2004) later identified two H${}^{13}$CO${}^{+}$ outflows, demonstrating a resolved bipolar structure with a velocity gradient of $\sim$30 km s${}^{-1}$ pc${}^{-1}$. ON1 is associated with the IRAS source 20081+3122. It is believed that ON1 contains a central binary massive protostar of type B0.3 surrounded by a young stellar cluster (Israel & Wootten 1983; Macleod et al. 1998; Kumar et al. 2004). The kinematic distance estimate to ON1 is between $\sim$1.4 and 8 kpc depending on the model used for the Galactic rotation (Israel & Wootten 1983; Kurtz, Churchwell & Wood 1994; Macleod et al. 1998). The currently favoured distance is the near kinematic distance of 1.8 kpc, as derived by Macleod et al. (1998) using the Wouterloot & Brand (1989) rotation curve. Macleod et al. (1998) estimate the age of ON1 to be 0.5 $\times$ 10${}^{5}$ years. The 6.7-GHz methanol maser has an intrinsic relationship with high-mass star-formation, having to date been observed exclusively in association with known massive star-formation regions (Minier et al. 2003).
Methanol maser emission at 6668.512 MHz was first observed in ON1 with the 43-m Greenbank telescope in 1991 by Menten, who detected a feature with a peak flux density of 91 Jy at +15.1 km s${}^{-1}$. This was later observed by Szymczak, Hrynek & Kus (2000) and found to have a peak flux density of 109 Jy. The spectrum of this observation is given in Fig. 1, and reveals a weak feature at around 0 km s${}^{-1}$ in addition to the peak feature at +15.1 km s${}^{-1}$. 6.7-GHz methanol maser polarization has only been studied twice previously: Ellingsen (2002) found fractional polarizations between a few and 10 per cent in NGC6334F, whilst Vlemmings, Harvey-Smith & Cohen (2006a) found approximately 2 per cent, with levels up to 8 per cent, in W3(OH). The methanol molecule is diamagnetic and as such has low magnetic permeability and a small magnetic dipole moment, so methanol masers in the presence of an external magnetic field are not expected to demonstrate strong linear polarization, nor to show large fractional circular polarization. As the OH molecule is paramagnetic, OH masers are good tracers of the magnetic field. Ground-state OH towards ON1 was observed with MERLIN by Nammahachak et al. (2006). The 6-GHz excited-state of OH is also particularly effective for measuring the Zeeman effect (Caswell & Vaile 1995). Desmurs & Baudry (1998) conducted VLBI observations of 6.035-GHz OH in ON1 in 1994, finding 4 right hand circularly polarized (RHC) features and 3 left hand circularly polarized (LHC) features, with flux densities ranging from 0.7 to 7.1 Jy beam${}^{-1}$. The LSR velocities of these features range from 13.8 to 15.4 km s${}^{-1}$. This paper represents the first high resolution observations of 6.668-GHz methanol maser emission in ON1 combined with the first detailed studies of the polarization properties of both the 6.668-GHz methanol and 6.031/6.035-GHz excited-state OH in ON1.
Unfortunately, the MERLIN observations presented here did not have a wide enough bandwidth both to give adequate spectral resolution and to cover both features, so they were centred near the main peak shown in Fig. 1. 2 OBSERVATIONS ON1 was observed using the Multi Element Radio Linked Interferometer Network (MERLIN) on 2005 January 12 and 16. These were the first data taken with new broadband 4-8 GHz e-MERLIN receivers on 5 telescopes (the MKII, Darnhall, Tabley, Knockin and Cambridge), enough for full synthesis imaging, in full polarization. Observations were taken of three maser transitions: 6030.747 MHz (F = 2 $-$ 2 hyperfine transition of the ${}^{2}$$\Pi$${}_{3/2}$, J = 5/2 excited-state OH); 6035.092 MHz (F = 3 $-$ 3 hyperfine transition of the ${}^{2}$$\Pi$${}_{3/2}$, J = 5/2 excited-state OH); and 6668.5192 MHz (5${}_{1}$$\rightarrow$6${}_{0}$ transition of A${}^{+}$ methanol). The longest baseline of MERLIN is 217 km, giving a synthesized beam size of 47 mas at 6.0 GHz and 43 mas at 6.7 GHz. Spectral line data were taken using a 0.5 MHz bandwidth, centred on each line frequency corrected to a V${}_{\rm LSR}$ of 12 km s${}^{-1}$, and split into 512 frequency channels. This gave a velocity span and resolution of 24.9 km s${}^{-1}$ in 0.049 km s${}^{-1}$ channels at 6.0 GHz and 22.5 km s${}^{-1}$ in 0.044 km s${}^{-1}$ channels at 6.7 GHz. To obtain all the Stokes parameters, both right and left hand circularly polarized signals were collected from each telescope in the network. The phase reference source 2013+340 ($\approx$3${}^{\circ}$ from ON1) was observed in continuum mode with 15 MHz useable bandwidth, for 2 min, before and after 7-min scans on ON1 at each frequency. Each line plus reference was observed in rotation, for a total of 5 hrs on ON1 at each OH frequency and 3.5 hrs at the methanol frequency.
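The quoted velocity span and channel resolution follow directly from the correlator set-up (0.5 MHz total bandwidth in 512 channels). A minimal sketch of that arithmetic, using the rest frequencies given above:

```python
# Sketch (not code from the paper): velocity coverage implied by splitting
# a 0.5 MHz bandwidth into 512 channels at each line rest frequency.
C_KM_S = 2.99792458e5  # speed of light, km/s

def velocity_coverage(bw_mhz: float, n_chan: int, rest_mhz: float):
    """Return (total velocity span, per-channel resolution) in km/s."""
    span = C_KM_S * bw_mhz / rest_mhz
    return span, span / n_chan

span_oh, res_oh = velocity_coverage(0.5, 512, 6035.092)    # excited-state OH
span_me, res_me = velocity_coverage(0.5, 512, 6668.5192)   # methanol
print(f"OH:       {span_oh:.1f} km/s span, {res_oh:.3f} km/s channels")
print(f"methanol: {span_me:.1f} km/s span, {res_me:.3f} km/s channels")
```

This reproduces the 24.9/0.049 km s${}^{-1}$ and 22.5/0.044 km s${}^{-1}$ figures quoted in the text.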
3C286 and 3C84 were observed at all frequencies in both wide- and narrow-band configurations for totals of 6.0 and 9.2 hrs, respectively (with an approximately even split in time between the configurations). Using local MERLIN software, the flux density of 3C84 was established to be 15.5 Jy with respect to the primary amplitude calibration source, 3C286. The data were converted to FITS files, using 3C84 to apply a preliminary bandpass scaling to all data. All further processing was performed using the Astronomical Image Processing Software (aips); see the MERLIN User Guide (Diamond et al. 2003) for details. A small percentage of the data had anomalous phase and amplitude and was edited accordingly. Phase self-calibration was performed for all the calibration sources, and 3C84 was used to derive the instrumental phase offset between the continuum and spectral configurations. Amplitude self-calibration was performed on 3C84, and this source was used to derive bandpass correction tables and also to measure the polarization leakage. Systematic leakage was calibrated and there was found to be a residual leakage of $\leq$ 0.5 per cent Stokes I in Stokes Q and U and $\leq$ 0.25 per cent in Stokes V. 3C286 was used to derive the polarization angle correction. The various corrections appropriate for each frequency were then applied to ON1, including phase and amplitude solutions derived from 2013+340. Inspection of the spectra of each of the ON1 lines in LHC and RHC enabled the selection of the brightest channel in a single hand of circular polarization; this was imaged to obtain an accurate reference position. The systematic position errors are $\sim$12 mas at 6$-$7 GHz due to uncertainty in the phase calibration, telescope positions and atmospheric variation (Etoka, Cohen & Gray 2005). The clean components of each reference image were then used as a model for phase and amplitude self-calibration and the solutions applied to all channels and both polarizations of that line.
Finally, the brightest channel in the weaker hand of polarization for each line was selected and self-calibrated, applying the solutions to that hand of polarization only. In this way, good calibration was achieved without compromising the absolute position accuracy or the polarized phases and amplitudes. When all calibration was complete, the contributions of the antennas were re-weighted according to their sensitivity, their individual efficiency and to avoid baselines to a single antenna completely dominating the array (http://www.merlin.ac.uk/user_guide/). This allowed preparation of clean image datacubes for each line in all 4 Stokes parameters (I, Q, U and V) and in RHC and LHC. A circular restoring beam of 50 mas FWHM was used with a pixel size of 12 mas, giving in total an image field of view of 12 arcsec. The Stokes I images had a 1$\sigma_{\rm rms}$ noise level of 25 mJy beam${}^{-1}$ and 15 mJy beam${}^{-1}$ for methanol and OH respectively, except in the brightest total intensity channels, which were dynamic range limited with a noise level of up to 0.5 per cent of the peak. The LHC and RHC images had noise levels of 35, 20 and 25 mJy beam${}^{-1}$ for the methanol, 6.031- and 6.035-GHz OH lines, respectively. Elliptical Gaussian components were fitted to each patch of emission above 5$\sigma_{\rm rms}$ in each Stokes I, LHC and RHC channel and the position, deconvolved size, peak and total intensities, and the uncertainties were all measured. Any components not found at similar positions in a series of 5 or more consecutive channels were discarded. The relative positional errors for all components, determined from the errors in the peak position of the Gaussian fits, were 2 mas or better (for the accuracy of Gaussian component fitting see Condon 1997; Condon et al. 1998; Richards, Yates & Cohen 1999). 
The uncertainty in component size is the square root of 2 multiplied by the position uncertainty, which enables brighter component sizes to be determined with meaningful accuracy. The position and intensity of emission in the Q, U, and V channel maps were measured at the position of each Stokes I component for that line and channel. 3 RESULTS 3.1 6.7-GHz Methanol 51 individual channel Stokes I components were detected in the 6.668-GHz methanol transition. These individual channel components were grouped into features as described in Section 2 and the flux density weighted mean properties for each feature are given in Tables 1 and 2. Four methanol features were observed with peak velocities between 14.46 km s${}^{-1}$ and 15.57 km s${}^{-1}$. The positions and velocities of the 6.7-GHz methanol features are given in Table 1 and the Stokes I spectrum and channel-averaged map are shown in Fig. 2. Features show typical FWHM linewidths of between 0.21 and 0.28 km s${}^{-1}$. Peak flux densities of the individual features varied between 0.48 Jy beam${}^{-1}$ and 73.62 Jy beam${}^{-1}$. Estimates of the deconvolved component sizes from the Gaussian fitting ranged from 16 to 52 mas (equivalent to 29 to 94 AU at the assumed distance of 1.8 kpc). Combined with the peak flux densities these allowed lower limits to the peak brightness temperatures to be determined (ranging between 1.99 $\times$ 10${}^{7}$ K and 8.97 $\times$ 10${}^{9}$ K). No previous high resolution observations of 6.7-GHz methanol are available in the literature for comparison. A larger restoring beam of 100 mas was used to search for extended emission, similar to that seen in W3(OH) by Harvey-Smith & Cohen (2005), but none was found. It is difficult to speculate how much of the single-dish flux density is recovered by the current observations as the previous single-dish observations were taken in 1999 (Szymczak et al.
2000) and showed possible variability in comparison to the previous observations in 1991 (Menten 1991). The two flux densities recorded were, as noted in Section 1, 109 Jy and 91 Jy respectively, taken with the 32-m Torun and the 43-m Greenbank telescope. The errors in the absolute flux density calibrations for the two observations were $\pm$15 per cent and $\pm$10 per cent respectively, so the two measurements could be consistent to within the errors. However, there have been a number of studies into the variability of 6.7-GHz methanol masers, most recently by Goedhart, Gaylard & van der Walt (2005), and these have served to show that 6.7-GHz methanol masers are generally variable on timescales of years, but the nature of their variability is wide ranging, from periodic/quasi-periodic through monotonic increases/decreases to sporadic behaviour, and so with just two previous epochs it is impossible to judge the nature of the possible increased flux density in 1999. Given the lack of extended emission at the 100 mas restoring beam, and assuming minimal variability, comparison with the spectrum of Szymczak et al. (2000) implies that the flux density recovered is likely to be over 80 per cent. 3.2 6-GHz Excited-state OH 14 individual channel Stokes I components were detected in the 6.031-GHz OH transition, and 63 in the 6.035-GHz OH transition. 9 individual channel LHC and 8 RHC components were detected for 6.031-GHz OH, whilst 53 individual channel LHC and 50 RHC components were found for 6.035-GHz OH. These individual channel components were grouped into features as described in Section 2 and the flux density weighted mean properties for each feature are given in Tables 3 and 4. A single OH LHC feature and a single OH RHC feature were found at 6.031 GHz with peak velocities of 14.19 km s${}^{-1}$ and 13.87 km s${}^{-1}$ respectively.
6 OH LHC and 5 OH RHC features were found at 6.035 GHz with peak velocities between 13.76 km s${}^{-1}$ and 15.50 km s${}^{-1}$. The positions and velocities of these OH features are given in Table 3, with the corresponding Stokes I feature details in Table 4. The RHC and LHC spectra and Stokes I channel-averaged maps are shown in Fig. 2. Linewidths vary between 0.15 and 0.34 km s${}^{-1}$. The peak flux densities varied between 0.35 Jy beam${}^{-1}$ and 15.72 Jy beam${}^{-1}$. Estimates of the deconvolved component sizes from the Gaussian fitting ranged from 13 to 38 mas (equivalent to 23 to 68 AU at the assumed distance of 1.8 kpc). Combined with the peak flux densities these allowed lower limits to the peak brightness temperatures to be determined (ranging between 1.02 $\times$ 10${}^{7}$ K and 2.71 $\times$ 10${}^{9}$ K). The overall shape of the 6.035-GHz OH emission spectrum is consistent with the VLBI spectrum of Desmurs & Baudry (1998) and the single-dish spectrum of Baudry et al. (1997), with peaks at similar velocities and similar relative intensities between the spectral features. However, the overall flux density in the current observations is greater. Baudry et al. (1997) established from two epochs, separated by a year, that there is rapid time variability of the source, and hence over the 10 years between those observations and the current, significant flux density variation could have occurred and, as for the methanol, any estimate of the recovered flux density would be highly uncertain. Comparison between the features found by Desmurs & Baudry (1998) and the current observations is also difficult, again due to the variability, but also due to the error in absolute position of the previous observations (between 0.2 and 0.5 arcsec). However, the range of velocities of the features between 13.6 and 15.4 km s${}^{-1}$ is similar, as are the relative positions of the 6.035- and 6.031-GHz OH emission (described in Section 4 and seen in Fig. 3).
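The brightness-temperature lower limits quoted for both species follow from a Rayleigh-Jeans conversion of peak flux density and deconvolved Gaussian size. A minimal sketch of that calculation; the pairing of the 73.62 Jy beam${}^{-1}$ methanol peak with the 16 mas size below is illustrative only, not a specific feature from the tables:

```python
import math

# Sketch of a Rayleigh-Jeans brightness temperature for a circular Gaussian
# component: T_b = S * lambda^2 / (2 k Omega), Omega = pi*theta^2/(4 ln 2).
K_B = 1.380649e-23                        # Boltzmann constant, J/K
C = 2.99792458e8                          # speed of light, m/s
MAS = math.pi / 180.0 / 3600.0 / 1000.0   # milliarcseconds -> radians

def t_brightness(flux_jy: float, size_mas: float, freq_hz: float) -> float:
    """Brightness temperature (K) of a circular Gaussian of FWHM size_mas."""
    omega = math.pi * (size_mas * MAS) ** 2 / (4.0 * math.log(2.0))  # sr
    wavelength = C / freq_hz
    return flux_jy * 1e-26 * wavelength ** 2 / (2.0 * K_B * omega)

# Illustrative numbers: a 73.62 Jy/beam methanol component, 16 mas across.
tb = t_brightness(73.62, 16.0, 6.6685192e9)
print(f"T_b ~ {tb:.2e} K")
```

The result is of order 10${}^{9}$-10${}^{10}$ K, consistent with the upper end of the methanol range quoted above.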
Specifically at 6.031-GHz OH, Fish et al. (2006) and Desmurs & Baudry (1998) found 1 LHC and 1 RHC feature at velocities of 14.2 and 13.8 km s${}^{-1}$ respectively, which are also found in the current observations. For the 6.035-GHz OH, the current observations fail to detect the LHC features at 5.47, 7.74 and 12.59 km s${}^{-1}$ seen by Fish et al., but do detect 2 previously unseen LHC features. The current study also fails to detect the RHC feature at 5.53 km s${}^{-1}$, but otherwise recovers the RHC features seen by Fish et al. and Desmurs & Baudry and finds 1 further feature. 3.3 Linear Polarization Two of the four 6.7-GHz methanol features demonstrate Stokes Q and U flux densities that are statistically significant ($>$3$\sigma$) with respect to the random noise, resulting in linear polarizations of 0.2 per cent and 1.3 per cent. However, analysis of the bandpass calibrator 3C84 showed a residual polarization leakage of $\leq$0.5 per cent, reducing the significance of the measurements to an upper limit. The 6.031-GHz OH Stokes I feature had significant Q and U flux density giving 12 per cent linear polarization. Of the five 6.035-GHz OH Stokes I features, two had significant Q and U flux density giving linear polarizations of 19 per cent and 10 per cent. The 3 features that had statistically significant linear polarization showed reasonably consistent polarization angles of $-$60${}^{\circ}$ $\pm$28${}^{\circ}$. The upper limit of $\sim$1 per cent linear polarization in methanol is consistent with that found in W3(OH) by Vlemmings et al. (2006a) and in NGC6334F by Ellingsen (2002), in both of which the majority of features displayed less than 5 per cent linear polarization. Linear polarization of excited-state OH in ON1 has not been studied before. The polarization vectors of the statistically significant OH results are plotted in Fig. 4 together with the upper limits of methanol polarization.
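The polarization fractions and angles above follow from the standard Stokes-parameter definitions. A minimal sketch, with hypothetical Q and U values chosen so the fraction and angle come out near the 12 per cent and $-$60${}^{\circ}$ quoted for the 6.031-GHz feature:

```python
import math

# Sketch (standard definitions, not code from the paper): fractional linear
# polarization p = sqrt(Q^2 + U^2)/I and electric-vector position angle
# chi = 0.5 * atan2(U, Q).
def linear_pol(i: float, q: float, u: float):
    """Return (fraction, position angle in degrees) from Stokes I, Q, U."""
    p = math.hypot(q, u) / i
    chi = 0.5 * math.degrees(math.atan2(u, q))
    return p, chi

# Hypothetical Stokes values in Jy, chosen for illustration:
frac, angle = linear_pol(10.0, -0.6, -1.04)
print(f"{100 * frac:.0f} per cent at {angle:.0f} deg")
```

Note the factor of one half in the angle: the position angle is defined modulo 180${}^{\circ}$, which is why a 90${}^{\circ}$ flip (discussed next) is the largest distinguishable change.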
The two methanol features with tentative linear polarization have a 90${}^{\circ}$ difference in polarization angle. As shown in the case of SiO and H${}_{2}$O masers, which are also diamagnetic, such a 90${}^{\circ}$ flip can be caused by a difference in the angle $\theta$ between the maser line of sight and the magnetic field. When $\theta$ is larger than the critical angle $\theta_{\rm crit}\approx 55^{\circ}$, the linear polarization direction is perpendicular to the magnetic field, whilst when $\theta<\theta_{\rm crit}$, the linear polarization is parallel. Furthermore, as the fractional linear polarization also decreases close to the critical angle, this could explain why the strongest methanol maser feature has the lowest significant linear polarization fraction (Vlemmings et al. 2006b, Vlemmings & Diamond 2006, references therein). 3.4 Circular Polarization & Magnetic Fields In order to investigate the degree of circular polarization within methanol the cross-correlation method of Modjaz et al. (2005) was employed. This method estimates Zeeman splitting via the cross-correlation of the RR and LL spectra and as such is independent of any polarization leakage that may affect the Stokes V spectrum. Also, assuming the magnetic field strength is similar across the blended maser features, it allows for a determination of Zeeman splitting when spectral blending makes a measurement using the V spectrum impossible. The rms of the cross-correlation method is a function of the spectral channel width and the rms noise on the RR and LL spectra. The method was shown by Modjaz et al. (2005), via Monte Carlo simulations, to have a sensitivity equivalent to the standard S-curve method. Through this method, methanol feature D is seen to have a Zeeman splitting of 0.0009$\pm$0.0003 km s${}^{-1}$. Compared with OH, methanol is believed to have a much lower Zeeman splitting coefficient, but a precise value has never clearly been defined. 
Using the only currently available value of the methanol g-Landé factor, determined by Jen (1951) from 25 GHz methanol maser lines, the Zeeman splitting coefficient of the 6.7-GHz methanol maser transition is calculated to be 0.0493 km s${}^{-1}$ G${}^{-1}$ (Vlemmings et al. 2006a). Consequently the Zeeman splitting of feature D corresponds to a field strength of $-$18$\pm$6 mG. This represents the first tentative detection of Zeeman splitting in 6.7-GHz methanol. As the noise is limited by the dynamic range, it increases to $\sim$60 mJy in the peak channels, meaning the field strength is detected at $\sim$3$\sigma$ (rather than $>$5$\sigma$, as it would have been without the dynamic range limitation). In terms of fractional circular polarization, statistically significant results could not be determined due to the residual leakage, but the feature demonstrating the Zeeman splitting had a circular polarization of 0.65 per cent at 2.5 $\sigma$. In total six excited-state OH Zeeman pairs were found, five at 6.035 GHz and one at 6.031 GHz, with the LHC and RHC features in each pair having a spatial association to within 1 mas. Splitting factors of 0.0564 km s${}^{-1}$ mG${}^{-1}$ and 0.0790 km s${}^{-1}$ mG${}^{-1}$, respectively, were assumed (Yen et al. 1969). This implies a magnetic field strength ranging from $-$1.1 to $-$5.8 mG for the pairs at 6.035 GHz and $-$3.9 mG for the single pair at 6.031 GHz. All the magnetic fields are directed towards us and concur in magnitude and direction with those found previously by Desmurs & Baudry (1998), which varied between $-$3.6 mG and $-$6.3 mG, and the single-dish measurements of Fish et al. (2006) of between $-$0.8 mG and $-$5.0 mG. Furthermore the fields seen in ON1 appear to be typical for excited-state OH in star-formation regions in general, which have been observed to vary between $+$9.1 mG and $-$13.5 mG (Fish et al. 2006). Fig. 4 shows the location of the measured field strengths.
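The splitting-to-field conversion above is a simple division by the transition's splitting coefficient; note the methanol coefficient is per Gauss while the OH coefficients are per milligauss. A minimal sketch using the coefficients and the feature D splitting quoted in the text:

```python
# Sketch of the Zeeman-splitting-to-field conversion used in the text.
# Coefficients: 0.0493 km/s per G for 6.7-GHz methanol (Vlemmings et al.
# 2006a, from Jen 1951); 0.0564 km/s per mG for 6.035-GHz OH (Yen et al. 1969).
METHANOL_COEFF_KMS_PER_G = 0.0493
OH_6035_COEFF_KMS_PER_MG = 0.0564

# Methanol feature D: measured splitting 0.0009 +/- 0.0003 km/s.
b_methanol_mg = 1000.0 * 0.0009 / METHANOL_COEFF_KMS_PER_G
print(f"methanol field ~ {b_methanol_mg:.0f} mG")  # ~18 mG, as quoted

# Inverse check: the splitting a 5.8 mG field would produce at 6.035 GHz.
dv_oh = OH_6035_COEFF_KMS_PER_MG * 5.8
print(f"6.035-GHz OH splitting at 5.8 mG: {dv_oh:.3f} km/s")
```

The OH splitting of $\sim$0.33 km s${}^{-1}$ for the strongest field is several channel widths, which is why the OH Zeeman pairs are directly measurable while the methanol splitting (well under one channel) requires the cross-correlation method.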
Overall the fields compare favourably with the field strengths found for the ground-state OH by Nammahachak et al. (2006), which varied between $-$0.4 and $-$4.6 mG. 4 DISCUSSION 4.1 Distribution & Coincidences The methanol features lie in a roughly linear south-east to north-west distribution, covering about 0.6 arcsec ($\sim$1080 AU at 1.8 kpc). The excited-state OH features meanwhile show a wider spread of about 1.2 arcsec ($\sim$2160 AU), with one feature offset from the linear distribution by about 0.5 arcsec ($\sim$900 AU). The linear distribution of both the 6.7-GHz methanol and the 6-GHz excited-state OH is parallel to the mainline ground-state OH distribution of Nammahachak et al. (2006), with the methanol consistently offset by on average $\sim$70 mas (130 AU). The three 6.035-GHz features towards the centre of the distribution, seen in Fig. 3, are systematically offset from the ground-state 1665-MHz OH features by an average of $\sim$57 mas (100 AU). It is worth considering if proper motion of the maser features between the observations of Nammahachak et al. observed in 1996 and the current observations in 2005 could explain the observed spatial offset between the ground-state OH and the methanol and excited-state OH maser features (both sets of observations used the same extragalactic phase reference source). The systematic offset between the 6.035-GHz OH and 1665-MHz OH of 100 AU would require a motion over $\sim$1.5 $\times$ 10${}^{10}$ km in 9 years, which is $\approx$50 km s${}^{-1}$. Both internal and external proper motion should be examined. Work by Bloemhof, Reid & Moran in 1992 looked at the internal proper motion of the 1665-MHz OH masers in the W3(OH) region, which is located at a similar distance, and found very few motions greater than 5 mas over the 7.5 year period they studied (velocities were typically a few km s${}^{-1}$). This implies that internal proper motion is unlikely to account for the separation seen. 
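The proper-motion argument above is straightforward unit conversion: an angular offset at the adopted distance, divided by the epoch separation. A minimal sketch of that arithmetic (the 4.74 factor converts mas yr${}^{-1}$ at kpc distances to km s${}^{-1}$):

```python
# Sketch of the proper-motion arithmetic in the text: the transverse speed
# needed to move a feature 100 AU in the 9 years between epochs, and the
# angular displacement a given speed produces at the adopted 1.8 kpc.
AU_KM = 1.495978707e8   # astronomical unit, km
YEAR_S = 3.1557e7       # Julian year, s

def speed_km_s(offset_au: float, years: float) -> float:
    """Transverse speed (km/s) implied by a linear offset over a baseline."""
    return offset_au * AU_KM / (years * YEAR_S)

def mas_per_yr(v_km_s: float, dist_kpc: float) -> float:
    """Proper motion (mas/yr); 1 mas/yr at 1 kpc is ~4.74 km/s."""
    return v_km_s / (4.74 * dist_kpc)

v = speed_km_s(100.0, 9.0)
total_mas = 9.0 * mas_per_yr(45.0, 1.8)
print(f"100 AU in 9 yr -> {v:.0f} km/s")
print(f"45 km/s at 1.8 kpc -> {total_mas:.0f} mas in 9 yr")
```

The first figure reproduces the $\approx$50 km s${}^{-1}$ quoted above; the second anticipates the Galactic-rotation estimate of $\sim$45 mas over 9 years discussed next.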
It is also likely that any internal proper motion of the ground-state OH masers would also affect the excited-state masers, as their masing gas clouds are likely to share a bulk proper motion. In terms of external proper motion it is possible to examine the effect of Galactic rotation on the region over the 9 year separation. Adopting the rotation curve of Brand & Blitz (1993), leads to an external east to west motion of $\sim$45 km s${}^{-1}$ for the region, which would amount to $\sim$45 mas over 9 years. This would result in a separation of $\sim$ 10 mas between the overall line of ground-state and excited-state OH masers and $\sim$ 25 mas between the ground-state OH and the methanol. Coupled with the uncertainties in these calculations, possible peculiar motion, and the absolute positional errors of the two sets of data (20 mas and 15 mas respectively), it is possible that proper motion may account for the spatial separation seen. An accurate determination of these offsets requires near simultaneous observations of both frequency regimes, but this is beyond the scope of the current paper. However, if this apparent lack of spatial coincidence is demonstrated it could be the result of differences in the specific column densities of the two species, as modeling of OH and methanol masers by Cragg, Sobolev & Godfrey (2002) showed that both OH and methanol require very similar high density, low temperature regimes (dust temperatures exceeding 100 K, gas temperatures $<$100 K, and densities in the range 10${}^{5}$ $<$ n${}_{\rm H}$ $<$ 10${}^{8.3}$ cm${}^{-3}$). On the other hand, subtle variations in the above parameters could pick out the exact conditions required for each species of maser. A plot of the approximate regions for maser emission for a dust temperature of 175 K is given in Fig. 5. For the excited-state OH and methanol frequencies of the current observations, based on a combination of the absolute and relative positional errors as per Etoka et al. 
(2005), 15 mas can be taken as the distance for association, within which it is not possible to determine if individual methanol and excited-state OH features are spatially separate and hence may indicate coincidence and/or co-propagation. In the current study, there is no association between the 6.7-GHz methanol and the 6-GHz excited-state OH (the closest emission peaks are separated by $\sim$23 mas; even taking into account the deconvolved component sizes there is still a distinct separation). The two transitions also display significantly different magnetic field strengths as described in Section 4.3. The RHC and LHC Zeeman pair at 6.031 GHz coincides with a 6.035-GHz OH Zeeman pair to $<$5.5 mas spatially and $<$0.049 km s${}^{-1}$ in velocity, implying co-propagation. This is a confirmation of the suspected coincidence which was identified in Desmurs & Baudry (1998) to within their accuracy of $\sim$200 mas. Gray, Field & Doel (1992) show that for the two transitions to be coincident, if the dust temperature is $\sim$50 K and assuming a hydrogen number density of 2.5 $\times$ 10${}^{7}$ cm${}^{-3}$, then the kinetic temperature must be $\sim$75 K. However the results of Cragg et al. (2002), using a more extensive model and a dust temperature of 175 K, allow for a range of kinetic temperatures (as shown in Fig. 5). The 100 per cent association of 6.031-GHz emission with 6.035-GHz is consistent with previous results, such as by Etoka et al. (2005). The fact that there is only one case of coincidence among all the features shows the physical conditions of the main group of maser spots must differ from those of the offset coincident pair. At face value the data sets we have show that the individual 6-GHz excited-state OH features do not appear to show any spatial coincidence with ground-state OH to within 20 mas (the closest separation of peak emission is $\sim$28 mas). There is one exception, a satellite line feature at 1612-MHz.
There are only two 1612-MHz OH features in the region, one is separated by 34 mas from 6.035-GHz emission, the other has just a 7 mas separation. As mentioned earlier this may be accounted for by proper motion of the region, but if not then it may imply, contrary to previous models, that there might be an overlap in the conditions for maser emission between the 1612-MHz transition and the 6.035-GHz transition, which is not present for the other ground-state OH transitions. On the other hand the two transitions could be tracing higher density gas, but not be spatially coincident, and so exist in different temperature regions. Gray et al. (1992) suggest the 1612-MHz transition requires kinetic gas temperatures of $\geq$150 K and hydrogen number densities around 6 $\times$ 10${}^{6}$ cm${}^{-3}$. Fig. 5 also implies a higher density is required for both to be present, but if the dust temperature is consistent, the kinetic temperature regimes vary (but as noted in the caption to the Figure other factors may allow for the two to be coincident). It is also possible to conclude that the possible spatial separation between the ground-state and excited-state OH (with the exception of the one 1612-MHz feature), could mean we see the very highest density of OH in the excited-state regions, as the conditions for excited-state OH maser emission extend to higher densities than the ground-state (Fig. 5). As the methanol lies in the same, slightly offset, region of 6.035-GHz emission it too perhaps is in a similarly high density regime. However of course if both the gas density and temperature are high, then the collision rate will be increased and both species of maser are likely to be quenched. 
Combined with the 1612-MHz distribution, this would lead to the assumption that the excited-state OH traces the slightly cooler dense gas, the 6.668-GHz methanol the hotter dense gas, and the 1612-MHz OH possibly the hottest dense gas (although the abundances of the molecular species and the dust temperature may also vary). The higher density region could be indicative of a shock front, propagating away from the UCHII. The velocities of the maser features suggest a gradient with position, with 3 of the 4 methanol features showing a north-west to south-east gradient, which is also seen in the excited-state OH at 6.035 GHz in 3 of the 5 features. If a gradient does exist this would concur with the ground-state OH picture and the possibility that the masers are tracing an outflow or disk. Interestingly two of the methanol features and one of the 6.035-GHz OH features also demonstrate tentative velocity gradients within their individual channel components. The two methanol features’ internal gradients lie in a north-east to south-west direction, i.e. perpendicular to the overall velocity gradient across the features, and may suggest the masers are tracing a planar shock (Elitzur, Hollenbach & McKee 1992; Dodson, Ojha & Ellingsen 2004). The 6.035-GHz OH feature’s internal gradient, meanwhile, lies parallel to the main distribution. For comparison, W3(OH) represents a star-formation region at a similar distance to ON1, which has been studied at high resolution for multiple transitions of masers (Menten et al. 1992; Sutton et al. 2004; Wright et al. 2004; Harvey-Smith & Cohen 2005; Etoka et al. 2005; Harvey-Smith & Cohen 2006). The region has far more maser features across the transitions and there is far more “intermingling” between the species and transitions, without the possible separation that is seen in ON1. Within 15 mas in W3(OH) there is a high percentage of associations between the excited-state OH and the mainline ground-state OH (Etoka et al.
2005), which may not be the case for ON1. Both regions show a lack of association on the smallest scales between 6-GHz OH and 1720-MHz OH sources, but unlike W3(OH), ON1 demonstrates possible association of 6-GHz OH with 1612-MHz OH. In W3(OH), 27 per cent of 6.7-GHz methanol masers had associated 6-GHz OH maser emission, whilst in ON1 there is no association to 15 mas. Some of these differences could be accounted for by a difference in the orientation of the regions, and the same caveat of internal and external proper motions may also affect the W3(OH) results (in so far as the comparison of Etoka et al. 2005 compares data for mainline OH observed in 1996, 4.7-GHz OH observed in 1993 and excited-state OH and 6.7-GHz methanol seen in 2001). 4.2 Extended Emission The current observations did not detect any extended emission above the 3$\sigma$ limit of $\sim$75 mJy beam${}^{-1}$ at 100 mas resolution. This contrasts with the detection in the W3(OH) star-formation region (Harvey-Smith & Cohen 2006) at a well established distance of 1.95 kpc (Xu et al. 2006). If ON1 is at a similar distance, it lacks diffuse methanol emission; however, if it is at the far-kinematic distance, it might just be undetectable at present. Further study would be required, both to determine whether extended emission is prevalent in methanol maser sources, and thus expected to be seen, and to better determine the distance to ON1 (perhaps through maser astrometry as per Xu et al. 2006). 4.3 Magnetic Field Strength & Alignment The average polarization angle of $-$60${}^{\circ}$$\pm$28${}^{\circ}$ is consistent with the line of maser distribution, which is itself perpendicular to the known H${}^{13}$CO${}^{+}$ outflow with a PA of 44${}^{\circ}$ (Kumar et al. 2004). This suggests the emission could be a propagating shock front, which agrees with the excited-state OH and methanol lying in a potentially higher density region, as identified in the previous Section.
Faraday rotation is inversely proportional to the square of the frequency and as such will strongly affect the ground-state OH transitions. Nammahachak et al. (2006) estimate the external rotation measure for ON1 to exceed $-$100 rad m${}^{-2}$, more than enough to disrupt any pattern present. For the excited-state OH at 6 GHz and methanol at 6.7 GHz, internal Faraday rotation is minimal. External Faraday rotation, which is calculated from the standard Faraday rotation equation using typical values for interstellar electron density and magnetic field as per Vlemmings et al. (2006a), but adjusted for the distance of 1.8 kpc, is of the order of 12${}^{\circ}$ and 11${}^{\circ}$ for the 6 GHz OH and 6.7 GHz methanol respectively. The coincident 6.031-GHz and 6.035-GHz Zeeman pairs (Z${}_{1}$ and Z${}_{4}$) show the same field strength to within the errors, which concurs with the possibility of co-propagation mentioned previously. However, as this is the only coincidence of the two transitions, and there is a similar sized field at 6.035-GHz for Z${}_{2}$, the magnetic field strength may not be intrinsically linked to the conditions necessary for maser co-propagation. The similarity of the 6.031-GHz and 6.035-GHz fields for Z${}_{1}$ is in contrast to that found by Desmurs & Baudry (1998), where the 6.031-GHz transition was seen to have stronger fields than the 6.035-GHz transition. The tentative magnetic field strength of $-$18$\pm$6 mG derived from methanol for the current observations is larger than that of both the excited-state OH and the ground-state OH, although it is within the established upper limit for the field strength derived through methanol observations of W3(OH) of $-$22 mG (Vlemmings et al. 2006a). The measured field strength implies the methanol may be tracing a localised increase in density compared to the OH.
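The frequency dependence above follows from the rotation angle scaling as $\lambda^{2}$ for a fixed rotation measure. A minimal sketch, using the $-$100 rad m${}^{-2}$ external rotation-measure lower bound quoted from Nammahachak et al. (2006) at all three frequencies; this only illustrates the scaling, since the 12${}^{\circ}$/11${}^{\circ}$ figures in the text come from a separate electron-density estimate:

```python
import math

# Sketch of the lambda-squared Faraday-rotation scaling: the rotation angle
# produced by a fixed rotation measure (RM, in rad/m^2) at each transition.
C = 2.99792458e8  # speed of light, m/s

def faraday_deg(rm_rad_m2: float, freq_hz: float) -> float:
    """Faraday rotation angle (degrees) for a given RM and frequency."""
    lam = C / freq_hz               # wavelength in metres
    return math.degrees(rm_rad_m2 * lam ** 2)

rot_ground = abs(faraday_deg(-100.0, 1.665e9))    # ground-state OH
rot_oh6 = abs(faraday_deg(-100.0, 6.035092e9))    # excited-state OH
rot_meth = abs(faraday_deg(-100.0, 6.6685192e9))  # methanol
print(f"1.665 GHz: {rot_ground:.0f} deg; 6.035 GHz: {rot_oh6:.0f} deg; "
      f"6.668 GHz: {rot_meth:.0f} deg")
```

At 1.665 GHz the same rotation measure rotates the polarization angle by well over 100${}^{\circ}$, scrambling any pattern, while at 6$-$7 GHz the rotation is only of order 10${}^{\circ}$.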
If the standard scaling law B $\propto$ n${}^{0.5}$ of Crutcher (1991) is applied, which has been found to be valid up to the highest maser densities such as those probed by H${}_{2}$O masers (Vlemmings et al. 2006b), then the implication is that the methanol masers occur in gas denser by a factor of 5$-$10 compared to the OH masing gas, at hydrogen densities of $\sim$10${}^{8}$ cm${}^{-3}$. However, caution must still prevail as the magnetic field detection is only marginal. If further study demonstrated the presence of a stronger magnetic field in the regions of methanol maser emission, this could well lead to a better determination of the exact criteria for the 6.7-GHz emission or indeed the star-formation stage it traces. Comparison with the ground-state OH magnetic field strengths measured by Nammahachak et al. (2006) highlights several similarities. The field strength of the 6.035-GHz OH seen towards the middle of the linear distribution, that of Zeeman pair Z${}_{2}$, is comparable to the field strength of the nearby ($\sim$60 mas separation) 1665-MHz OH, both giving 4.6 mG directed towards us. The field detected at the south-east end of the distribution, for the 6.035-GHz OH Zeeman pair Z${}_{4}$, is comparable to the 1720-MHz OH ($\sim$26 mas separation) field of 1.0 mG. However, located between these two sets of similar fields, the field strengths found for the methanol Zeeman splitting and the 6.035-GHz OH Zeeman pair Z${}_{3}$ are both larger than the 1665-MHz OH field (1.5 mG), although in this case there is a larger separation of $\sim$190 mas. 5 CONCLUSIONS New high-resolution MERLIN data demonstrate for the first time the distribution and possible polarization properties of 6.7-GHz methanol maser emission in the ON1 star-forming region. When combined with new excited-state OH observations and existing ground-state OH data, and correcting for the effects of Galactic motion, we see that all transitions lie in a similar region. 
We see the structure of ON1 to be that of two parallel, possibly offset, elongated distributions, one in ground-state OH, the other in interwoven excited-state OH and 6.7-GHz methanol. The 6.031-GHz transition of excited-state OH shows a linear polarization of 12 per cent, whilst the 6.035-GHz OH transition shows linear polarization varying between 10 per cent and 19 per cent. In comparison, the methanol, as expected, demonstrates a lower value of $\sim$1 per cent. Consistent magnetic field strengths were observed across the region for the excited-state OH, with a slight tendency for smaller field strengths towards the south-east of the distribution, which is in agreement with the known ground-state OH magnetic field strengths. Zeeman splitting was detected for the first time in the 6.7-GHz methanol maser emission, demonstrating a possible magnetic field strength of $-$18$\pm$6 mG. A Zeeman pair at 6.031 GHz is seen in coincidence with a 6.035-GHz OH Zeeman pair, with both having matching magnetic fields within the errors. This coincidence represents a 100 per cent association to $\sim$5 mas of the 6.031-GHz OH with the 6.035-GHz OH emission, but only a 20 per cent association of the 6.035-GHz with the 6.031-GHz OH. To the same level of spatial separation there is no coincidence between the methanol and excited-state OH transitions. The observed interweaving of excited-state OH and methanol maser features along the elongated distribution, together with the separation of these masers from the ground-state OH, is in agreement with the postulation of Caswell (1997) that the two species delineate similar or complementary regions. 
The separation of the individual methanol and excited-state OH features on the scales afforded by high-resolution observations could simply be due to variations in the relative abundances of the species, or it could be that the methanol is tracing a slightly higher gas temperature or even denser regions and thus a different component of the region surrounding the evolving massive star. This is complemented by the significantly higher magnetic field strength suggested for the 6.7-GHz methanol maser. Whether the maser features show a velocity gradient, and thus possibly trace a disk, is not possible to judge, as the number of features is too small to draw statistically sound conclusions. However, the consistent polarization angles and the offset nature of the denser gas imply the masers trace a shock front, possibly in the form of a torus or ring around a young stellar object. This is also highlighted by possible orthogonal velocity gradients across the individual components of the methanol maser features. The shock-front hypothesis concurs with the previous study of the ground-state transitions of OH by Nammahachak et al. (2006). The potential shock front lies orthogonal to the known H${}^{13}$CO${}^{+}$ outflow, and future proper motion studies of the masers may be able to determine whether they are moving in synchronization with the outflow. Acknowledgments JAG acknowledges the support of a Science and Technology Facilities Council (STFC) studentship. WHTV was supported by a Marie Curie Intra-European Fellowship within the 6th European Community Framework Programme under contract number MEIF-CT-2005-010393. JAG thanks L. Harvey-Smith for proposing the observations. Figure 1 is reproduced from Szymczak et al. 2000 and JAG would like to thank the authors M. Szymczak, G. Hrynek and A. J. Kus for this. MERLIN is a national facility operated by the University of Manchester on behalf of STFC. The authors would like to thank S. Ellingsen for his insightful comments and suggestions. 
The authors would like to dedicate this paper to the memory of R. J. Cohen. References Baudry & Diamond (1991) Baudry A., Diamond P. J., 1991, A&A, 247, 551 Baudry et al. (1997) Baudry A., Desmurs J. F., Wilson T. L., Cohen R. J., 1997, A&A, 325, 255 Baudry & Desmurs (2002) Baudry A., Desmurs J. F., 2002, A&A, 394, 107 Bloemhof, Reid & Moran (1992) Bloemhof E. E., Reid M. J., Moran J. M., 1992, ApJ, 397, 500 Brand & Blitz (1993) Brand J., Blitz L., 1993, A&A, 275, 67 Caswell & Vaile (1995) Caswell J. L., Vaile R. A., 1995, MNRAS, 273, 328 Caswell (1997) Caswell J. L., 1997, MNRAS, 289, 203 Condon (1997) Condon J. J., 1997, PASP, 109, 166 Condon et al. (1998) Condon J. J., Cotton W. D., Greisen E. W., Yin Q. F., Perley R. A., Taylor G. B., Broderick J. J., 1998, AJ, 115, 1693 Cragg, Sobolev & Godfrey (2002) Cragg D. M., Sobolev A. M., Godfrey P. D., 2002, MNRAS, 331, 521 Cragg, Sobolev & Godfrey (2005) Cragg D. M., Sobolev A. M., Godfrey P. D., 2005, MNRAS, 360, 533 Crutcher (1991) Crutcher R. M., 1991, ApJ, 520, 706 Desmurs & Baudry (1998) Desmurs J. F., Baudry A., 1998, A&A, 340, 521 Diamond et al. (2003) Diamond P. J., Garrington S. T., Gunn A. G., Leahy J. P., McDonald A., Muxlow T. W. B., Richards A. M. S., Thomasson P., 2003, MERLIN User Guide, v.3 Dodson et al. (2004) Dodson R., Ojha R., Ellingsen S. P., 2004, MNRAS, 351, 779 Downes et al. (1979) Downes D., Genzel R., Moran J. M., Johnston K. J., Matveyenko L. I., Kogan L. R., Kostenko V. I., Rönnäng B., 1979, A&A, 79, 233 Elldér et al. (1969) Elldér J., Rönnäng B., Winnberg A., 1969, Nature, 222, 67 Ellingsen (2002) Ellingsen, S. P. 2002 in: V. Mineese & M. Reid (ed.), Cosmic Masers: From Proto-Stars to Black Holes (IAU Symposium 206), p. 151 Elitzur et al. (1992) Elitzur M., Hollenbach D. J., McKee C. F., 1992, ApJ, 394, 221 Etoka, Cohen & Gray (2005) Etoka S., Cohen R. J., Gray M. D., 2005, MNRAS, 360, 1162 Fish et al. (2006) Fish V. L., Reid M. J., Menten K. 
M., Pillai T., 2006, A&A, 458, 485 Goedhart, Gaylard & van der Walt (2005) Goedhart S., Gaylard M. J., van der Walt J., 2005, Ap&SS, 295, 197 Gray, Field & Doel (1992) Gray M. D., Field D., Doel R. C., 1992, A&A, 262, 555 Harvey-Smith & Cohen (2005) Harvey-Smith L., Cohen R. J., 2005, MNRAS, 356, 637 Harvey-Smith & Cohen (2006) Harvey-Smith L., Cohen R. J., 2006, MNRAS, 371, 1550 Israel & Wootten (1983) Israel F. P., Wootten H. A., 1983, ApJ, 266, 580 Kumar, Tafalla & Bachiller (2004) Kumar M. S. N., Tafalla M., Bachiller R., 2004, A&A, 426, 195 Kurtz, Churchwell & Wood (1994) Kurtz S., Churchwell E., Wood D. O. S., 1994, ApJSS, 91, 659 Kurtz, Hofner & Alvarez (2004) Kurtz S., Hofner P., Alvarez C. V., 2004, ApJSS, 155, 149 Jen (1951) Jen C. K., 1951, PhRv, 81, 197 Menten (1991) Menten K. M., 1991, ApJ, 380, 75 Menten et al. (1992) Menten K. M., Reid M. J., Pratap P., Moran J. M., Wilson T. L., 1992, ApJ, 401, 39 Minier & Booth (2002) Minier V., Booth R. S., A&A, 387, 179 Minier et al. (2003) Minier V., Ellingsen S. P., Norris R. P., Booth R. S., 2003, A&A, 403, 1095 Macleod et al. (1998) Macleod G. C., Scalise E., Saedt S., Galt J. A., Gaylard M. J., 1998, ApJ, 116, 1897 Modjaz et al. (2005) Modjaz M., Moran J. M., Kondratko P. T., Greenhill L. J., 2005, ApJ, 626, 104 Nammahachak et al. (2006) Nammahachak S., Asanok K., Hutawarakorn Kramer B., Cohen R. J., Muanwong O., Gasiprong N., 2006, MNRAS, 371, 1550 Richards et al. (1999) Richards A. M. S., Yates J. A., Cohen R. J., 1999, MNRAS, 306, 954 Sutton et al. (2004) Sutton E. C., Sobolev A. M., Salii S. V., Malyshev A. V., Ostrovskii A. B., Zinchenko I. I., 2004, ApJ, 609, 231 Szymczak, Hrynek & Kus (2000) Szymczak M., Hrynek G., Kus A. J., 2000, A&AS, 143, 269 Vlemmings & Diamond (2006) Vlemmings W. H. T., Diamond P. J., 2006, ApJ, 648, L59 Vlemmings, Harvey-Smith & Cohen (2006a) Vlemmings W. H. T., Harvey-Smith L., Cohen R. J., 2006a, MNRAS, 371, L26 Vlemmings et al. (2006b) Vlemmings W. H. T., Diamond P. 
J., van Langevelde H. J., Torrelles J. M., 2006b, A&A, 448, 597 Xu et al. (2006) Xu Y., Reid M. J., Zheng X. W., Menten K., 2006, Science, 311, 54 Yen et al. (1969) Yen J. L., Zuckerman B., Palmer P., Penfield H., 1969, ApJ, 156, 27 Zheng et al. (1985) Zheng X. W., Ho P. T. P., Reid M. J., Schneps M. H., 1985, ApJ, 293, 522 Watson et al. (2003) Watson C., Araya E., Sewilo M., Churchwell E., Hofner P., Kurtz S., 2003, ApJ, 587, 714 Wouterloot & Brand. (1989) Wouterloot J. G. A., Brand J., 1989, A&AS, 80, 149 Wright et al. (2004) Wright M. M., Gray M. D., Diamond P. J., 2004, MNRAS, 350, 1253
Direct observation and temperature control of the surface Dirac gap in the topological crystalline insulator (Pb,Sn)Se B. M. Wojek http://bastian.wojek.de/ KTH Royal Institute of Technology, ICT MNF Materials Physics, Electrum 229, 164 40 Kista, Sweden    M. H. Berntsen KTH Royal Institute of Technology, ICT MNF Materials Physics, Electrum 229, 164 40 Kista, Sweden    V. Jonsson KTH Royal Institute of Technology, ICT MNF Materials Physics, Electrum 229, 164 40 Kista, Sweden Center for Quantum Materials, Nordic Institute for Theoretical Physics (NORDITA), Roslagstullsbacken 23, 106 91 Stockholm, Sweden    A. Szczerbakow Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland    P. Dziawa Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland    B. J. Kowalski Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland    T. Story Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, 02-668 Warsaw, Poland    O. Tjernberg oscar@kth.se KTH Royal Institute of Technology, ICT MNF Materials Physics, Electrum 229, 164 40 Kista, Sweden Center for Quantum Materials, Nordic Institute for Theoretical Physics (NORDITA), Roslagstullsbacken 23, 106 91 Stockholm, Sweden (November 20, 2020) Abstract Since the advent of topological insulators hosting symmetry-protected Dirac surface states, efforts have been made to gap these states in a controllable way. A new route to accomplish this was opened up by the discovery of topological crystalline insulators (TCIs) where the topological states are protected by real space crystal symmetries and thus prone to gap formation by structural changes of the lattice. Here, we show for the first time a temperature-driven gap opening in Dirac surface states within the TCI phase in (Pb,Sn)Se. 
By using angle-resolved photoelectron spectroscopy, the gap formation and mass acquisition are studied as a function of composition and temperature. The resulting observations lead to the addition of a temperature- and composition-dependent boundary between massless and massive Dirac states in the topological phase diagram for (Pb,Sn)Se (001). Overall, our results experimentally establish the possibility to tune between a massless and massive topological state on the surface of a topological system. pacs: 71.20.$-$b, 73.20.At, 77.80.$-$e, 79.60.$-$i I Introduction The study of topological properties of materials and corresponding phase transitions has received tremendous interest in the condensed-matter community in recent years Hasan and Kane (2010); Ando (2013). A particularly interesting class of materials is that of three-dimensional topological crystalline insulators (TCIs) Fu (2011); Ando and Fu (2015) where degeneracies in the surface electronic band structures are protected by point-group symmetries of the crystals. The first material systems to realize this state were found within the class of IV-VI narrow-gap semiconductors: SnTe Hsieh et al. (2012); Tanaka et al. (2012), (Pb,Sn)Te Xu et al. (2012); Tanaka et al. (2013), and (Pb,Sn)Se Dziawa et al. (2012); Wojek et al. (2013). These materials, crystallizing in the rock-salt structure, host a set of four Dirac points on specific surfaces, and a great advantage, in particular of the solid solutions, is the tunability of the bulk band gap—and hence their topological properties—by several independent parameters Nimtz and Schlicht (1983), such as composition Tanaka et al. (2013); Wojek et al. (2014), temperature Dziawa et al. (2012); Wojek et al. (2013, 2014), pressure Barone et al. (2013a), or strain Barone et al. (2013b). 
Moreover, since the surface Dirac points are protected by crystalline symmetries, it has been predicted that selectively breaking these symmetries can lift the surface-state degeneracies so that the carriers acquire masses Hsieh et al. (2012); Serbyn and Fu (2014). Indeed, signatures of such massive surface Dirac fermions have been observed recently in Landau-level-spectroscopy experiments on (001) surfaces of (Pb,Sn)Se crystals Okada et al. (2013) and a surface distortion has been identified to be at the heart of the phenomenon Zeljkovic et al. (2015) (cf. Fig. 1). However, since these scanning-tunneling-microscopy/spectroscopy (STM/STS) studies only provide low-temperature data, the evolution of the surface-state mass/gap with temperature remains elusive. It is particularly interesting to ask whether the four (ungapped) Dirac points ever exist simultaneously on the (001) surface of (Pb,Sn)Se crystals in the TCI state, whether the partially gapped surface states can be observed directly by $k$-resolved methods, and whether one can switch between the massive and massless states by varying extrinsic parameters. In this study, we address these open questions using high-resolution angle-resolved photoelectron spectroscopy (ARPES) measurements on (Pb,Sn)Se mixed crystals. Consistent with the previous STM/STS studies, we directly observe robust gapped Dirac states on the (001) surface at low temperature. Most importantly, however, the evolution of the gap/mass is tracked by composition- and temperature-dependent experiments and a transition to entirely massless surface Dirac states is witnessed. We suggest that the surface distortion detected by STM vanishes at sufficiently high temperatures, thus leading to the possibility of tuning the surface states by employing structural changes. Finally, this is summarized in a revised topological phase diagram for (Pb,Sn)Se (001). 
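The symmetry-breaking mass acquisition discussed above can be illustrated with a generic two-band surface Dirac Hamiltonian (a minimal sketch; the full surface model of Serbyn and Fu (2014) is more elaborate):

```latex
H(\mathbf{k}) \;=\; \hbar v \left( k_{x}\,\sigma_{y} - k_{y}\,\sigma_{x} \right) + m\,\sigma_{z},
\qquad
E_{\pm}(\mathbf{k}) \;=\; \pm\sqrt{(\hbar v |\mathbf{k}|)^{2} + m^{2}},
```

where $\sigma_{i}$ are Pauli matrices and $v$ the surface-state velocity. As long as the protecting mirror symmetry is intact, $m = 0$ and the spectrum is gapless; a symmetry-breaking lattice distortion generates $m \neq 0$ and opens a gap $\Delta = 2|m|$ at the Dirac point, with $m$ expected to grow with the distortion amplitude.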
II Experimental details and results The $n$-type (Pb,Sn)Se single crystals with typical doping levels of a few $10^{18}$ cm${}^{-3}$ studied in this work have been grown using the self-selecting vapor-growth method and characterized by X-ray diffraction and energy-dispersive X-ray spectroscopy Szczerbakow and Berger (1994); Szczerbakow and Durose (2005). High-resolution ARPES studies were carried out on samples cleaved in a (001) plane in ultra-high vacuum at room temperature. Temperature-dependent measurements of solid solutions with SnSe/PbSe ratios between $0$ and $0.6$ were carried out at the BALTAZAR laser-ARPES facility using linearly polarized light with an excitation energy $h\nu=10.5$ eV and a THEMIS-1000 time-of-flight analyzer collecting the photoelectrons Berntsen et al. (2011). The overall crystal-momentum and energy resolution is better than $0.008$ Å${}^{-1}$ and about $5$ meV, respectively. While studying the detailed nature of the bulk band inversion in (Pb,Sn)Se Wojek et al. (2014), low-temperature data of certain samples with comparably high SnSe content were found to contain a peculiar distribution of spectral weight around the Dirac points. An example of such spectra (not included in Ref. Wojek et al., 2014) is depicted in Fig. 2(a). The characteristic band structure of the hybridized “parent surface Dirac cones” schematically shown in Fig. 1 is clearly observed close to the $\overline{X}$ point of the (001) surface Brillouin zone. However, a distinct gap opening at the Dirac nodes on the $\overline{\Gamma}$-$\overline{X}$ line is also revealed. Following this initial finding and reports of similar observations using STM/STS techniques Okada et al. (2013), the gap formation has been studied systematically across the available temperature and composition ranges. 
The decision whether the surface states in the TCI phase are massive/gapped is based on an analysis of the energy-distribution curves (EDCs) in the close vicinity of the Dirac nodes ($\overline{\Lambda}$). In a first step, the EDCs are modeled by sums of two Voigtian lines on a small, strictly monotonic, linear-to-cubic background. The resulting line widths are compared to a fit of a single Voigtian (on the same background) to the EDC at $\overline{\Lambda}$. If the single line at $\overline{\Lambda}$ is found to be broader (in full width at half maximum) by more than $5$ meV than the broadest two-line constituent, we call the states gapped and the two-line peak separation determines the gap size $\Delta_{\overline{\Lambda}}$. Using this criterion, we find that most low-temperature spectra are gapped. Yet, at higher temperatures the Dirac nodes are—within the resolution of the experiment—overall intact. Figures 2(b) and 2(c) summarize the ascertained gap sizes for the samples investigated during this study. While the values of $\Delta_{\overline{\Lambda}}$ vary by about $3$ meV to $4$ meV for nominally similar sample compositions, a general trend to larger values for higher SnSe contents and lower temperatures is nevertheless apparent. Having directly established the low-temperature mass acquisition of the Dirac fermions at $\overline{\Lambda}_{\overline{X}}$, it remains to be investigated whether the Dirac points at $\overline{\Lambda}_{\overline{Y}}$ also exist on the distorted surface, as suggested by the STM/STS studies Okada et al. (2013); Zeljkovic et al. (2015). Unlike STM/STS, which is a local-probe technique, ARPES averages over a lateral sample region with a diameter of several tens of micrometers. Hence, a priori, the observation of both massive surface states at $\overline{\Lambda}_{\overline{X}}$ and massless states at $\overline{\Lambda}_{\overline{Y}}$ requires a long-range textured surface. 
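The gap criterion described above can be restated compactly (the symbols $W$ and $E$ are introduced here for illustration and do not appear in the original analysis):

```latex
\Delta_{\overline{\Lambda}} \;=\;
\begin{cases}
\left| E^{(a)} - E^{(b)} \right| & \text{if}\quad W_{1}(\overline{\Lambda}) - \max\!\left( W_{2}^{(a)},\, W_{2}^{(b)} \right) > 5~\mathrm{meV},\\[4pt]
0\ (\text{massless}) & \text{otherwise},
\end{cases}
```

where $W_{1}$ is the full width at half maximum of the single-Voigtian fit to the EDC at $\overline{\Lambda}$, and $E^{(a,b)}$, $W_{2}^{(a,b)}$ are the peak positions and widths of the two constituents of the two-Voigtian fit.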
If, instead, a multi-domain structure is formed, with distortions along $x$ and $y$, respectively, for different domains, a superposition of gapped and gapless states is expected to be seen. A further complication arises from the low photon energy and the experimental geometry of the ARPES set-up, which necessitate rotating the samples under study by $\pm 90^{\circ}$ to enable the collection of electrons originating from $\overline{Y}$. Since, most likely, slightly different lateral parts of the surface are probed before and after such a manipulation, the long-range-texture requirement becomes even more stringent. Our experiments show that this requirement is not always met (and hence, overall massive Dirac states are observed). Nevertheless, we do indeed find Dirac nodes, and thus massless states, at $\overline{\Lambda}_{\overline{Y}}$ coexisting with gapped states at $\overline{\Lambda}_{\overline{X}}$ at low temperatures. An example of the latter is shown in Fig. 3(a). It illustrates the evolution of the (001) surface states of Pb${}_{0.70}$Sn${}_{0.30}$Se across the different phases. Close to room temperature the sample is in a topologically trivial state characterized by the positive bulk gap at $\overline{X}$ ($L$) Wojek et al. (2014). Upon cooling to $T=150$ K, the bulk bands invert and the sample enters the TCI phase with massless surface states. Lowering the temperature further leads to the gap formation at the positions of the former Dirac points $\overline{\Lambda}_{\overline{X}}$, while the degeneracies at $\overline{\Lambda}_{\overline{Y}}$ prevail, thus confirming the overall situation depicted in Fig. 1. III Discussion In order to understand the full implications of this study, it is necessary and helpful to compare our results with the observations made in STM/STS experiments before. Zeljkovic et al. 
(2015) have demonstrated a low-temperature ($T=4$ K) surface Dirac gap in (Pb,Sn)Se reaching a value of about $25$ meV at a SnSe content of about $38$ % in the solid solution. With decreasing $x$ the gap is reported to decrease and it supposedly vanishes at a critical composition of about $17$ %. At the same time, the observed lattice distortion at the surface appears to have roughly the same magnitude for all studied samples. To reconcile the diminishing surface gap, it has been argued that the penetration depth of the surface state into the bulk increases when the size of the bulk gap decreases and thus the weight of the surface state in the outermost atomic layer probed by STM/STS decreases continuously. So, is our observation of the ($x$, $T$)-dependent gap merely the result of the varying surface-state penetration depth ($\propto\Delta_{\overline{X}}^{-1}$)? The answer to that question is found in Figs. 2(b) and 2(c). At $T=9$ K, $\Delta_{\overline{\Lambda}_{\overline{X}}}$ indeed decreases with decreasing $x$, but in total only by a few millielectronvolts from $x=0.37$ to $x=0.19$. A similar behavior is found at $x=0.23$: Upon increasing the temperature from $9$ K to $100$ K (spanning essentially the entire band-inverted region) the observed gap is reduced only minutely. Hence, $\Delta_{\overline{\Lambda}_{\overline{X}}}$ observed here is—if at all—only to a small extent influenced by the changing penetration depth of the surface states. Thus, possibly due to the increased probing depth of low-energy ARPES as compared to STM/STS (nanometers instead of ångströms), the gap can rather be regarded as a direct measure of the size of the lattice distortion at the surface Serbyn and Fu (2014). This conclusion is seemingly at variance with the claims of Ref. Zeljkovic et al., 2015, yet, it should be noted that in the STM/STS work the reliably determined gap sizes span a region between about $15$ meV and $30$ meV, in full agreement with our data. 
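The penetration-depth argument above can be made explicit with a simple estimate (a sketch, assuming a linearized bulk dispersion with velocity $v_{\perp}$ perpendicular to the surface):

```latex
\lambda_{s} \;\sim\; \frac{\hbar v_{\perp}}{\Delta_{\overline{X}}},
\qquad
w_{\mathrm{top}} \;\propto\; \frac{a}{\lambda_{s}} \;\propto\; \Delta_{\overline{X}},
```

where $\lambda_{s}$ is the decay length of the surface state into the bulk, $a$ the lattice spacing and $w_{\mathrm{top}}$ the fraction of the state's weight in the outermost atomic layer probed by STM/STS. As the bulk gap $\Delta_{\overline{X}}$ shrinks toward the band-inversion point, $w_{\mathrm{top}}$ vanishes even if the surface distortion, and hence the intrinsic surface-state gap, remains finite.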
The non-observation of distinct features in the STS spectra due to the diminishing spectral weight in the topmost atomic layer does not imply an overall vanishing surface-state gap for samples close to the transition. Altogether, our observations are summarized in Fig. 3(b). Based on the distinction of the massive and massless surface states in the bulk-band-inverted region we are able to determine a “transition line” below which the lattice distortion occurs, at least at the (001) surface [Fig. 3(c)]. Given the finite resolution of our experiment, it should be regarded as a lower limit and hence the indicated TCI phase with completely massless surface states might occupy a somewhat smaller, yet still finite, area in the ($x$, $T$) phase diagram. Since STM also provided evidence for a lattice distortion in the topologically trivial phase at low temperature Zeljkovic et al. (2015), although this regime is inaccessible to our spectroscopic method, we suspect the transition occurs at finite temperatures in this phase of the solid solutions as well. The (Pb,Sn)Se mixed crystals with a SnSe content in the studied range are found to crystallize in the rock-salt structure without bulk structural phase transitions. Nevertheless, the occurrence of a surface distortion, further corroborated by this study, should not entirely catch us by surprise. SnSe itself features an orthorhombic crystal structure Wiedemeier and von Schnering (1978) and various closely related IV-VI solid solutions undergo ferroelectric phase transitions Lebedev and Sluchinskaya (1994). Also, in (Pb,Sn)Se the transverse-optical phonon mode at $\Gamma$ softens as the temperature is lowered and the SnSe content is increased Vodop’yanov et al. (1978). Occasionally, for Pb${}_{0.59}$Sn${}_{0.41}$Se even bulk structural changes were reported and found to be suppressed only with increasing carrier densities Volkov et al. 
(1978), suggesting displacive transitions driven by electron-phonon coupling Volkov and Pankratov (1978); Konsin (1982). It is too early to draw conclusions about the exact mechanism that leads to the observed surface structure. Specifically, given the small but still clearly detectable variations between different measured samples with comparable SnSe content, it appears that the local nature of this lattice distortion somewhat depends on the details of each surface. However, its overall tunability by independent parameters, inferred from the ($x$, $T$) phase diagram [Fig. 3(c)], lays the foundation for achieving a controlled mass generation in the topological surface Dirac states. It will be interesting to see in the future how other TCI surfaces behave Safaei et al. (2013); Liu et al. (2013), both of single crystals and of thin films Yan et al. (2014); Polley et al. (2014), where controlled tuning of the crystal structure, e.g. by strain, promises to be a viable avenue to open gaps in topological surface states in addition to breaking symmetries by electric Liu et al. (2014) or magnetic fields Serbyn and Fu (2014). Finally, it is worth noting that by specifically breaking both mirror symmetries as well as the rotational symmetries of the surface, all Dirac points are expected to be gapped in the material studied here Serbyn and Fu (2014). If a domain structure is formed, a topologically protected 1D conducting state percolating through the surface along the domain walls would result Hsieh et al. (2012). Such states can potentially be engineered in very thin films grown on specific vicinal surfaces or by employing TCIs naturally featuring suitable lattice distortions. IV Summary In summary, the evolution of the surface-state gap/mass in (Pb,Sn)Se (001) has been determined systematically by means of high-resolution ARPES. Our results are consistent with the proposed underlying surface distortion at low temperatures. 
The observation of only massless Dirac surface states at higher temperatures indicates a structural transition of the surface. While its detailed nature is yet to be investigated by systematic structural and/or vibrational studies, we have shown experimentally the possibility to tune between massive and gapless topological states using this very transition. Acknowledgements. We thank S. S. Pershoguba and A. V. Balatsky for stimulating discussions. This work was made possible through support from the Knut and Alice Wallenberg Foundation, the Swedish Research Council, and the Polish National Science Centre (NCN) grants No. 2011/03/B/ST3/02659 and 2012/07/B/ST3/03607. References Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, “Colloquium: Topological insulators,” Rev. Mod. Phys. 82, 3045–3067 (2010). Ando (2013) Y. Ando, “Topological Insulator Materials,” J. Phys. Soc. Jpn. 82, 102001 (2013). Fu (2011) L. Fu, “Topological crystalline insulators,” Phys. Rev. Lett. 106, 106802 (2011). Ando and Fu (2015) Y. Ando and L. Fu, “Topological Crystalline Insulators and Topological Superconductors: From Concepts to Materials,” Annu. Rev. Condens. Matter Phys. 6, 361–381 (2015). Hsieh et al. (2012) T. H. Hsieh, H. Lin, J. Liu, W. Duan, A. Bansil,  and L. Fu, “Topological crystalline insulators in the SnTe material class,” Nat. Commun. 3, 982 (2012). Tanaka et al. (2012) Y. Tanaka, Z. Ren, T. Sato, K. Nakayama, S. Souma, T. Takahashi, K. Segawa,  and Y. Ando, “Experimental realization of a topological crystalline insulator in SnTe,” Nat. Phys. 8, 800–803 (2012). Xu et al. (2012) S.-Y. Xu, C. Liu, N. Alidoust, D. Qian, M. Neupane, J. D. Denlinger, Y. J. Wang, L. A. Wray, R. J. Cava, H. Lin, A. Marcinkova, E. Morosan, A. Bansil,  and M. Z. Hasan, “Observation of Topological Crystalline Insulator phase in the lead tin chalcogenide Pb${}_{1-x}$Sn${}_{x}$Te material class,”  (2012), arXiv:1206.2088 . Tanaka et al. (2013) Y. Tanaka, T. Sato, K. Nakayama, S. Souma, T. Takahashi, Z. 
Quantum entanglement and phase transition in a two-dimensional photon-photon pair model Jian-Jun Zhang ruoshui789@gmail.com Jian-Hui Yuan Jun-Pei Zhang Ze Cheng School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China Abstract We propose a two-dimensional model consisting of photons and photon pairs. In the model, the mixed gas of photons and photon pairs is formally equivalent to a two-dimensional system of massive bosons with non-vanishing chemical potential, which implies the existence of two possible condensate phases. Using the variational method, we discuss the quantum phase transition of the mixed gas and obtain the critical coupling line analytically. Moreover, we find that the phase transition of the photon gas can be interpreted as second harmonic generation. We then discuss the entanglement between photons and photon pairs and illustrate how this entanglement can be associated with the phase transition of the system. keywords: Bose-Einstein condensate, Quantum phase transition, Entanglement 1 Introduction A Bose-Einstein condensate (BEC) is the remarkable state of matter that emerges spontaneously when a system of bosons becomes cold enough that a significant fraction of them condenses into a single quantum state to minimize the system's free energy. Particles in that state then act collectively as a coherent wave. This phase transition was first predicted by Einstein in 1924 for an atomic gas and experimentally confirmed with the discovery of superfluid helium-4 in 1938. Atoms, however, are not the only candidates for a BEC. In recent years, with the development of experimental techniques, BEC has been observed in several physical systems [1-9], including exciton polaritons and solid-state quasiparticles. Photons are the simplest of bosons, so it would seem that they too could in principle undergo this kind of condensation.
The difficulty is that in the usual blackbody configuration, which consists of an empty three-dimensional (3D) cavity, the photon is massless and its chemical potential is zero, so that BEC of photons would seem to be impossible under these circumstances. Very recently, however, Klaers et al. overcame both obstacles with a simple approach [10,11]: by confining laser light within a two-dimensional (2D) cavity bounded by two concave mirrors, they created the conditions required for light to thermally equilibrate as a gas of conserved particles rather than as ordinary blackbody radiation. Moreover, it is well known that many fascinating optical effects arise in nonlinear media, for instance reduced fluctuations in one quadrature (squeezing) [12], sub-Poissonian statistics of the radiation field [13], and the collapse-revival phenomenon [14]. In particular, in a nonlinear medium a photon from the laser beam can couple with another photon to form a photon pair (PP) [15-18]. The nature of the PP has been investigated by many authors [19-21]. Inspired by the experimental discovery of BEC of photons, in this letter we construct another 2D model, consisting of photons and PPs. In this model, the mixed system of photons and PPs is formally equivalent to a 2D gas of massive bosons with non-vanishing chemical potential, which implies the existence of two possible condensate phases: the mixed photon-PP condensate phase and the pure PP condensate phase. By means of a variational method we investigate the quantum phase transition of the mixed photon gas. In particular, we find that this quantum phase transition can be interpreted as second harmonic generation. We then discuss the entanglement between photons and PPs. By investigating the entanglement in the ground state and the dynamics of entanglement, we also illustrate how the entanglement between photons and PPs can be associated with the phase transition of the system.
The investigation of these questions is important both for its connection with quantum optics and for its practical applications to harmonic generation and quantum information. The remainder of this paper is organized as follows: In Sec. II, we theoretically investigate the phenomenon of BEC of photons and PPs in a 2D optical microcavity. The entanglement between photons and PPs is investigated in Sec. III. Finally, we give a brief conclusion. 2 Bose-Einstein condensation of photons and photon pairs 2.1 Description of the photon pair We start with the description of the PP. Historically, the nature of the PP is still under discussion [19-21], and there exist many different ways to obtain it. Here, however, we will use the standard procedure [22] employed in the treatment of harmonic generation to derive the PP. The presence of an electromagnetic field in a nonlinear material causes a polarization of the medium, and this polarization can be expanded in powers of the instantaneous electric field: $$\mathbf{P}(\mathbf{r},t)=\chi^{(1)}\mathbf{E}(\mathbf{r},t)+\chi^{(2)}\mathbf{E}^{2}(\mathbf{r},t)+\dots$$ (1) Here, the first term defines the usual linear susceptibility, and the second term defines the lowest-order nonlinear susceptibility. Ignoring the higher-order parts (i.e.
expanding the polarization only to second order in the electric field $\mathbf{E}$), we find that the Hamiltonian describing the interaction of the radiation field with the dielectric medium decomposes into two terms: $$H_{\mathrm{int}}=H_{line}+H_{nonline},\qquad H_{line}=-\int\chi^{(1)}\mathbf{E}^{2}(\mathbf{r},t)\,d\mathbf{r},\qquad H_{nonline}=-\int\chi^{(2)}\mathbf{E}^{3}(\mathbf{r},t)\,d\mathbf{r},$$ (2) where $H_{line}$ represents the energy of the linear interaction and $H_{nonline}$ that of the nonlinear interaction. It is well known that the electric field operator in a microcavity can be expanded in terms of normal modes [23] as $$\mathbf{E}(\mathbf{r},t)=i\sum_{\mathbf{k}}\mathbf{e}_{\mathbf{k}}\left(\frac{\hbar\omega_{\mathbf{k}}}{2V\varepsilon}\right)^{1/2}\left(\hat{a}_{\mathbf{k}}e^{-i\omega_{\mathbf{k}}t+i\mathbf{k}\cdot\mathbf{r}}+\hat{a}_{\mathbf{k}}^{+}e^{i\omega_{\mathbf{k}}t-i\mathbf{k}\cdot\mathbf{r}}\right),$$ (3) where $\hat{a}_{\mathbf{k}}$ and $\hat{a}_{\mathbf{k}}^{+}$ are the annihilation and creation operators of photons with frequency $\omega_{\mathbf{k}}$, obeying the usual boson commutation rules, $V$ is the normalization volume, $\varepsilon$ is the dielectric constant of the medium, and $\mathbf{e}_{\mathbf{k}}$ is the unit polarization vector, with the usual polarization indices omitted for simplicity. Substituting (3) into (2), we find that the linear interaction part consists of two processes, dissipation and two-photon absorption (or emission). Here, dissipation is essentially also a two-photon process, in which one photon is absorbed by the medium while another is emitted.
The linear interaction can be ignored if the incident photon field frequency $\omega_{0}$ is well below the electronic transition frequencies of the medium. In that case we need only consider the nonlinear interaction, which, under the requirements of phase matching, has the simple form $$H_{nonline}=\frac{\hbar}{\sqrt{V}}\sum_{\mathbf{k},\mathbf{k}^{\prime}}\chi_{\mathbf{k},\mathbf{k}^{\prime}}\left(\hat{b}_{\mathbf{k}+\mathbf{k}^{\prime}}^{+}\hat{a}_{\mathbf{k}}\hat{a}_{\mathbf{k}^{\prime}}+\mathrm{H.c.}\right),$$ (4) where the operator $\hat{a}$ represents the normal photons, $\hat{b}$ represents the coupled PP, and $\chi_{\mathbf{k},\mathbf{k}^{\prime}}$ is the coupling matrix element. The interaction energy in (4) consists of two terms. The first term, $\hat{b}_{\mathbf{k}+\mathbf{k}^{\prime}}^{+}\hat{a}_{\mathbf{k}}\hat{a}_{\mathbf{k}^{\prime}}$, describes the process in which two normal photons with wave vectors $\mathbf{k}$ and $\mathbf{k}^{\prime}$ couple into a PP with wave vector $\mathbf{K}=\mathbf{k}+\mathbf{k}^{\prime}$, and the second term describes the opposite process. Energy is conserved in both processes. 2.2 Free-photon dispersion relation inside the optical microcavity In this letter, we restrict our investigation to a 2D optical microcavity. The microcavity, as shown in Fig. 1, consists of two curved dielectric mirrors with high reflectivity (about 99.9%), which ensure nearly perfect reflection of the longitudinal component of the electromagnetic field within the cavity. In addition, the transverse size of the cavity is much larger than its longitudinal one. For a free photon, the frequency as a function of the transverse ($k_{r}$) and longitudinal ($k_{z}$) wave numbers is $\omega=c\left[k_{z}^{2}+k_{r}^{2}\right]^{1/2}$.
However, for photons confined inside the microcavity, the vanishing of the electric field at the reflecting surfaces of the curved mirrors imposes a quantization condition on the longitudinal wave number $k_{z}$: $k_{z}=n\pi/D(r)$, where $n$ is an integer and $D(r)=D_{0}-2(R-\sqrt{R^{2}-r^{2}})$ is the separation of the two curved mirrors at distance $r$ from the optical axis, with $D_{0}$ the mirror separation at $r=0$ and $R$ the radius of curvature. In the present work, we fix the longitudinal mode number of the photons by inserting a circular filter into the cavity. The filter is filled with a dye solution, in which photons are repeatedly absorbed and re-emitted by the dye molecules; it therefore also plays the role of a photon reservoir. The longitudinal size of the cavity (i.e., the distance between the mirrors) is very small. The small distance $D(r)$ between the mirrors causes a large frequency spacing between adjacent longitudinal modes, comparable with the spectral width of the dye. This modifies spontaneous emission such that the emission of photons with a given longitudinal mode number, $n=q$ in our case, dominates over other emission processes. In this way, the longitudinal mode number is frozen out. For fixed longitudinal mode number $q$ and in the paraxial approximation ($r\ll R$, $k_{r}\ll k_{z}$), the dispersion relation of the photons approximately becomes $\omega\approx q\pi c/D_{0}+ck_{r}^{2}D_{0}/2q\pi$. Upon multiplication by $\hbar$, this frequency-wavevector relation becomes the energy-momentum relation for the photon, $$E\approx m_{\mathrm{ph}}c^{2}+\frac{p_{r}^{2}}{2m_{\mathrm{ph}}},$$ (5) where $m_{\mathrm{ph}}=\hbar q\pi/D_{0}c=\hbar\omega_{eff}/c^{2}$ is the effective mass of the confined photons.
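As a quick order-of-magnitude check, the effective mass $m_{\mathrm{ph}}=\hbar q\pi/D_{0}c$ can be evaluated numerically. The sketch below uses illustrative values $q=7$ and $D_{0}=1.46\,\mu\mathrm{m}$, which are our own assumptions (comparable to microcavity experiments) and are not taken from this letter:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

def effective_photon_mass(q, D0):
    """m_ph = hbar * q * pi / (D0 * c), from the frozen longitudinal mode k_z = q*pi/D0."""
    k_z = q * math.pi / D0          # quantized longitudinal wave number
    omega_eff = c * k_z             # cavity cutoff frequency q*pi*c/D0
    return hbar * omega_eff / c**2  # equivalently hbar * k_z / c

# Illustrative (assumed) cavity parameters: q = 7, D0 = 1.46 um
m_ph = effective_photon_mass(7, 1.46e-6)
print(f"effective photon mass ~ {m_ph:.2e} kg")
```

The result is of order $10^{-36}$ kg, some ten orders of magnitude lighter than an atom, which is why the condensation temperature of such a photon gas can be near room temperature.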
At low temperatures, it is convenient to redefine the zero of energy so that only the effective kinetic energy, $$E\approx\frac{p_{r}^{2}}{2m_{\mathrm{ph}}},$$ (6) remains. The above analysis shows that a photon confined inside the 2D microcavity is formally equivalent to a generic boson of effective mass $m_{\mathrm{ph}}=\hbar\omega_{eff}/c^{2}$ moving in the transverse resonator plane. Furthermore, we consider here the case in which the microcavity (except for the filter part) is filled with a Kerr nonlinear medium exhibiting significant third-order optical nonlinearity. Due to the nonlinear effect, photons can couple into PPs. Connecting the non-vanishing effective photon mass to the previous analysis of the PPs, we can rewrite the nonlinear interaction $H_{nonline}$ as $$H_{nonline}=\frac{\hbar}{\sqrt{S}}\sum_{\mathbf{k}_{r},\mathbf{k}^{\prime}_{r}}\chi_{\mathbf{k}_{r},\mathbf{k}^{\prime}_{r}}\left(b_{\mathbf{k}_{r}+\mathbf{k}^{\prime}_{r}}^{+}a_{\mathbf{k}_{r}}a_{\mathbf{k}^{\prime}_{r}}+\mathrm{H.c.}\right),$$ (7) where $S$ is the surface area of the 2D cavity, $a_{\mathbf{k}_{r}}$ and $a_{\mathbf{k}^{\prime}_{r}}$ are the annihilation operators of massive photons with transverse wave vectors $\mathbf{k}_{r}$ and $\mathbf{k}^{\prime}_{r}$, respectively, and $b_{\mathbf{k}_{r}+\mathbf{k}^{\prime}_{r}}^{+}$ is the creation operator of a massive PP with transverse wave vector $\mathbf{K}_{r}=\mathbf{k}_{r}+\mathbf{k}^{\prime}_{r}$. It should be remarked that the existence of the effective photon mass makes the thermodynamics of this 2D mixed gas of photons and PPs different from that of the usual 3D photon gas.
For the 2D system, thermalization is achieved in a photon-number-conserving way ($N=N_{a}+2N_{b}$) with non-vanishing chemical potential $\mu$, by multiple scattering with the dye molecules, which act as a heat bath and equilibrate the transverse modal degrees of freedom of the photon gas to the temperature of the dye molecules. 2.3 BEC of photons and photon pairs By virtue of the above analysis, we consider the following basic Hamiltonian as a simple model of PP formation (with $\hbar=1$ throughout this letter): $$H_{\mu}=H_{a}+H_{b}+H_{ab},$$ (8) with $$\begin{array}{l}H_{a}=\sum_{\mathbf{k}}\frac{\mathbf{k}^{2}}{2m_{ph}}a_{\mathbf{k}}^{+}a_{\mathbf{k}}+\frac{u_{aa}}{S}\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime}}a_{\mathbf{k}+\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime}}^{+}a_{\mathbf{k}^{\prime\prime}}^{+}a_{\mathbf{k}^{\prime}}a_{\mathbf{k}}\\ H_{b}=\sum_{\mathbf{k}}\left(\frac{\mathbf{k}^{2}}{4m_{ph}}-2\mu\right)b_{\mathbf{k}}^{+}b_{\mathbf{k}}+\frac{u_{bb}}{S}\sum_{\mathbf{k},\mathbf{k}^{\prime},\mathbf{k}^{\prime\prime}}b_{\mathbf{k}+\mathbf{k}^{\prime}-\mathbf{k}^{\prime\prime}}^{+}b_{\mathbf{k}^{\prime\prime}}^{+}b_{\mathbf{k}^{\prime}}b_{\mathbf{k}}\\ H_{ab}=\frac{u_{ab}}{S}\sum_{\mathbf{k},\mathbf{k}^{\prime}}a_{\mathbf{k}}^{+}b_{\mathbf{k}^{\prime}}^{+}b_{\mathbf{k}^{\prime}}a_{\mathbf{k}}-\frac{1}{\sqrt{S}}\sum_{\mathbf{k},\mathbf{k}^{\prime}}\chi_{\mathbf{k},\mathbf{k}^{\prime}}\left(b_{\mathbf{k}+\mathbf{k}^{\prime}}^{+}a_{\mathbf{k}^{\prime}}a_{\mathbf{k}}+\mathrm{H.c.}\right)\end{array}$$ (9) Here, $H_{a}$ and $H_{b}$ denote the pure photon and PP contributions, and $H_{ab}$ refers to the interaction between them. In the dilute-gas limit, $u_{aa}$, $u_{bb}$, and $u_{ab}$ are proportional to the two-body s-wave photon-photon, PP-PP, and photon-PP scattering lengths [23], respectively, and $\chi_{\mathbf{k},\mathbf{k}^{\prime}}$ characterizes the coupling strength, encoding the fact that PPs are composed of two massive photons.
Note that in (9) we omit the chemical potential term $\mu N_{a}$ from the Hamiltonian [24]. Additionally, in (9) we drop the subscript of the transverse wave vectors of photons and PPs, i.e., we write $\mathbf{k}_{r}=\mathbf{k}$. It is well known that for a general gas of massive boson-boson pairs at absolute zero temperature, a BEC admits two possible condensate phases [25-27]: (i) both the single bosons and the boson pairs are condensed; (ii) the boson pairs are condensed but the single bosons are not. Since the subsystem of photons (or PPs) confined inside the microcavity is formally equivalent to a 2D gas of massive bosons with non-vanishing chemical potential, this feature should also survive for the 2D mixed gas of photons and PPs. In the condensate phase, a macroscopic number of particles occupy the zero-momentum state, and it is useful to separate out the condensate modes from the Hamiltonian. Following this procedure, we find that in the BEC state the grand canonical Hamiltonian of the mixed system takes the form $$H_{\mu}=H_{0}+\delta H,$$ (10) where $$H_{0}=-\mu_{b}b^{+}b+\frac{u_{aa}}{S}\left(a^{+}a\right)^{2}+\frac{u_{bb}}{S}\left(b^{+}b\right)^{2}+\frac{u_{ab}}{S}a^{+}ab^{+}b-\frac{\chi}{\sqrt{S}}\left(a^{+}a^{+}b+b^{+}aa\right)$$ (11) is the condensate part of the Hamiltonian, with $\mu_{b}=2\mu+u_{bb}/S$ the modified chemical potential of the PPs, and where $\delta H=H\left(a_{\mathbf{k}},b_{\mathbf{k}}\right)$ is the perturbation part and it is a
complex function of the non-condensate modes. We mention that in the condensate state we need only consider single-mode coupling; thus we can treat the coupling matrix element $\chi$ as an adjustable constant. Note that in (11) we also drop the zero-momentum subscript of the creation and annihilation operators of photons and PPs. In the present work we mainly aim to investigate the phenomenon of BEC of photons and PPs; hereafter we therefore ignore the perturbation part and approximately write the Hamiltonian as $H_{\mu}\approx H_{0}$. The Hamiltonian commutes with the total photon number $N=a^{+}a+2b^{+}b$, and $n=N/S$ is the particle density. Up to now we have not made a careful distinction between the two possible condensate phases; for the mixed system, however, working out the ground-state phase diagram is very important. Here we employ the variational principle to find the ground-state configurations of the present system and to examine their dependence on the microscopic parameters. In other words, we aim to work out the ground-state phase diagram of the mixed system starting from the study of the semiclassical equations. The chemical potential $\mu$ of the mixed system allows the total photon number (whether free or bound into PPs) to fluctuate around some constant average value $N$; the total number of photons then need only be conserved on average. For convenience, hereafter we assume that $N$ is an even number, so that $M=N/2$ denotes the maximum number of PPs.
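Since $H_{0}$ commutes with $N=a^{+}a+2b^{+}b$, it is block diagonal in sectors of fixed $N=2M$, spanned by the states $|2m,M-m\rangle$ with $m=0,\dots,M$. A minimal numerical sketch of this structure (the function name and all parameter values below are our own illustrative choices, not quantities from this letter):

```python
import math

def h0_matrix(M, mu_b, u_aa, u_bb, u_ab, chi, S=1.0):
    """Matrix of the condensate Hamiltonian H_0, Eq. (11), in the fixed-N
    Fock basis |2m, M-m> (m = 0..M), so the dimension is M + 1."""
    dim = M + 1
    H = [[0.0] * dim for _ in range(dim)]
    for m in range(dim):
        na, nb = 2 * m, M - m  # photon and pair occupation numbers
        # diagonal part: -mu_b b+b + interactions
        H[m][m] = (-mu_b * nb + (u_aa / S) * na**2
                   + (u_bb / S) * nb**2 + (u_ab / S) * na * nb)
        if m + 1 <= M:
            # <2m+2, M-m-1| a+ a+ b |2m, M-m> = sqrt(nb (na+1)(na+2))
            t = -(chi / math.sqrt(S)) * math.sqrt(nb * (na + 1) * (na + 2))
            H[m + 1][m] = H[m][m + 1] = t  # Hermitian (real symmetric) coupling
    return H

H = h0_matrix(M=6, mu_b=1.0, u_aa=0.1, u_bb=0.1, u_ab=0.05, chi=0.5)
```

The coupling term $-\chi(a^{+}a^{+}b+b^{+}aa)/\sqrt{S}$ only connects neighbouring basis states ($m\to m\pm 1$), so the matrix is tridiagonal; this is the structure behind the exact diagonalization invoked in Sec. III.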
Furthermore, in this case we also introduce a new operator, namely the double-photon creation operator, defined by the relations $$\left(c^{+}\right)^{m}\left|0\right\rangle\equiv\sqrt{\frac{m!}{\left(2m\right)!}}\left(a^{+}\right)^{2m}\left|0\right\rangle,\qquad c^{+}c\left|0\right\rangle\equiv 2a^{+}a\left|0\right\rangle,$$ (12) where $\left|0\right\rangle$ is the vacuum state. Using the new operator, we construct the Gross-Pitaevskii (GP) states [28] $$\left|\psi_{\mathrm{GP}}\right\rangle=\frac{1}{\sqrt{M!}}\left[\alpha c^{+}+\beta b^{+}\right]^{M}\left|0\right\rangle$$ (13) as the trial macroscopic state. Here, $\alpha=\left|\alpha\right|e^{i\theta_{a}}$ and $\beta=\left|\beta\right|e^{i\theta_{b}}$ are complex amplitudes, with $\left|\alpha\right|^{2}=N_{a}/N$ and $\left|\beta\right|^{2}=2N_{b}/N$ the photon and PP fractions, respectively, and $\theta_{a}$ and $\theta_{b}$ (real valued) denote the phases of each species. The parameters $\alpha$ and $\beta$ satisfy the normalization condition $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$.
With the help of the GP state, the semiclassical model Hamiltonian $\bar{H}(\alpha,\beta)$ is given by $$\bar{H}=\lim_{N\to\infty}\frac{\left\langle\psi_{GP}\right|H_{\mu}\left|\psi_{GP}\right\rangle}{M\chi\sqrt{2n}}=\frac{-\mu_{b}\left|\beta\right|^{2}+2u_{aa}n\left|\alpha\right|^{4}+u_{bb}n\left|\beta\right|^{4}/2+u_{ab}n\left|\alpha\right|^{2}\left|\beta\right|^{2}}{\chi\sqrt{2n}}-2\left|\alpha\right|^{2}\left|\beta\right|\cos\theta,$$ (14) where $\theta=\theta_{b}-2\theta_{a}$ is the phase difference. Taking into account the constraint $\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1$, we next introduce a new variable $s=\left|\alpha\right|^{2}-\left|\beta\right|^{2}$. In this notation, the model Hamiltonian becomes $$\bar{H}=-\lambda s^{2}-2\gamma s+\xi-\sqrt{2\left(1-s\right)}\left(1+s\right)\cos\theta,$$ (15) with $$\lambda=\frac{\sqrt{2n}}{\chi}\left(\frac{u_{ab}}{4}-\frac{u_{aa}}{2}-\frac{u_{bb}}{8}\right),\qquad\gamma=\frac{\sqrt{2n}}{\chi}\left(\frac{u_{bb}}{8}-\frac{u_{aa}}{2}-\frac{\mu_{b}}{4n}\right),\qquad\xi=\frac{\sqrt{2n}}{\chi}\left(\frac{u_{aa}}{2}+\frac{u_{ab}}{4}+\frac{u_{bb}}{8}-\frac{\mu_{b}}{n}\right).$$ According to the variational principle, we minimize the energy $\bar{H}$ with $s$ and $\theta$ as variational parameters.
We then obtain the optimum values $(\bar{s},\bar{\theta})$ of the parameters for the ground state as follows: $$\left(\bar{s},\bar{\theta}\right)=\left\{\begin{array}{ll}(-1,\ \theta), & \gamma-\lambda+1<0,\\ (s,\ 0\ \mathrm{or}\ \pi), & \gamma-\lambda+1>0,\end{array}\right.$$ (16) where $-1<s<1$ is the solution of the equation $\lambda s+\gamma=\left(3s-1\right)/2\sqrt{2(1-s)}$ (the explicit value can be obtained by a graphical solution method; it is generally too unwieldy to be shown here).
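The graphical solution can equally be carried out numerically: in the mixed phase ($\gamma-\lambda+1>0$) the stationarity condition $\lambda s+\gamma=(3s-1)/2\sqrt{2(1-s)}$ has a single root in $(-1,1)$, which a bisection finds directly. The parameter values $\lambda=0$, $\gamma=0.5$ below are our own illustrative choices:

```python
import math

def mean_energy(s, lam, gam, xi=0.0, theta=0.0):
    """Semiclassical energy of Eq. (15); theta = 0 on the ground-state branch."""
    return (-lam * s**2 - 2.0 * gam * s + xi
            - math.sqrt(2.0 * (1.0 - s)) * (1.0 + s) * math.cos(theta))

def stationarity(s, lam, gam):
    """Residual of the variational condition lam*s + gam = (3s-1)/(2*sqrt(2(1-s)))."""
    return lam * s + gam - (3.0 * s - 1.0) / (2.0 * math.sqrt(2.0 * (1.0 - s)))

def solve_sbar(lam, gam):
    """Bisection for the root -1 < s < 1 (assumes the mixed phase, gam - lam + 1 > 0)."""
    a, b = -1.0 + 1e-9, 1.0 - 1e-9
    fa = stationarity(a, lam, gam)
    for _ in range(200):
        mid = 0.5 * (a + b)
        fm = stationarity(mid, lam, gam)
        if fa * fm <= 0.0:
            b = mid
        else:
            a, fa = mid, fm
    return 0.5 * (a + b)

sbar = solve_sbar(0.0, 0.5)
print(f"sbar = {sbar:.4f}, photon fraction |alpha|^2 = {(1 + sbar) / 2:.4f}")
```

Since $d\bar{H}/ds=-2[\lambda s+\gamma-(3s-1)/2\sqrt{2(1-s)}]$ at $\theta=0$, the energy decreases below the root and increases above it, so the root is indeed the minimum.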
This result, together with the fact that the parameters $\left|\alpha\right|^{2}$ and $\left|\beta\right|^{2}$ denote the photon and PP fractions, indicates that when $\gamma-\lambda+1<0$ the system converts from the mixed photon-PP phase to the pure PP phase. We can therefore interpret the line $\gamma-\lambda+1=0$ as the threshold coupling for the formation of a predominantly PP state. It should be mentioned that in the pure PP phase $\bar{s}=-1$, so the relative phase $\theta$ cannot be defined. 3 Entanglement between photons and photon pairs 3.1 Entanglement of the ground state In the BEC state, one may consider the mixed gas of photons and PPs as a bipartite system of two modes. For the present system, the entanglement of the two modes is always closely associated with the phase transition of the system. Moreover, the two modes, whether spatially separated or differing in some internal quantum number, are clearly distinguishable subsystems; the state of each mode can thus be characterized by its occupation number. Using the fact that the total number of photons $N$ is constant, a general state of the system (in the Heisenberg picture) can be written for even $N$ in terms of the Fock states as $$\left|\psi\right\rangle=\sum_{m=0}^{M}c_{m}\left|2m,M-m\right\rangle,$$ (17) where $m$ is half the population of particles in photon mode $a$, and the $c_{m}$ are the coefficients of the state. In the Fock representation, the GP state can also be re-expressed as $\left|\psi_{GP}\right\rangle=\sum_{m=0}^{M}g_{m}\left|2m,M-m\right\rangle$, with coefficients $g_{m}=\sqrt{\frac{M!}{m!\left(M-m\right)!}}\alpha^{m}\beta^{M-m}$.
The standard measure of entanglement of a bipartite system is the entropy of entanglement $$S(\rho)=-\sum_{m=0}^{M}\left|c_{m}\right|^{2}\log_{2}\left(\left|c_{m}\right|^{2}\right),$$ (18) which is the von Neumann entropy of the reduced density operator of either of the subsystems [29]. In the present system, the maximal entanglement can be obtained by optimizing expression (18) with respect to $\left|c_{m}\right|^{2}$. Imposing the normalization condition $\sum_{m=0}^{M}\left|c_{m}\right|^{2}=1$, we get $S_{\max}=\log_{2}\left(M+1\right)$, which is related to the dimension $M+1$ of the Hilbert space of the individual modes. Using expression (18) and the coefficients $c_{m}$ obtained through exact diagonalization of the Hamiltonian (11), as done in the atom-molecule model [30], we plot in Fig. 2 the entropy of entanglement of the ground state as a function of the parameters $\lambda$ and $\gamma$. We note that in this letter we restrict our attention to the repulsive case, i.e., we take $\lambda\leq 0$ throughout. From Fig. 2, we observe that the entanglement entropy exhibits a sudden decrease close to the line $\gamma-\lambda+1=0$, indicating that a quantum phase transition occurs across this line. To gain more information about the quantum phase transition of the present system, we also depict in Fig. 3 the entropy of entanglement (solid line) and the expectation value (dashed line) of the scaled PP number operator of the ground state as functions of $\gamma$ for a fixed value of $\lambda$. From Fig. 3, we see that the average number of PPs increases as $\gamma$ increases; in particular, when $\gamma-\lambda+1<0$ the average number of PPs is maximal. This result confirms that there indeed exists a phase transition of the present system in the ground state.
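For the GP trial state, $|g_{m}|^{2}=\binom{M}{m}|\alpha|^{2m}|\beta|^{2(M-m)}$ is simply a binomial distribution, so Eq. (18) can be evaluated in closed form without any diagonalization. A short sketch (the choice $M=40$ is an arbitrary illustration; this computes the entropy of the trial state, not of the exact ground state):

```python
import math

def gp_entropy(M, alpha2):
    """Von Neumann entropy, Eq. (18), for the GP state of Eq. (13).
    alpha2 = |alpha|^2 is the photon fraction; |beta|^2 = 1 - alpha2."""
    beta2 = 1.0 - alpha2
    # |g_m|^2 is binomial, hence automatically normalized to 1
    probs = [math.comb(M, m) * alpha2**m * beta2**(M - m) for m in range(M + 1)]
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

M = 40
S_max = math.log2(M + 1)
for a2 in (0.0, 0.1, 0.5, 0.9):
    print(f"|alpha|^2 = {a2:.1f}: S = {gp_entropy(M, a2):.3f}  (S_max = {S_max:.3f})")
```

At $|\alpha|^{2}=0$ (the pure PP condensate) the state is a Fock product state and the entropy vanishes, consistent with the near-zero entanglement found below in the region $\gamma-\lambda+1<0$.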
Furthermore, we find that the ground-state entanglement entropy is not maximal at the critical line; rather, in the region $\gamma-\lambda+1>0$ the system is always strongly entangled. Ref. [28] attributes this property to long-range correlations. Additionally, we consider this trait to be associated with the symmetry breaking of the coupling term of the system: due to the asymmetric form of the coupling term, pure photon condensation is forbidden. As a result, in the mixed condensate phase the imbalance $\left(2N_{b}-N_{a}\right)/N$ between the two modes is always very small, which is responsible for the strong entanglement. In addition, if we connect the non-vanishing photon mass $m_{ph}$ to the longitudinal wave number $k_{z}$ through the relation $m_{\mathrm{ph}}=\hbar k_{z}/c$ with $k_{z}=q\pi/D_{0}$, we find that the quantum phase transition of the photon system can be interpreted as second harmonic generation: when $\gamma-\lambda+1<0$, almost all photons with frequency $\omega=ck_{z}$ couple into PPs with frequency $\omega=2ck_{z}$. In this case, the entanglement between the photons and PPs is very small, and the entropy of entanglement is close to zero. 3.2 Dynamics of entanglement In the above analysis, we investigated the entanglement of the ground state and found that across the phase transition line the entanglement entropy exhibits a sudden change. To gain a better understanding of the influence of the ground-state phase transition on entanglement, in this subsection we investigate the dynamics of entanglement. In studying the dynamics of the system, we first need to express a general state in the form of a temporal evolution (i.e., change the expression of a general state from the Heisenberg picture to the Schrödinger picture).
Following the standard procedure, we obtain $$\left|\psi\left(t\right)\right\rangle=U\left(t\right)\left|\psi\left(0\right)\right\rangle=\sum_{m=0}^{M}c_{m}\left(t\right)\left|2m,M-m\right\rangle,$$ (19) where $U\left(t\right)=\sum_{n=0}^{M}\left|\psi_{n}\right\rangle\left\langle\psi_{n}\right|\exp\left(-iE_{n}t\right)$ is the time-evolution operator, with $\left|\psi_{n}\right\rangle$ the eigenstates of the system with energies $E_{n}$, and $\left|\psi\left(0\right)\right\rangle$ is the initial state. The time dependence of the coefficients $c_{m}\left(t\right)$ is given by $c_{m}\left(t\right)=\left\langle 2m,M-m\right|U\left(t\right)\left|\psi\left(0\right)\right\rangle$. The entanglement entropy given in (18) can then be rewritten as $S(\rho)=-\sum_{m=0}^{M}\left|c_{m}\left(t\right)\right|^{2}\log_{2}\left(\left|c_{m}\left(t\right)\right|^{2}\right)$. In this case, the entanglement entropy depends both on the choice of initial state and on the values of the microscopic parameters $\left\{\lambda,\gamma\right\}$. In the present work, we consider the mixed system to be in the BEC state, so choosing the GP state as the initial state is suitable. By adjusting the GP coefficients $\left\{\left|\alpha\right|^{2},\left|\beta\right|^{2}\right\}$ and the microscopic parameters $\left\{\lambda,\gamma\right\}$, we also wish to determine whether the ground-state phase transition is reflected in different dynamics. The time evolution of the entanglement entropy for different initial states and interaction parameters is shown in Fig. 4. From Fig. 4 we observe characteristic features of quantum dynamics, such as the collapse and revival of oscillations and non-periodic oscillations. Additionally, we find that the amplitude of the entanglement entropy is smaller in the region $\gamma-\lambda+1<0$ than in the region $\gamma-\lambda+1>0$.
In particular, we note that the greater the imbalance $\left|\beta\right|^{2}-\left|\alpha\right|^{2}$ between the two modes in the initial state, the more clearly this difference can be observed. To understand the physical reason for this behavior, we rewrite the general state given in (19) in terms of the eigenstates of the system, $\left|{\psi\left(t\right)}\right\rangle=\sum\limits_{n=0}^{M}{c(n,t)}\left|{\psi_{n}}\right\rangle$, where $\left|{c(n,t)}\right|^{2}=\left|{c(n,0)}\right|^{2}=\left|{\left\langle{\psi_{n}}\right|\exp(-iE_{n}t)\left|{\psi\left(0\right)}\right\rangle}\right|^{2}$ can be interpreted as the transition probability from the initial state $\left|{\psi\left(0\right)}\right\rangle$ to the energy eigenstate $\left|{\psi_{n}}\right\rangle$ at any time $t$. We have already seen that, for the ground state, the entanglement entropy exhibits a sudden decrease across the phase transition line $\gamma-\lambda+1=0$; this is why the amplitude of the entanglement entropy becomes smaller in the region $\gamma-\lambda+1<0$. In addition, in this phase-transition region we also investigate how the ground-state transition probability $\left|{c\left({0,t}\right)}\right|^{2}$ depends on the imbalance $\left|\beta\right|^{2}-\left|\alpha\right|^{2}$ of the initial state. The result is shown in Fig. 5: with increasing initial-state imbalance, the ground-state transition probability grows. Thus a greater imbalance between the two modes in the initial state leads to a clearer difference between the amplitudes in the two regions. 4 Conclusion In this work, we have proposed a 2D model consisting of photons and PPs. In the model, the mixed gas of photons and PPs is formally equivalent to a 2D system of massive bosons with non-vanishing chemical potential, which implies the existence of two possible condensate phases.
Based on the GP state and using the variational method, we have also discussed the quantum phase transition of the mixed gas and have obtained the critical coupling line analytically. In particular, we have found that the phase transition of the photon gas can be interpreted as second harmonic generation. Moreover, by investigating the entanglement entropy in the ground state and in a general state, we have illustrated how the entanglement between the photons and the PPs is associated with the phase transition of the system.
References
(1) M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Science 269 (1995) 198.
(2) K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. M. Kurn, and W. Ketterle, Phys. Rev. Lett. 75 (1995) 3969.
(3) C. C. Bradley, C. A. Sackett, and R. G. Hulet, Phys. Rev. Lett. 78 (1997) 985.
(4) S. Jochim et al., Science 302 (2003) 2101.
(5) H. Deng, G. Weihs, J. Bloch, and Y. Yamamoto, Science 298 (2002) 199.
(6) J. Kasprzak et al., Nature 443 (2006) 409.
(7) R. Balili, V. Hartwell, D. Snoke, L. Pfeiffer, and K. West, Science 316 (2007) 1007.
(8) S. O. Demokritov et al., Nature 443 (2006) 430.
(9) G. Ortiz and J. Dukelsky, Phys. Rev. A 72 (2005) 043611.
(10) J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, Nature 468 (2010) 545.
(11) J. Klaers, F. Vewinger, and M. Weitz, Nature Phys. 6 (2010) 512.
(12) D. F. Walls and P. Zoller, Phys. Rev. Lett. 47 (1981) 709.
(13) G. Rempe, F. Schmidt-Kaler, and H. Walther, Phys. Rev. Lett. 64 (1990) 2783.
(14) O. V. Kibis, G. Ya. Slepyan, S. A. Maksimenko, and A. Hoffmann, Phys. Rev. Lett. 102 (2009) 023601.
(15) D. F. Walls and R. Barakat, Phys. Rev. A 1 (1970) 446.
(16) Z. Cheng, Phys. Rev. Lett. 67 (1991) 2788.
(17) T. Yamamoto, M. Koashi, S. K. Özdemir, and N. Imoto, Nature 421 (2003) 343.
(18) Z. D. Walton, A. V. Sergienko, B. E. A. Saleh, and M. C. Teich, Phys. Rev. A 70 (2004) 052317.
(19) W. Denk, J. H. Strickler, and W. W. Webb, Science 248 (1990) 73.
(20) T. Binoth, J. Ph. Guillet, E. Pilon, and M. Werlen, Eur. Phys. J. C 16 (2000) 311.
(21) Z. Cheng, J. Opt. Soc. Am. B 19 (2002) 1962.
(22) N. Bloembergen, Non-Linear Optics (W. A. Benjamin, Inc., New York, 1965).
(23) J. Klaers, J. Schmitt, T. Damm, F. Vewinger, and M. Weitz, Appl. Phys. B 105 (2011) 17.
(24) It may be asked why this term can be ignored. The answer is connected with the fact that in this letter we consider only the infinite-particle-number case, i.e., we treat the total photon number $N$ as a constant. In this case, the chemical-potential term $\mu N_{a}$ places no new restriction on the Hamiltonian. In particular, in the following analysis we point out that in the BEC state the mixed gas of photons and PPs can convert from the mixed photon-PP condensate phase to the pure PP condensate phase, in which the term $\mu N_{a}$ is meaningless. We therefore omit the restriction term $\mu N_{a}$ from the model Hamiltonian.
(25) L. Radzihovsky, J. Park, and P. B. Weichman, Phys. Rev. Lett. 92 (2004) 160402.
(26) M. W. J. Romans, R. A. Duine, S. Sachdev, and H. T. C. Stoof, Phys. Rev. Lett. 93 (2004) 020405.
(27) G. Santos, A. Tonel, A. Foerster, and J. Links, Phys. Rev. A 73 (2006) 023609; M. Duncan, A. Foerster, J. Links, E. Mattei, N. Oelkers, and A. P. Tonel, Nucl. Phys. B 767 (2007) 227.
(28) S. C. Li, J. Liu, and L. B. Fu, Phys. Rev. A 83 (2011) 042107; S. C. Li and L. B. Fu, Phys. Rev. A 84 (2011) 023605.
(29) A. P. Hines, R. H. McKenzie, and G. J. Milburn, Phys. Rev. A 67 (2003) 013609.
(30) A. P. Tonel, J. Links, and A. Foerster, J. Phys. A: Math. Gen. 38 (2005) 1235; J. Links, H. Q. Zhou, R. H. McKenzie, and M. D. Gould, J. Phys. A 36 (2003) R63.
Collision integral for multigluon production in a model for scalar quarks and gluons D. S. Isert and S. P. Klevansky (current address: DB AG, Kleyerstr. 27, 60236 Frankfurt a. M., Germany) Institut für Theoretische Physik, Philosophenweg 19, D-69120 Heidelberg, Germany E-mail: D.Isert@tphys.uni-heidelberg.de Abstract A model of scalar quarks and scalar gluons is used to derive transport equations for quarks and gluons. In particular, the collision integral is studied. The self-energy diagrams are organized according to the number of loops. A generalized Boltzmann equation is obtained which, at the level of up to two loops, involves all possible two $\to$ two parton scattering processes and corrections to the process $q\bar{q}\to g$. 1 Model of scalar quarks and gluons, and transport theory The model underpinning our calculations successfully reproduces the $\ln s$ behavior of high-energy $qq$ scattering [1]. Quarks and antiquarks are described by complex scalar fields $\phi$, and gluons by the scalar field $\chi$, coupled through the Lagrangian $$\displaystyle{\cal L}=\partial^{\mu}{\phi}^{\dagger i,l}\partial_{\mu}\phi_{i,l}+\frac{1}{2}\partial^{\mu}\chi_{a,r}\partial_{\mu}\chi^{a,r}-\frac{m^{2}}{2}\chi_{a,r}\chi^{a,r}-gm{\phi}^{\dagger i,l}(T^{a})^{j}_{i}(T^{r})^{m}_{l}\phi_{j,m}\chi_{a,r}-\frac{gm}{3!}f_{abc}f_{rst}\chi^{a,r}\chi^{b,s}\chi^{c,t}.$$ (1) The quarks are massless, while the gluons are assigned a mass $m$ in order to avoid infra-red divergences. Since in QCD the quartic interaction between gluons leads to terms which are sub-leading in $\ln s$, it is not included here. The two labels on each of the fields refer to the direct product of two color groups, which is necessary to make the three-gluon vertex symmetric under the exchange of two bosonic gluons. To describe non-equilibrium phenomena, we use the Green functions of the Schwinger-Keldysh formalism [2].
The equations of motion for the Wigner transforms of the Green functions, the so-called transport and constraint equations, are derived [3, 4] and read $$-2ip^{\mu}\partial_{X\mu}D^{-+}(X,p)=I_{\rm coll}+I_{-}^{A}+I_{-}^{R}\quad\quad{\rm transport}$$ (2) and $$\left(\frac{1}{2}\partial_{X}^{\mu}\partial_{X\mu}-2p^{2}+2M^{2}\right)D^{-+}(X,p)=I_{\rm coll}+I_{+}^{A}+I_{+}^{R},\quad\quad{\rm constraint},$$ (3) where $D^{-+}$ is a generic Green function, $D^{-+}=S^{-+}$ for quarks and $D^{-+}=G^{-+}$ for gluons. $M$ is the appropriate mass, and $I_{\rm coll}$ is the collision term, $$I_{\rm coll}=\Pi^{-+}(X,p)\hat{\Lambda}D^{+-}(X,p)-\Pi^{+-}(X,p)\hat{\Lambda}D^{-+}(X,p)=I_{\rm coll}^{\rm gain}-I_{\rm coll}^{\rm loss}.$$ (4) $I_{\mp}^{R}$ and $I_{\mp}^{A}$ are terms containing retarded and advanced components: $$I_{\mp}^{R}=-\Pi^{-+}(X,p)\hat{\Lambda}D^{R}(X,p)\pm D^{R}(X,p)\hat{\Lambda}\Pi^{-+}(X,p)$$ (5) $$I_{\mp}^{A}=\Pi^{A}(X,p)\hat{\Lambda}D^{-+}(X,p)\mp D^{-+}(X,p)\hat{\Lambda}\Pi^{A}(X,p).$$ (6) In Eqs. (4) to (6), $\Pi$ is a generic self-energy ($\Sigma_{q}$ for quarks and $\Sigma_{g}$ for gluons) and the operator $\hat{\Lambda}$ is given by $$\hat{\Lambda}:={\rm exp}\left\{\frac{-i}{2}\left(\overleftarrow{\partial}_{X}\overrightarrow{\partial}_{p}-\overleftarrow{\partial}_{p}\overrightarrow{\partial}_{X}\right)\right\}.$$ (7) For the Green functions, we use the quasiparticle approximation: $$iD^{-+}(X,p)=\frac{\pi}{E_{p}}\{\delta(E_{p}-p^{0})f_{a}(X,p)+\delta(E_{p}+p^{0})\bar{f}_{\bar{a}}(X,-p)\}$$ (8) $$iD^{\mp\mp}(X,p)=\frac{\pm i}{p^{2}-M^{2}\pm i\epsilon}+\Theta(-p^{0})iD^{+-}(X,p)+\Theta(p^{0})iD^{-+}(X,p),$$ (9) $D^{+-}$ is obtained from the expression for $D^{-+}$ by replacing $f$ with $\bar{f}$ and vice versa.
$f_{a}$ is the quark ($a=q$) or gluon ($a=g$) distribution function, and $\bar{f}_{a}$ is an abbreviation for $1+f_{a}$. An evolution equation for the parton distribution function is then obtained by integrating Eq. (2) over an interval $\Delta^{+}$ that contains the energy $E_{p}$. To lowest order in an expansion that sets $\hat{\Lambda}=1$, the integral for the collision term is performed in the next section. 2 The collision integral For the collision integral, one has $$J_{\rm coll}=\int_{\Delta^{+}}\!dp_{0}\,\Pi^{-+}(X,p)\,D^{+-}(X,p)-\int_{\Delta^{+}}\!dp_{0}\,\Pi^{+-}(X,p)\,D^{-+}(X,p)=-i\frac{\pi}{E_{p}}\,\Pi^{-+}(X,p_{0}\!=\!E_{p},\vec{p})\,\bar{f}_{a}(X,\vec{p})+i\frac{\pi}{E_{p}}\,\Pi^{+-}(X,p_{0}\!=\!E_{p},\vec{p})\,f_{a}(X,\vec{p}),$$ (10) i.e., the off-diagonal quasiparticle self-energies have to be calculated on shell. In the following, only the results for the quarks are presented; the generalization to the gluons is then obvious. The self-energies are now organized according to their number of loops. 2.1 Hartree and Fock self-energies The Hartree self-energies (self-energies with a quark or a gluon loop) vanish due to their color factors, as is the case in real QCD. The Fock self-energy contributes to the loss term of the collision integral in Eq. (10), given by [5] $$J_{\rm coll}^{\rm loss}=-\frac{\pi}{E_{p}}\int\frac{d^{3}p_{1}}{(2\pi)^{3}2E_{1}}\frac{d^{3}p_{2}}{(2\pi)^{3}2E_{2}}(2\pi)^{4}\delta^{(4)}(p+p_{1}-p_{2})\times|{\cal M}_{q\bar{q}\to g}|^{2}f_{q}(X,\vec{p})f_{\bar{q}}(X,\vec{p}_{1})\bar{f}_{g}(X,\vec{p}_{2}).$$ (11) Here, ${\cal M}_{q\bar{q}\to g}$ denotes the scattering amplitude of the process $q\bar{q}\to g$ of order $gm$, which is pictured in Fig. 1a).
Processes such as $qg\to q$, $q\to qg$ or $q\bar{q}g\to\emptyset$ are kinematically forbidden and therefore do not occur in this collision integral. The gain term is obtained by exchanging $f$ with $\bar{f}$ in Eq. (11). 2.2 Two-loop self-energies The two-loop self-energy graphs are pictured in Fig. 2. According to their topology, they are called rainbow (R), ladder (L), cloud (C), exchange (E) and quark-loop (QL) graphs. Let us first consider the contribution of the seven diagrams Ra), La), Ca), Cb), Ea), Eb) and QLa) to the collision integral. The first five of these diagrams lead to the (lowest-order) scattering amplitudes of the processes $q\bar{q}\to gg$ and $qg\to qg$, while the latter two lead to the (lowest-order) scattering amplitudes of the processes $qq\to qq$ and $q\bar{q}\to q\bar{q}$ [5]. Thus, to obtain all possible $2\to 2$ processes it is necessary to consider all types of diagrams. An analysis of the color factors, however, shows that the quark-loop diagrams are subleading, as is the case in real QCD. Therefore, in an additional expansion in the inverse number of colors, $1/N_{c}$, one can neglect the processes $qq\to qq$ and $q\bar{q}\to q\bar{q}$. This means that gluon production is favored. The three diagrams Rd), Ld) and QLd) vanish due to the momentum structure of their propagators [6]. To see the purpose of the remaining ten diagrams [6], i.e. Rb), Rc), Lb), Lc), Cc), Cd), Ec), Ed), QLb) and QLc), we first investigate the process $q\bar{q}\to g$, which is shown up to order $g^{3}m^{3}$ in Fig. 1. $|{\cal M}_{q\bar{q}\to g}|^{2}$ was given in lowest order by the Fock term. To construct it up to order $g^{4}m^{4}$, one has to take the hermitian conjugate of the scattering amplitude of Fig. 1a) and multiply it with each of the scattering amplitudes of Fig. 1b)-g); the hermitian conjugate of each of these products must be taken as well. Using a symbolic notation, we write these products as $a^{\dagger}b,ab^{\dagger},a^{\dagger}c,...$.
Then a detailed analysis shows that each of these products, with the exception of $a^{\dagger}g$ and $ag^{\dagger}$, is provided by one of the above-mentioned ten self-energy diagrams. The diagram g) of Fig. 1 does not enter the collision integral, as it is a renormalization diagram for the incoming quark, whose momentum $p$ is fixed externally. So far, we have constructed the collision integral out of self-energy diagrams with up to two loops. Let us now return to the transport equation (2). If we integrate its left-hand side over the interval $\Delta^{+}$ and neglect the contributions of $I^{A}$ and $I^{R}$ on the right-hand side, we obtain for the transport equation up to order $g^{4}m^{4}$ $$\displaystyle 2p^{\mu}\partial_{X\mu}f_{q}(X,\vec{p})=\int\frac{d^{3}p_{1}}{(2\pi)^{3}2E_{1}}\frac{d^{3}p_{2}}{(2\pi)^{3}2E_{2}}(2\pi)^{4}\delta^{(4)}(p+p_{1}-p_{2})\times|{\cal M}_{q\bar{q}\to g}|^{2}\left\{\bar{f}_{q}(X,\vec{p})\bar{f}_{\bar{q}}(X,\vec{p}_{1})f_{g}(X,\vec{p}_{2})-f_{q}(X,\vec{p})f_{\bar{q}}(X,\vec{p}_{1})\bar{f}_{g}(X,\vec{p}_{2})\right\}$$ $$\displaystyle+\int\frac{d^{3}p_{1}}{(2\pi)^{3}2E_{1}}\frac{d^{3}p_{2}}{(2\pi)^{3}2E_{2}}\frac{d^{3}p_{3}}{(2\pi)^{3}2E_{3}}(2\pi)^{4}\delta^{(4)}(p+p_{1}-p_{2}-p_{3})\times\left[\frac{1}{2}|{\cal M}_{q\bar{q}\to gg}|^{2}\left\{\bar{f}_{q}(X,\vec{p})\bar{f}_{\bar{q}}(X,\vec{p}_{1})f_{g}(X,\vec{p}_{2})f_{g}(X,\vec{p}_{3})-f_{q}(X,\vec{p})f_{\bar{q}}(X,\vec{p}_{1})\bar{f}_{g}(X,\vec{p}_{2})\bar{f}_{g}(X,\vec{p}_{3})\right\}+|{\cal M}_{qg\to qg}|^{2}\left\{\bar{f}_{q}(X,\vec{p})\bar{f}_{g}(X,\vec{p}_{1})f_{q}(X,\vec{p}_{2})f_{g}(X,\vec{p}_{3})-f_{q}(X,\vec{p})f_{g}(X,\vec{p}_{1})\bar{f}_{q}(X,\vec{p}_{2})\bar{f}_{g}(X,\vec{p}_{3})\right\}\right].$$ (12) The generalization to higher orders is then obvious.
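The gain-loss structure of Eq. (12) can be checked numerically: for equilibrium Bose-Einstein distributions (all partons are bosons in this scalar model, so $\bar{f}=1+f$), the gain and loss terms cancel for any energy-conserving configuration. A minimal detailed-balance sketch for the $q\bar{q}\to g$ term, with arbitrary assumed energies and temperature (natural units):

```python
import math

def f_BE(E, T):
    """Bose-Einstein occupation number, f(E) = 1 / (exp(E/T) - 1)."""
    return 1.0 / (math.exp(E / T) - 1.0)

def gain_minus_loss(E_q, E_qbar, T):
    """Gain minus loss for q qbar -> g, with energy conservation E_g = E_q + E_qbar."""
    E_g = E_q + E_qbar
    fq, fqb, fg = f_BE(E_q, T), f_BE(E_qbar, T), f_BE(E_g, T)
    gain = (1 + fq) * (1 + fqb) * fg   # fbar_q fbar_qbar f_g
    loss = fq * fqb * (1 + fg)         # f_q f_qbar fbar_g
    return gain - loss

# In equilibrium the collision term vanishes (detailed balance):
residual = gain_minus_loss(0.3, 1.1, T=0.5)   # ~0 up to rounding
```

The cancellation follows from $(1+f)/f=e^{E/T}$, so the gain/loss ratio is $e^{(E_{q}+E_{\bar{q}}-E_{g})/T}=1$; any out-of-equilibrium distribution breaks it, driving the evolution in Eq. (12).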
Including, e.g., the three-loop self-energies in the collision term leads to cross sections of all kinematically allowed processes with two (three) partons in the initial state and three (two) partons in the final state. Furthermore, they give all additional corrections of order $g^{6}m^{6}$ to the process $q\bar{q}\to g$, e.g. the products of scattering amplitudes of the diagrams b)-f) in Fig. 2, and also corrections to the scattering processes of two partons into two partons. 3 Closing comments One may speculate that pinch singularities could enter the collision integral up to the two-loop level. In fact this is not so, as has been explicitly demonstrated [6]. References [1] J.R. Forshaw and D.A. Ross, Quantum Chromodynamics and the Pomeron (Cambridge University Press, Cambridge, 1997). [2] J. Schwinger, J. Math. Phys. 2, 407 (1961); L.V. Keldysh, JETP 20, 1018 (1965). [3] S. Mrówczyński and U. Heinz, Ann. Phys. (N.Y.) 229, 1 (1994). [4] S.P. Klevansky, A. Ogura, and J. Hüfner, Ann. Phys. (N.Y.) 261, 261 (1997). [5] D.S. Isert and S.P. Klevansky, hep-ph/9905289. [6] D.S. Isert and S.P. Klevansky, hep-ph/9912203.
A Controllable Interaction between Two-Level Systems inside a Josephson Junction L. Tian and K. Jacobs Abstract Two-level system fluctuators (TLS’s) in the tunnel barrier of a Josephson junction have recently been demonstrated to cause novel energy splittings in spectroscopic measurements of superconducting phase qubits. With their strong coupling to the Josephson junction and relatively long decoherence times, TLS’s can be considered as potential qubits and can demonstrate coherent quantum effects. Here, we study the effective interaction between the TLS qubits that is mediated by a Josephson junction resonator driven by an external microwave source. This effective interaction can enable controlled quantum logic gates between the TLS’s. Our study can be extended to other superconducting resonators coupled to TLS’s. Index Terms: Superconducting resonators, Quantum theory, Superconducting device noise. Footnote 1: Manuscript received January 12, 2021. Footnote 2: This work was supported in part by the Karel Urbanek Postdoc Fellowship in the Department of Applied Physics at Stanford University. L. Tian was with the Department of Applied Physics at Stanford University, Stanford, CA 94305 USA. She is at the University of California, Merced, CA 95344 USA (phone: 209-228-4209; e-mail: ltian@ucmerced.edu). Footnote 3: K. Jacobs is with the Department of Physics at the University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, MA 02125 USA (phone: 617-287-6044; e-mail: kurt.jacobs@umb.edu). I Introduction Two-level system fluctuators (TLS’s) are a ubiquitous source of decoherence for solid-state qubits [1, 2]. In superconducting qubits [3], TLS’s have been widely studied both experimentally and theoretically and are often considered as the source of low-frequency ($1/f$) noise [4, 5, 6, 7, 8, 9, 10].
In recent experiments, energy splittings in spectroscopic measurements have been observed in superconducting qubits, showing coherent coupling between the TLS’s and the qubits [11, 12]. The TLS’s have demonstrated decoherence times much longer than those of the superconducting qubits [13] and hence can themselves be considered as effective qubits for testing quantum information protocols [11, 12, 14, 15, 16, 17, 18, 19, 20]. Josephson junctions can be operated as microwave resonators [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]. In a previous work [32], a cavity QED approach [33] to characterizing the coupling between TLS’s inside a Josephson junction and the junction resonator was suggested, in which a Jaynes-Cummings model was derived and the coupling between the TLS’s and the resonator can be modulated with an applied magnetic field. By measuring microwave transmission in the junction resonator, various properties of the TLS’s can be probed, including their spatial distribution and their coupling mechanisms to the junction. In the following, we study the effective interaction between the TLS’s that is mediated by the junction resonator. In our system, because the junction resonator can have a decay rate that is much stronger than that of the TLS’s, the effect of the resonator decay needs to be taken into account. We will present the effective interaction both for coupling with a high-Q resonator and for coupling with a strongly-damped resonator. A microwave source can be applied to the junction resonator [32], which provides us with a tool to control the coupling between the TLS’s and the resonator. This paper is organized as follows. In Sec. II, we present the model of the coupling between the TLS’s and the junction resonator. In Sec. III, we derive the effective interaction between the TLS’s mediated by the resonator mode. In Sec. IV, we briefly discuss the implementation of quantum logic gates between the TLS qubits and the effect of decoherence.
We discuss the readout of the TLS qubits in Sec. V and give the conclusions in Sec. VI. II Model The system is depicted in Fig. 1. A Josephson junction can be described in terms of the gauge-invariant phase $\Phi$ and its conjugate momentum $P_{\Phi}$ [34], with a capacitive energy $P_{\Phi}^{2}/2C_{0}$ and a potential energy $-E_{J}\cos(2e\Phi/\hbar)$, where $C_{0}$ is the total capacitance and $E_{J}$ is the Josephson energy. When combined with the inductance $L$ in an RF SQUID loop, the Hamiltonian of the resonator can be written as $$H_{c}=\frac{P_{\Phi}^{2}}{2C_{0}}-E_{J}\cos(2e\Phi/\hbar)+\frac{(\Phi+\Phi_{ex})^{2}}{2L}$$ (1) with external magnetic flux $\Phi_{ex}$ inside the SQUID loop. The Hamiltonian describes an oscillator mode: $H_{c}\approx P_{\Phi}^{2}/(2C_{0})+C_{0}\omega_{c}^{2}(\Phi-\Phi_{s})^{2}/2$ with a shift $\Phi_{s}$ and a frequency $$\omega_{c}=\sqrt{\frac{1}{LC_{0}}+\frac{4e^{2}E_{J}\cos(2e\Phi_{s}/\hbar)}{\hbar^{2}C_{0}}},$$ (2) both depending on the magnetic flux $\Phi_{ex}$. Different coupling mechanisms between TLS’s and the junction have been discussed and observed. For example, TLS’s can couple with the critical current of the junction in the form $-(2e/\hbar)E_{J}\Phi\sum_{n}\vec{j}_{n}\cdot\vec{\sigma}_{n}$, where $\vec{j}_{n}$ and $\vec{\sigma}_{n}$ are, respectively, the polarization vector and the vector of Pauli spin matrices for the $n^{\mbox{\scriptsize th}}$ TLS. Below we assume $\vec{j}_{n}=(j_{x},0,0)$ for simplicity.
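Eq. (2) is straightforward to evaluate numerically. The sketch below uses assumed, order-of-magnitude junction parameters ($L$, $C_{0}$, $E_{J}$ are illustrative values, not taken from the paper); with the Josephson term switched off it reduces to the bare $LC$ frequency, and for $\cos(2e\Phi_{s}/\hbar)>0$ the Josephson term stiffens the mode:

```python
import math

# Physical constants (SI)
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s

def resonator_frequency(L, C0, EJ, Phi_s):
    """omega_c of Eq. (2): sqrt(1/(L C0) + 4 e^2 E_J cos(2 e Phi_s / hbar) / (hbar^2 C0))."""
    return math.sqrt(1.0 / (L * C0)
                     + 4.0 * e**2 * EJ * math.cos(2.0 * e * Phi_s / hbar)
                     / (hbar**2 * C0))

# Hypothetical illustrative values: L = 1 nH, C0 = 1 pF, E_J ~ 1e-22 J.
L, C0, EJ = 1e-9, 1e-12, 1.0e-22
omega_c = resonator_frequency(L, C0, EJ, Phi_s=0.0)   # rad/s, a few 10^10
```

Tuning $\Phi_{ex}$ moves the shift $\Phi_{s}$ and hence $\omega_{c}$, which is the knob used later to bring the resonator in and out of resonance with individual TLS's.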
To the lowest order in $\Phi-\Phi_{s}$ and with the notation $\Phi-\Phi_{s}=\sqrt{\hbar/(2C_{0}\omega_{c})}(a+a^{\dagger})$, the total Hamiltonian of the coupled system is $$\displaystyle H_{t}=\hbar\omega_{c}a^{\dagger}a+H_{i}+\epsilon(t)(a+a^{\dagger})$$ (3) $$\displaystyle H_{i}=\sum_{n}\frac{\hbar\omega_{n}}{2}\sigma_{nz}+g_{n}(a\sigma_{n+}+a^{\dagger}\sigma_{n-})$$ (4) where $a$ ($a^{\dagger}$) is the annihilation (creation) operator of the resonator mode, $\omega_{n}$ is the frequency of the $n^{\mbox{\scriptsize th}}$ TLS, and $g_{n}=E_{J}j_{xn}\sqrt{\hbar/(2C_{0}\omega_{c})}\sin(2e\Phi_{s}/\hbar)$ is the coupling constant. The Hamiltonian includes a driving of the resonator mode with amplitude $\epsilon(t)$. Eq. (3) has the form of a Jaynes-Cummings model, which has been widely studied in cavity QED systems. Note that the coupling with the electric (dielectric) field inside the junction can be derived similarly [11, 32]. Dissipative effects can be an important factor when we study the effective coupling between TLS’s. The decay rate of a junction resonator is much stronger than that of the TLS’s. In the following, we treat the resonator decay as a bosonic bath [35] that couples to the resonator with the Hamiltonian $H_{\kappa}=\sum_{k}\hbar\omega_{k}a_{k}^{\dagger}a_{k}+c_{k}(a_{k}^{\dagger}a+a^{\dagger}a_{k})$, where $a_{k}$ ($a_{k}^{\dagger}$) is the annihilation (creation) operator for the bath modes, $\omega_{k}$ is the frequency of the modes, and $c_{k}$ is the coupling constant, related to the decay rate $\kappa$ by $\pi\sum_{k}c_{k}^{2}\delta(\omega-\omega_{k})=\kappa$.
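The driven Jaynes-Cummings Hamiltonian of Eqs. (3)-(4) can be written down explicitly in a truncated Fock basis. A sketch for a single TLS with assumed parameter values (in units of $\hbar=1$); for $\epsilon=0$ the excitation number $a^{\dagger}a+\sigma_{z}/2$ commutes with $H_{t}$, which the code checks:

```python
import numpy as np

def jc_hamiltonian(wc, wn, g, eps, nmax):
    """H_t of Eqs. (3)-(4) for one TLS, resonator truncated at nmax photon states."""
    a = np.diag(np.sqrt(np.arange(1, nmax)), 1)   # truncated annihilation operator
    I_c, I_q = np.eye(nmax), np.eye(2)
    sz = np.diag([1.0, -1.0])                     # sigma_z (|e> first)
    sm = np.array([[0.0, 0.0], [1.0, 0.0]])       # sigma_- ; sigma_+ = sm.T
    H = (wc * np.kron(a.T @ a, I_q)
         + 0.5 * wn * np.kron(I_c, sz)
         + g * (np.kron(a, sm.T) + np.kron(a.T, sm))   # a sigma_+ + a^dag sigma_-
         + eps * np.kron(a + a.T, I_q))                # drive eps (a + a^dag)
    return H, a, sz

# Hypothetical parameters: wc, wn near 5 (GHz-like units), weak coupling, no drive.
nmax = 10
H, a, sz = jc_hamiltonian(wc=5.0, wn=5.2, g=0.05, eps=0.0, nmax=nmax)
N = np.kron(a.T @ a, np.eye(2)) + 0.5 * np.kron(np.eye(nmax), sz)  # excitation number
```

The drive term $\epsilon(a+a^{\dagger})$ breaks this conservation law, which is what makes the driven model useful for controlling the TLS's in the later sections.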
In the master equation approach, the decay can be described in Lindblad form as $$\frac{\partial\rho}{\partial t}=-i[H_{t},\,\rho]+\kappa\mathcal{L}(a)\rho.$$ (5) In this work, we are interested in TLS’s that are in the same frequency range as the resonator, i.e. gigahertz, so thermal fluctuations can be neglected. The Lindblad superoperator can hence be expressed as $\mathcal{L}(o)\rho=\frac{1}{2}(2o\rho o^{\dagger}-\rho o^{\dagger}o-o^{\dagger}o\rho)$ for an operator $o$. The intrinsic noise bath of the TLS’s is neglected given their long decoherence times. III Effective Interaction TLS’s do not interact directly, owing to their low density inside the Josephson junction. However, because of their coupling to the same cavity mode, an effective interaction can be obtained. In this paper, we study the effective interaction in two situations: TLS’s coupling with a high-Q resonator with moderate to high decay rate [36], and TLS’s coupling with a strongly-damped resonator, where the resonator decay is stronger than the detuning and than the coupling between the TLS’s and the resonator. High-Q resonator. We consider the dispersive regime [37, 38, 39], where the resonator is far detuned from the TLS’s with $g_{n}\ll|\Delta_{nc}|$. Here, $\Delta_{nc}\equiv\Delta_{n}-\Delta_{c}$ is the detuning between the $n^{\mbox{\scriptsize th}}$ TLS and the resonator mode.
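The dissipator in Eq. (5) can be checked numerically: $\mathcal{L}(a)$ preserves the trace, so $\mathrm{Tr}[\mathcal{L}(a)\rho]=0$ for any density matrix, and it maps Hermitian matrices to Hermitian matrices. A minimal sketch in a truncated Fock space (the truncation level and the random test state are assumptions for illustration):

```python
import numpy as np

def dissipator(o, rho):
    """Lindblad form L(o)rho = (1/2)(2 o rho o^dag - rho o^dag o - o^dag o rho)."""
    od = o.conj().T
    return 0.5 * (2.0 * o @ rho @ od - rho @ od @ o - od @ o @ rho)

nmax = 8
a = np.diag(np.sqrt(np.arange(1, nmax)), 1)   # truncated annihilation operator

# An arbitrary valid density matrix: random positive matrix, normalized to trace 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((nmax, nmax)) + 1j * rng.standard_normal((nmax, nmax))
rho = X @ X.conj().T
rho /= np.trace(rho).real

drho = dissipator(a, rho)   # contribution of kappa L(a) rho (per unit kappa)
```

Trace preservation follows from the cyclic property of the trace, $\mathrm{Tr}[a\rho a^{\dagger}]=\mathrm{Tr}[a^{\dagger}a\rho]$, which holds exactly even in the truncated space.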
The effective interaction can be derived by applying a unitary transformation to the Hamiltonian, $\widetilde{H}_{t}=UH_{t}U^{\dagger}$, where the transformation is $$U=e^{-\epsilon(a-a^{\dagger})/\Delta_{c}}\prod_{n}e^{-g_{n}(a^{\dagger}\sigma_{n-}-\sigma_{n+}a)/\Delta_{nc}}.$$ (6) The total Hamiltonian then becomes $\widetilde{H}_{t}=H_{c}+\widetilde{H}_{eff}+\widetilde{H}_{x}$, where $H_{c}=\hbar\Delta_{c}a^{\dagger}a$ denotes the Hamiltonian of the resonator, $\widetilde{H}_{eff}$ the effective Hamiltonian of the TLS’s, and $\widetilde{H}_{x}$ the residual coupling between the TLS’s and the resonator [36]. We derive $$\widetilde{H}_{eff}=\sum_{n}\left[\frac{\hbar\bar{\Delta}_{n}}{2}\sigma_{nz}+\frac{\Omega_{nx}}{2}\sigma_{nx}\right]+H_{int}+\widetilde{H}_{\kappa}$$ (7) which includes single-qubit terms with effective detuning $\bar{\Delta}_{n}=\Delta_{n}+(g_{n}^{2}/\Delta_{nc})(1-2\epsilon/\Delta_{c})$ and effective Rabi frequency $\Omega_{nx}=2\epsilon g_{n}/\Delta_{nc}$, an effective interaction $H_{int}=\sum\lambda_{mn}(\sigma_{n+}\sigma_{m-}+\sigma_{m+}\sigma_{n-})/2$ with the coupling constant $$\lambda_{nm}=\frac{g_{n}g_{m}(\Delta_{nc}+\Delta_{mc})}{2\Delta_{nc}\Delta_{mc}},$$ (8) and an induced coupling to the bath modes of the resonator, $\widetilde{H}_{\kappa}=\sum_{n,k}(g_{n}c_{k}/\Delta_{nc})(\sigma_{n+}a_{k}+a_{k}^{\dagger}\sigma_{n-})$. The residual coupling has the form $$\widetilde{H}_{x}=\sum_{n}\frac{g_{n}^{2}}{\Delta_{nc}}\sigma_{nz}\left[a^{\dagger}a+\epsilon\left(\frac{\Delta_{c}-2\Delta_{nc}}{2\Delta_{nc}\Delta_{c}}\right)(a+a^{\dagger})\right]$$ (9) with a Stark shift for the resonator and an extra coupling to the resonator amplitude. The unitary transformation shifts the amplitude of the resonator to zero ($\langle a\rangle\approx 0$) for finite driving $\epsilon$, so that the effect of the Stark shift is always small. Strongly-damped resonator.
The decay rate can reach tens of megahertz in lossy resonators and cannot be neglected when compared with the coupling between the TLS’s and the resonator. We set the driving to be $\epsilon(t)=2\epsilon_{0}\cos\omega_{d}t$ with frequency $\omega_{d}$ and amplitude $\epsilon_{0}$. To study the dynamics in the Heisenberg picture, we start from Eq. (3). With $\dot{o}=i[H_{t}+H_{\kappa},\,o]$ for an arbitrary operator $o$, we have $$\displaystyle\dot{a}=-i\omega_{c}a-i\sum g_{n}\sigma_{n-}-i\epsilon-i\sum c_{k}a_{k}$$ (10) $$\displaystyle\dot{a}_{k}=-ic_{k}a-i\omega_{k}a_{k}$$ (11) which gives $$a_{k}=a_{k}(0)e^{-i\omega_{k}t}-ic_{k}\int dt^{\prime}e^{-i\omega_{k}(t-t^{\prime})}a(t^{\prime}),$$ (12) with $a_{k}(0)$ being the noise operator in the Schrödinger picture. Substituting Eq. (12) into Eq. (10), we derive the following relation for the resonator mode: $$\dot{a}=-i\Delta_{c}a-i\sum_{n}g_{n}\sigma_{n-}-i\epsilon_{0}-\kappa a-i\sqrt{\kappa}a_{in}$$ (13) where $\Delta_{c}=\omega_{c}-\omega_{d}$ is the detuning of the resonator and $a_{in}=(1/\sqrt{\pi})\sum a_{k}(0)e^{-i\omega_{k}t}$ is the input field of the bath modes. Similar equations can be derived for the TLS’s: $$\displaystyle\dot{\sigma}_{n-}=-i\Delta_{n}\sigma_{n-}+ig_{n}\sigma_{nz}a$$ $$\displaystyle\dot{\sigma}_{n+}=+i\Delta_{n}\sigma_{n+}-ig_{n}a^{\dagger}\sigma_{nz}$$ (14) $$\displaystyle\dot{\sigma}_{nz}=2ig_{n}a^{\dagger}\sigma_{n-}-2ig_{n}\sigma_{n+}a$$ where $\Delta_{n}=\omega_{n}-\omega_{d}$ is the detuning of the $n^{\mbox{\scriptsize th}}$ TLS. In the bad-cavity limit, $\kappa\sim\Delta_{n,c},g_{n}\gg\gamma_{n1},\gamma_{n2}$, we can eliminate the resonator mode by setting the right-hand side of Eq. (13) to zero.
This gives $$a=-\frac{i\epsilon_{0}+i\sum_{n}g_{n}\sigma_{n-}+i\sqrt{\kappa}a_{in}}{\kappa+i\Delta_{c}},$$ (15) where the resonator adiabatically follows the dynamics of the TLS’s. The conjugate relation for $a^{\dagger}$ can be derived as well. Substituting Eq. (15) into Eq. (14), we can now derive a set of equations that govern the dynamics of the TLS’s: $$\left(\begin{array}{c}\dot{\sigma}_{n-}\\ \dot{\sigma}_{n+}\\ \dot{\sigma}_{nz}\end{array}\right)=A_{n}\left(\begin{array}{c}\sigma_{n-}\\ \sigma_{n+}\\ \sigma_{nz}\end{array}\right)-\bar{\gamma}_{1}\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right)+B_{n}$$ (16) where $A_{n}$ determines the dynamics of a single TLS, i.e. the parameters in the Bloch equation for a single TLS, with $$A_{n}=\left(\begin{array}{ccc}-i\bar{\Delta}_{n}-\bar{\gamma}_{2}&0&i\Omega_{n}+\Lambda_{n}\\ 0&i\bar{\Delta}_{n}^{\star}-\bar{\gamma}_{2}&-i\Omega_{n}^{\star}+\Lambda_{n}^{\dagger}\\ 2i\Omega_{n}^{\star}-2\Lambda_{n}^{\dagger}&-2i\Omega_{n}-2\Lambda_{n}&-\bar{\gamma}_{1}\end{array}\right)$$ and $B_{n}$ determines the effective interaction, with $$B_{n}=\sum_{m}\left(\begin{array}{c}i\lambda_{nm}\sigma_{nz}\sigma_{m-}\\ -i\lambda_{mn}\sigma_{m+}\sigma_{nz}\\ 2i\lambda_{mn}\sigma_{m+}\sigma_{n-}-2i\lambda_{nm}\sigma_{n+}\sigma_{m-}\end{array}\right).$$ The single-TLS parameters in the matrix $A_{n}$ are: the effective detuning $\bar{\Delta}_{n}=\Delta_{n}-\Delta_{c}g_{n}^{2}/(\kappa^{2}+\Delta_{c}^{2})$, the effective Rabi frequency $\Omega_{n}=-ig_{n}\epsilon_{0}/(\kappa+i\Delta_{c})$, the induced dephasing rate $\bar{\gamma}_{2}=g_{n}^{2}\kappa/(\kappa^{2}+\Delta_{c}^{2})$, and the induced decay rate $\bar{\gamma}_{1}=2\bar{\gamma}_{2}$. The induced dephasing (decay) is due to the effective bath term $$\Lambda_{n}=\frac{g_{n}\sqrt{\kappa}a_{in}}{\kappa+i\Delta_{c}}$$ (17) in the matrix $A_{n}$.
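The adiabatic elimination behind Eq. (15) can be illustrated with a c-number version of Eq. (13): dropping the TLS and noise terms and integrating $\dot{a}=-(\kappa+i\Delta_{c})a-i\epsilon_{0}$, the cavity amplitude relaxes on a time-scale $1/\kappa$ to the steady value $-i\epsilon_{0}/(\kappa+i\Delta_{c})$, exactly the $\sigma$- and noise-free limit of Eq. (15). A sketch with assumed rates (all in the same arbitrary units):

```python
def cavity_relaxation(kappa, Delta_c, eps0, dt=1e-3, steps=20000):
    """Euler-integrate the c-number cavity equation da/dt = -(kappa + i Delta_c) a - i eps0."""
    a = 0.0 + 0.0j
    for _ in range(steps):
        a += dt * (-(kappa + 1j * Delta_c) * a - 1j * eps0)
    return a

kappa, Delta_c, eps0 = 5.0, 2.0, 1.0            # hypothetical rates
a_final = cavity_relaxation(kappa, Delta_c, eps0)
a_steady = -1j * eps0 / (kappa + 1j * Delta_c)  # Eq. (15) with TLS and noise terms dropped
```

After a total time much longer than $1/\kappa$, the integrated amplitude agrees with the algebraic steady state, which is the sense in which the resonator "adiabatically follows" the slower TLS dynamics.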
For TLS’s to exhibit quantum coherence, the decoherence rates $\bar{\gamma}_{1,2}$ must be weaker than the other time-scales in the system, as will be discussed below. The effective coupling constant can be derived from $B_{n}$ as $$\lambda_{nm}=\frac{-ig_{n}g_{m}}{\kappa+i\Delta_{c}}$$ (18) and satisfies $\lambda_{nm}=\lambda_{mn}^{\star}$. The coupling depends on $\Delta_{c}$ in a similar way as does the Rabi frequency $\Omega_{n}$. The following effective Hamiltonian can then be derived for the TLS’s: $$\widetilde{H}_{eff}=\sum\frac{\bar{\Delta}_{n}}{2}\sigma_{nz}+\Omega_{n}\sigma_{n+}+\Omega_{n}^{\star}\sigma_{n-}+\sum_{\langle n,m\rangle}\lambda_{nm}\sigma_{n+}\sigma_{m-}+\lambda_{nm}^{\star}\sigma_{m+}\sigma_{n-}.$$ (19) An interesting difference between the coupling in Eq. (8) and that in Eq. (18) is that the effective coupling in Eq. (18) does not depend on the frequencies of the TLS’s. In Fig. 2, we plot the magnitudes of these two couplings for comparison. IV Quantum Logic Operations The above system can be used to implement universal quantum operations on the TLS’s. The driving amplitude, the driving frequency, and the resonator frequency $\omega_{c}$ (or detuning $\Delta_{c}$) can be adjusted independently. Choosing two linearly-independent Hamiltonians $H_{1}$ and $H_{2}$ from the general expression of $\widetilde{H}_{eff}$, a complete set of operators can be constructed from commutators such as $[H_{1},H_{2}]$, $[H_{1},[H_{1},H_{2}]]$, etc. [40]. This shows that universal quantum gates can be realized by adjusting the above parameters. Details on how to realize gates such as the SWAP gate and the Hadamard gate can be found elsewhere [36]. The coupling between the TLS’s and the resonator induces extra decoherence on the TLS’s due to the decay of the junction resonator.
For coupling with a high-Q resonator, a noise term $\widetilde{H}_{\kappa}$ is generated; for coupling with a strongly-damped resonator, a term $\Lambda_{n}$ is generated. To realize quantum logic gates, the decoherence rates must be much smaller than the effective Rabi frequency $\Omega_{nx}$ and the effective coupling constant $\lambda_{mn}$. It can be shown that the decoherence rate for coupling with a high-Q resonator is $\tau_{d}^{-1}\sim g_{n}^{2}\kappa/\Delta_{nc}^{2}$. With $\kappa\ll|\Delta_{nc}|$, hundreds of quantum operations can be performed within the decoherence time even when the decay rate of the resonator is a few megahertz, as has been studied in detail in Ref. [36]. The decoherence rate for coupling with a strongly-damped resonator, however, is $\tau_{d}^{-1}\sim g_{n}^{2}\kappa/(\kappa^{2}+\Delta_{c}^{2})$. With $\kappa\geq\Delta_{c}$, we have $\tau_{d}^{-1}\sim\Omega_{nx},\lambda_{mn}$, and only a few operations can be performed at best. V Readout The junction resonator can function as a readout device for the TLS’s. We consider the measurement of the $n^{\mbox{\scriptsize th}}$ TLS in the dispersive regime. Let the frequency of the resonator be close to that of this TLS, but with a detuning satisfying $g_{n}\ll|\Delta_{nc}|$. All the other TLS’s are very far off-resonance from the resonator, and so have much smaller Stark shifts. This can be achieved by switching the resonator frequency and the driving frequency on a nanosecond time-scale. Because the dynamics of the TLS’s occurs on a much slower time-scale than this switching, the state of the TLS can be considered unaffected during the switching. A measurement of the transmission or reflection of the junction resonator can be used to reveal the qubit states. Meanwhile, phase-sensitive detection of the stationary state of the resonator can give a direct measurement of the TLS’s for a strongly-damped resonator, according to Eq. (15). 
For example, we have $$a+a^{\dagger}=\frac{-2\epsilon_{0}\Delta_{c}}{\kappa^{2}+\Delta_{c}^{2}}-\frac{\sum_{n}g_{n}\kappa\sigma_{ny}+g_{n}\Delta_{c}\sigma_{nx}}{\kappa^{2}+\Delta_{c}^{2}}.$$ (20) When the couplings ($g_{n}$’s) are different for different TLS’s, the output of the resonator provides a readout of multiple TLS’s in a single measurement [21]. In addition, by choosing the phase of the measured canonical variable of the resonator, we can choose which TLS operator will be measured. With $\phi=\arg(-ig_{1}/(\kappa+i\Delta_{c}))$, we have $ae^{-i\phi}+a^{\dagger}e^{i\phi}\propto\sigma_{1x}$ and a measurement of $\sigma_{1x}$ is performed. This scheme also provides a measurement of the time dependence of the properties of the TLS’s. It can be shown that $$\langle a^{\dagger}(t)a(t)\rangle=\textrm{Tr}_{s,r}[e^{i\bar{H}_{t}t}a^{\dagger}e^{-i\bar{H}_{t}t}aW(0)]$$ (21) where $W(0)$ is the initial density matrix including the environmental degrees of freedom and $\bar{H}_{t}$ is the total Hamiltonian including the system and the bath. The trace is taken over the system modes (index $s$) and the bath modes (index $r$). Substituting the expression in Eq. (15) into Eq. (21) and considering one TLS ($n=1$) for simplicity, we derive $$\langle a^{\dagger}(t)a(t)\rangle=\frac{g_{1}^{2}C(t)+g_{1}\epsilon_{0}M(t)+\epsilon_{0}^{2}}{\kappa^{2}+\Delta_{c}^{2}}$$ (22) where $C(t)=\langle\sigma_{1+}(t)\sigma_{1-}(t)\rangle$ and $M(t)=\langle\sigma_{1+}(t)+\sigma_{1-}(t)\rangle$. The measured correlation function $C(t)$ can be used to extract the parameters in the matrix $A_{n}$, and thus reveal the properties of the TLS’s, via the quantum regression theorem [35]. VI Conclusions To conclude, we have studied the effective coupling between the TLS’s inside a Josephson junction using quantum optics approaches. Two situations were studied and compared: TLS’s coupled to a high-Q resonator mode and TLS’s coupled to a strongly-damped resonator mode. 
Our results indicate that the couplings in these two regimes have very different properties. Universal quantum gates can be realized on the TLS’s when they are coupled to a high-Q resonator, while the fast decay of a strongly-damped resonator can destroy the coherence of the qubits and prevent the successful realization of the quantum gates. We also discussed the readout of the TLS qubits via detection of the junction resonator. References [1] P. Dutta and P. M. Horn, Rev. Mod. Phys. 53, 497 (1981). [2] M. B. Weissman, Rev. Mod. Phys. 60, 537 (1988). [3] Y. Makhlin, G. Schön, and A. Shnirman, Rev. Mod. Phys. 73, 357 (2001). [4] O. Astafiev et al., Phys. Rev. Lett. 96, 137001 (2006). [5] D. J. Van Harlingen et al., Phys. Rev. B 70, 064517 (2004). [6] F. C. Wellstood, C. Urbina, and J. Clarke, Appl. Phys. Lett. 85, 5296 (2004). [7] I. Martin, L. Bulaevskii, and A. Shnirman, Phys. Rev. Lett. 95, 127002 (2005). [8] A. Shnirman, G. Schön, I. Martin, and Y. Makhlin, Phys. Rev. Lett. 94, 127002 (2005). [9] R. H. Koch, D. P. DiVincenzo, and J. Clarke, Phys. Rev. Lett. 98, 267003 (2007). [10] H. Paik et al., Phys. Rev. B 77, 214510 (2008). [11] J. M. Martinis et al., Phys. Rev. Lett. 95, 210503 (2005). [12] R. W. Simmonds et al., Phys. Rev. Lett. 93, 077003 (2004). [13] M. Neeley et al., Nat. Phys. 4, 523 (2008). [14] A. M. Zagoskin, S. Ashhab, J. R. Johansson, and F. Nori, Phys. Rev. Lett. 97, 077001 (2006). [15] S. Ashhab, J. R. Johansson, and F. Nori, New J. Phys. 8, 103 (2006). [16] B. S. Palmer et al., Phys. Rev. B 76, 054501 (2007). [17] J. A. Schreier et al., Phys. Rev. B 77, 180502(R) (2008). [18] Z. Kim et al., arXiv:0810.2455. [19] S. Pottorf, V. Patel, and J. E. Lukens, arXiv:0809.3272. [20] Y. Yu et al., arXiv:0807.0766. [21] A. Blais et al., Phys. Rev. A 69, 062320 (2004). [22] S. O. Valenzuela et al., Science 314, 1589 (2006). [23] I. Chiorescu et al., Nature (London) 431, 159 (2004). [24] A. Lupascu et al., Nature Phys. 3, 119 (2007). [25] R. H. Koch et al., Phys. Rev. Lett. 96, 127001 (2006). [26] A. Wallraff et al., Nature (London) 431, 162 (2004). [27] M. A. Sillanpaa, J. I. Park, and R. W. Simmonds, Nature 449, 438 (2007). [28] K. D. Osborn, J. A. Strong, A. J. Sirois, and R. W. Simmonds, IEEE Trans. Appl. Super. 17, 166 (2007). [29] J. Majer et al., Nature 449, 443 (2007). [30] A. A. Houck et al., Nature 449, 328 (2007). [31] R. Migliore and A. Messina, Phys. Rev. B 72, 214508 (2005). [32] L. Tian and R. W. Simmonds, Phys. Rev. Lett. 99, 137002 (2007). [33] C. J. Hood et al., Science 287, 1447 (2000). [34] T. P. Orlando and K. A. Delin, Introduction to Applied Superconductivity (Addison Wesley, 1991). [35] D. F. Walls and G. J. Milburn, Quantum Optics (Springer, Berlin, 1994). [36] L. Tian and K. Jacobs, submitted for publication (2008). [37] A. Blais et al., Phys. Rev. A 75, 032329 (2007). [38] J. Q. You and F. Nori, Phys. Rev. B 68, 064509 (2003). [39] O. Gywat, F. Meier, D. Loss, and D. D. Awschalom, Phys. Rev. B 73, 125336 (2006). [40] S. Lloyd, Phys. Rev. Lett. 75, 346 (1995).
$\overline{D3}$’s – Singular to the Bitter End Iosif Bena, Mariana Graña, Stanislav Kuperstein and Stefano Massai Institut de Physique Théorique, CEA Saclay, CNRS URA 2306, F-91191 Gif-sur-Yvette, France iosif.bena, mariana.grana, stanislav.kuperstein, stefano.massai@cea.fr Abstract We study the full backreaction of anti-D3 branes smeared over the tip of the deformed conifold. Requiring the 5-form flux and the warp factor at the tip to be those of anti-D3 branes, we find a simple power-counting argument showing that if the three-form fluxes have no IR singularity, they are necessarily imaginary-anti-self-dual. Hence the only solution with anti-D3 branes at the tip of the conifold that is regular in the IR and the UV is the anti-Klebanov-Strassler solution, and there is no regular solution whose D3-charge is negative in the IR and positive in the UV. Introduction. A nonzero positive cosmological constant appears to be the most plausible cause for the observed accelerated expansion of our universe; thus, in order to be a candidate for a theory of everything, string theory must contain low-energy de Sitter (dS) solutions. On the other hand, the generic low-energy compactifications of string theory on six-dimensional manifolds with flux produce very large numbers of Anti-de Sitter (AdS) vacua, but do not produce classical dS solutions with a cosmological constant smaller than the compactification scale. To obtain phenomenologically-relevant dS solutions one needs to uplift the negative cosmological constant of AdS to a positive one, without disturbing the delicate balance needed to keep the compact dimensions stable, and the only known mechanism for doing this is to place objects with D-brane charge opposite to that of the background (like anti-D3 branes Kachru:2003aw ) in regions of high redshift (or high warp factor) of the latter. 
This ensures that the contribution of the anti-branes to the cosmological constant can be parametrically small, and implies that the many AdS low-energy flux compactifications can be uplifted to dS vacua, so that string theory has a landscape of low-energy dS vacua. The best-studied model for a highly-warped region of a flux compactification is the so-called Klebanov-Strassler warped deformed conifold (KS) solution Klebanov:2000hb , and anti-D3 branes placed in this solution have been argued to be metastable Kachru:2002gs and are the key ingredient in the KKLT mechanism for uplifting AdS vacua and producing a de Sitter landscape Kachru:2003aw . The suitability of anti-D3 branes in KS throats for describing metastable vacua and for uplifting AdS to dS vacua has recently been called into question by the perturbative investigation of the backreaction of these anti-branes Bena:2009xk ; Bena:2011wh , which found that near the anti-branes the solution develops an unphysical-looking singularity; hence anti-D3 branes in KS may not give an asymptotically-decaying small deformation of this solution. The stakes raised by this investigation are very high. If the singularity is not an artifact of perturbation theory (as suggested by Dymarsky:2011pm ), and if moreover it cannot be resolved in string theory, this implies that all solutions with anti-D3 branes in backgrounds with D3 charge dissolved in fluxes are unphysical. This would invalidate the KKLT mechanism for uplifting AdS vacua to dS ones, and imply that string theory does not have a landscape of vacua with a small positive cosmological constant. The purpose of this letter is to demonstrate that there exists no fully-backreacted singularity-free solution describing smeared anti-D3 branes in a warped deformed conifold (KS) background with positive D3 charge dissolved in fluxes. 
Furthermore, the only fully-backreacted regular solution with anti-D3 branes in the infrared has anti-D3 charge dissolved in fluxes, and hence it is just the supersymmetric KS solution with a different charge orientation (which we will refer to as anti-KS). This was first conjectured in Bena:2009xk (based on an analogy with the brane-bending calculation of Bena:2006rg ) and our results confirm this conjecture. The setup is shown in Figure 1. To make such a statement one may naively try to construct the fully-backreacted anti-D3 solution by solving, analytically or numerically, the underlying 8 nonlinear coupled second-order differential equations Papadopoulos:2000gj , but this is not necessary. We believe there exist at least three ways to demonstrate that imposing regularity near the anti-D3 branes cannot give a solution with positive D3 charge at infinity, and in this note we present three such proofs: 1. We solve the equations by brute force in a Taylor expansion around the infrared. Setting to zero all the coefficients that give singular metric and 3-form fluxes, we find that the full solution up to order $\tau^{10}$ (where $\tau$ is the radial coordinate away from the KS tip) has three independent parameters, all of which, as we will show below, are singular in the ultraviolet. The only regular solution is hence the BPS anti-KS solution with anti-D3 branes. 2. We explore the boundary conditions for the fields and their derivatives in the infrared, and show that if one imposes singularity-free boundary conditions the right-hand sides of some of the equations are zero at all orders in perturbation theory. The remaining equations only have UV-singular solutions, and the only possible regular solution is the supersymmetric one. 3. A more elegant way to prove that there is no regular solution whose D3-brane charge changes sign from IR to UV is to find a topological argument similar to that of Blaback:2011nz . We present an argument along these lines. 
This argument may be generalizable to the case of localized anti-D3 branes. The setup. As argued in Bena:2009xk , the Ansatz for the solution describing smeared D3 and anti-D3 branes in the KS solution is Papadopoulos:2000gj : $$ds_{10}^{2}=e^{2A+2p-x}\,ds_{1,3}^{2}+e^{-6p-x}\left(d\tau^{2}+g_{5}^{2}\right)+e^{x+y}\left(g_{1}^{2}+g_{2}^{2}\right)+e^{x-y}\left(g_{3}^{2}+g_{4}^{2}\right)$$ (1) $$H_{3}=\frac{1}{2}\left(k-f\right)g_{5}\wedge\left(g_{1}\wedge g_{3}+g_{2}\wedge g_{4}\right)+d\tau\wedge\left(\dot{f}\,g_{1}\wedge g_{2}+\dot{k}\,g_{3}\wedge g_{4}\right)$$ $$F_{3}=F\,g_{1}\wedge g_{2}\wedge g_{5}+\left(2P-F\right)g_{3}\wedge g_{4}\wedge g_{5}+\dot{F}\,d\tau\wedge\left(g_{1}\wedge g_{3}+g_{2}\wedge g_{4}\right)$$ (2) $$F_{5}={\cal F}_{5}+*{\cal F}_{5}\,,\quad{\cal F}_{5}=K\,g_{1}\wedge g_{2}\wedge g_{3}\wedge g_{4}\wedge g_{5}\,,$$ with $$K=-\frac{\pi}{4}Q+(2P-F)f+kF\,,$$ (3) where all the functions depend only on the radial variable $\tau$ and the angular forms $g_{i}$ are defined in Klebanov:2000hb . The constant $P$ is proportional to the $5$-brane flux of the KS solution and $Q$ is the number of (anti-)D3 branes. In order to handle the second-order equations of motion for the scalars of the PT Ansatz, we found it crucial to define particular combinations of fields, inspired by the GKP Giddings:2001yu notations. 
The warp factor $e^{4A+4p-2x}$ and the five-form flux $K\,\text{vol}_{5}$ are combined into scalar modes $\xi^{\pm}_{1}$, defined as $$\xi_{1}^{\pm}=-e^{4(p+A)}\left(\dot{x}-2\dot{p}-2\dot{A}\mp\frac{1}{2}e^{-2x}K\right)\,.$$ (4) These modes have a clear physical interpretation: they parametrize the force on probe D3 and anti-D3 branes in a given solution: $$F_{D3}=-2e^{-2x}\xi_{1}^{+}\ ,\quad F_{\overline{D3}}=-2e^{-2x}\xi_{1}^{-}\,.$$ (5) We also introduce ISD and IASD three-form fluxes: $$G_{\pm}=(\star_{6}\pm i)G_{3}\,,$$ (6) with $\star_{6}$ the six-dimensional Hodge star and $G_{3}=F_{3}+ie^{-\phi}H_{3}$. The scalar components of $G_{\pm}$ will be called $\xi_{f}^{\pm}$, $\xi_{k}^{\pm}$ and $\xi_{F}^{\pm}$. This notation follows from the fact that these modes are the conjugate momenta to the fields $f$, $k$ and $F$ in (2), in the reduced one-dimensional system that describes the dynamics of the 8 scalar functions (seven in (1) and (2) plus the dilaton $\phi$). Supersymmetry imposes either $G_{-}=F_{D3}=0$ or $G_{+}=F_{\overline{D3}}=0$, depending on which supersymmetries are preserved. We will refer to the solutions with ISD and IASD fluxes as KS and anti-KS respectively. With this notation the KS solution has $\xi_{a}^{+}=0$, $a=1,f,k,F$, while for the anti-KS solution $\xi^{-}_{a}=0$. A crucial fact is that the equations of motion for the scalars $\xi^{\pm}_{a}$ are just first-order ODEs. 
For the $\xi_{a}^{-}$ modes we find: $$\dot{\xi}^{-}_{1}+Ke^{-2x}\xi^{-}_{1}=4e^{2x-4(p+A)}\left[e^{\phi+2y}(\xi^{-}_{f})^{2}+e^{\phi-2y}(\xi^{-}_{k})^{2}+\frac{1}{2}e^{-\phi}(\xi^{-}_{F})^{2}\right]$$ (7) and $$\dot{\xi}^{-}_{f}=\frac{1}{2}e^{-2x}(2P-F)\xi^{-}_{1}+\frac{1}{2}e^{-\phi}\xi^{-}_{F}\,,\quad\dot{\xi}^{-}_{k}=\frac{1}{2}e^{-2x}F\xi^{-}_{1}-\frac{1}{2}e^{-\phi}\xi^{-}_{F}\,,\quad\dot{\xi}^{-}_{F}=\frac{1}{2}e^{-2x}(k-f)\xi^{-}_{1}+e^{\phi}\left(e^{2y}\xi^{-}_{f}-e^{-2y}\xi^{-}_{k}\right)\,.$$ (8) Remarkably, these are the only equations that we will need in this letter. One can define additional scalars $\xi^{\pm}_{b}$ which are the conjugate momenta for the four additional modes $x,y,p,\phi$, in such a way that the BPS KS solution with $Q$ mobile D3-branes has all the $\xi^{+}$ modes equal to zero. The 8 integration constants of the BPS system $\xi^{+}=0$ are fixed as follows: 1. The zero-energy condition of the effective Lagrangian fixes the $\tau$-redefinition gauge freedom and is automatically solved when $\xi_{a}=0$, but the constant shift $\tau\to\tau+\tau_{0}$ still remains unfixed, and so $\tau_{0}$ appears as a “trivial” integration constant. 2. The conifold deformation parameter $\epsilon$ and the constant dilaton $e^{\phi_{0}}$ give two other free parameters. 3. An additional parameter renders the conifold metric singular in the IR Candelas:1989js and has to be discarded. 4. The three equations for the flux functions $f$, $k$ and $F$ appear to have three free parameters Kuperstein:2003yt . One gives singular BPS fluxes in the IR, the second gives a $(0,3)$ complex 3-form $G_{3}\equiv F_{3}+ie^{-\phi}H_{3}$ that is singular in the UV, and the third corresponds to a $B$-field gauge transformation $(f,k)\to(f+c,k+c)$ that can be absorbed in the redefinition of $Q$. 
5. The warp function $h\equiv e^{-4(p+A)+2x}$ can only be determined up to a constant, which is fixed by requiring that $h$ vanishes at infinity. To summarize, the KS solution with $Q$ mobile D3-branes and the free parameters $\epsilon$ and $e^{\phi_{0}}$ is the only (IR and UV) regular solution with $\xi^{+}_{a}=0$, where by IR-regular we denote a solution whose only singularities are those coming from D-branes. The boundary conditions for anti-D3-branes. The main goal of this letter is to show that there is no IR-regular solution with smeared anti-D3 branes ($Q<0$, hence $K>0$) at the tip of the conifold and with KS asymptotics ($K<0$) in the UV. Starting with a singularity-free anti-brane solution in the IR, one necessarily ends up with an anti-KS solution in the UV. Moreover, we will prove that the only regular solution with $|Q|$ anti-D3 branes is the anti-KS flip of the solution with $Q$ mobile branes we reviewed above. To obtain IR-regular solutions we require that: • The $6d$ conifold metric has the tip structure of the KS solution: the 2-sphere shrinks smoothly at $\tau=0$ and the 3-sphere has finite size. • The warp factor comes from $|Q|$ anti-D3 branes smeared on the 3-sphere, and hence goes like $h\sim|Q|/\tau$. As a result, the Taylor expansions of the functions $x$, $p$, $A$ and $y$ start with the same logarithmic and constant terms as in the KS solution with mobile branes and can differ only by linear (and higher) terms. The constant term in $A$ cannot be fixed by the regularity condition, since it corresponds to the conifold deformation parameter $\epsilon$. • There is no singularity in the three-form fluxes; their energy densities, $H_{3}^{2}$ and $F_{3}^{2}$, do not diverge at $\tau=0$. Hence, the Taylor expansions of the functions $f$, $k$ and $F$ start from $\tau^{3}$, $\tau$ and $\tau^{2}$ terms respectively, exactly like in the KS background. 
To be more precise, in a solution with branes at the tip the functions $f$, $k$ and $F$ can also start with non-integer powers ($\tau^{9/4}$, $\tau^{1/4}$ and $\tau^{5/4}$), but one can show that the logarithmic terms in the metric imply that the IR expansion of the solution only has integer powers of $\tau$. • The dilaton is finite at $\tau=0$. It is important to stress that we do not impose any kind of anti-KS IR boundary conditions for the 3-form fluxes, and a priori the 3-form can be either ISD or IASD (or have both components). On the other hand, we do require the singularities in the warp factor and the five-form flux to correspond to objects that exist in string theory. These observations help determine the possible leading-order behaviors of the $\xi^{+}_{a}$’s and $\xi^{-}_{a}$’s (for our argument we mostly need the latter). Let us denote by $n_{a}$ the lowest possible leading orders of the fields $\xi^{-}_{a}$. For small $\tau$, the metric regularity conditions imply that the functions $e^{2x}$ and $e^{4(p+A)}$ go like $\tau$ and $\tau^{2}$ respectively. From the explicit definitions of the $\xi_{a}^{-}$ modes we find: $$\left(n_{1},n_{f},n_{k},n_{F}\right)=\left(2,1,3,2\right)\,.$$ (9) The IR obstruction. Our goal is to show that when solving the equations of motion (7) and (8) for $\xi^{-}_{1}$, $\xi^{-}_{f}$, $\xi^{-}_{k}$ and $\xi^{-}_{F}$ in the IR (small $\tau$) and imposing the IR regularity conditions, one finds only trivial solutions for these functions. This essentially means that the IASD conditions $\xi^{-}_{f}=\xi^{-}_{k}=\xi^{-}_{F}=0$ will be satisfied all the way to the UV and not only at $\tau=0$. To prove this, a simple counting argument is sufficient, as we now show. Let us assume that $\xi^{-}_{F}$ and $\xi^{-}_{1}$ start from $\tau^{n}$ and $\tau^{n+l}$ for some $n\geqslant 2$. We treat separately the two possibilities: 1. $l>-1$. 
Recalling that $e^{y}\approx\frac{\tau}{2}+\ldots$, we can see from a simple power analysis that the $\xi^{-}_{1}$ term is subleading both in the $\xi^{-}_{k}$ and $\xi^{-}_{F}$ equations in (8). In the latter equation the $\xi^{-}_{f}$ term is also subleading. We arrive at the set of two simple equations near $\tau=0$: $\dot{\xi}^{-}_{k}\approx-\frac{1}{2}e^{-\phi_{0}}\xi^{-}_{F}$ and $e^{-\phi_{0}}\dot{\xi}^{-}_{F}\approx-4\tau^{-2}\xi^{-}_{k}$. They have only two solutions, $\xi^{-}_{F}\sim\tau^{-2}$ and $\xi^{-}_{F}\sim\tau$, and both fall short of the regularity conditions (9). Remarkably, in showing that the system has no regular solution we have not used (7). 2. $l\leqslant-1$. Now the right-hand side of (7) is certainly negligible with respect to the left-hand side. This means that we have $\dot{\xi}^{-}_{1}+\tau^{-1}\,\xi^{-}_{1}\approx 0$ for small $\tau$, leading to the singular solution $\xi^{-}_{1}\sim\tau^{-1}$. For non-integer powers, the argument above can be straightforwardly extended, and the two regimes of parameters corresponding to those above are $l>-1/4$ and $l\leqslant-1/4$. To conclude, we see that regularity in the IR implies that the functions $\xi^{-}_{1}$, $\xi^{-}_{f}$, $\xi^{-}_{k}$ and $\xi^{-}_{F}$ vanish identically. Consequently, the solution will remain IASD for any value of $\tau$, and the force on probe anti-D3 branes will remain identically zero. A second way to see that a solution with negative D3 charge in the infrared remains anti-KS all the way to the UV is to solve directly the second-order equations for the scalars of the PT Ansatz in a power expansion around the origin. Upon eliminating all the singular modes, we find that to order $\tau^{10}$ the space of solutions is parameterized by three constants. None of these constants breaks the IASD condition, which confirms the results of the previous section. 
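The indicial analysis of case 1 can be verified mechanically: substituting the power-law ansatz $\xi^{-}_{F}=\tau^{s}$ into the two near-tip equations $\dot{\xi}^{-}_{k}\approx-\frac{1}{2}e^{-\phi_{0}}\xi^{-}_{F}$ and $e^{-\phi_{0}}\dot{\xi}^{-}_{F}\approx-4\tau^{-2}\xi^{-}_{k}$ gives the indicial equation $s(s+1)=2$, whose roots $s=1$ and $s=-2$ both lie below the regular leading order $n_{F}=2$. A minimal check (the sample values of $\tau$ and $\phi_{0}$ are arbitrary):

```python
import math

def residual(s, tau=0.37, phi0=0.2):
    """Mismatch of the second near-tip equation for the ansatz xi_F = tau**s,
    with xi_k = -(1/2) e^{-phi0} tau^{s+1}/(s+1) fixed by the first equation.
    Vanishes iff tau**s solves the system."""
    e = math.exp(-phi0)
    xi_k = -0.5 * e * tau ** (s + 1) / (s + 1)
    lhs = e * s * tau ** (s - 1)       # e^{-phi0} d(xi_F)/d(tau)
    rhs = -4.0 * tau ** -2 * xi_k      # -4 tau^{-2} xi_k
    return lhs - rhs

# The ansatz closes only for the roots of s(s+1) = 2, namely s = 1 and s = -2:
print(abs(residual(1.0)) < 1e-9, abs(residual(-2.0)) < 1e-9, abs(residual(2.0)) > 1e-3)  # → True True True
```

Since neither allowed power reaches $n_{F}=2$, the only regular option is the trivial one, exactly as stated above.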
One can also use the fact that $\xi^{-}_{1}$, $\xi^{-}_{f}$, $\xi^{-}_{k}$, $\xi^{-}_{F}$ are necessarily zero to identify the three IR modes we find, and to show that they actually correspond to UV-singular solutions: 1. Plugging $\xi^{-}_{1,f,k,F}=0$ into the remaining equations of motion, it is easy to show that there exists an IR-regular but UV-divergent one-parameter family of solutions. 2. A second mode is the $(3,0)$-form solution of the superpotential equations $\xi^{-}_{a}=0$, which breaks supersymmetry and diverges in the UV (see Kuperstein:2003yt ). 3. A third “superpotential mode” is related to the shift of the warp function and, following our previous discussion, has to be excluded. Summarizing, we see that the only solution with smeared anti-D3 branes at the KS tip that is regular both in the UV and in the IR is the anti-KS solution. Stated differently, the only way to obtain a sensible supersymmetry-breaking solution corresponding to the backreaction of smeared anti-D3’s is to allow for IR singularities in the energy densities of the 3-form fluxes. The global obstruction. We can also present a “global” argument for why the functions $\xi^{-}_{1}$, $\xi^{-}_{f}$, $\xi^{-}_{k}$ and $\xi^{-}_{F}$ have to vanish in a regular solution, without focusing on their Taylor expansions. The proof for the remaining four functions proceeds precisely as above. Our key observation is that the flux functions $f(\tau)$, $k(\tau)$ and $F(\tau)$ appear only in equations (7) and (8). None of the remaining $\dot{\xi}^{-}_{a}$ equations has any flux function in it. 
Next, the equations in (8) can be derived from the following reduced Lagrangian: $$\mathcal{L}_{\rm fluxes}=4e^{2x-4(p+A)+\phi}\Big{[}e^{2y}(\xi^{-}_{f})^{2}+e^{-2y}(\xi^{-}_{k})^{2}+\frac{1}{2}e^{-2\phi}(\xi^{-}_{F})^{2}\Big{]}+e^{-4(p+A)}(\xi^{-}_{1})^{2}\,.$$ (10) We treat $\mathcal{L}_{\rm fluxes}$ as the effective Lagrangian only for the fields $f(\tau)$, $k(\tau)$ and $F(\tau)$, with the remaining five fields being free but subject to the proper boundary conditions ensuring IR regularity. This means that the first three terms in (10) are kinetic terms, while the last one is a potential term. Recall also that the $\xi^{-}$’s are first order in the derivatives of the fields, and so the Lagrangian is of second order, precisely as it should be. This Lagrangian has a remarkable property: it is strictly non-negative and vanishes only for $\xi^{-}_{1}=\xi^{-}_{f}=\xi^{-}_{k}=\xi^{-}_{F}=0$. In other words, the global minimum of (10) corresponds to the IASD solution. The only way to arrive at a different solution, which describes only a local minimum of the action, is to impose boundary conditions (either in the IR or in the UV) that are at odds with the IASD solution $\xi^{-}_{f,k,F,1}=0$. The regularity requirement, however, constrains all three flux functions and their conjugate momenta $\xi^{-}_{f,k,F}$ in the IR. Indeed, we saw that both $(f,k,F)$ and $(\xi^{-}_{f},\xi^{-}_{k},\xi^{-}_{F})$ have to vanish at $\tau=0$ for a regular solution. Similarly $\xi^{-}_{1}=0$ in the IR; thus the IR boundary conditions following solely from regularity are consistent with the “trivial” IASD solution. We conclude again that requiring regularity forces upon us the anti-KS solution. Note, however, that the equations derived from (10) are singular in the IR, and so our arguments should be taken very cautiously. 
At the same time, this approach may prove efficient for localized anti-D3 branes, where one cannot use the Taylor expansion argument. Conclusions. We presented a detailed analysis of the nonlinear backreaction of smeared anti-D3 branes on the KS geometry. In the near-brane (IR) region we impose boundary conditions coming from the presence of a smeared source: a singular warp factor and commensurate five-form flux, while in the UV region we require the absence of highly divergent modes. We showed that with these assumptions there is a unique solution of the equations of motion, namely the supersymmetric anti-KS solution. We have thus proven that any supersymmetry-breaking solution associated to the backreaction of smeared anti-D3 branes on the KS geometry has singularities in the IR region not directly associated with the anti-D3 branes themselves. Moreover, these singularities appear in the three-form fluxes, confirming the linearized analysis of Bena:2009xk . A singularity in a supergravity solution does not necessarily mean that this solution should be automatically discarded. However, physically acceptable singularities are those which are resolved in the full string theory (for example, the singularity in the supergravity solution for a brane is resolved by the open strings on the brane; other singularities are resolved by brane polarization Myers:1999ps ; Polchinski:2000uf or geometric transitions). Here it is not at all clear that there is any mechanism capable of resolving the present singularities. If there is no such mechanism, then the singularity in the supergravity background is telling us that there is no (meta)stable anti-D3 brane solution, and the whole system of branes with charge opposite to that of the background is unstable. This would invalidate the anti-brane AdS-to-dS uplifting mechanism, and with it most of the string theory de Sitter landscape. Acknowledgements: We would like to thank Thomas van Riet for stimulating discussions. 
This work was supported in part by the ANR grant 08-JCJC-0001-0 and the ERC Starting Grants 240210 - String-QCD-BH and 259133 - ObservableString. References (1) S. Kachru et al., Phys. Rev. D 68 (2003) 046005. (2) I. R. Klebanov and M. J. Strassler, JHEP 0008 (2000) 052. (3) S. Kachru, J. Pearson and H. L. Verlinde, JHEP 0206 (2002) 021. (4) I. Bena, M. Grana and N. Halmagyi, JHEP 1009 (2010) 087. (5) I. Bena et al., arXiv:1106.6165 [hep-th]. (6) A. Dymarsky, JHEP 1105 (2011) 053. (7) I. Bena et al., JHEP 0611 (2006) 088. (8) G. Papadopoulos and A. A. Tseytlin, Class. Quant. Grav. 18 (2001) 1333. (9) J. Blåbäck et al., JHEP 1108 (2011) 105. (10) S. B. Giddings, S. Kachru and J. Polchinski, Phys. Rev. D 66 (2002) 106006. (11) P. Candelas and X. C. de la Ossa, Nucl. Phys. B 342 (1990) 246. (12) S. Kuperstein and J. Sonnenschein, JHEP 0402 (2004) 015. (13) V. Borokhov and S. S. Gubser, JHEP 05 (2003) 034. (14) R. C. Myers, JHEP 9912 (1999) 022. (15) J. Polchinski and M. J. Strassler, hep-th/0003136.
Observations of Protostellar Outflow Feedback in Clustered Star Formation Fumitaka Nakamura National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, 181-8588, Japan Abstract We discuss the role of protostellar outflow feedback in clustered star formation using the observational data of recent molecular outflow surveys toward nearby cluster-forming clumps. We found that for almost all clumps, the outflow momentum injection rate is significantly larger than the turbulence dissipation rate. Therefore, the outflow feedback is likely to maintain supersonic turbulence in the clumps. For less massive clumps such as B59, L1551, and L1641N, the outflow kinetic energy is comparable to the clump gravitational energy. In such clumps, the outflow feedback probably affects the clump dynamics significantly. On the other hand, for clumps with masses larger than about 200 M${}_{\odot}$, the outflow kinetic energy is significantly smaller than the clump gravitational energy. Since the majority of stars form in such clumps, we conclude that outflow feedback cannot destroy the whole parent clump. These characteristics of the outflow feedback support the scenario of slow star formation. 1 Introduction Most stars form in clustered environments ([Lada 2010]). Radio and infrared observations have revealed that star clusters form in parsec-scale dense molecular clumps with masses of the order of $10^{2}-10^{3}$ M${}_{\odot}$. In such clumps, stellar feedback from forming stars shapes the internal structure and affects the formation of next-generation stars. Therefore, understanding the role of stellar feedback in the process of cluster formation is a central problem in star formation studies. There are several types of stellar feedback, such as outflows, winds, and radiation ([Bally 2010], [Krumholz et al. 2014]). 
Among them, some theoretical studies suggest that radiation pressure from high-mass stars is the dominant feedback mechanism once high-mass stars are present. However, before high-mass stars form, the outflow feedback from low-mass stars may play the dominant role in regulating star formation. In other words, outflow feedback can control the formation of high-mass stars by injecting momentum and energy into the surrounding gas. In addition, there are many cluster-forming regions where no high-mass stars form. In such regions, outflow feedback should play a dominant role in regulating star formation. In this contribution, we focus on the protostellar outflow feedback and discuss its role in the clump dynamics on the basis of observations. Protostellar outflows have been shown theoretically to be capable of maintaining supersonic turbulence in cluster-forming clumps and of keeping the star formation rate per free-fall time as low as a few percent ([Nakamura & Li 2007]). However, their exact role in clustered star formation remains controversial. Two main scenarios have been proposed for the role of outflow feedback in clustered star formation. In the first scenario, the outflow feedback is envisioned to destroy the cluster-forming clump as a whole, which terminates further star formation. In this case, star formation should be rapid and brief (e.g., [Mac Low & Klessen 2004]). In the second scenario, the outflow feedback is envisioned to maintain the internal turbulent motions. In this case, star formation should be slow and can last for several free-fall times or longer (e.g., [Tan et al. 2006]). Below we constrain these theoretical models using the observational results on protostellar outflow feedback. 2 Data: Protostellar Outflows The outflowing gas from protostars accelerates entrained gas to large velocities.
Such components are observed as molecular outflows, which play an important role in injecting momentum into the surrounding gas. The CO lines such as the $J=1-0$, $J=2-1$, and $J=3-2$ transitions are excellent tracers for measuring the parameters of molecular outflows. In Table 1, we summarize some properties of nearby cluster-forming clumps toward which CO outflow surveys were carried out. In the next section, we discuss how CO outflows affect the parent clumps using the observational results presented in the references listed in Table 1. Outflow feedback has two different roles. One is a negative effect: slowing down or terminating star formation by injecting momentum and energy; in this case, feedback decreases the mean densities of the affected regions. The other is a positive effect: triggering future star formation by dynamically compressing the surroundings; in this case, compression increases the local densities. For outflow feedback, the negative effect is likely to be more important than the positive one, because the outflow feedback originates inside the star-forming regions, dispersing gas outwards. Therefore, in the following we focus on the negative effect of the outflow feedback. In the next section, we address the following two important questions of star formation using the observational data. (1) Can protostellar outflow feedback maintain supersonic turbulence in clustered environments? (2) Can protostellar outflow feedback directly destroy the parent clumps? The two models of star formation give different answers to these questions. In Table 2, we summarize the answers predicted by the two star formation models. 3 Results First, using the observational data of the outflow surveys listed in Table 1, we calculated the physical parameters of the outflow feedback listed in Table 3. Here, we assumed that the outflow gas is optically thin and that the inclination angles of all the identified outflows are fixed to the mean value of 57.3 degrees.
However, low-velocity wings sometimes become optically thick, so the optically-thin assumption can lead to an underestimation of the outflow mass by a factor of 10 or more. We note, however, that the momentum and energy may be underestimated only by a factor of a few, because the optically-thick gas has low velocity and therefore carries less momentum and energy than the high-velocity components. 3.1 Can outflow feedback maintain supersonic turbulence? To answer this question, we compare the turbulence dissipation rate ($dP_{\rm turb}/dt$) and the outflow momentum injection rate ($dP_{\rm out}/dt$) for the target clumps. Previous studies used the energy for this analysis; however, because outflow feedback is momentum-driven ([Krumholz et al. 2014]), we use the momentum instead of the energy. The dissipation rate of turbulence is defined as the total cloud momentum divided by the turbulence dissipation time ($dP_{\rm turb}/dt=0.21M_{\rm cl}\sigma_{\rm 3D}/t_{\rm diss}$). The numerical factor of 0.21 is determined from comparison with turbulence simulations. The outflow momentum injection rate is defined as the outflow momentum divided by the outflow dynamical time. In Figure 1, we show the ratio between the momentum injection rate and the dissipation rate as a function of clump mass. The momentum injection rates are larger than the dissipation rates for all clumps except IC 348. Thus, we conclude that the outflow feedback has enough momentum to maintain supersonic turbulence in these clumps. 3.2 Can outflow feedback directly destroy the parent clump? To answer this question, we adopt the virial theorem, $d^{2}I/dt^{2}=2E_{\rm cl}-E_{\rm grav}$. First, we list the virial ratios ($\alpha=2E_{\rm cl}/E_{\rm grav}$) of the clumps in Table 3. Almost all clumps are close to virial equilibrium within a factor of a few. In Figure 2, we show the ratios of $2E_{\rm out}$ to $E_{\rm grav}$ as a function of clump mass.
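The two rates defined above can be compared with a short sketch. The clump values below are placeholders chosen for illustration, not data from Table 1:

```python
# Sketch: compare the outflow momentum injection rate with the turbulence
# dissipation rate, using the definitions in the text. All numbers below
# are placeholder values, not measurements from the surveys in Table 1.

def turbulence_dissipation_rate(m_cl, sigma_3d, t_diss):
    """dP_turb/dt = 0.21 * M_cl * sigma_3D / t_diss (factor from simulations)."""
    return 0.21 * m_cl * sigma_3d / t_diss

def outflow_injection_rate(p_out, t_dyn):
    """dP_out/dt = total outflow momentum / outflow dynamical time."""
    return p_out / t_dyn

# Placeholder clump: 200 Msun, sigma_3D = 2 km/s, t_diss = 1 Myr,
# total outflow momentum 100 Msun km/s deposited over 0.3 Myr.
dp_turb = turbulence_dissipation_rate(200.0, 2.0, 1.0)  # Msun km/s per Myr
dp_out = outflow_injection_rate(100.0, 0.3)             # Msun km/s per Myr

# A ratio > 1 means the outflows can sustain the observed turbulence.
print(dp_out / dp_turb > 1.0)  # True
```

With these placeholder values the injection rate exceeds the dissipation rate by roughly a factor of four, the same qualitative behaviour the surveys show for most clumps.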
Figure 2 indicates that for the less massive clumps (B59, L1551, L1641N; $<200$ M${}_{\odot}$), the outflow energy is comparable to the gravitational energy. The dynamics of such clumps may be significantly influenced by the outflow feedback. On the other hand, for more massive clumps ($>200$ M${}_{\odot}$), the outflow kinetic energy appears to be significantly smaller than the gravitational energy. In other words, the outflow feedback may not directly destroy the whole clump, although a fraction of the gas may be dispersed by outflows. This gentle ejection of gas may eventually lead to clump dispersal. 4 Slow vs. Rapid Star Formation From the results presented in the previous section, we found that the outflow feedback can maintain supersonic turbulence in the nearby cluster-forming clumps. In addition, the outflow kinetic energy is significantly smaller than the clump gravitational energy except for the three least massive clumps, B59, L1551, and L1641N. Therefore, we conclude that the outflow feedback is not sufficient to disperse the whole clump, at least for the clumps with masses greater than 200 M${}_{\odot}$. Since the majority of stars form in such clumps, we conclude that the observations of the outflow feedback support slow star formation. References [Arce et al. 2010] Arce, H. G., Borkin, M. A., Goodman, A. A., Pineda, J. E., & Halle, M. W. 2010, ApJ, 715, 1170 [Bally 2010] Bally, J. 2010, in Computational Star Formation, Proceedings of the International Astronomical Union, IAU Symposium, 270, p. 247 [Duarte-Cabral et al. 2012] Duarte-Cabral, A., Chrysostomou, A., Peretto, N., et al. 2012, A&A, 543, 140 [Krumholz et al. 2014] Krumholz, M. R., Bate, M. R., Arce, H. G., et al. 2014, in Protostars and Planets VI, H. Beuther, R. S. Klessen, C. P. Dullemond, and T. Henning (eds.), University of Arizona Press, Tucson, p. 243 [Lada 2010] Lada, C. J. 2010, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368, p. 713 [Mac Low & Klessen 2004] Mac Low, M.-M., & Klessen, R. 2004, Reviews of Modern Physics, 76, p. 125 [Maury et al. 2009] Maury, A. J., Andre, P., & Li, Z.-Y. 2009, A&A, 499, 175 [Nakamura & Li 2007] Nakamura, F., & Li, Z.-Y. 2007, ApJ, 662, 395 [Nakamura & Li 2014] Nakamura, F., & Li, Z.-Y. 2014, ApJ, 783, 8 [Nakamura et al. 2011a] Nakamura, F., Kamada, Y., Kamazaki, T., et al. 2011a, ApJ, 726, 46 [Nakamura et al. 2011b] Nakamura, F., Sugitani, K., Shimajiri, Y., et al. 2011b, ApJ, 737, 56 [Nakamura et al. 2012] Nakamura, F., Miura, T., Kitamura, Y., et al. 2012, ApJ, 746, 25 [Stojimirovic et al. 2006] Stojimirovic, I., Narayanan, G., Snell, R. L., & Bally, J. 2006, ApJ, 649, 280 [Sugitani et al. 2010] Sugitani, K., Nakamura, F., Tamura, M., et al. 2010, ApJ, 716, 299 [Tan et al. 2006] Tan, J. C., Krumholz, M. R., & McKee, C. F. 2006, ApJ, 641, 121
DyOb-SLAM : Dynamic Object Tracking SLAM System Rushmian Annoy Wadud Master Student, School of Aerospace and Mechanical Engineering University of Oklahoma Norman, OK 73019 rushmian@gmail.com Wei Sun Assistant Professor, School of Aerospace and Mechanical Engineering University of Oklahoma Norman, OK 73019 wsun@ou.edu Abstract Simultaneous Localization & Mapping (SLAM) is the process of building a mutual relationship between the localization and mapping of a subject in its surrounding environment. With the help of different sensors, various types of SLAM systems have been developed to deal with this problem. A limitation of existing SLAM systems is the lack of consideration of dynamic objects in the mapping of the environment. We propose the Dynamic Object Tracking SLAM (DyOb-SLAM) system, a Visual SLAM system that can localize and map the surrounding dynamic objects in the environment as well as track them in each frame. With the help of a neural network and a dense optical flow algorithm, dynamic and static objects in an environment can be differentiated. DyOb-SLAM creates two separate maps for the static and dynamic contents. For the static features, a sparse map is obtained. For the dynamic contents, a global trajectory map is created as output. As a result, a frame-to-frame, real-time dynamic object tracking system is obtained. With the pose calculations of the dynamic objects and the camera, DyOb-SLAM can estimate the speed of the dynamic objects over time. The performance of DyOb-SLAM is evaluated by comparing it with a similar Visual SLAM system, VDO-SLAM, and is measured by calculating the camera and object pose errors as well as the object speed error. I INTRODUCTION The Simultaneous Localization and Mapping [1] problem can be considered as maintaining a mutual relationship between the mapping and localization of a robot in an unexplored environment.
Without mapping, the subject cannot be localized, and without the pose estimation of the subject, the map cannot be formed. With the help of the sensors, significant landmarks or key features can be located; these are processed by the device to match and link them with previously observed landmarks, and are stored for mapping purposes. The state and position of a robot can be estimated after updating the landmark features and can be used for mapping as well. Many sensors have been utilized in SLAM, such as laser range sensors, rotary encoders, inertial sensors, GPS, and cameras. Depending on the sensors, SLAM can be classified into various types. In this paper, we focus on Visual SLAM based on camera sensors. The main objective of Visual SLAM is to estimate the camera trajectory and reconstruct the surrounding environment as a map. Most Visual SLAM algorithms are based on an assumption called the "Scene Rigidity Assumption", or the static world assumption. This assumption has been adopted in many SLAM approaches, where it is assumed that the environment does not contain dynamic objects and is completely static. While the scene rigidity assumption simplifies computation, since dealing with dynamic objects is computationally expensive, it limits the real-world applications of Visual SLAM. SLAM systems developed in recent years have started to take dynamic objects into consideration. These algorithms function mostly in two ways: 1. The moving objects detected from the sensors are treated as outliers and removed from the estimation process. 2. The moving objects are tracked separately using multi-target tracking approaches after they are detected. DynaSLAM [2] functions in the former way: it detects "prior dynamic objects", i.e. objects which are potentially dynamic, and then segments them using a Mask-RCNN model before removing the segmented portions from the frames.
The map generated by DynaSLAM is based on the static objects in the surrounding environment. VDO-SLAM [3], on the other hand, is a system which functions in the latter way, i.e. it tracks the dynamic objects and estimates the object poses (both static and dynamic). The Dynamic Object Tracking SLAM (DyOb-SLAM) we propose in this paper is a combination of DynaSLAM and VDO-SLAM. Figure 1 shows the output of the DyOb-SLAM system. Its main features are the following: • Our system uses a Mask-RCNN module to segment out the dynamic objects based on priors; for a better segmentation result, the multi-view geometric segmentation algorithm from DynaSLAM has been added. • To obtain robust dynamic object tracking, optical flow and scene flow algorithms are implemented. • The back-end of our system consists of a Bundle Adjustment module for the static points, a Partial Batch Optimization module for creating local maps, and a Full Batch Optimization module for the final result, a global map. • With the help of the Bundle Adjustment algorithm, a sparse map can be obtained from the static feature points. The following outputs can be obtained: 1. A current frame showing the ORB features along with mask information and object labels. 2. A sparse map based on the static features. 3. A global map showing the dynamic contents and their motion updated over time. The rest of the paper is organized as follows. The literature review is discussed in Section II. The methodology of DyOb-SLAM is discussed in Section III. The experimental data and results are presented in Section IV. Section V provides a summary of the results. II Literature Review In SLAM methods, the most common assumption made is that the observed scene in the environment is static, excluding the dynamic contents from the calculation. In earlier SLAM system designs [4] [5] [6] [7], this assumption was established.
In [5] and [6], dynamic object points are considered outliers and sparse maps of only static features are obtained. In a system that assumes no moving objects, any motion of an object is treated as an outlier and thus excluded from the tracking and mapping [8]. This results in tracking, and subsequently mapping, failures in many realistic scenarios. For this reason, dynamic object detection is now widely utilized to deal with this issue. Some SLAM systems use object detection algorithms to form a semantic map of the environment [9] [10] [11], but the dynamic object features are not taken into the calculation and are excluded from the tracking process. Similarly, in Detect-SLAM [12], ORB-SLAM2 [6] and the Single Shot Detector (SSD) [13] co-exist in the system to detect the dynamic objects for occlusion in the local map, and with the help of SSD, an instance-level semantic map is formed based on the static objects. ORB-SLAM2 creates a sparse reconstruction of the environment from the extracted ORB (Oriented FAST and Rotated BRIEF) [14] features, and SSD produces discretized bounding boxes and generates scores for the presence of each object category in each default box. With the help of the single-stage convolutional network YOLOv3 [15], the dynamic objects that would be occluded in the tracking stage are detected in [8], creating an instance-level segmentation using a 3D geometric segmentation method. In [16], a purely geometric map of the static scene is constructed by excluding the features detected from the motion disturbances in the static scene. DynaSLAM [2] is a system built on ORB-SLAM [5] and ORB-SLAM2 [6] that can be used with monocular, stereo and RGB-D image sets. The system comprises a neural network, Mask-RCNN [17], to segment out the dynamic objects along with a multi-view geometry algorithm, a tracking component to track the static objects, and a mapping component to map the static feature points.
This system has a dynamic object occlusion algorithm, along with a background inpainting algorithm to fill in the occluded spaces in the frames with previously observed background. On the other hand, systems like DOT, or Dynamic Object Tracking [18], combine instance segmentation and multi-view geometry to generate masks for dynamic objects, working like a tracker. The Visual Dynamic Object-aware SLAM (VDO-SLAM) system [3] is the first dynamic SLAM system to perform motion segmentation. The system's novelty is that it can track multiple dynamic objects using the semantic information, estimate the poses of both static and dynamic structures, and extract object velocities. The tracking of multiple dynamic objects is done using a dense optical flow algorithm which propagates a unique object ID assigned to the points in the segmented regions. Although DynaSLAM is a better-performing SLAM system, its occlusion of dynamic features and background inpainting are not very reliable for real-world applications. On the other hand, VDO-SLAM fails to produce real-time output due to the pre-processing of the frames. Both systems use deep learning to produce semantic information, which is computationally expensive and cannot run efficiently in real time. One solution for improving the computational speed and energy cost of a dynamic-object SLAM system is cloud computation: the heavier computations are done in the cloud to obtain real-time data for building up the map.
In robotics, there are many cloud computation platforms, such as Rapyuta [19], an open-source Platform-as-a-Service (PaaS) framework designed specifically for robotics applications (SLAM); DAvinCi [20], a software framework that provides the scalability and parallelism advantages of cloud computing for service robots in large environments; and ROS-bridge, which bridges communication between a robot and a single ROS (Robot Operating System) [21] environment in the cloud. III Methodology DyOb-SLAM is a system which tracks dynamic objects, maps the static and dynamic objects separately, and simultaneously estimates the camera and object poses by comparing with the ground truth information. The system is comprised of: • an object detector module • a tracking component • two different mapping algorithms • an orientation-optimizing back-end The inputs to the system are stereo or RGB-D images. Figure 2 shows the block diagram of the DyOb-SLAM system. The object detector module first creates instance-level semantic segmentation information for the dynamic objects present in each frame. The dynamic object segmentation is based on priors, i.e. objects which are potentially dynamic or movable in the real world, for example cars, people, etc. Because the a priori dynamic contents are semantically segmented, the static and dynamic objects are separated, which makes it easier to track them separately. With the help of a dense optical flow algorithm, the number of dynamic objects to be tracked is maximized. The dense optical flow information is pre-processed using PWC-Net, which samples all the points of the dynamic contents in the frame from the semantic information. The semantic information along with the optical flow information is then passed on to the tracking module, which tracks the dynamic points extracted from the semantic and optical flow information and produces camera and object pose information.
It also compares the pose information with the provided ground truth and calculates the pose errors, which are obtained as output. Next, the tracking information is projected into two different maps: a sparse point-cloud map for the static contents and a global map for the dynamic contents and the camera, which provides trajectory information across frames. The sparse map is optimized using a Bundle Adjustment algorithm and the global map is optimized using a batch optimization process (both full and partial). The different stages are described in the subsections below. III-A Object Detector III-A1 Mask-RCNN In the system, a Convolutional Neural Network (CNN) is used to segment out the potential dynamic objects from the frame. An instance-level semantic segmentation module, Mask-RCNN, is used, which is an extended version of Faster-RCNN with an added branch for predicting an object mask in parallel with the bounding box feature. It can extract both pixel-wise semantic segmentation and instance labels of objects. In this system, both functions are used: the a priori dynamic objects are segmented out and instance labels are obtained to track the dynamic pixels. The inputs to the Mask-RCNN module are stereo or RGB-D images. The network has been trained in such a way that it can detect different potentially dynamic objects, such as people, cars, trains, trucks, birds, dogs, cats, etc. The network is trained on MS COCO to segment out the selected classes. The main concept is to segment out these classes and obtain an output matrix of size $m\times n\times l$, where $m$ and $n$ are the rows and columns of the input image matrix and $l$ is the number of objects in the frame. For each class of objects, a specific ID value is assigned, which appears in the output matrix wherever a pixel has been masked by the neural network.
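As an illustration of the $m\times n\times l$ output layout described above, the following sketch builds such a mask stack. The class-ID mapping and the toy detection are assumptions for illustration, not the actual Mask-RCNN labels used by DyOb-SLAM:

```python
import numpy as np

# Sketch of the m x n x l mask layout: one slice per detected object,
# with a per-class ID written into the masked pixels and 0 elsewhere.
# The class IDs below are illustrative, not the system's actual labels.
CLASS_IDS = {"car": 1, "person": 2}

def build_mask_stack(shape, detections):
    """detections: list of (class_name, boolean_mask) pairs for one frame."""
    m, n = shape
    stack = np.zeros((m, n, len(detections)), dtype=np.uint8)
    for k, (cls, mask) in enumerate(detections):
        stack[..., k][mask] = CLASS_IDS[cls]  # write the class ID into slice k
    return stack

# Toy frame: one 'car' occupying the top-left quadrant of a 4x4 image.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
out = build_mask_stack((4, 4), [("car", mask)])
print(out.shape)      # (4, 4, 1)
print(out[0, 0, 0])   # 1 (car ID inside the mask)
print(out[3, 3, 0])   # 0 (background)
```

Each slice can then be relabelled against the ground truth exactly as the text describes, since the per-object slices keep the detections separate.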
For each value in the matrix, a specific mask color is assigned to visualize the dynamic object segmentation in the tracking output (see Figure 4). Figure 3 shows the semantic segmentation of a frame. The dataset used to test the system contains cars and other vehicles, which are the only objects segmented out. The ground truth of the dataset contains the object pose and semantic information. Mask-RCNN's output matrix is then processed for the other modules (Tracking and Mapping) in such a way that the detected dynamic objects are relabelled according to the ground truth. As a result, the output becomes closer to the ground truth information. III-A2 Dense Optical Flow PWC-Net, a dense optical flow algorithm, is first used to pre-process the optical flow information of the input images. This pre-processed optical flow information is used as input to the system. The dense optical flow information samples all the points of the dynamic objects within the segmented masks. This helps to maximize the number of tracked points and is later used for tracking multiple objects. Even if the semantic segmentation fails at some point, the dense optical flow information can help recover the object masks by tracking the unique ID of each point in the mask around the object. Since sparse feature matching is not very effective for tracking dynamic contents over long sequences of consecutive frames, optical flow estimation is used for this purpose. III-B Tracking The inputs to the tracking component are the RGB images, the depth information of each frame, the segmentation masks and the optical flow information obtained from the object detector module. Multiple functions take place simultaneously in the tracking component, divided into three modules: • ORB Feature Extraction • Camera Pose Estimation • Object Motion Tracking III-B1 ORB Feature Extraction This module is similar to the process used in ORB-SLAM and ORB-SLAM2.
This module consists of the following aspects: • Localization: it localizes the camera, finding feature matches in every frame, and forms visual odometry tracks of unmapped regions. • Loop Closing: it uses a place recognition algorithm to detect and validate large loops. For place recognition, a Bag of Words module, DBoW2 [22], is used. Figure 5 shows the output of ORB feature extraction. It detects the corner features in the frames, i.e. ORB features, and extracts them to form a sparse map as output. The ORB features are extracted from the static part of the image frames, excluding the segmented mask portions of dynamic objects. The ORB features are used for tracking, place recognition (Loop Closing) and local mapping, and are very robust to rotation and scale [14] [6]. ORB, short for Oriented FAST and Rotated BRIEF, is a combination of the FAST (Features from Accelerated Segment Test) keypoint detection method and the BRIEF (Binary Robust Independent Elementary Features) descriptor, which uses binary tests on smoothed patches of pixels. Since the system deals with stereo and RGB-D image types, the ORB extraction is done for both images (left and right), and the ORB extractor searches for an ORB match in both images. The matched keypoints, together with the associated depth information, are used to differentiate between close and far points. Following [5], a keypoint is considered a close point when its associated depth is less than 40 times the stereo/RGB-D baseline; otherwise, it is considered a far point. The baseline is calculated according to [23]. Close points are triangulated directly, while far points are discarded unless they are viewed from multiple viewpoints. III-B2 Camera Pose Estimation After the static and dynamic features are separated using the object detector module, the camera pose is estimated using the static feature points.
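The close/far keypoint test from the ORB extraction step above reduces to a single depth comparison. The following sketch assumes an illustrative baseline value; the factor of 40 is the one quoted from [5]:

```python
# Sketch of the close/far keypoint test: a matched keypoint is 'close'
# when its depth is below 40x the stereo/RGB-D baseline (per [5]).
CLOSE_DEPTH_FACTOR = 40.0

def is_close_point(depth, baseline):
    """Return True for close points (triangulated directly), False for far."""
    return depth < CLOSE_DEPTH_FACTOR * baseline

baseline = 0.54  # metres; illustrative stereo baseline, not from the paper
print(is_close_point(5.0, baseline))   # True: close point
print(is_close_point(30.0, baseline))  # False: far point, needs multiple views
```

With a 0.54 m baseline the cutoff sits at 21.6 m, which is why the 5 m point is accepted and the 30 m point is deferred to multi-view triangulation.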
To initialize the process, motion models are generated and compared by the number of inliers they produce under the camera reprojection error. Two models are formed for robust estimation: one propagates the previous camera motion, and the other produces a new motion transform using a PnP-based RANSAC algorithm. Each model yields a number of inliers, and the model with the most inliers is chosen for initialization. For camera pose estimation, the reprojection error equation is first established. If $P_{k-1}$ is a set of static points at frame $k-1$ in the global reference frame and $C_{k}$ is the set of corresponding static feature points in the image at frame $k$, then the camera pose $X_{k}$, following [3], is estimated by minimizing the reprojection error $$e(X_{k})=C_{k}-\pi({X_{k}}^{-1}P_{k-1})$$ (1) Here $\pi(\cdot)$ is a projection function. A least squares error function is established from equation 1 using the Lie-algebra parameterization of SE(3). This least squares error function is then minimized using the Levenberg-Marquardt algorithm [24]. III-B3 Object Motion Tracking The Mask-RCNN and optical flow modules segment out the potentially dynamic objects, separating the static and dynamic features. To update the dynamic object segmentation information, a scene flow algorithm is executed. Using the scene flow algorithm, the motion of the dynamic objects is calculated. This algorithm helps to detect the dynamic objects properly, i.e. it can decide whether an object is actually in motion. Since the scene flow of static objects is zero, a threshold is selected to decide whether an object is static or dynamic: if the magnitude of the scene flow vector of a certain point in the frame is greater than the threshold, that point is considered dynamic. The scene flow vector is calculated using the camera pose and the motion of the object's points between two consecutive frames.
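The scene-flow-based static/dynamic decision above can be sketched as follows. The threshold value is an assumption for illustration, not the one used by DyOb-SLAM:

```python
import numpy as np

# Sketch of the static/dynamic decision: a point is labelled dynamic when
# the magnitude of its scene flow vector exceeds a threshold. The threshold
# below is an assumed illustrative value, not the system's actual setting.
SCENE_FLOW_THRESHOLD = 0.12  # metres between consecutive frames (assumed)

def label_dynamic(scene_flow):
    """scene_flow: (N, 3) per-point scene flow vectors; returns dynamic mask."""
    return np.linalg.norm(scene_flow, axis=1) > SCENE_FLOW_THRESHOLD

flow = np.array([[0.0, 0.0, 0.01],   # near-zero flow: static background point
                 [0.3, 0.05, 0.0]])  # large flow: point on a moving car
print(label_dynamic(flow))           # [False  True]
```

Because static points only show residual flow from noise, a single magnitude threshold separates the two classes cheaply before any per-object motion is estimated.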
With the help of the dense optical flow information, a point label, which is a unique object identifier, is associated with the dynamic object points. When the first dynamic object is detected, its point label reads $l=1$, where $l\in\mathcal{L}$ and $\mathcal{L}$ is a fixed tracking label set. For static objects and the background, the value of $l$ is set to 0. For frame $k$, the point labels are aligned with the corresponding point labels obtained in the previous frame $k-1$. Similar to the camera pose estimation, the object motion is estimated by first forming the reprojection error and then applying the Lie-algebra parameterization of SE(3). If the object point motion from frame $k-1$ to $k$ in the global reference frame is ${}^{k-1}{O_{k}}$, the motion estimation equation can be written as $$P_{k}={{}^{k-1}{O_{k}}}P_{k-1}$$ (2) This is the point motion estimation equation. Here, $P_{k}$ is the set of object points in frame $k$ and $P_{k-1}$ is the corresponding set in frame $k-1$. Using equation 2, the reprojection error between the object points in the global reference frame and the corresponding image points is $$e(^{k-1}{O_{k}})=C_{k}-\pi({X_{k}}^{-1}[^{k-1}{O_{k}}]P_{k-1})$$ (3) After the Lie-algebra parameterization of SE(3), the optimal solution is obtained by minimizing the least squares error function. With the help of the dynamic object motion tracking, the object speed error is also calculated as the difference between the ground truth speed $v_{g}$ and the estimated speed $v_{e}$: $$E=v_{g}-v_{e}$$ (4) III-C Mapping The mapping component produces two types of map: a sparse map for the static features and a global map for the dynamic contents and camera motion. III-C1 Sparse Map The ORB features [5] [6] extracted in the tracking component are used to produce a sparse point-cloud map of the static background. This sparse map is also a local map consisting of triangulated ORB features from connected keyframes.
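The least-squares pose estimation of equations (1) and (3) above can be sketched with SciPy's Levenberg-Marquardt solver on synthetic data. The intrinsics, the scene, and the (rotation-vector, translation) parameterization are illustrative stand-ins for the paper's SE(3) Lie-algebra formulation:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])  # illustrative pinhole intrinsics

def project(pose, pts_world):
    """pi(X_k^-1 P): map world points into the image of the posed camera."""
    rvec, t = pose[:3], pose[3:]
    pts_cam = Rotation.from_rotvec(rvec).inv().apply(pts_world - t)
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def residuals(pose, pts_world, obs_uv):
    """Reprojection residual e(X_k) = C_k - pi(X_k^-1 P_{k-1}), flattened."""
    return (obs_uv - project(pose, pts_world)).ravel()

# Synthetic scene: 20 static points in front of the camera, a known pose,
# and noise-free observations generated from it.
rng = np.random.default_rng(0)
pts = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
true_pose = np.array([0.05, -0.02, 0.01, 0.3, 0.1, -0.2])
obs = project(true_pose, pts)

# Levenberg-Marquardt from an identity initial pose recovers the true pose.
sol = least_squares(residuals, x0=np.zeros(6), args=(pts, obs), method="lm")
print(np.allclose(sol.x, true_pose, atol=1e-4))
```

The object-motion error of equation (3) has the same least-squares shape, with the motion transform $^{k-1}O_{k}$ as the optimized variable instead of the camera pose.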
The corresponding ORB features are matched with the previous keyframes and new points are added to the local map. For triangulation of the ORB matches, the parallax error, reprojection error and scale consistency are checked. Figure 6 shows the sparse map obtained from the ORB extraction. III-C2 Global Map The inputs to the formation of the global map are the outputs of the tracking component, i.e. the camera pose information and the object motion. With each frame and time step, the detected object motion and camera pose are saved and continuously updated. The inlier points obtained from the previous frames are utilized to gather the track correspondences in the current frame to estimate the camera pose and object motion. With different colors assigned to different values in the pixel matrices, the camera and object trajectories are visualized in the global map. The global map is based only on the dynamic contents and the camera. Figure 4 shows the global trajectory map of the detected dynamic objects. III-D SLAM Back-end The SLAM back-end is the part of the system where the data and outputs obtained from the other modules are jointly optimized to improve the overall output. DyOb-SLAM uses the $g^{2}o$ [25] module for all the optimization functions, and the Levenberg-Marquardt [24] method implemented in it is used to locate a local minimum of the multivariate pose error function, expressed as a sum of squares of non-linear, real-valued functions. The SLAM system uses three different optimization techniques for the static-feature map, the camera pose and the dynamic object motion estimation: • Bundle Adjustment • Local Batch Optimization • Global Batch Optimization III-D1 Bundle Adjustment (BA) The selected keyframes and sparse map points obtained from the mapping component are optimized by Bundle Adjustment. It is used only for optimizing the attributes of the static features.
The Levenberg-Marquardt method, implemented in g2o, is used for Bundle Adjustment. Three different types of Bundle Adjustment are used in the system. To optimize the camera pose estimate obtained from the tracking component, a motion-only bundle adjustment is performed; this optimizes the camera pose and minimizes the reprojection error obtained from the matched keypoints. Local bundle adjustment is used to optimize the selected local keyframes and local map points of the static features. After loop closure, full bundle adjustment is used to optimize all the map points and keyframes except the origin keyframe, to achieve the optimal solution. III-D2 Local Batch Optimization Local Batch Optimization is used to form the local map, which serves as input to Global Batch Optimization. Its purpose is to ensure a correct camera pose, assisted by the Bundle Adjustment algorithm, which is sent to the global batch optimizer to form a precise global map. It optimizes the camera pose estimate by minimizing the reprojection error using the Levenberg-Marquardt method. In the local optimization, only the static features are optimized, since including the dynamic contents would add heavy constraints. III-D3 Global Batch Optimization After local batch optimization, its output along with the outputs of the tracking component is used directly to optimize the global map. The tracked object points are fully optimized to form the global map at every consecutive time step and to obtain updated object poses. This optimization minimizes the pose errors of both the camera and the objects. The global map is obtained after all the time steps and frames have been processed and the pose estimates have been globally optimized. III-E Cloud Computing Cloud computing helps SLAM systems with object detection algorithms operate in real time, since these algorithms require high computational power, such as high-end GPUs (Graphics Processing Units).
In the proposed SLAM system, the deep neural network Mask-RCNN requires high-end GPUs, which the onboard computer system may not be able to provide. In [26], [27] and [28], different approaches are described for deploying a portion of the SLAM system in the cloud. For example, the Cloud Chaser [26] system deployed its object detection algorithm in the cloud to improve performance and avoid latency issues. DyOb-SLAM contains a dynamic object detector which is computationally heavier than the other modules in the system. Moreover, the output of the object detector is also pre-processed, which results in a slower computation time and thus weaker performance for the SLAM system. To obtain a better-performing system, we deploy the proposed SLAM system in a cloud environment. The output results are then compared to those obtained on the local computer. IV Experimental Data The proposed system has been evaluated on the basis of camera pose, object motion, object speed and moving object tracking performance. At first the experiment is carried out on a local computer system: an Intel Core i7-8700 CPU @ 3.20GHz × 12 with a 4 GB NVIDIA Quadro P620 GPU. Then the SLAM system is run in the OSCER environment. The SLAM system, at present, is only applicable to outdoor scenes, mostly scenes containing objects such as cars or other vehicles. The main dataset used to evaluate the performance is the KITTI Tracking dataset [29, 30]. IV-A KITTI Tracking Dataset The KITTI Tracking Dataset [29, 30] is a dataset collection for autonomous driving and mobile robotics research. The calibrated, synchronized and time-stamped dataset collection consists of real-world traffic situations. The data are intended for stereo, optical flow, visual odometry/SLAM and 3D object detection experiments. The KITTI data sequence chosen for this experiment is kitti-0000-0013.
This dataset contains 55 sequences with RGB images (see figure 7), depth images, timestamps and ground truth information for camera and object pose. The sequences show a simple scene in which two cars drive along a road. The settings file for the KITTI dataset is calibrated to tune the parameters for running the SLAM system. The camera parameters, such as the camera calibration and distortion parameters, the camera frames per second, and the frame height and width, are chosen according to the dataset and have not been altered. Since the dataset is comprised of RGB-D images, the depth parameters need to be set as well. The depth value has been set for the two different kinds of features, static and dynamic: the variable ThDepth denotes the depth threshold for the static features and ThDepthObj denotes the depth threshold for the dynamic features. The depth values help to differentiate the close and far points in the images. The depth map factor, a scale factor that multiplies the input depth map, is also set. IV-B Camera and Object Pose The main performance comparison of the DyOb-SLAM system is made against VDO-SLAM. The table in figure 8 shows the average camera pose error and object pose error, each comprising translational and rotational components. Each sequence has been run 5 times for both DyOb-SLAM and VDO-SLAM to capture the variation in the non-deterministic output. Let the ground truth motion transform be denoted $\mathcal{T}$, where $\mathcal{T}\in SE(3)$, and the estimated motion transform be denoted $\mathcal{T}_{est}$. The pose error $P$ is calculated as $$P=\mathcal{T}_{est}^{-1}\mathcal{T}$$ (5) At the different camera frames and time steps, the root mean squared errors (RMSE) for both camera pose and object motion are computed. After all frames are processed, the end result is the average of the RMS errors calculated in each frame. The average pose error is calculated for both camera and object motion.
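As a sketch of how Eq. (5) can be evaluated for a single frame (illustrative values, not the paper's evaluation code), the relative pose error of a 4×4 homogeneous transform can be computed and its translational magnitude extracted with NumPy:

```python
import numpy as np

def pose_error(T_est, T_gt):
    """Relative pose error P = T_est^{-1} T_gt for 4x4 SE(3) matrices (Eq. 5)."""
    return np.linalg.inv(T_est) @ T_gt

# Ground-truth pose: identity rotation, translation (1, 2, 3).
T_gt = np.eye(4)
T_gt[:3, 3] = [1.0, 2.0, 3.0]
# Estimated pose: same rotation, slightly wrong translation.
T_est = np.eye(4)
T_est[:3, 3] = [1.1, 2.0, 2.9]

P = pose_error(T_est, T_gt)
trans_err = np.linalg.norm(P[:3, 3])   # translational error magnitude
```

Accumulating `trans_err` (and the rotational counterpart extracted from `P[:3, :3]`) over all frames and taking the root mean square yields the per-sequence RMSE described above.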
IV-C Object Speed For both DyOb-SLAM and VDO-SLAM, the object speed error has been evaluated. The linear velocity of each point in the object pixels is estimated. If the pose change is expressed as $H$ and $m_{k}$ is a point in the object pixels at frame $k$, where $m=[m_{x},m_{y},m_{z},1]$, then the estimated velocity can be expressed as $$\mathcal{V}_{est}=m_{k}-m_{k-1}=H_{k}m_{k-1}-m_{k-1}=(H_{k}-I_{4})m_{k-1}$$ (6) Here, $I_{4}$ is the $4\times 4$ identity matrix. Equation 6 states that the difference of the point coordinates from frame $k-1$ to frame $k$ gives the estimated velocity at that time step. Figure 9 shows a bar chart of the average object speed error for DyOb-SLAM and VDO-SLAM over the 5 iterations. If $\mathcal{V}$ is the ground truth speed, the velocity error is $$V_{err}=\lvert\mathcal{V}_{est}\rvert-\lvert\mathcal{V}\rvert$$ (7) The object speed is calculated at the end of processing every frame. Figure 10 shows the detected dynamic objects enclosed by bounding boxes, with the speed of each object printed in the processed frame. The average of these errors is obtained as the output of the SLAM system. IV-D Discussion of Results The key difference between VDO-SLAM and DyOb-SLAM is the semantic segmentation process. For VDO-SLAM, the semantic segmentation is done in a pre-processed way, shaping the segmentation data according to the ground truth. DyOb-SLAM, on the other hand, directly uses a Mask-RCNN network which segments out the dynamic objects, and the data is processed afterwards (see figure 3). Due to this, the average pose and speed errors differ between the two SLAM systems. As can be seen from figure 8, the proposed system DyOb-SLAM has a slightly higher average pose error for both camera and object in every iteration. Nevertheless, both systems have very low average pose errors.
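The speed computation of Eqs. (6) and (7) above can be sketched as follows; the pose change and ground-truth speed here are illustrative assumptions, not values from the experiment:

```python
import numpy as np

# Pose change H_k between frames: a pure translation of (0.5, 0.0, -0.2).
H = np.eye(4)
H[:3, 3] = [0.5, 0.0, -0.2]

m_prev = np.array([1.0, 2.0, 3.0, 1.0])    # homogeneous point at frame k-1
v_est = (H - np.eye(4)) @ m_prev            # Eq. (6): (H_k - I_4) m_{k-1}

speed_est = np.linalg.norm(v_est[:3])       # |V_est|
v_gt = 0.6                                  # assumed ground-truth speed |V|
v_err = speed_est - v_gt                    # Eq. (7)
```

Since the last component of the homogeneous point is 1, `(H - I) @ m_prev` reduces exactly to the translational part of the pose change, matching the derivation in Eq. (6).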
It can be observed from the table that the camera translational error of DyOb-SLAM differs from that of VDO-SLAM by roughly 0.003-0.005. The camera rotational error difference is about 0.007-0.017, a bigger difference than for the translational error. For the object pose error, the translational error difference is about 0.009-0.03 and the rotational error difference about 0.05-0.17. For the DyOb-SLAM system, the first iteration gave higher errors than the next four. It is also observed that, for both camera and object poses, the rotational error is larger than the translational error. On the other hand, for the average object speed, DyOb-SLAM performs better than VDO-SLAM, as can be seen from figure 9: the object speed error of VDO-SLAM exceeds that of DyOb-SLAM by 0.96-1.20. The difference is large, and it can be concluded that the DyOb-SLAM system gives satisfactory object speed error data. The settings file obtained from the KITTI dataset has been modified for the experiment. With Mask-RCNN's segmentation information, the objects could not be categorized as multiple objects: the scene contains different vehicles, and the neural network segments out even the static prior outlier points. To solve this, and to segment out only the two cars that need to be tracked, the depth threshold has been restricted. As a result, the two focused cars in the frames can be identified as separate objects instead of one. For this reason, the pose errors in the proposed system are somewhat higher than those of VDO-SLAM. Besides, if the computational cost of the two systems is compared, the proposed system is more expensive than VDO-SLAM because the Mask-RCNN model runs at the same time as the Tracking and Mapping modules in the system.
The speed of the detected objects is calculated when the objects are detected and segmented out. After the dynamic objects are labelled with an object ID, the bounding boxes along with the speed calculation appear as seen in figure 10. Because the focus is restricted to the close-point portion of the frames, as discussed before, the object bounding boxes appear as per the segmented objects. This means that when the detected objects are within the close-point depth scale, the bounding boxes along with the speed calculation appear accordingly. So, after the first car leaves the depth scale, its speed is no longer measured. V CONCLUSION In this work, we propose DyOb-SLAM, a SLAM system that includes dynamic objects in its map and tracks them in every frame. With the help of advanced techniques such as convolutional neural networks and dense optical flow algorithms, dynamic content can be accurately detected and used for further processing. The back-end of the DyOb-SLAM system provides more optimized data, from which an optimized sparse map of static keypoints in the environment can be obtained, along with a global map of camera and object trajectories updated with each processed frame. To obtain an even better SLAM processing system, a cloud environment can be used to process data in the main processing unit and the cloud at the same time. In that way, the pose estimation error can be lowered and faster output can be obtained. ACKNOWLEDGMENT We would like to thank Mr. Dharmendra, Head of Department, United College of Engineering and Research, for the constant encouragement towards the realization of this work. References [1] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: part i,” IEEE Robotics Automation Magazine, vol. 13, no. 2, pp. 99–110, 2006. [2] B. Bescós, J. M. Fácil, J. Civera, and J.
Neira, “Dynaslam: Tracking, mapping and inpainting in dynamic scenes,” CoRR, vol. abs/1806.05620, 2018. [3] J. Zhang, M. Henein, R. Mahony, and V. Ila, “Vdo-slam: A visual dynamic object-aware slam system,” 2020. [4] G. Klein and D. Murray, “Parallel tracking and mapping for small ar workspaces,” pp. 1–10, 2007. [5] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “Orb-slam: a versatile and accurate monocular slam system,” vol. 31(5), p. 1147–1163, October, 2015. [6] R. Mur-Artal and J. D. Tardos, “Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras,” vol. 33(5), pp. 1255–1262, June, 2017. [7] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. W. Fitzgibbon, “Kinectfusion: Real-time dense surface mapping and tracking,” vol. 11, pp. 127–136, 2011. [8] R. Hachiuma, C. Pirchheim, D. Schmalstieg, and H. Saito, “Detectfusion: Detecting and segmenting both known and unknown dynamic objects in real-time SLAM,” CoRR, vol. abs/1907.09127, 2019. [9] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. Kelly, and A. J. Davison, “Slam++: Simultaneous localisation and mapping at the level of objects,” June, 2013. [10] K. Tateno, F. Tombari, I. Laina, and N. Navab, “Cnn-slam: Real-time dense monocular slam with learned depth prediction,” July, 2017. [11] J. McCormac, A. Handa, A. Davison, and S. Leutenegger, “Semanticfusion: Dense 3d semantic mapping with convolutional neural networks,” 2017. [12] F. Zhong, S. Wang, Z. Zhang, C. Chen, and Y. Wang, “Detect-slam: Making object detection and slam mutually beneficial,” March, 2018. [13] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, and S. Reed, “Ssd: Single shot multibox detector,” ECCV 2016. Lecture Notes in Computer Science, vol. 9905, pp. 21–37, 2016. [14] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “Orb: An efficient alternative to sift or surf,” p. 2564–2571, 2011. [15] J. Redmon and A. 
Farhadi, “Yolov3: An incremental improvement,” CoRR, vol. abs/1804.02767, 2018. [16] M. Rünz, M. Buffier, and L. Agapito, “Maskfusion: Real-time recognition, tracking and reconstruction of multiple moving objects,” 2018. [17] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask r-cnn,” October, 2017. [18] I. Ballester, A. Fontan, J. Civera, K. H. Strobl, and R. Triebel, “Dot: Dynamic object tracking for visual slam,” 2020. [19] D. Hunziker, M. Gajamohan, M. Waibel, and R. D’Andrea, “Rapyuta: The roboearth cloud engine,” May, 2013. [20] R. Arumugam, V. R. Enti, B. Liu, X. Wu, K. Baskaran, F. K. Foong, A. S. Kumar, D. M. Kang, and W. K. Goh, “Davinci: A cloud computing framework for service robots,” May, 2010. [21] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, “Ros: an open-source robot operating system,” January, 2009. [22] D. Gálvez-López and J. D. Tardós, “Bags of binary words for fast place recognition in image sequences,” IEEE Transactions on Robotics, vol. 28, pp. 1188–1197, 2012. [23] L. Paz, P. Pinies, J. Tardos, and J. Neira, “Large scale 6dof slam with stereo-in-hand,” Robotics, IEEE Transactions on, vol. 24, pp. 946 – 957, 11 2008. [24] M. Lourakis and A. Argyros, “Is levenberg-marquardt the most efficient optimization algorithm for implementing bundle adjustment?.,” vol. 2, pp. 1526–1531, 01 2005. [25] R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” May, 2011. [26] Z. Luo, A. Small, L. Dugan, and S. Lane, “Cloud chaser: Real time deep learning computer vision on low computing power devices,” CoRR, vol. abs/1810.01069, 2018. [27] S. Kamburugamuve, H. He, G. Fox, and D. Crandall, “Cloud-based parallel implementation of slam for mobile robots,” 03 2016. [28] V. K. Sarker, J. Peña Queralta, T. N. Gia, H. Tenhunen, and T. 
Westerlund, “Offloading slam for indoor mobile robots with edge-fog-cloud computing,” in 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), pp. 1–6, 2019. [29] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets robotics: The KITTI dataset,” International Journal of Robotics Research, vol. 32, pp. 1231 – 1237, Sept. 2013. [30] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361, 2012.
Forecasting the steam mass flow in a powerplant using the parallel hybrid network Andrii Kurkin Terra Quantum AG, 9000 St. Gallen, Switzerland    Jonas Hegemann Uniper Technologies GmbH, 45896 Gelsenkirchen, Germany    Mo Kordzanganeh Terra Quantum AG, 9000 St. Gallen, Switzerland    Alexey Melnikov Terra Quantum AG, 9000 St. Gallen, Switzerland Abstract Efficient and sustainable power generation is a crucial concern in the energy sector. In particular, thermal power plants grapple with accurately predicting steam mass flow, which is crucial for operational efficiency and cost reduction. In this study, we use a parallel hybrid neural network architecture that combines a parametrized quantum circuit and a conventional feed-forward neural network specifically designed for time-series prediction in industrial settings to enhance predictions of steam mass flow 15 minutes into the future. Our results show that the parallel hybrid model outperforms standalone classical and quantum models, achieving more than 5.7 and 4.9 times lower mean squared error (MSE) loss on the test set after training compared to pure classical and pure quantum networks, respectively. Furthermore, the hybrid model demonstrates smaller relative errors between the ground truth and the model predictions on the test set, up to 2 times better than the pure classical model. These findings contribute to the broader scientific understanding of how integrating quantum and classical machine learning techniques can be applied to real-world challenges faced by the energy sector, ultimately leading to optimized power plant operations. hybrid neural networks, time-series quantum machine learning I Introduction Quantum-enhanced machine learning (or quantum machine learning, QML) has emerged as a rapidly growing field, combining quantum computing with machine learning to develop new models with the potential to revolutionize data analysis dunjko2018machine ; Melnikov_2023 .
A popular approach to QML uses trainable quantum circuits as machine learning models, similar to the widely known classical neural networks. These circuits consist of encoding and variational unitary gates. The former map classical data into quantum states, while the latter are parameterized gates trained using classical optimizers to minimize a cost function. The output of the circuit is obtained by measuring an observable on the final state, which produces a vector of real numbers representing the model’s prediction. This approach is known in the literature as parametrised quantum circuits (PQCs) benedettipaper ; jerbi2021parametrized , quantum neural networks (QNNs) farhi2018classification ; bp ; kordzanganeh2023exponentially , variational quantum circuits skolik2022quantum ; McClean_2016 ; romero2019variational or quantum circuit learning Mitarai_2018 . These models are a promising direction for the future of ML. It has been demonstrated on toy datasets that QML models need fewer steps to converge to a smaller error aselpaper2 ; amirapaper and have better generalisation ability from fewer data points aselpaper2 ; Generalization_QML compared to their classical counterparts. However, today, the quantum computing infrastructure cannot yet create competitive quantum models to tackle real-world, ill-structured data science problems schuld-advantage . This is only exacerbated by the barren plateau problem, discovered in bp , suggesting that large QML models are challenging to train at high qubit counts. Since NISQ devices limit the freedom in the machine learning model choice, research is instead focused on the hybrid quantum neural networks (HQNN) paradigm – a combination of classical and quantum models aselpaper2 ; aselpaper1 . It was shown aselpaper1 ; sagingalieva2023hybrid ; rainjonneau2023quantum ; senokosov2023quantum ; sedykh2023quantum that such models can outperform classical counterparts, making this concept attractive for further research.
Classical machine learning techniques have been widely used in the energy sector for various applications, including time series prediction. One common use case is forecasting energy demand or generation to enable effective planning and operation of energy systems cassical_ml_energy_1 ; cassical_ml_energy_2 ; cassical_ml_energy_3 . In this work, we solve such a real-world problem using an HQNN. In partnership with Uniper, we predicted the average steam mass flow exiting a thermal power plant 15 minutes ahead. This was done by creating a novel HQNN architecture in which, unlike most previous works mari2020transfer ; zhao2019qdnn ; dou2021unsupervised ; sebastianelli2021circuit ; pramanik2021quantum ; aselpaper2 ; aselpaper1 ; sagingalieva2023hybrid ; rainjonneau2023quantum where the quantum and classical networks were arranged sequentially to feed information to each other, the quantum and classical networks are placed in parallel. This creates a complementary network in which each element, quantum and classical, plays its role. The structure of the paper is as follows. Section II provides context for the power plant, specifies the abstractions required to convert this into a data science problem and describes the dataset. Section III describes the pre-processing steps and the baseline classical and hybrid architectures. We then present the details of our model training and performance results, comparing our hybrid model with the fully quantum and classical counterparts in Section IV. Our work concludes by discussing our findings in Section V. Additionally, we offer an analysis of our quantum circuit in the Appendix. II Problem statement and context II.1 Uniper Operaite A thermal power plant operates by converting heat energy into electricity through a series of processes involving fuel combustion, heat exchange, and steam generation. In waste-to-energy and biomass-to-energy power plants, solid waste or biomass is used as fuel.
The combustion process generates hot flue gases, which transfer heat to water, turning it into steam. The superheated steam drives a turbine connected to a generator, which produces electricity. Throughout the power plant, numerous sensors measure parameters such as steam mass flow, air flows, waste feed, temperatures, and chemical concentrations in the flue gas. This data is crucial for monitoring and controlling the plant’s operation and developing machine learning models for forecasting and optimization purposes. Predicting steam mass flow in a thermal power plant is essential for maintaining operational stability, optimizing energy output, controlling emissions, planning maintenance activities, and reducing costs. Accurate forecasts of steam mass flow enable operators to make informed decisions about adjusting various parameters, such as fuel input and air supply, to ensure smooth and efficient operation. Furthermore, predicting steam mass flow can help in controlling emissions generated by the power plant, minimizing the production of pollutants that negatively impact the environment and public health. In the Operaite team at Uniper, combustion processes are optimized using conventional, fully-connected feed-forward neural networks. In waste-to-energy and biomass-to-energy power plants, combustion processes are highly dynamic due to volatile fuel quality resulting, e.g. from varying geometric or chemical properties of the typically solid fuel components. This corresponds to continuous stimulation of the combustion process and constantly changes the equilibrium of the combustion process, which is given by a certain ratio between fuel infeed and airflow. Thus, continuous control (of, e.g., the airflow) is needed to maintain a steady and stable operation, which is usually achieved by using CCS (combustion control systems). 
Various CCSs exist, ranging from simple conventional systems built from combinations of PID (proportional-integral-derivative) controllers, through fuzzy or rule-based systems, to model predictive control. Most power plants rely on PID controllers, and only a few employ more advanced process control techniques, mainly because implementing them is time-consuming and costly. Typical optimization goals could be to increase or smooth the energy output in terms of steam mass flow or electricity generation, or to reduce emissions in terms of CO or NOx concentrations in the flue gas. There are two ways to achieve these operational goals: (i) using a controlling neural network that directly interacts with control levers and (ii) using a neural network that provides forecasts and serves as an assistant system to help humans manipulate the process proactively. This paper concentrates on improving (ii) by employing hybrid quantum neural network forecasts. II.2 Forecasting problem In this section, we present the problem statement and discuss the power plant dataset used for forecasting. The objective is to develop a model that can predict the steam mass flow for two sensors $15$ minutes into the future based on the current values of all the power plant parameters, including air flows, waste feed, temperatures, chemical concentrations in the flue gas, and steam mass flow, among others. Thus, we solve a multivariate regression problem in continuous space. The prediction timeframe is determined by the characteristic timescales of the combustion process, which are depicted in Fig. 1. The combustion process entails several stages, including waste feeding, transport across the grate, flue gas stream, and heat exchange. The airflow needs to be changed continuously in response to fuel quality changes. Assuming a constant fuel quality, the system requires $5$ to $15$ minutes to return to equilibrium.
Consequently, we deemed it appropriate to consider a prediction horizon of $15$ minutes for this task. Thus, the forecasted timeframe should be sufficient to take appropriate action early enough. Feature and target data used in our study are represented in Fig. 2. The dataset consists of $192$ time series, which are our features, each representing measurements of power plant parameters captured by various sensors throughout the facility, also including numerical derivatives and moving averages for each sensor. The targets we aim to predict are represented by two time series. Specifically, each target variable corresponds to the steam mass flow value for a given sensor that has been shifted $15$ minutes into the future and averaged over a $10$-minute window. This averaging process is intended to create a more uniform steam mass flow signal and reduce fluctuations. Our training dataset consists of $6,500$ timestamps, which represent only $1\%$ of the original dataset used to train the classical model currently in production. We chose to reduce the dataset due to the time-consuming nature of training a quantum neural network using existing simulators kordzanganeh2023benchmarking . Nevertheless, it is worth noting that quantum neural networks are better at generalising from fewer data points Generalization_QML . Therefore, such a simplification can be considered justified. III Machine learning models III.1 Pre-processing An effective pre-processing of data is crucial for any machine learning pipeline, especially when dealing with quantum neural networks. The scarcity of qubits on physical quantum computers and the high computational cost of PQC simulations are significant challenges in this field kordzanganeh2023benchmarking ; tang2022cutting . However, even small quantum networks combined with classical ones can still achieve an advantage in some cases.
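The target construction described above (a 15-minute shift into the future combined with a 10-minute trailing average) can be sketched as follows; the one-sample-per-minute rate and the stand-in ramp signal are assumptions for illustration:

```python
import numpy as np

def make_target(series, shift_steps=15, window=10):
    """Shift a sensor series `shift_steps` samples into the future and smooth
    it with a trailing moving average of length `window`
    (one sample per minute is assumed here)."""
    # Trailing moving average to smooth the steam mass flow signal.
    kernel = np.ones(window) / window
    smoothed = np.convolve(series, kernel, mode="valid")
    # The target at time t is the smoothed value `shift_steps` ahead.
    return smoothed[shift_steps:]

raw = np.arange(100, dtype=float)   # stand-in for one steam mass flow sensor
target = make_target(raw)
```

For a linear ramp the first target value is the average of samples 15 through 24, i.e. 19.5, which makes the combined shift-and-smooth alignment easy to verify.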
To overcome this challenge, it is important to transform the data into a space with dimensions suitable for quantum processing. In our approach, we use principal component analysis (PCA), a powerful data analysis technique that reduces the dimensions of the original dataset Hill2016PythonML . By projecting the data onto the five main components, we reduce the input vector dimension from $192$ to $5$ while retaining more than $60\%$ of the original data’s explained variance, as illustrated in Fig. 4(a). We then standardize and normalize the data, which typically has varying scales. These pre-processing steps prepare the data for processing with the PQC, for which appropriate architectures will be introduced below. III.2 Classical baseline architecture A classical baseline architecture was utilised to enable comparisons with the hybrid solutions. This architecture is a fully connected neural network with one hidden layer, consisting of $5$ input neurons, $256$ hidden neurons, and $2$ output neurons, as illustrated in Fig. 3(b). It is important to note that the design of the baseline model was inspired by an existing model currently in production, which addresses a similar multivariate regression problem. The difference lies in the number of targets: the production model predicts more parameters for various sensors, while our model predicts steam mass flow for only two sensors. The architecture of the production model is also a conventional feed-forward neural network with one hidden layer. Although the forecasting problem can be effectively solved using recurrent neural networks, such as LSTM models RNN_paper ; LSTM_paper or more advanced Transformer-based architectures vaswani2017attention , conventional feed-forward neural networks with a more straightforward structure are generally easier to train and faster to process.
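The PCA reduction from 192 to 5 dimensions described above can be sketched in plain NumPy via the singular value decomposition; the random matrix below is a stand-in for the real feature matrix, so its explained-variance share will not match the paper's 60% figure:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6500, 192))          # stand-in for the feature matrix

# PCA via SVD: center the data, decompose, project onto the top components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
X_reduced = Xc @ Vt[:k].T                 # (6500, 5) reduced feature vectors

# Share of total variance retained by the first k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

The rows of `X_reduced` are the 5-dimensional vectors that are subsequently standardized, normalized, and fed to both the classical network and the PQC.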
Conventional neural networks have the advantage that the forward pass, which is necessary for the production environment, can be efficiently implemented on a controller compatible with the distributed control system in a power plant. In addition, conventional feed-forward neural networks provide better explainability compared to more advanced network topologies. For these reasons, we used a conventional feed-forward neural network for our baseline architecture in this research. III.3 Hybrid quantum-classical architecture A good understanding of the data structure is crucial in building an effective architecture for problem-solving. In our case, the goal is to predict the values of a time series at a given time. By analyzing Fig. 2, we can observe that the prediction follows a sinusoidal pattern with some irregularities. It is known that a classical neural network with one hidden layer is an asymptotically universal approximator. This was also shown for parameterised quantum circuits (PQCs), quantum neural networks capable of universal approximation schuld_fourier . However, PQCs achieve this by fitting a truncated Fourier series over the samples. With this in mind, we use the parallel hybrid network (PHN) configuration introduced in kordzanganeh2023parallel , which differs from the previous sequential quantum-classical hybrid models mari2020transfer ; zhao2019qdnn ; dou2021unsupervised ; sebastianelli2021circuit ; pramanik2021quantum ; aselpaper2 ; aselpaper1 ; sagingalieva2023hybrid ; rainjonneau2023quantum . Here, the quantum and classical parts process the data independently and simultaneously without interfering with each other. The quantum circuit approximates the sinusoidal part, while the classical network fits the protruding sections. Finally, the predictions from both parts are combined to obtain the final prediction. This approach lets the classical model adjust only its own weights without interfering with the quantum circuit during the training procedure.
The proposed architecture comprises two identical parameterized quantum ansatz circuits, which will be introduced below, and a classical fully-connected neural network, discussed in Section III.2 and illustrated in Figure 3(b). The complete PHN architecture is depicted in Figure 3(d), with the same feature vector as input to both the classical and quantum circuits. Before this, the input feature vector undergoes a PCA procedure to reduce its dimensions from 192 to 5. The quantum layer (PQC) depicted in Figure 3(c) consists of five qubits. The layer begins by applying a Hadamard transform to each qubit, followed by a sequence of variational gates consisting of rotations along the $z$ and $x$ axes for each qubit. The reduced input data (with a dimensionality of 5) is then embedded into the rotation angles along the $z$-axis. Subsequently, another variational block is applied, consisting of a sequence of $R_{ZZ}$ gates that alternate periodically with rotations along the $x$-axis and conventional CNOT gates. The complete gate sequence can be found in Figure 3(c). Finally, the local expectation value of the Z operator is measured for the first qubit, producing a classical output suitable for additional post-processing. In the Appendix, we analyze a simplified version of this quantum circuit using three approaches to assess its efficiency, trainability, and expressivity. The predictions of the classical network and the quantum circuits are combined to generate the final prediction. Specifically, the output of the first quantum circuit is added to the first component of the classical network’s prediction vector. Similarly, the output of the second quantum circuit is added to the second component of the vector. This results in the final classical-quantum output, which has the potential to enhance accuracy and efficiency for time series prediction. IV Training and results When using this architecture, one must take extra care when tuning the hyper-parameters.
This arises due to the separability of the three parallelized architectures. It is important to make sure that none of the three fully dominates the training, leaving the model stuck in a local minimum (in some cases the global minimum could be one where only a single network dominates and the other two contribute nothing, but this is unlikely since, for the most part, the quantum and classical networks can produce values independently of each other’s parameters; it is especially unlikely for the dataset in Fig. 2 due to its periodicity). For this reason, we make sure that the quantum network has the chance to train first to create a sinusoidal landscape, and then the classical network begins to contribute. We do this by reducing the learning rate of the classical parameters compared with the quantum ones. This ensures that the quantum networks can train and fit a sinusoidal function before the classical network makes any real contribution. At some point in the training, the quantum network has achieved a minimum, and its gradient values are so small that the classical network begins its meaningful training stage. All machine learning experiments were conducted on the QMware cloud platform qmw_qmw_2022 . The PyTorch library pas_pyt_2019 was used to implement the classical part, while the quantum part was implemented using the PennyLane framework ber_pen_2022 . We used the lightning.qubit device, which implements a high-performance C++ backend. The standard backpropagation algorithm was applied to the classical part of our hybrid quantum neural network to calculate the loss function’s gradients with respect to each parameter, while the adjoint method was employed for the quantum part. In Fig. 4(b), the loss performance is presented for the hybrid architecture compared to the purely classical and purely quantum networks.
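The staged training described above, a lower learning rate for the classical parameters than for the quantum ones, can be expressed with PyTorch parameter groups; the specific rates and the stand-in quantum weight tensor are assumptions for illustration:

```python
import torch

# Stand-ins for the classical head and the trainable quantum weights.
classical_net = torch.nn.Sequential(
    torch.nn.Linear(5, 256), torch.nn.ReLU(), torch.nn.Linear(256, 2)
)
quantum_weights = torch.nn.Parameter(torch.zeros(4, 5))

# Give the quantum parameters a larger learning rate so the PQC fits the
# sinusoidal component first; the classical network catches up later once
# the quantum gradients have become small.
optimizer = torch.optim.Adam([
    {"params": classical_net.parameters(), "lr": 1e-4},
    {"params": [quantum_weights], "lr": 1e-2},
])
```

A single optimizer with per-group learning rates keeps the usual `optimizer.step()` training loop unchanged while enforcing the quantum-first schedule.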
The pure classical network employs only the classical neural network, whereas the pure quantum network uses only the two quantum models without any classical component. The results show that the hybrid architecture outperforms both pure counterparts by a significant margin: after 100 epochs of training, the mean squared error (MSE) loss of the hybrid architecture is more than 5.7 and 4.9 times lower than that of the purely classical and purely quantum networks, respectively. Fig. 4(c) shows the fit of the classical and hybrid models to the time-series data in the test region. The top graph displays the classical and hybrid predictions on unseen data, while the bottom graph depicts their residuals, i.e., the relative errors between the ground truth and the model predictions at each point. Overall, the hybrid model predictions are much closer to the ground truth, with a relative error up to 2 times lower than that of the classical approach. Therefore, a PHN-based approach is the most efficient strategy in this case. V Discussion This study has highlighted the potential of quantum machine learning for time series prediction in the energy sector through a novel parallel hybrid architecture. By combining independent classical and quantum neural networks, we have demonstrated performance superior to traditional classical and quantum architectures. Our approach lets the classical and quantum networks operate independently during training, preventing interference between the two. Our results show that the parallel hybrid model outperforms the pure classical and pure quantum networks, exhibiting more than 5.7 and 4.9 times lower MSE loss on the test set after training, respectively. Furthermore, the hybrid model exhibits smaller relative errors between the ground truth and the model predictions on the test set, up to 2 times lower than the pure classical model.
These findings suggest that quantum machine learning can be valuable for solving real-world problems in the energy sector and beyond. Future research could explore applying the parallel hybrid quantum neural network approach to other machine learning problems while also increasing the complexity and performance of the model. References (1) Vedran Dunjko and Hans J Briegel. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics, 81(7):074001, 2018. (2) Alexey Melnikov, Mohammad Kordzanganeh, Alexander Alodjants, and Ray-Kuang Lee. Quantum machine learning: from physics to software engineering. Advances in Physics: X, 8(1), 2023. (3) Marcello Benedetti, Erika Lloyd, Stefan Sack, and Mattia Fiorentini. Parameterized Quantum Circuits as Machine Learning Models. Quantum Science and Technology, 4:043001, 2019. (4) Sofiene Jerbi, Casper Gyurik, Simon Marshall, Hans Briegel, and Vedran Dunjko. Parametrized quantum policies for reinforcement learning. Advances in Neural Information Processing Systems, 34:28362–28375, 2021. (5) Edward Farhi and Hartmut Neven. Classification with Quantum Neural Networks on Near Term Processors. arXiv preprint arXiv:1802.06002, 2018. (6) Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. Nature Communications, 9(1), 2018. (7) Mo Kordzanganeh, Pavel Sekatski, Leonid Fedichkin, and Alexey Melnikov. An exponentially-growing family of universal quantum circuits. Machine Learning: Science and Technology, 2023. (8) Andrea Skolik, Sofiene Jerbi, and Vedran Dunjko. Quantum agents in the gym: a variational quantum algorithm for deep Q-learning. Quantum, 6:720, 2022. (9) Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, 2016.
(10) Jonathan Romero and Alan Aspuru-Guzik. Variational quantum generators: Generative adversarial quantum machine learning for continuous distributions. arXiv preprint arXiv:1901.00848, 2019. (11) K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Physical Review A, 98(3), 2018. (12) Michael Perelshtein, Asel Sagingalieva, Karan Pinto, Vishal Shete, Alexey Pakhomchik, et al. Practical application-specific advantage through hybrid quantum computing. arXiv preprint arXiv:2205.04858, 2022. (13) Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, et al. The power of quantum neural networks. Nature Computational Science, 1:403–409, 2021. (14) Matthias C. Caro, Hsin-Yuan Huang, M. Cerezo, Kunal Sharma, Andrew Sornborger, et al. Generalization in quantum machine learning from few training data. Nature Communications, 13(1), 2022. (15) Maria Schuld and Nathan Killoran. Is Quantum Advantage the Right Goal for Quantum Machine Learning? PRX Quantum, 3:030101, 2022. (16) Asel Sagingalieva, Andrii Kurkin, Artem Melnikov, Daniil Kuhmistrov, Michael Perelshtein, et al. Hyperparameter optimization of hybrid quantum neural networks for car classification. arXiv preprint arXiv:2205.04878, 2022. (17) Asel Sagingalieva, Mohammad Kordzanganeh, Nurbolat Kenbayev, Daria Kosichkina, Tatiana Tomashuk, et al. Hybrid quantum neural network for drug response prediction. Cancers, 15(10):2705, 2023. (18) Serge Rainjonneau, Igor Tokarev, Sergei Iudin, Saaketh Rayaprolu, Karan Pinto, et al. Quantum algorithms applied to satellite mission planning for Earth observation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, pages 1–13, 2023. (19) Arsenii Senokosov, Alexander Sedykh, Asel Sagingalieva, and Alexey Melnikov. Quantum machine learning for image classification. arXiv preprint arXiv:2304.09224, 2023. (20) Alexandr Sedykh, Maninadh Podapaka, Asel Sagingalieva, Nikita Smertyak, Karan Pinto, et al. 
Quantum physics-informed neural networks for simulating computational fluid dynamics in complex shapes. arXiv preprint arXiv:2304.11247, 2023. (21) Sean Nassimiha, Peter Dudfield, Jack Kelly, Marc Peter Deisenroth, and So Takao. Short-term Prediction and Filtering of Solar Power Using State-Space Gaussian Processes. arXiv preprint arXiv:2302.00388, 2023. (22) Weijia Yang, Sarah N. Sparrow, and David C. H. Wallom. A generalised multi-factor deep learning electricity load forecasting model for wildfire-prone areas. arXiv preprint arXiv:2304.10686, 2023. (23) Xuetao Jiang, Meiyu Jiang, and Qingguo Zhou. Day-Ahead PV Power Forecasting Based on MSTL-TFT. arXiv preprint arXiv:2301.05911, 2023. (24) Andrea Mari, Thomas R. Bromley, Josh Izaac, Maria Schuld, and Nathan Killoran. Transfer learning in hybrid classical-quantum neural networks. Quantum, 4:340, 2020. (25) Chen Zhao and Xiao-Shan Gao. QDNN: DNN with quantum neural network layers. arXiv preprint arXiv:1912.12660, 2019. (26) Tong Dou, Kaiwei Wang, Zhenwei Zhou, Shilu Yan, and Wei Cui. An unsupervised feature learning for quantum-classical convolutional network with applications to fault detection. In 2021 40th Chinese Control Conference (CCC), pages 6351–6355. IEEE, 2021. (27) Alessandro Sebastianelli, Daniela Alessandra Zaidenberg, Dario Spiller, Bertrand Le Saux, and Silvia Liberata Ullo. On Circuit-based Hybrid Quantum Neural Networks for Remote Sensing Imagery Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15:565–580, 2021. (28) Sayantan Pramanik, M Girish Chandra, CV Sridhar, Aniket Kulkarni, Prabin Sahoo, et al. A Quantum-Classical Hybrid Method for Image Classification and Segmentation. arXiv preprint arXiv:2109.14431, 2021. (29) Mohammad Kordzanganeh, Markus Buchberger, Basil Kyriacou, Maxim Povolotskii, Wilhelm Fischer, Andrii Kurkin, Wilfrid Somogyi, Asel Sagingalieva, Markus Pflitsch, and Alexey Melnikov. 
Benchmarking simulated and physical quantum processing units using quantum and hybrid algorithms. Advanced Quantum Technologies, 6(8):2300043, 2023. (30) Wei Tang and Margaret Martonosi. Cutting Quantum Circuits to Run on Quantum and Classical Platforms. arXiv preprint arXiv:2205.05836, 2022. (31) Patrick Hill and Uma Devi Kanagaratnam. Python Machine Learning Sebastian Rashka. Itnow, 58:64–64, 2016. (32) Robin M. Schmidt. Recurrent Neural Networks (RNNs): A gentle Introduction and Overview. arXiv preprint arXiv:1912.05911, 2019. (33) Alex Sherstinsky. Fundamentals of Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) network. Physica D: Nonlinear Phenomena, 404:132306, 2020. (34) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, et al. Attention Is All You Need. arXiv preprint arXiv:1706.03762, 2017. (35) Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Physical Review A, 103(3), 2021. (36) Mohammad Kordzanganeh, Daria Kosichkina, and Alexey Melnikov. Parallel Hybrid Networks: an interplay between quantum and classical neural networks. arXiv preprint arXiv:2303.03227, 2023. (37) QMware. QMware — The first global quantum cloud. https://qm-ware.com/, 2022. (38) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019. (39) Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, et al. PennyLane: Automatic Differentiation of Hybrid Quantum-Classical Computations. arXiv preprint arXiv:1811.04968, 2022. (40) Bob Coecke and Ross Duncan. Interacting quantum observables: categorical algebra and diagrammatics. New Journal of Physics, 13(4):043016, 2011. (41) John van de Wetering. ZX-calculus for the working quantum computer scientist. 
arXiv preprint arXiv:2012.13966, 2020. (42) Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998. (43) Oksana Berezniuk, Alessio Figalli, Raffaele Ghigliazza, and Kharen Musaelian. A scale-dependent notion of effective dimension. arXiv preprint arXiv:2001.10872, 2020. APPENDIX Quantum Circuit Analysis In this section, we thoroughly examine the parameterized quantum circuit (PQC) represented in Fig. 5(a). This PQC is a 2-qubit toy version222Carrying out such an analysis for the original architecture can be highly computationally expensive in terms of the Fisher information matrix calculation. It is also not as visual as the Fourier accessibility demonstration. Hence, we decided to analyze a simplified version. of the circuit introduced in Section III.3, inheriting its fundamental properties and concepts. Our analysis rests on three different approaches: the ZX-calculus [40] to examine circuit reducibility, the Fisher information [13] to evaluate the trainable parameters and the circuit expressiveness, and the Fourier accessibility [35] to investigate the encoding. .1 ZX-calculus The ZX-calculus is a graphical language, originally based on category theory, that can reduce a quantum circuit to a simpler, equivalent one [40]. It involves transforming the circuit into a ZX graph and applying the ZX-calculus rules introduced in Ref. [41] to reduce the graph to a more fundamental version, which is then mapped back to a quantum circuit. This process yields a new, streamlined circuit that retains the full potential of the trainable layers while avoiding entirely redundant parameters. If a circuit cannot be reduced further, it is called ZX-irreducible. In this study, we present a novel quantum circuit that generates a non-commuting graph which cannot be simplified using the ZX-calculus rules. This is illustrated in Fig. 5(b), where adjacent dots are colored differently.
Notably, the X-spiders (red dots) appear only at the end of the first wire, and the measurement is performed in the Z-basis (green family). Consequently, no pair of dots can commute with each other, and none of them can fuse. This implies that our circuit has no redundant parameters. Remarkably, although the circuit measures only the first qubit, the encoding and variational parameters of the second qubit (and, in the original circuit, of the other four qubits) heavily influence its measurement outcome. This is achieved through the $R_{ZZ}$ gate: the parameter $\alpha_{5}$ introduces pairwise correlations between the $x_{1}$ and $x_{2}$ features (in the original circuit, such correlations are created between every pair of qubits). As a result, we obtain a highly complex approximator that depends on the features in a non-trivial manner. The subsequent sections present a detailed analysis of the qualities of this quantum approximator. .2 Fisher information Any neural network, classical or quantum, can be considered a statistical model. The Fisher information estimates the knowledge gained by a particular parameterization of such a statistical model. In supervised machine learning, we are given a set of data pairs $(\mathbf{x},y)$ from a training subset and a parameterized model $h_{\boldsymbol{\theta}}(\mathbf{x})$ that maps input data $\mathbf{x}$ to output $y$. The family of parameterized models can be fully described by the joint probability of features and targets, $\mathcal{F}:=\{P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta}):\boldsymbol{\theta}\in\boldsymbol{\Theta}\}$, and during training we maximize the likelihood to determine the parameters $\boldsymbol{\hat{\theta}}\in\boldsymbol{\Theta}$ for which the observed data have the highest joint probability.
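Before formalizing this for circuits, the idea can be made concrete on the simplest possible statistical model. For a one-parameter Bernoulli model the Fisher information has the closed form $p(1-p)$, which a Monte-Carlo estimate of the squared score reproduces (a toy example of our own, not the paper's QNN):

```python
import numpy as np

# Closed-form vs. Monte-Carlo Fisher information for a one-parameter
# Bernoulli model p(y=1|theta) = sigmoid(theta).
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fisher(theta):
    """Exact E_y[(d log p / d theta)^2] over y in {0, 1}."""
    p = sigmoid(theta)
    # score: d/dtheta log p(y=1) = 1 - p ;  d/dtheta log p(y=0) = -p
    return p * (1.0 - p) ** 2 + (1.0 - p) * p ** 2   # simplifies to p(1-p)

theta = 0.3
p = sigmoid(theta)
rng = np.random.default_rng(1)
y = rng.random(200_000) < p              # samples drawn from the model
score = np.where(y, 1.0 - p, -p)         # per-sample gradient of log p
F_mc = np.mean(score ** 2)               # Monte-Carlo Fisher estimate
# F_mc ~ p(1-p): maximal near theta = 0 and vanishing in the saturated
# tails, a one-parameter analogue of near-zero Fisher eigenvalues
```

The vanishing of the Fisher information in the saturated regime is the statistical counterpart of the vanishing gradients discussed in the barren-plateau literature.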
We can think of $\mathcal{F}$ as a Riemannian manifold, on which the Fisher information matrix naturally defines a metric [42, 13]: $$F(\boldsymbol{\theta})=\mathbb{E}_{\{(\mathbf{x},\mathbf{y})\sim p\}}[\boldsymbol{\nabla}_{\boldsymbol{\theta}}\log p(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})\cdot\boldsymbol{\nabla}_{\boldsymbol{\theta}}\log p(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})^{T}]$$ (1) According to the findings of Ref. [13], if, as the number of qubits in a model increases, the Fisher information spectrum shows a higher concentration of eigenvalues approaching zero, the model potentially suffers from a barren plateau. Conversely, if the Fisher information spectrum is not concentrated around zero, the model is less likely to experience a barren plateau. Using the Fisher information matrix, we can also describe the model capacity: a quantification of the class of functions a model can fit, in other words, a measure of the model’s complexity. For this purpose, the notion of effective dimension, first introduced in Ref. [43] and modified in Ref.
[13], can be used: $$d_{\gamma,n}(\mathcal{M}_{\Theta}):=2\frac{\log\left(\frac{1}{V_{\Theta}}\int_{\Theta}\sqrt{\det\left(id_{d}+\frac{\gamma n}{2\pi\log n}\hat{F}(\boldsymbol{\theta})\right)}d\boldsymbol{\theta}\right)}{\log\left(\frac{\gamma n}{2\pi\log n}\right)},$$ (2) where $V_{\Theta}:=\int_{\Theta}d\boldsymbol{\theta}$ is the volume of the parameter space, $\gamma$ is a constant factor [13], and $\hat{F}(\boldsymbol{\theta})$ is the normalised Fisher matrix defined as $$\hat{F}_{ij}(\boldsymbol{\theta}):=d\frac{V_{\Theta}}{\int_{\Theta}\operatorname{Tr}(F(\boldsymbol{\theta}))d\boldsymbol{\theta}}F_{ij}(\boldsymbol{\theta}).$$ (3) We calculate the Fisher information for three configurations of the toy circuit: $N=1$ repetition of the last trainable layer ($7$ trainable parameters), and $N=2$ and $N=3$ repetitions ($10$ and $13$ trainable parameters, respectively). For a finite amount of data, the estimate of definition (1) can, with some simplifications, be rewritten as $$F(\boldsymbol{\theta})=\sum_{(\mathbf{x},\mathbf{y})\in X\times Y}\frac{\boldsymbol{\nabla}_{\boldsymbol{\theta}}P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})\cdot\boldsymbol{\nabla}_{\boldsymbol{\theta}}P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})^{T}}{P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})},$$ (4) where the joint probability for a QNN can be defined as the overlap between the model output state and the target basis states: $$P(\mathbf{x},\mathbf{y}|\boldsymbol{\theta})=\operatorname{Tr}(\rho(\boldsymbol{\theta},\mathbf{x})\cdot\mathbf{y}\mathbf{y}^{\dagger}).$$ (5) Following Ref.
[13], we used $1000$ feature samples, each drawn from a Gaussian distribution $\mathbf{x_{i}}\sim\mathcal{N}(\mu=0,\sigma^{2}=1)$, with targets taken as the resultant states $\mathbf{y}\in Y=\{\left|00\right\rangle,\left|01\right\rangle,\left|10\right\rangle,\left|11\right\rangle\}$, i.e., all possible basis states of the 2-qubit circuit. The Fisher information matrix is calculated for $100$ weight realizations drawn uniformly from $[0,2\pi)$. Figure 5(c) shows how the effective dimension and the rank of the average Fisher information matrix depend on the number of trainable layers in the network. As expected, the effective dimension increases with the number of trainable parameters, indicating an increase in expressivity. However, trainability is also an important factor. The spectrum of the Fisher information matrix reflects the square of the gradients [13], and a network with high trainability will have fewer eigenvalues close to zero. In our experiments, the rank of the Fisher information matrix remained constant at $7$ for all three configurations, indicating zero gradients for some of the network parameters in the configurations with $10$ and $13$ trainable parameters. This is further illustrated in Figure 5(d), which shows the distribution of eigenvalues for each configuration. The probability of observing eigenvalues close to zero increases from $36\%$ for one repetition to almost $60\%$ for three. Therefore, our results suggest that using only $N=1$ repetition is the optimal strategy for this setup. .3 Fourier accessibility In Ref. [35], it was demonstrated that any quantum neural network (QNN) can be expressed as a partial Fourier series in the data, whose accessible frequencies are determined by the encoding gates. In a multi-feature setting, the QNN produces a multi-dimensional truncated Fourier series.
The quantum approximator $f(\boldsymbol{\theta},\mathbf{x})$, which is the expectation value of a specific measurement, can be expressed for a two-feature setting as a sum of truncated Fourier series terms: $$f(\boldsymbol{\theta},\mathbf{x})=\sum_{l_{1}=-L_{1}}^{L_{1}}\sum_{l_{2}=-L_{2}}^{L_{2}}2|c_{l_{1},l_{2}}|\cos(l_{1}x_{1}+l_{2}x_{2}-\arg(c_{l_{1},l_{2}})),$$ (6) where $L_{1}$ and $L_{2}$ are the numbers of encoding repetitions for the first and second features. The Fourier coefficients $c_{l_{1},l_{2}}$ of the QNN determine the amplitude and phase of each Fourier term and depend on the variational gates used in the circuit. The amplitude of each coefficient is limited by the fact that the expectation value of any QNN lies in the range $-1$ to $1$; as a result, the maximum amplitude of $c_{l_{1},l_{2}}$ is 1. The accessibility of the Fourier space is evaluated for a family of quantum models with only two features and one encoding repetition: the circuit is fixed, and the weights are randomly sampled many times. The results of this analysis are shown in Figure 5(e), which displays the Fourier accessibility of the network (with $N=1$ repetition) for $1000$ weight sets generated randomly in the range $[0,2\pi)$. The Fourier coefficients for a series with nine terms are presented, but due to the symmetry property $c_{l_{1},l_{2}}=c_{-l_{1},-l_{2}}$, only six coefficients are shown. It can be observed that the set characterizing the possible values of the coefficients does not degenerate into a point for any $l_{1}$ and $l_{2}$. Five of the nine coefficients have amplitudes of roughly $0.5$ or greater, while the other four have amplitudes of about $0.25$. Phase accessibility is also essential: the phases can be arbitrary, except for that of $c_{0,0}$, which remains fixed. This Fourier accessibility is comparable to that observed in the experiments of Ref. [35], which we consider sufficient.
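The truncation in Eq. (6) is easy to verify numerically in the smallest case: a single qubit with one $R_Z$ encoding produces only the frequencies $l\in\{-1,0,+1\}$, so its discrete Fourier transform has no higher harmonics. The following is our own minimal single-feature sketch, not the paper's two-feature circuit:

```python
import numpy as np

# One RZ encoding on one qubit: f(x) is a degree-1 trigonometric polynomial.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
def rz(t): return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
def rx(t): return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                            [-1j * np.sin(t / 2), np.cos(t / 2)]])
def ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                            [np.sin(t / 2), np.cos(t / 2)]])
Z = np.diag([1.0, -1.0])

def f(x, t1=0.7, t2=1.3):
    """<Z> after RX(t2) RZ(x) RY(t1) H |0>, with variational angles t1, t2."""
    psi = rx(t2) @ rz(x) @ ry(t1) @ H @ np.array([1.0, 0.0])
    return float(np.real(np.conj(psi) @ Z @ psi))

n = 8
xs = 2.0 * np.pi * np.arange(n) / n
c = np.fft.fft([f(x) for x in xs]) / n   # Fourier coefficients c_l
# indices 0, 1 and n-1 carry l = 0, +1, -1; entries 2..n-2 vanish identically
```

Sampling at 8 points already recovers the band-limited spectrum exactly; changing the variational angles moves the surviving coefficients around, which is the one-dimensional analogue of the accessibility sets in Figure 5(e).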
Abstract The formulation of hybrid crossing angle schemes has been a recent development of the TESLA collision geometry debate. Here we report on two such schemes, characterised by either a small vertical or a small horizontal beam crossing angle. LAL 04-114 October 2004 Alternative IR geometries for TESLA with a small crossing angle111Contributed to the machine-detector interface session of LCWS04. R. Appleby222r.b.appleby@dl.ac.uk, D. Angal-Kalinin ASTeC, Daresbury Laboratory, Warrington, WA4 4AD, England P. Bambade, B. Mouton LAL, IN2P3-CNRS et Université de Paris-Sud, Bât. 200, BP 34, 91898 Orsay Cedex, France O. Napoly, J. Payet DAPNIA-SACM, CEA Saclay, 91191 Gif/Yvette Cedex, France 1 Introduction The specification of the International Linear Collider (ILC) states the need for two separate interaction regions (IRs) with comparable performances and physics potential for e${}^{+}$e${}^{-}$-collisions. The time structure of the bunch trains does not require colliding the beams with a crossing angle in the case of the superconducting technology, which has now been chosen for the accelerator, and there is hence flexibility in the design. It is believed that IRs both with and without a crossing angle can lead to acceptable designs. There are however specific advantages and disadvantages in each, for the machine as well as for some aspects of the physics potential [1]. Moreover, if a crossing angle is used, its magnitude is an important parameter to optimise. An additional consideration is the requirement to enable $\gamma\gamma$ collisions as an option at one IR in the future. A large (still to be defined) crossing angle will most certainly be needed for the corresponding IR, while care must be taken not to compromise its e${}^{+}$e${}^{-}$ capabilities. A balanced scenario which could be attractive in this context would be to use a smaller or even null crossing angle at the other IR.
In the TESLA technical design report [2] a head-on collision geometry was actually specified, but the extraction of the beamstrahlung photon flux and spent beam was found to be problematic during the evaluation conducted in 2003 by the ILC TRC, which even highlighted it as a level 2 ranking item in its final report [3]. In this paper, we describe two new so-called hybrid schemes featuring small $\mathcal{O}$(10${}^{-3}$ rad) crossing angles. They are attempts to maintain the advantages of the head-on geometry while resolving some of its weaknesses. The first scheme, which uses a small vertical crossing angle, was originally suggested by Brinkmann [4] to reduce the power deposition from beamstrahlung on the septum blade used in the extraction of the spent beam. In the solution presented in section 2, this vertical crossing angle is combined with modified optics in the final focus to improve the chromatic properties in the transport of the low energy tail of the spent beam and thereby reduce losses in the extraction channel. The second scheme, which uses a horizontal crossing angle, was first developed in the context of CLIC [6]. The solution presented in section 3 is an adaptation to the TESLA project. Both schemes are discussed in this paper in the spirit of initial proofs of principle. Investigations were only carried out at a centre-of-mass energy of 500 GeV, and $L^{*}$, the distance between the last quadrupole and the interaction point, was taken to be 4.1m. Both geometries need further work and development. This will be pursued in the coming months to fully assess their feasibility for the IR which will not later be upgraded to $\gamma\gamma$ collisions, for centre-of-mass energies up to 1 TeV. 2 The small vertical crossing angle scheme The TESLA extraction scheme with a head-on collision described in the TDR [2] suffers from the problems of septum irradiation [5] and the loss of low energy tail particles.
The total power radiated on the septum blade was found to be unacceptable [5], and higher than the estimate in the TESLA TDR, when calculated for a realistic beam using start-to-end simulations. Analysis of the transport of the post-IP beam down the extraction line also revealed that the loss of charged particles can reach unacceptable levels in the septum blade region. This loss is a consequence of a beam size increase resulting from overfocusing of the low energy disrupted beam tail by the strong final doublet. The solution to avoid these problems is twofold [4]. The septum irradiation problem is solved by introducing a small vertical crossing angle to shine the beamstrahlung away from the septum blade. The required vertical crossing angle can be estimated from the vertical photon distribution [5] and the upper limit considered reasonable for the power deposition. An angle of $\sim$0.3mrad should be sufficient. The overfocusing of tail particles is solved by splitting the strong final doublet into a quadruplet. Figure 1 shows the disrupted beam size for the low energy tail particles at the magnetic septum location. This septum is located almost 50m away from the IP. These plots were obtained by tracking the disrupted beam along the extraction line using an NLC version of DIMAD [7] which performs tracking calculations correct to all orders in the energy deviation $\delta$. This ensures the correct analytic treatment of the low energy tail particles. The reduction in disrupted beam size with quadruplet optics can be expected to reduce the losses along the extraction line. The quadruplet optics used to reduce particle losses in the extraction line must satisfy the requirements for the incoming beam and must have good chromatic properties. The final focus system with local chromaticity correction [9] has been modified to include a final quadruplet and the resulting lattice has been optimised to second order to achieve good chromatic bandwidth. 
The dipole locations and beam optics have been optimised to keep the horizontal emittance growth due to synchrotron radiation below $2.5\times 10^{-14}\,\mathrm{m.rad}$. The optical functions of the final focus system and the beam sizes and luminosity as a function of energy spread are shown in the left and right hand plots of figure 2, respectively. The results with the quadruplet are comparable with those obtained using a doublet [9]. While this scheme resolves the problems found in the extraction of the beamstrahlung photon flux and spent beam, other requirements of the head-on geometry, such as the need for strong electrostatic separators, remain. It will also be required to properly mask the synchrotron radiation generated by the off-axis beam in the outgoing quadrupoles to minimise backshining into the detector. The outgoing beams will moreover have offsets in the beam position monitors, which may increase the complexity of the IP feedback. Furthermore, a strong crab-crossing correction will be required, something which is not needed in the head-on scheme. 3 The small horizontal crossing angle scheme The second proposed IR geometry is an adaptation of a scheme studied for CLIC [6]. It has a small $\sim$2mrad horizontal crossing angle and uses two different kinds of quadrupoles for the final doublet: a large bore superconducting r=24mm magnet for the last defocusing quadrupole (QD) and a conventional r=7mm magnet for the next to last focussing element (QF). In this way, the outgoing beam goes through QD horizontally off-axis by about 1mm, which further deflects it away from the incoming beam. The optics for the incoming beam is similar to that described in [9]. In the fitted solution, the transport matrix element between the interaction point and the exit of QD is R${}_{22}\simeq 3$, resulting in a total angle of $\sim$6mrad between the incoming and outgoing beam lines. QD has a length of 1m and a 1.5m drift space is kept between QD and QF.
In this way both the outgoing beamstrahlung cone and disrupted charged beam are far enough away from the incoming beam in QF ($\leq$6mm beyond the vacuum chamber at its entrance) so that they can be safely steered in between the pole tips on that side of the magnet. The set-up is sketched in figure 3 together with a schematic of the relevant apertures for both in and outgoing beams up to 10m from the IP (see figure 4). The envelope for the beamstrahlung cone is represented by the dashed lines and corresponds to a $\pm$0.5mrad horizontal angular spread around the 2mrad crossing angle, which is enough to contain most of the emitted power in realistic beam conditions [8]. The tracing of a representative set of particles from the low-energy tail of the disrupted outgoing beam is also depicted in the schematic to illustrate the clearance at the exit of QD. An initial estimate of the fraction of outgoing beam power deposited in QD is shown in figure 5 where it is also compared with the same fraction for the head-on scheme (in the latter case in the entire doublet). Although this study was limited by statistics (the disrupted beam was represented by only 640000 macroparticles) it can be seen that less than 5 $\times 10^{-7}$ of the beam, corresponding to $\simeq$5W at nominal intensity, is deposited in either scheme after passing through the magnetic element(s) common with the incoming beam. This exceeds the 3 W/m limit required to keep the cooling of the superconducting magnet reasonable. Moreover, for the same safety margin to be assured in the crossing angle as in the head-on scheme when taking into account realistic beam conditions, it can be seen from the plots that the crossing angle would have to be limited to $\sim$1.6mrad. A more comprehensive study with more statistics and with a suitable optimisation of both the apertures and lengths is needed to refine these numbers and determine the optimal magnitude for the crossing angle and feasibility of the scheme. 
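Two of the figures quoted in this section can be checked with back-of-envelope arithmetic. The sketch below is our own estimate, assuming nominal TESLA TDR parameters (2820 bunches of 2×10¹⁰ particles at 250 GeV per beam, 5 Hz train repetition rate, spot sizes σx* ≈ 553 nm and σz ≈ 300 μm); the paper's own luminosity numbers come from Guinea-Pig simulation rather than this geometric formula.

```python
import math

# 1) power deposited in QD: a fraction 5e-7 of the beam power
E_J = 250e9 * 1.602e-19                  # 250 GeV per particle, in joules
beam_power = E_J * 2.0e10 * 2820 * 5.0   # ~11.3 MW per beam
deposited = 5e-7 * beam_power            # ~5.6 W, matching the ~5 W above

# 2) geometric luminosity reduction for a full crossing angle theta_c
#    without crab crossing (flat beams, hourglass effect neglected)
def lumi_reduction(theta_c, sigma_x=553e-9, sigma_z=300e-6):
    phi = 0.5 * theta_c * sigma_z / sigma_x      # Piwinski angle
    return 1.0 / math.sqrt(1.0 + phi ** 2)

loss_2mrad = 1.0 - lumi_reduction(2e-3)      # ~12%, close to the quoted ~15%
factor_20mrad = 1.0 / lumi_reduction(20e-3)  # ~5.5, the "factor of about 5"
```

Both estimates land close to the simulated values, which is a useful sanity check on the orders of magnitude.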
The rationale for an IR geometry with such a small horizontal crossing angle is as follows. There is no need to develop and operate a very compact high gradient final doublet of quadrupoles (either superconducting or with permanent magnets) as is required for large crossing angles (e.g. 20mrad). Strong electrostatic separators, required for the head-on geometry, are not needed. Only the last quadrupole, QD, is common to both beams, instead of the entire QD+QF doublet as in the head-on geometry. This should give a bit more freedom both in the design of the optics and operationally. Detrimental effects on the physics program (e.g. reduced hermeticity in the forward region, complications from the solenoid and beam axes not being aligned) are negligible. With a horizontal crossing angle of $\sim$2mrad and for nominal TESLA parameters [2], only $\sim$15% of the luminosity is lost without using crab-crossing (see Figure 6, obtained using the Guinea-pig simulation [8]), compared to a factor of about 5 for a 20mrad crossing angle. Correction of this 15% loss may be possible without dedicated cavities, by exploiting the angular dispersion required at the collision point in the design of the final focus optics [9] to enable local chromaticity correction. Diagnostics of the spent beam should be easier than in the head-on scheme, although it remains to be checked that a polarimeter and an energy spectrometer can both be designed with suitable performances in the outgoing beam line. 4 Conclusion Two options have been suggested to preserve the advantages of the head-on collision scheme proposed in the TESLA design. In the first, a small vertical crossing angle ($\sim$0.3mrad) at the IP can alleviate the problem of beamstrahlung heating of the septum blade. To reduce the low energy tail particle losses, the strong final doublet can be replaced by a quadruplet. A final focus system with good chromatic properties can be designed with such a quadruplet.
However, as for the head-on scheme, R&D on electrostatic separators will be needed, especially for the upgrade to 1 TeV. Moreover, a strong crab-crossing correction is required to maintain the luminosity in this scheme. The second option uses a small ($\sim$2mrad) horizontal crossing angle. This scheme is attractive as it does not need electrostatic separators and requires only a very modest crab-crossing correction, which moreover may be achieved without special cavities, by exploiting finite dispersion at the IP. Many details of both designs still need to be worked out, including optimising the magnitude of the crossing angle in the second scheme, confirming that power losses in the extraction channel are tolerable and studying whether suitable post-IP diagnostics can be included. References [1] P. Bambade, “Review of physics and background implications from a finite crossing-angle”, Contributed to the same session at this conference [2] R. Brinkmann et al, “The TESLA technical design report” [3] G. Loew et al, “Second Report of the International Linear Collider Technical Review Committee” [4] R. Brinkmann, “A few thoughts on improving the TESLA beam extraction scheme”, DESY seminar 3/12/02, http://tesla.desy.de/tesla-apdg/docs/extr_line_sem_031202/brinkmann.pdf [5] A. Seryi, “Beamstrahlung photon load on the TESLA extraction septum blade”, LCC-Note-0104 (2002). K. Büßer, “Beamstrahlung on the septum blade”, ECFA/DESY Workshop, Amsterdam 01/04/03 [6] O. Napoly, “Design for a final focus system for CLIC in the multi-bunch regime”, CEA/DAPNIA/SEA-97-17 [7] P. Tenenbaum et al, “Use of simulation programs for the modelling of the next linear collider”, Proceedings of the 1999 IEEE Part. Acc. Conf. (PAC99), New York City, NY (1999), SLAC-PUB-8136 (1999) [8] O. Napoly and D. Schulte, “Beam-Beam effects at the TESLA linear collider”, CEA/DAPNIA/SEA-01-04, DESY TESLA-01-15 [9] J. Payet and O.
Napoly, “New design of the TESLA interaction region with L*=5m”, Proceedings of PAC2003
Effective Potential for Polyakov Loops in Lattice QCD Y. Nemoto [RBC collaboration] We thank RIKEN, Brookhaven National Laboratory and the U.S. Department of Energy for providing the facilities essential for the completion of this work. RIKEN BNL Research Center, Brookhaven National Laboratory, NY 11973, USA Abstract Toward the derivation of an effective theory for Polyakov loops in lattice QCD, we examine Polyakov loop correlation functions using the multi-level algorithm recently developed by Lüscher and Weisz. 1 Introduction Precise measurement of Polyakov loop correlations is important for finite-temperature physics. First of all, heavy-quark potentials at finite temperature are obtained by measuring two-Polyakov-loop correlation functions. It is already well known that the $Q\bar{Q}$ potential below and above the critical temperature ($T_{C}$) can be obtained from this measurement. Owing to nonperturbative effects, however, some quantities at intermediate temperatures are still poorly known. For example, the $Q\bar{Q}$ potential at finite temperature is generally written as $V/T=-e/(RT)^{d}\exp(-\mu R)$, where $T$ is the temperature, $R$ the distance between the quarks, and $e,d,\mu$ are parameters. At very high temperature, perturbation theory predicts $d=2$, while just above $T_{C}$ lattice simulations show deviations from $d=2$ [1]. In relation to such nonperturbative behavior at finite temperature, our main interest here is to derive an effective theory for Polyakov loops from lattice QCD. Recently a new Polyakov loop model was proposed [2] for the deconfining phase transition in $SU(N)$ gauge theory. It conjectures a relationship between the Polyakov loops and the pressure. The usual Polyakov loop $\ell_{1}$ transforms under a global $Z(N)$ transformation as $\ell_{1}\to e^{i\phi}\ell_{1}$, with $\phi=2\pi j/N$ ($j=1,\dots,N-1$). This means that $\ell_{1}$ transforms as a field with charge one. 
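The center-charge assignments are pure trace algebra and can be verified in a few lines: under $L\to zL$ with a center element $z=e^{2\pi ij/N}$, ${\rm tr}\,L/N$ picks up one factor of $z$, while the charge-two combination of Eq. (1) below picks up $z^{2}$. A minimal numerical sketch, with a random unitary standing in for an actual Polyakov line:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 3
# Random SU(3) stand-in for a Polyakov line: QR-orthonormalize, fix the determinant.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Q, _ = np.linalg.qr(A)
L = Q / np.linalg.det(Q) ** (1.0 / N)

def l1(L):
    # charge-one Polyakov loop
    return np.trace(L) / N

def l2(L):
    # charge-two combination tr(L^2)/N - (tr L)^2/N^2, as in Eq. (1)
    return np.trace(L @ L) / N - np.trace(L) ** 2 / N**2

z = np.exp(2j * np.pi / N)  # center element, j = 1
# charge one: l1 -> z * l1 ; charge two: l2 -> z^2 * l2
print(np.allclose(l1(z * L), z * l1(L)), np.allclose(l2(z * L), z**2 * l2(L)))
```

The subtraction in $\ell_{2}$ removes the charge-mixing piece, so the combination transforms with a pure factor $z^{2}$.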
On the other hand, one can consider Polyakov loops with higher charge under a global $Z(N)$ transformation. For example, the charge-two Polyakov loop is defined as $$\ell_{2}=\frac{1}{N}{\rm tr}L^{2}-\frac{1}{N^{2}}({\rm tr}L)^{2},$$ (1) where $L$ is an ordered link product in the temporal direction, $L(x)={\rm P}\exp(ig\int_{0}^{\beta}A_{0}(x,\tau)d\tau)$. Polyakov loops with higher charge might affect non-universal behavior in the model. It is therefore interesting to measure the relevant new Polyakov loop correlations on the lattice without any phenomenological assumptions. In this case, however, we need Polyakov loops of relatively large extent in the temporal direction, which might be inaccessible with the ordinary way of computing Polyakov loops. Thus we are led to try the algorithm recently developed by Lüscher and Weisz, called the multi-level method [3], to measure various Polyakov loop correlation functions. This method can measure Polyakov loop correlations quite precisely even for large temporal extent and/or large distance between the Polyakov loops, as shown below. 2 Precise measurement of Polyakov loop correlation One way to obtain Polyakov loop correlation functions as precisely as possible is the method of improved estimators. When measuring Polyakov loops, using a suitable average of the time-like links instead of the time-like link variables themselves reduces the error of the Polyakov loop without biasing the expectation value. The multi-level algorithm used here is even more efficient for measuring Polyakov loop correlations. We explain this algorithm briefly in the following. Let us consider the two-Polyakov-loop correlation $\langle P(n)P^{\dagger}(n+r)\rangle$, where $n$ is a spatial coordinate and $r$ is the distance between the two quarks. 
First we define a two-time-like-link operator, $$T(t_{0},r)_{\alpha\beta\gamma\delta}=U_{4}(t_{0},n)_{\alpha\beta}U_{4}^{\dagger}(t_{0},n+r)_{\gamma\delta}.$$ (2) The essence of the multi-level algorithm is similar to that of the multi-hit method, i.e., to take an average of $T$ instead of $T$ itself in a sublattice of the time slice, $$[T(t_{0},r)_{\alpha\beta\gamma\delta}]=\frac{1}{Z_{s}}\int D[U]_{s}\,T(t_{0},r)e^{-S[U]_{s}}.$$ (3) This is shown schematically in Fig.1. Here $s$ denotes the sublattice which contains the operator $T$ and is shown as the shaded area in Fig.1. The Polyakov loop correlation function is then expressed as $$\langle P(n)P^{\dagger}(n+r)\rangle=\langle[T(0,r)T(a,r)][T(2a,r)T(3a,r)]\cdots[T(T-2a,r)T(T-a,r)]\rangle$$ (4) for a single layer, or $$\langle P(n)P^{\dagger}(n+r)\rangle=\langle\left[[T(0,r)T(a,r)][T(2a,r)T(3a,r)]\right]\cdots\left[\cdots[T(T-2a,r)T(T-a,r)]\right]\rangle$$ (5) for two layers. These equations are also shown schematically in Fig.1. In this method there are essentially two new parameters, which we call $n_{\rm ms}$ and $n_{\rm up}$. $n_{\rm ms}$ is the number of measurements made for the time-slice average $$[T(t_{0},r)T(t_{0}+a,r)]=\frac{1}{n_{\rm ms}}\sum_{i}^{n_{\rm ms}}T_{i}(t_{0},r)T_{i}(t_{0}+a,r),$$ (6) and $n_{\rm up}$ is the number of time-slice updates between measurements. We determine $n_{\rm ms}$ so that the signal-to-noise (s/n) ratio is equal to unity. The statistical errors are then reduced exponentially as the distance between the Polyakov loops increases [3]. We show some test results of the algorithm in the following. All the simulations are done on a lattice of size $12^{4}$ at $\beta=5.7$. We use the standard Wilson action and no gauge fixing. 
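The error reduction behind Eq. (4) can be illustrated with a toy model stripped of all gauge structure: when estimating a product of $T$ fluctuating "slice" factors, averaging each factor over $n_{\rm ms}$ sub-measurements before multiplying gives a far smaller variance than multiplying single measurements, and the gain compounds with every factor. A hedged numpy sketch (all numbers illustrative; this is a statistics demonstration, not a lattice simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 8          # number of "time slices" (factors in the product)
n_cfg = 200    # outer configurations
n_ms = 50      # sub-measurements per slice (the multi-level average)

# Toy model: each slice contributes a factor 1 + noise; the true product is 1.
def naive_estimates():
    # one measurement per slice per configuration, then the product
    x = 1.0 + 0.5 * rng.standard_normal((n_cfg, T))
    return x.prod(axis=1)

def multilevel_estimates():
    # average n_ms sub-measurements per slice BEFORE taking the product
    x = 1.0 + 0.5 * rng.standard_normal((n_cfg, T, n_ms))
    return x.mean(axis=2).prod(axis=1)

naive = naive_estimates()
ml = multilevel_estimates()
print(f"naive std = {naive.std():.3f}, multi-level std = {ml.std():.3f}")
```

Since each sub-average shrinks the per-factor variance by $1/n_{\rm ms}$, the variance of the product of averages falls by roughly $n_{\rm ms}^{T}$ relative to the product of raw measurements, mimicking the exponential error reduction of the real algorithm.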
Fig.2 shows the $n_{\rm ms}$ dependence of a two-Polyakov-loop correlation function for a particular gauge configuration. In the figure, the spatial distance between the two Polyakov loops is set to $4$ in lattice units and $n_{\rm up}=1$ is fixed. In the single-layer case, the relative computation time on the horizontal axis is proportional to $n_{\rm ms}$, i.e., the leftmost point corresponds to $n_{\rm ms}=10$ and the rightmost to $n_{\rm ms}=1000$. In the two-layer case, we have chosen the parameters $n_{\rm ms}=10$ and $n_{\rm up}=16$ for the outer layer and $n_{\rm ms}=5\sim 150$ and $n_{\rm up}=1$ for the inner layer. The usual measurement of the Polyakov loop correlation without any improvement corresponds to $n_{\rm ms}=1$. The graph shows that the values approach the precise one as $n_{\rm ms}$ increases. We have confirmed that our calculation agrees with that of [3] within statistical errors. We have also calculated the three-Polyakov-loop correlation shown in Fig.3. As an example of the results we show the $n_{\rm ms}$ dependence of the quantity $\langle|P(n+4x)P(n+4y)P(n+4z)|\rangle$ in Fig.4. Increasing the value of $n_{\rm ms}$ confirms the validity of the algorithm in this case as well. Finally we comment on our trial computation of a variant of the multi-level method. Measuring many-point Polyakov loop correlations with the above method requires a large amount of memory. One possible solution is to apply a method used in the calculation of quark propagators for disconnected diagrams. Let us consider for simplicity the two-Polyakov-loop correlation. 
The modified method is to take the summation over space of the two-time-like-link operators before taking their tensor product, $$\langle P(n)P^{\dagger}(n+r)\rangle=\frac{1}{N_{\rm conf}}\{[[K(0)][K(2a)]]\cdots[[K(T-4a)][K(T-2a)]]\},$$ (8) $$K(t_{0})=\sum_{\rm space}T(t_{0},r)T(t_{0}+a,r).$$ If the number of configurations and $n_{\rm ms}$ are sufficiently large, this method gives the same results as the conventional multi-level method, because the gauge non-invariant terms cancel out. Our trial simulation is, however, much noisier than the conventional one due to the finite number of configurations and finite $n_{\rm ms}$. So unfortunately our present results imply that this method is less effective. In summary, toward the construction of an effective theory for Polyakov loops, we have tried a new algorithm called the multi-level method [3]. We have reproduced the two-Polyakov-loop correlation of [3] and confirmed the validity of the method for the computation of the three-Polyakov-loop correlation. References [1] O. Kaczmarek, F. Karsch, E. Laermann and M. Lütgemeier, Phys. Rev. D 62 (2000) 034021. [2] R.D. Pisarski, Nucl. Phys. A 702 (2002) 151; hep-ph/0203271; A. Dumitru and R.D. Pisarski, hep-ph/0204233. [3] M. Lüscher and P. Weisz, JHEP 0109 (2001) 010.
Interfering pathways for photon blockade in cavity QED with one and two qubits K. Hou MOE Key Laboratory of Advanced Micro-Structured Materials, School of Physics Science and Engineering, Tongji University, Shanghai, China 200092 Department of Mathematics and Physics, Anhui JianZhu University, Hefei 230601, China    C. J. Zhu cjzhu@tongji.edu.cn MOE Key Laboratory of Advanced Micro-Structured Materials, School of Physics Science and Engineering, Tongji University, Shanghai, China 200092    Y. P. Yang yang_yaping@tongji.edu.cn MOE Key Laboratory of Advanced Micro-Structured Materials, School of Physics Science and Engineering, Tongji University, Shanghai, China 200092    G. S. Agarwal girish.agarwal@tamu.edu Institute for Quantum Science and Engineering, and Department of Biological and Agricultural Engineering, Texas A&M University, College Station, Texas 77843, USA Abstract We theoretically study the quantum-interference-induced photon blockade phenomenon in atom-cavity QED systems, where destructive interference between two different transition pathways prohibits the two-photon excitation. We first explore the single-atom cavity QED system with either an atom or a cavity drive. We show that the cavity-driven case leads to the quantum-interference-induced photon blockade under a specific condition, but the atom-driven case cannot produce such interference-induced photon blockade. We then investigate the two-atom case, and find that an additional transition pathway appears in the atom-driven scheme. We show that this additional transition pathway results in the quantum-interference-induced photon blockade only if the atomic resonant frequency differs from the cavity mode frequency. Moreover, in this case, the condition for realizing the interference-induced photon blockade is independent of the system’s intrinsic parameters, which can be used to generate an antibunched photon source in both the weak and strong coupling regimes. 
I Introduction The phenomenon of quantum interference (QI) Ficek and Swain (2005) occurs between different photon transition pathways, leading to the inhibition or attenuation of absorption by destructive interference Agarwal. With its fascinating physical mechanism, many novel quantum effects and corresponding applications have emerged, e.g., electromagnetically induced transparency (EIT) Fleischhauer et al. (2005), coherent population trapping (CPT) Yun et al. (2017), lasing without inversion Scully et al. (1989), light with ultraslow group velocity Baba (2008), and so on. Specifically, quantum interference between two photons has been observed in experiments Beugnon et al. (2006); Araneda et al. (2018). Moreover, the QI effect can result in a new type of photon blockade (PB) phenomenon in cavity QED systems Bamba et al. (2011). The traditional PB results from the anharmonicity of the Jaynes-Cummings ladder in atom-cavity QED systems Imamoḡlu et al. (1997), where the absorption of the first photon blocks the transmission of the second photon. As a result, one can observe an orderly output of photons one by one, with strong photon antibunching and sub-Poissonian statistics. Up to now, the traditional PB has been experimentally demonstrated in various quantum systems, including atoms Birnbaum et al. (2005); Dayan et al. (2008), quantum dots Faraon et al. (2008) and ion cavity quantum electrodynamics systems Debnath et al. (2018), as well as circuit-QED systems Lang et al. (2011); Hoffman et al. (2011). To block the transmission of the second photon, the strong-coupling limit is required in the traditional PB, which is challenging in experiments, especially for semiconductor cavity QED systems. In addition, with current experimental techniques, the second-order correlation function $g^{(2)}$, which is a signature of PB, cannot be made close to zero, so perfectly antibunched photons are not achieved. 
This is because the two-photon and multiphoton excitations cannot be inhibited due to the energy broadening. In light of these disadvantages, a novel physical mechanism based on quantum interference was proposed to generate a strong PB phenomenon Liew and Savona (2010). In the literature, this quantum-interference-induced PB is known as the unconventional photon blockade (UPB). In general, there exist two methods to open an additional transition pathway for generating the quantum interference effect. One is to add an auxiliary cavity with coherent mode coupling Liew and Savona (2010); Ferretti et al. (2013); Xu and Li (2014a), while the other is to add an auxiliary driving field Tang et al. (2015); Flayac and Savona (2017). To date, the UPB has been theoretically investigated in various systems, such as two tunnel-coupled cavity systems Kyriienko et al. (2014); Xu and Li (2014b); Zhou et al. (2015); Gerace and Savona (2014); Shen et al. (2015), quantum dots Majumdar et al. (2012); Zhang et al. (2014); Cygorek et al. (2017), quantum wells Kyriienko et al. (2014), nanomechanical resonators Xu et al. (2016), optomechanical systems Wang et al. (2015); Sarma and Sarma (2018), optical parametric amplifier systems Sarma and Sarma (2017), as well as hybrid quantum plasmonic systems Zhao (2018). Recently, the UPB was experimentally demonstrated in two coupled superconducting resonators Vaneph et al. (2018) and in a single quantum dot cavity QED system driven by two orthogonally polarized modes Snijders et al. (2018). In this paper, we theoretically investigate the quantum-interference-induced photon blockade in atom-cavity QED systems via an atom or a cavity drive. Using the amplitude method, we obtain the condition for observing the interference-induced photon blockade in the atom-cavity QED system via a cavity drive, i.e., equation (5). To the best of our knowledge, this condition has not yet been reported in the literature. In contrast to the work of Bamba et al. 
(2011), we show that the interference-induced photon blockade can be realized without requiring weak nonlinearity or an auxiliary pumping field. Moreover, we show that additional transition pathways appear when another atom is added to the cavity. These additional transition pathways result in the interference-induced photon blockade even under an atom drive. Therefore, strongly antibunched photons can be obtained, with the second-order correlation function smaller than unity. The paper is arranged as follows. In section II, we first study the single-atom case via a cavity drive or an atom drive. We show that destructive interference can only be observed in the case of the cavity drive, because it arises between transition pathways involving an odd number of photons; there is no interference in the case of the atom drive. This is counterintuitive, since one generally expects the atom drive and the cavity drive to yield similar physics. In section III, we study the two-atom case, again via a cavity drive or an atom drive. We show that the odd-photon-transition-induced destructive interference in the cavity-driven case is improved by the collective coupling enhancement, leading to a significant improvement of the photon blockade phenomenon. In the atom-driven case, surprisingly, we find that an additional transition pathway appears in the presence of the second atom, yielding interference between transitions with an even number of photons. If the atomic resonant frequency is the same as the cavity mode frequency, we show that this interference is constructive, so that the photon blockade cannot be observed. In section IV, we show that this even-photon interference can be destructive when the atomic resonant frequency is not the same as the cavity mode frequency. As a result, the two transition pathways become distinguishable, leading to the interference-induced photon blockade. 
We also show that the condition for realizing this even-photon-interference-induced PB in the atom-driven scheme is insensitive to the atom-cavity coupling strength. Thus, one can increase the coupling strength to obtain a reasonable photon number with a strong PB effect. II Single-atom cavity QED system First, we consider a typical single-atom cavity QED system as shown in Fig. 1(a). In the frame rotating at the driving frequency $\omega_{d}$, the system Hamiltonian is written as Hamsen et al. (2017) (setting $\hbar=1$) $$H_{1}=-\Delta_{c}a^{\dagger}a-\Delta_{a}\sigma^{\dagger}\sigma+g(\sigma a^{\dagger}+\sigma^{\dagger}a)+H_{d},$$ (1) where $H_{d}$ is the driving term. If the cavity (atom) is driven by a coherent field, $H_{d}=\eta(a+a^{\dagger})$ [$H_{d}=\eta(\sigma+\sigma^{\dagger})$] with $\eta$ being the driving strength. Here, $\Delta_{c}=\omega_{d}-\omega_{c}$ and $\Delta_{a}=\omega_{d}-\omega_{a}$ are the detunings for the cavity and atom, respectively. $a$ ($a^{\dagger}$) is the annihilation (creation) operator of the cavity mode with resonant frequency $\omega_{c}$. $\sigma$ ($\sigma^{\dagger}$) denotes the lowering (raising) operator of the two-level atom with resonant transition frequency $\omega_{a}$. $g$ is the atom-cavity coupling strength. The dynamics of this open quantum system is governed by the master equation $$\frac{\partial\rho}{\partial t}=-i[H,\rho]+\mathcal{L}_{\kappa}\rho+\mathcal{L}_{\gamma}\rho,$$ (2) where $\mathcal{L}_{\kappa}\rho=\kappa(2a\rho a^{\dagger}-a^{\dagger}a\rho-\rho a^{\dagger}a)$ and $\mathcal{L}_{\gamma}\rho=\gamma(2\sigma\rho\sigma^{\dagger}-\sigma^{\dagger}\sigma\rho-\rho\sigma^{\dagger}\sigma)$ describe the dissipation of the cavity and atom with decay rates $\kappa$ and $\gamma$, respectively. Numerically solving Eq. 
(2), one can obtain the second-order photon-photon correlation function $g^{(2)}(0)=\langle a^{\dagger}a^{\dagger}aa\rangle/\langle a^{\dagger}a\rangle^{2}$ in the steady state. In general, the value of $g^{(2)}(0)$ characterizes the probability of detecting two photons at the same time. If $g^{(2)}(0)>1$, photons tend to be detected in pairs (bunching). If $g^{(2)}(0)<1$, however, the output photons are antibunched, i.e., photons are detected one by one. In the following, we discuss this phenomenon in detail. Cavity-driven scheme. - We consider that the cavity is driven by a coherent field and assume $\Delta_{a}=\Delta_{c}=0$ for simplicity. The transition pathways of this cavity-driven scheme are shown in Fig. 1(b), where $|\alpha,n\rangle$ denotes the states in the atom-cavity product space. Obviously, there exist two transition pathways for the transition from $|g,1\rangle$ to $|g,2\rangle$. The first one is the direct $|g,1\rangle\overset{\sqrt{2}\eta}{\longrightarrow}|g,2\rangle$ transition, and the second one is the $|g,1\rangle\overset{g}{\longleftrightarrow}|e,0\rangle\overset{\eta}{\longrightarrow}|e,1\rangle\overset{\sqrt{2}g}{\longleftrightarrow}|g,2\rangle$ transition. If destructive interference takes place between these two transition pathways, the two-photon excitation is not allowed and the probability of detecting the state $|g,2\rangle$ is zero. One can then achieve the quantum-interference-induced photon blockade phenomenon, leading to an output of antibunched photons. To obtain the condition for this destructive interference, we assume the system wavefunction $\Psi\approx\sum_{n=0}^{2}C_{g,n}|g,n\rangle+\sum_{n=0}^{1}C_{e,n}|e,n\rangle$, where $|C_{\alpha,n}|^{2}$ ($\alpha=g,e$) is the probability of detecting the state $|\alpha,n\rangle$. Here, other states in larger photon-number spaces have been neglected, since the driving field is not strong enough to excite them. 
Then, the dynamical equations for the amplitudes of each state can be written as $$i\dot{C}_{g,1}=gC_{e,0}-i\frac{\kappa}{2}C_{g,1}+\eta C_{g,0}+\sqrt{2}\eta C_{g,2},$$ (3a) $$i\dot{C}_{g,2}=\sqrt{2}gC_{e,1}-i\kappa C_{g,2}+\sqrt{2}\eta C_{g,1},$$ (3b) $$i\dot{C}_{e,0}=gC_{g,1}-i\frac{\gamma}{2}C_{e,0}+\sqrt{2}\eta C_{e,1},$$ (3c) $$i\dot{C}_{e,1}=\sqrt{2}gC_{g,2}-i\left(\frac{\kappa+\gamma}{2}\right)C_{e,1}+\eta C_{e,0}.$$ (3d) Using the perturbation method Ferretti et al. (2010); Bamba et al. (2011) and solving the above equations in the steady-state approximation, one obtains $$C_{g,2}=\frac{2\sqrt{2}\eta^{2}\left(-\gamma^{2}-\gamma\kappa+4g^{2}-4\eta^{2}\right)}{\left(\gamma\kappa+4g^{2}\right)\left(\gamma\kappa+4g^{2}+\kappa^{2}\right)+4\eta^{2}X},$$ (4) with $X=4\eta^{2}-8g^{2}+\gamma^{2}+\kappa^{2}+\gamma\kappa$. Clearly, the optimal condition for $C_{g,2}=0$ is $$g=\frac{1}{2}\sqrt{\gamma^{2}+\gamma\kappa+4\eta^{2}}.$$ (5) Under the weak-driving condition, the second-order correlation function can be expressed as $g^{(2)}(0)\approx 2|C_{g,2}|^{2}/|C_{g,1}|^{4}$, yielding $g^{(2)}(0)\rightarrow 0$ if Eq. (5) is satisfied. It is worth pointing out that Eq. (5) is the condition for achieving destructive-interference-induced photon blockade in a single-atom cavity QED system which, to the best of our knowledge, has not been reported before. Compared with the works in Refs. Ferretti et al. (2010); Vaneph et al. (2018); Snijders et al. (2018), our system is much simpler than those proposals, in which the destructive interference results from transition pathways between two cavity modes and weak nonlinearity is essential. Eq. (5) implies that, under weak atom-cavity coupling, photon blockade can also be accomplished in a typical single-atom cavity QED system via destructive interference. It is noted that Eq. 
(5) is only valid for a weak driving field, since higher-order photon states are assumed to be unexcited. For a strong driving field, however, Eq. (5) is invalid and one cannot observe the interference-induced photon blockade, since higher-order photon states will be excited, yielding $g^{(2)}$ larger than unity. To verify the above analysis, we numerically solve Eq. (2). In Fig. 2(a), we plot the equal-time second-order correlation function $g^{(2)}(0)$ as a function of the atom-cavity coupling strength for atomic decay rates $\gamma=\kappa$ (blue dashed curve) and $\gamma=\kappa/2$ (red solid curve), respectively. Other system parameters are $\Delta_{a}=\Delta_{c}=0$ and $\eta=0.01\kappa$. It is clear that there exists a minimum in the second-order correlation function at $g=\sqrt{\gamma(\gamma+\kappa)+4\eta^{2}}/2$ due to the destructive interference effect. Moreover, the smaller the atomic decay rate, the smaller the value of $g^{(2)}(0)$, leading to a stronger photon blockade effect, as shown in Fig. 2(a). No quantum interference for the atom-driven scheme. - Contrary to the cavity-driven case, there exists only one transition pathway for the two-photon excitation, i.e., $|g,0\rangle\overset{\eta}{\rightarrow}|e,0\rangle\overset{g}{\rightarrow}|g,1\rangle\overset{\eta}{\rightarrow}|e,1\rangle\overset{\sqrt{2}g}{\rightarrow}|g,2\rangle$ [see Fig. 1(c)], if one drives the atom directly. Therefore, the two-photon excitation cannot be suppressed by the quantum interference effect. In the case of weak driving strength, e.g., $\eta=0.01\kappa$ (blue dashed curve) and $\eta=0.3\kappa$ (red solid curve), the value of $g^{(2)}(0)$ is smaller than unity for small coupling strengths, because the higher-order states cannot be excited by such weak driving fields. 
As the atom-cavity coupling strength increases, the energy splitting becomes dominant, so that the driving field becomes far off-resonant with all states in the system, yielding $g^{(2)}(0)\rightarrow 1$ as shown in Fig. 2(b). However, in the case of strong driving strength, e.g., $\eta=\kappa$ (green dash-dotted curve), states in the two-photon space will be excited and one obtains $g^{(2)}(0)>1$ for small coupling strength $g$. As $g$ increases, $g^{(2)}(0)<1$ because the anharmonic energy splitting prevents the states in the two-photon space from being excited [see the green curve in Fig. 2(b)]. Further increasing $g$, one again observes $g^{(2)}(0)\rightarrow 1$, since the driving field is far off-resonant. III Two-atom cavity QED system Next, we consider two identical atoms trapped in a single-mode cavity with equal atom-cavity coupling strengths, as shown in Fig. 3(a). The corresponding Hamiltonian is then written as $$H_{2}=-\Delta_{c}a^{\dagger}a+\sum_{j=1}^{2}[-\Delta_{a}\sigma_{j}^{\dagger}\sigma_{j}+g(\sigma_{j}a^{\dagger}+\sigma_{j}^{\dagger}a)]+H_{d},$$ (6) where the subscript $j$ indicates the $j$-th atom. Likewise, the drive term is given by $H_{d}=\eta(a^{\dagger}+a)$ for the cavity drive and $H_{d}=\eta\sum_{j=1}^{2}(\sigma_{j}+\sigma_{j}^{\dagger})$ for the atom drive, respectively. The master equation describing the dynamics of the system is $$\frac{\partial\rho}{\partial t}=-i[H_{2},\rho]+\mathcal{L}_{\kappa}\rho+\sum_{j=1}^{2}\mathcal{L}_{\gamma}^{(j)}\rho,$$ (7) where $\mathcal{L}^{(j)}_{\gamma}\rho$ represents the dissipation term of the $j$-th atom. For mathematical simplicity, we assume $\Delta_{a}=\Delta_{c}=0$. In general, such a system can be described by using the collective states $\{|gg\rangle,|\pm\rangle,|ee\rangle\}$ as the basis. 
Here, we define the states $|\pm\rangle=(|eg\rangle\pm|ge\rangle)/\sqrt{2}$ as the symmetric and anti-symmetric Dicke states, respectively. Cavity-driven scheme. - Since the two atoms have the same coupling strength (i.e., in-phase radiation), the anti-symmetric Dicke states $|-,n\rangle$ are dark states, decoupled from the other states of the system Pleinert et al. (2017); Zhu et al. (2017). In the cavity-driven scheme, therefore, one can again identify two different transition pathways for the two-photon excitation, namely $|gg,1\rangle\overset{\sqrt{2}\eta}{\rightarrow}|gg,2\rangle$ and $|gg,1\rangle\overset{\sqrt{2}g}{\rightarrow}|+,0\rangle\overset{\sqrt{2}\eta}{\rightarrow}(|+,1\rangle\overset{\sqrt{2}g}{\longleftrightarrow}|ee,0\rangle)\overset{2g}{\rightarrow}|gg,2\rangle$, respectively [see Fig. 3(b)]. To obtain the optimal condition for achieving destructive quantum interference between these two pathways, we again solve the amplitude equations [similar to Eqs. (3)] by assuming the system wavefunction $\Psi\approx\Sigma_{n=0}^{2}C_{gg,n}|gg,n\rangle+\Sigma_{n=0}^{1}C_{\pm,n}|\pm,n\rangle+C_{ee,0}|ee,0\rangle$. In the steady-state approximation, the probability amplitude of the state $|gg,2\rangle$ is given by $$C_{gg,2}\approx\frac{2\sqrt{2}\gamma\eta^{2}\left(4g^{2}-\gamma^{2}-\gamma\kappa-4\eta^{2}\right)}{\left(\gamma\kappa+8g^{2}\right)\left(\gamma^{2}\kappa+\gamma\kappa^{2}+8\gamma g^{2}+4g^{2}\kappa\right)}.$$ (8) Obviously, one obtains the same optimal condition for $C_{gg,2}=0$ as given by Eq. (5), yielding a strong photon blockade effect induced by the destructive interference between these two transition pathways. In Fig. 4(a), we plot the equal-time second-order correlation function $g^{(2)}(0)$ as a function of the atom-cavity coupling $g$. The system parameters are $\Delta_{a}=\Delta_{c}=0$, $\gamma=\kappa$ and $\eta=0.01\kappa$, respectively. 
It is clear that there also exists a minimum at $g=\sqrt{\gamma(\gamma+\kappa)+4\eta^{2}}/2$ for the two-atom system. Compared with the single-atom case, the value of $g^{(2)}(0)$ at $g=\sqrt{\gamma(\gamma+\kappa)+4\eta^{2}}/2$ decreases significantly [see the red curve], leading to an improvement of the photon blockade phenomenon. The physical mechanism of this PB improvement is attributed to the enhanced destructive interference resulting from the collective coupling enhancement [see Fig. 3(b)]. Constructive quantum interference leading to no photon blockade in the atom-driven scheme. - Let us now consider the case in which the two atoms are driven by the coherent field. As shown in Fig. 3(c), there exist two different transition pathways for the two-photon excitation, in contrast to the single-atom case. Using the same basis states, these two transition pathways can be represented by $|gg,0\rangle\overset{\sqrt{2}\eta}{\rightarrow}|+,0\rangle\overset{\sqrt{2}g}{\rightarrow}|gg,1\rangle\overset{\sqrt{2}\eta}{\rightarrow}|+,1\rangle\overset{2g}{\rightarrow}|gg,2\rangle$ and $|+,0\rangle\overset{\sqrt{2}\eta}{\rightarrow}|ee,0\rangle\overset{\sqrt{2}g}{\rightarrow}|+,1\rangle\overset{2g}{\rightarrow}|gg,2\rangle$, respectively. To realize the PB, the excitation of the state $|+,1\rangle$ must be inhibited by destructive interference so that the state $|gg,2\rangle$ remains unexcited. However, these two transitions are symmetric and indistinguishable, so that the interference between them is constructive Muthukrishnan et al. (2004). Thus, the excitation of the state $|+,1\rangle$ is allowed in this case, and can even be significantly enhanced via the constructive interference. As shown in Fig. 4(b), the value of $g^{(2)}(0)$ becomes much larger than in the single-atom-driven case. 
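The condition of Eq. (5) can be cross-checked by solving the truncated amplitude equations (3a)-(3d) directly as a linear steady-state system with $C_{g,0}\simeq 1$ and evaluating $g^{(2)}(0)\approx 2|C_{g,2}|^{2}/|C_{g,1}|^{4}$. A minimal numpy sketch (parameters in units of $\kappa$, values illustrative): at the optimal coupling the correlation function collapses toward zero, while away from it the cavity-driven system is strongly bunched.

```python
import numpy as np

# Illustrative parameters in units of kappa, as in the weak-driving discussion.
kappa, gamma, eta = 1.0, 1.0, 0.01

def g2_zero(g):
    """g^(2)(0) ~ 2|C_{g,2}|^2 / |C_{g,1}|^4 from the linear steady state of Eqs. (3)."""
    r2 = np.sqrt(2.0)
    # unknowns: (C_{g,1}, C_{g,2}, C_{e,0}, C_{e,1}); drive enters via C_{g,0} = 1
    M = np.array([
        [-1j * kappa / 2, r2 * eta,    g,               0.0],
        [r2 * eta,        -1j * kappa, 0.0,             r2 * g],
        [g,               0.0,         -1j * gamma / 2, r2 * eta],
        [0.0,             r2 * g,      eta,             -1j * (kappa + gamma) / 2],
    ], dtype=complex)
    b = np.array([-eta, 0.0, 0.0, 0.0], dtype=complex)
    c_g1, c_g2, _, _ = np.linalg.solve(M, b)
    return 2 * abs(c_g2) ** 2 / abs(c_g1) ** 4

g_opt = 0.5 * np.sqrt(gamma**2 + gamma * kappa + 4 * eta**2)  # Eq. (5)
print(f"g2 at g_opt: {g2_zero(g_opt):.2e}, at 2*g_opt: {g2_zero(2 * g_opt):.2f}")
```

Since Eq. (8) yields the same zero of $C_{gg,2}$, the same optimal coupling applies to the two-atom cavity-driven scheme.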
IV Two-atom-driven cavity QED system with $\omega_{a}\neq\omega_{c}$ Here, we study the two-atom-driven case with $\omega_{a}\neq\omega_{c}$, i.e., with different atomic and cavity resonant frequencies [see Fig. 5(a)]. In this system, the amplitude equations are given by $$i\dot{C}_{gg,1}=\sqrt{2}gC_{+,0}-\left(i\frac{\kappa}{2}+\Delta_{c}\right)C_{gg,1}+\sqrt{2}\eta C_{+,1},$$ (9a) $$i\dot{C}_{gg,2}=2gC_{+,1}+(-2\Delta_{c}-i\kappa)C_{gg,2},$$ (9b) $$i\dot{C}_{+,0}=\sqrt{2}\eta C_{gg,0}-\left(\Delta_{a}+i\frac{\gamma}{2}\right)C_{+,0}+\sqrt{2}gC_{gg,1}+\sqrt{2}\eta C_{ee,0},$$ (9c) $$i\dot{C}_{+,1}=\sqrt{2}\eta C_{gg,1}-\left(\Delta_{a}+\Delta_{c}+i\frac{\gamma+\kappa}{2}\right)C_{+,1}+2gC_{gg,2}+\sqrt{2}gC_{ee,0},$$ (9d) $$i\dot{C}_{ee,0}=\sqrt{2}\eta C_{+,0}-(2\Delta_{a}+i\gamma)C_{ee,0}+\sqrt{2}gC_{+,1}.$$ (9e) In the steady-state approximation, and assuming $C_{gg,0}\gg\{C_{gg,1},C_{+,0}\}\gg\{C_{gg,2},C_{+,1},C_{ee,0}\}$ with $C_{gg,0}\simeq 1$, one obtains $$C_{gg,2}=\frac{16\sqrt{2}g^{2}\eta^{2}[-2i(2\Delta_{a}+\Delta_{c})+2\gamma+\kappa]}{X(Y-2i\Delta_{a}Z)},$$ (10a) $$C_{gg,1}=-\frac{8g\eta}{8g^{2}-\Delta_{a}\left(4\Delta_{c}+2i\kappa\right)-2i\gamma\Delta_{c}+\gamma\kappa},$$ (10b) $$C_{+,1}=\dfrac{8\sqrt{2}g\eta^{2}\left(\kappa-2i\Delta_{c}\right)[2(2\Delta_{a}+\Delta_{c})+i(2\gamma+\kappa)]}{X(Y-2i\Delta_{a}Z)},$$ (10c) where $X=\Delta_{a}\left(-4\Delta_{c}-2i\kappa\right)-2i\gamma\Delta_{c}+\gamma\kappa+8g^{2}$, $Y=-4\Delta_{a}^{2}\left(\kappa-2i\Delta_{c}\right)+\gamma^{2}\kappa-4\gamma\Delta_{c}^{2}-2i\Delta_{c}\left(\gamma^{2}+2\gamma\kappa+4g^{2}\right)+\gamma\kappa^{2}+8\gamma g^{2}+4g^{2}\kappa$ and 
$Z=-4i\Delta_{c}(\gamma+\kappa)-4\Delta_{c}^{2}+2\gamma\kappa+8g^{2}+\kappa^{2}$. Assuming $\{\Delta_{a},\Delta_{c}\}\gg\{\gamma,\kappa\}$, the equal-time second-order correlation function can be expressed as $$g^{(2)}(0)\simeq\dfrac{2|C_{gg,2}|^{2}}{|C_{gg,1}|^{4}}\simeq\dfrac{(2\Delta_{a}+\Delta_{c})^{2}(8g^{2}-4\Delta_{a}\Delta_{c})^{2}}{D},$$ (11) where $D=[-8\Delta_{a}\Delta_{c}(\gamma+\kappa)-4\kappa\Delta_{a}^{2}+\gamma^{2}\kappa-4\gamma\Delta_{c}^{2}+\gamma\kappa^{2}+8\gamma g^{2}+4g^{2}\kappa]^{2}+[8\Delta_{a}^{2}\Delta_{c}-2\Delta_{a}(-4\Delta_{c}^{2}+2\gamma\kappa+8g^{2}+\kappa^{2})-2\Delta_{c}(\gamma^{2}+2\gamma\kappa+4g^{2})]^{2}$. Obviously, there exist two conditions that achieve $g^{(2)}(0)\rightarrow 0$, yielding an output of antibunched photons. The first condition is $\Delta_{a}\Delta_{c}=2g^{2}$, corresponding to the traditional PB. In particular, this condition reduces to $\Delta_{a}=\pm\sqrt{2}g$ if one assumes $\Delta_{c}=\Delta_{a}$, which is widely known as the condition for vacuum Rabi splitting and the photon blockade phenomenon in the strong-coupling regime Zhu et al. (2017). It is worth pointing out that the atom-cavity coupling strength must be strong enough to observe the PB at the frequencies $\Delta_{a}\Delta_{c}=2g^{2}$, i.e., strong coupling is critically important. The second condition is $\Delta_{c}=-2\Delta_{a}$. Surprisingly, we find that it is independent of the atom-cavity coupling strength. To understand the physical mechanism behind this condition, we examine the two transition pathways for the two-photon excitation again. As shown in Fig. 5(b), the transitions (I) $|+,0\rangle\rightarrow|gg,1\rangle\rightarrow|+,1\rangle\rightarrow|gg,2\rangle$ and (II) $|+,0\rangle\rightarrow|ee,0\rangle\rightarrow|+,1\rangle\rightarrow|gg,2\rangle$ are distinguishable, in contrast to the case $\omega_{a}=\omega_{c}$. Thus, destructive interference will take place if this specific condition is satisfied. 
In general, the occupation probability of the state $|+,1\rangle$ obeys the second-order Fermi golden rule Muthukrishnan et al. (2004); Debierre et al. (2015), yielding $$|C_{+,1}|^{2}\propto\frac{\pi}{\hbar}\left|\frac{1}{\omega_{a}+\omega_{c}-2\omega_{p}}+\frac{1}{\omega_{a}-\omega_{p}}\right|^{2}.$$ (12) Clearly, destructive interference results in $|C_{+,1}|^{2}\rightarrow 0$ if $-(\omega_{a}+\omega_{c}-2\omega_{p})=\omega_{a}-\omega_{p}$ is satisfied, i.e., $\Delta_{c}=-2\Delta_{a}$. As a result, there is no population in the state $|gg,2\rangle$ and the second-order correlation function $g^{(2)}(0)\rightarrow 0$. This implies that quantum interference induced PB can be realized in the two atoms-driven cavity QED system if $\omega_{a}\neq\omega_{c}$. It is worth pointing out that the physical mechanism of this destructive interference between even-photon transitions is different from that extensively studied in the literature, where the interference occurs via odd-photon transitions. To verify these analytical results, we carry out numerical simulations by solving the master equation with system parameters $\Delta_{c}=20\kappa$ and $\gamma=\kappa$. For a weak atom-cavity coupling strength, e.g., $g=0.5\kappa$, only a single minimum, at the frequency $\Delta_{a}=-\Delta_{c}/2$, can be observed in the second-order correlation function [see Fig. 6(a), red curve]. As shown in Fig. 6(a), the value of $g^{(2)}(0)$ is smaller than $10^{-2}$ due to the destructive interference, resulting in a strong PB effect. Simultaneously, the value of $|C_{+,1}|^{2}$ (blue curve) reaches its minimum, leading to $|C_{gg,2}|^{2}\rightarrow 0$. For a strong atom-cavity coupling strength (e.g., $g=5\kappa$), however, there exist two minima in the second-order correlation function, corresponding to the frequencies $\Delta_{a}=-\Delta_{c}/2$ and $\Delta_{a}=2g^{2}/\Delta_{c}$, respectively [see Fig. 6(b), red curve]. The values of $g^{(2)}(0)$ at these two minima are both less than unity.
The left minimum results from the destructive interference, since $|C_{gg,2}|^{2}$ also reaches its minimum [see the blue curve], while the right one is attributed to the anharmonic energy splitting. It is clear that the value of $g^{(2)}(0)$ at $\Delta_{a}=-\Delta_{c}/2$ is much smaller than that at $\Delta_{a}=2g^{2}/\Delta_{c}$. In Fig. 7, we plot the second-order correlation function $g^{(2)}(0)$ as a function of the detunings $\Delta_{a}$ and $\Delta_{c}$. In panels (a) and (b), we chose the atom-cavity coupling strengths $g=0.5\kappa$ and $g=5\kappa$, respectively. Other system parameters are the same as those used in Fig. 6. The dashed and dash-dotted curves indicate the analytical conditions $\Delta_{a}=-\Delta_{c}/2$ and $\Delta_{a}\Delta_{c}=2g^{2}$, respectively. It is clear that the second-order correlation function at $\Delta_{a}=-\Delta_{c}/2$ is always smaller than unity in both the weak and strong coupling regimes. However, the correlation function at $\Delta_{a}\Delta_{c}=2g^{2}$ is smaller than unity only when the system enters the strong coupling regime, which matches the analytical results very well. Finally, we discuss the influence of the atom-cavity coupling strength on the interference induced PB in this two atoms cavity QED system by setting $\Delta\equiv\Delta_{a}=-\Delta_{c}/2$. In Fig. 8(a), the second-order correlation function $g^{(2)}(0)$ is plotted as a function of the normalized atom-cavity coupling strength $g/\kappa$ and the normalized detuning $\Delta/\kappa$. Here, we choose a set of experimental parameters: $\kappa/2\pi=2.8$ MHz, $\gamma/2\pi=3.0$ MHz, $\eta/2\pi=1.4$ MHz Neuzner et al. (2016). We notice that $g^{(2)}(0)$ decreases quickly as the detuning $\Delta$ increases, but changes only slightly as the atom-cavity coupling strength $g$ increases.
Contrary to the second-order correlation function, the counting rate of cavity photons [see panel (b)] grows significantly as the atom-cavity coupling strength $g$ increases. Therefore, detectable photons with strong antibunching behavior can be obtained in the strong coupling regime if a specific detuning $\Delta$ is chosen. V Conclusion In summary, we have theoretically investigated the nature of the interference induced photon blockade in cavity QED with one and two atoms. In a single atom cavity QED system, we show that the interference induced photon blockade can only be observed when the external field drives the cavity directly. In the atom-driven scheme, there exists only a single transition pathway for the two-photon excitation, so the interference induced photon blockade cannot be observed. In the two atoms cavity QED system, the quantum interference induced photon blockade still exists in the cavity-driven scheme. In the atom-driven case, we show that there exist two transition pathways for the two-photon excitation, as opposed to the single atom case. If the atomic and cavity resonant frequencies are the same, these two transition pathways are indistinct and lead to constructive interference, which is harmful to the photon blockade effect. However, if the atomic and cavity resonant frequencies are different, these two transition pathways become distinct and lead to a new kind of photon blockade effect based on even-photon destructive interference. Moreover, we show that the condition for this novel interference induced photon blockade is independent of the atom-cavity coupling strength, which makes it possible to observe a large photon number with strong antibunching behavior in the strong coupling regime. Acknowledgements.
CJZ and YPY thank the support of the National Key Basic Research Special Foundation (Grant No. 2016YFA0302800), the Shanghai Science and Technology Committee (Grant No. 18JC1410900), and the National Nature Science Foundation (Grant No. 11774262). KH thanks the support of the Natural Science Foundation of Anhui Province (Grant No. 1608085QA23) and the Natural Science Foundation of the Anhui Provincial Education Department (Grant No. KJ2018JD20). GSA thanks the support of the Air Force Office of Scientific Research (Award No. FA-9550-18-1-0141). References Ficek and Swain (2005) Z. Ficek and S. Swain, Quantum interference and coherence: theory and experiments, Vol. 100 (Springer Science & Business Media, 2005). Agarwal (2012) G. S. Agarwal, Quantum optics (Cambridge University Press, 2012), see chapters 14 and 17. Fleischhauer et al. (2005) M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Electromagnetically induced transparency: Optics in coherent media, Rev. Mod. Phys. 77, 633 (2005). Yun et al. (2017) P. Yun, F. Tricot, C. E. Calosso, S. Micalizio, B. François, R. Boudot, S. Guérandel, and E. de Clercq, High-performance coherent population trapping clock with polarization modulation, Phys. Rev. Applied 7, 014018 (2017). Scully et al. (1989) M. O. Scully, S.-Y. Zhu, and A. Gavrielides, Degenerate quantum-beat laser: Lasing without inversion and inversion without lasing, Phys. Rev. Lett. 62, 2813 (1989). Baba (2008) T. Baba, Slow light in photonic crystals, Nat. Photonics 2, 465 (2008). Beugnon et al. (2006) J. Beugnon, M. P. Jones, J. Dingjan, B. Darquié, G. Messin, A. Browaeys, and P. Grangier, Quantum interference between two single photons emitted by independently trapped atoms, Nature 440, 779 (2006). Araneda et al. (2018) G. Araneda, D. B. Higginbottom, L. Slodička, Y. Colombe, and R. Blatt, Interference of single photons emitted by entangled atoms in free space, Phys. Rev. Lett. 120, 193603 (2018). Bamba et al. (2011) M. Bamba, A. Imamoğlu, I. Carusotto, and C.
Ciuti, Origin of strong photon antibunching in weakly nonlinear photonic molecules, Phys. Rev. A 83, 021802 (2011). Imamoḡlu et al. (1997) A. Imamoḡlu, H. Schmidt, G. Woods, and M. Deutsch, Strongly interacting photons in a nonlinear cavity, Phys. Rev. Lett. 79, 1467 (1997). Birnbaum et al. (2005) K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Photon blockade in an optical cavity with one trapped atom, Nature 436, 87 (2005). Dayan et al. (2008) B. Dayan, A. Parkins, T. Aoki, E. Ostby, K. Vahala, and H. Kimble, A photon turnstile dynamically regulated by one atom, Science 319, 1062 (2008). Faraon et al. (2008) A. Faraon, I. Fushman, D. Englund, N. Stoltz, P. Petroff, and J. Vučković, Coherent generation of non-classical light on a chip via photon-induced tunnelling and blockade, Nat. Phys. 4, 859 (2008). Debnath et al. (2018) S. Debnath, N. M. Linke, S.-T. Wang, C. Figgatt, K. A. Landsman, L.-M. Duan, and C. Monroe, Observation of hopping and blockade of bosons in a trapped ion spin chain, Phys. Rev. Lett. 120, 073001 (2018). Lang et al. (2011) C. Lang, D. Bozyigit, C. Eichler, L. Steffen, J. M. Fink, A. A. Abdumalikov, M. Baur, S. Filipp, M. P. da Silva, A. Blais, and A. Wallraff, Observation of resonant photon blockade at microwave frequencies using correlation function measurements, Phys. Rev. Lett. 106, 243601 (2011). Hoffman et al. (2011) A. J. Hoffman, S. J. Srinivasan, S. Schmidt, L. Spietz, J. Aumentado, H. E. Türeci, and A. A. Houck, Dispersive photon blockade in a superconducting circuit, Phys. Rev. Lett. 107, 053602 (2011). Liew and Savona (2010) T. C. H. Liew and V. Savona, Single photons from coupled quantum modes, Phys. Rev. Lett. 104, 183601 (2010). Ferretti et al. (2013) S. Ferretti, V. Savona, and D. Gerace, Optimal antibunching in passive photonic devices based on coupled nonlinear resonators, New J. Phys. 15, 025012 (2013). Xu and Li (2014a) X.-W. Xu and Y. 
Li, Strong photon antibunching of symmetric and antisymmetric modes in weakly nonlinear photonic molecules, Phys. Rev. A 90, 033809 (2014a). Tang et al. (2015) J. Tang, W. Geng, and X. Xu, Quantum interference induced photon blockade in a coupled single quantum dot-cavity system, Sci. Rep. 5, 9252 (2015). Flayac and Savona (2017) H. Flayac and V. Savona, Unconventional photon blockade, Phys. Rev. A 96, 053810 (2017). Kyriienko et al. (2014) O. Kyriienko, I. A. Shelykh, and T. C. H. Liew, Tunable single-photon emission from dipolaritons, Phys. Rev. A 90, 033807 (2014). Xu and Li (2014b) X.-W. Xu and Y. Li, Tunable photon statistics in weakly nonlinear photonic molecules, Phys. Rev. A 90, 043822 (2014b). Zhou et al. (2015) Y. H. Zhou, H. Z. Shen, and X. X. Yi, Unconventional photon blockade with second-order nonlinearity, Phys. Rev. A 92, 023838 (2015). Gerace and Savona (2014) D. Gerace and V. Savona, Unconventional photon blockade in doubly resonant microcavities with second-order nonlinearity, Phys. Rev. A 89, 031803 (2014). Shen et al. (2015) H. Z. Shen, Y. H. Zhou, H. D. Liu, G. C. Wang, and X. X. Yi, Exact optimal control of photon blockade with weakly nonlinear coupled cavities, Opt. Express 23, 32835 (2015). Majumdar et al. (2012) A. Majumdar, M. Bajcsy, A. Rundquist, and J. Vučković, Loss-enabled sub-poissonian light generation in a bimodal nanocavity, Phys. Rev. Lett. 108, 183601 (2012). Zhang et al. (2014) W. Zhang, Z. Yu, Y. Liu, and Y. Peng, Optimal photon antibunching in a quantum-dot–bimodal-cavity system, Phys. Rev. A 89, 043832 (2014). Cygorek et al. (2017) M. Cygorek, A. M. Barth, F. Ungar, A. Vagov, and V. M. Axt, Nonlinear cavity feeding and unconventional photon statistics in solid-state cavity qed revealed by many-level real-time path-integral calculations, Phys. Rev. B 96, 201201 (2017). Xu et al. (2016) X.-W. Xu, A.-X. Chen, and Y.-x. Liu, Phonon blockade in a nanomechanical resonator resonantly coupled to a qubit, Phys. Rev. 
A 94, 063853 (2016). Wang et al. (2015) H. Wang, X. Gu, Y.-x. Liu, A. Miranowicz, and F. Nori, Tunable photon blockade in a hybrid system consisting of an optomechanical device coupled to a two-level system, Phys. Rev. A 92, 033806 (2015). Sarma and Sarma (2018) B. Sarma and A. K. Sarma, Unconventional photon blockade in three-mode optomechanics, Phys. Rev. A 98, 013826 (2018). Sarma and Sarma (2017) B. Sarma and A. K. Sarma, Quantum-interference-assisted photon blockade in a cavity via parametric interactions, Phys. Rev. A 96, 053827 (2017). Zhao (2018) D. Zhao, All-optical active control of photon correlations: Dressed-state-assisted quantum interference effects, Phys. Rev. A 98, 033834 (2018). Vaneph et al. (2018) C. Vaneph, A. Morvan, G. Aiello, M. Féchant, M. Aprili, J. Gabelli, and J. Estève, Observation of the unconventional photon blockade in the microwave domain, Phys. Rev. Lett. 121, 043602 (2018). Snijders et al. (2018) H. J. Snijders, J. A. Frey, J. Norman, H. Flayac, V. Savona, A. C. Gossard, J. E. Bowers, M. P. van Exter, D. Bouwmeester, and W. Löffler, Observation of the unconventional photon blockade, Phys. Rev. Lett. 121, 043601 (2018). Hamsen et al. (2017) C. Hamsen, K. N. Tolazzi, T. Wilk, and G. Rempe, Two-photon blockade in an atom-driven cavity qed system, Phys. Rev. Lett. 118, 133604 (2017). Ferretti et al. (2010) S. Ferretti, L. C. Andreani, H. E. Türeci, and D. Gerace, Photon correlations in a two-site nonlinear cavity system under coherent drive and dissipation, Phys. Rev. A 82, 013841 (2010). Pleinert et al. (2017) M.-O. Pleinert, J. von Zanthier, and G. S. Agarwal, Hyperradiance from collective behavior of coherently driven atoms, Optica 4, 779 (2017). Zhu et al. (2017) C. J. Zhu, Y. P. Yang, and G. S. Agarwal, Collective multiphoton blockade in cavity quantum electrodynamics, Phys. Rev. A 95, 063842 (2017). Muthukrishnan et al. (2004) A. Muthukrishnan, G. S. Agarwal, and M. O. 
Scully, Inducing disallowed two-atom transitions with temporally entangled photons, Phys. Rev. Lett. 93, 093002 (2004). Debierre et al. (2015) V. Debierre, I. Goessens, E. Brainis, and T. Durt, Fermi’s golden rule beyond the zeno regime, Phys. Rev. A 92, 023825 (2015). Neuzner et al. (2016) A. Neuzner, M. Körber, O. Morin, S. Ritter, and G. Rempe, Interference and dynamics of light from a distance-controlled atom pair in an optical cavity, Nat. Photonics 10, 303 (2016).
The Effect of Recency to Human Mobility Hugo Barbosa BioComplex Lab, Department of Computer Sciences & Cybersecurity, Florida Institute of Technology, USA Fernando Buarque de Lima Neto Computational Intelligence Research Group, Polytechnic School, University of Pernambuco, Brazil Alexandre Evsukoff COPPE, Federal University of Rio de Janeiro, Brazil Ronaldo Menezes BioComplex Lab, Department of Computer Sciences & Cybersecurity, Florida Institute of Technology, USA Abstract A better understanding of how people move in space is of fundamental importance for many areas such as prevention of epidemics, urban planning and wireless network engineering, to name a few. In this work we explore a rank-based approach for the characterization of human trajectories and unveil a visitation bias toward recently visited locations. We test our hypothesis against different empirical data of human mobility. We also propose an extension of the Preferential Return mechanism to incorporate the new recency-based mechanism. Introduction A better understanding of the fundamental mechanisms of human mobility is of importance for many research fields such as epidemic modeling [1, 2, 3], urban planning [4, 5], and traffic engineering [6, 7, 8]. Although individual human trajectories can seem unpredictable and intricate to an external observer, human trajectories are in fact very predictable [9, 10, 11, 12, 13, 14] and regular over space and time [15, 16, 17]. One characteristic of human motion, largely observed in empirical data, is that we tend to spend most of our time in just a few locations [15, 18, 19]. More generally, visitation frequency distributions have been observed to be heavy-tailed [18, 13]. However, the fundamental mechanisms responsible for shaping our visitation preferences are not fully understood. The preferential return (PR) mechanism, proposed by Song et al [18], offered an elegant and robust model for the visitation frequency distribution.
More precisely, it defines the probability $\Pi_{i}$ of returning to a location $i$ as $\Pi_{i}\propto f_{i}$, where $f_{i}$ is the visitation frequency of the location $i$. It implies that the more visits a location receives, the more visits it will receive in the future, which in different fields goes by the names of the Matthew effect [20], cumulative advantage [21], or preferential attachment [22]. Although the focus of the PR mechanism–as part of the Exploration and Preferential Return (EPR) individual mobility model–was to reproduce some of the scaling properties of human mobility, its general principles are grounded in plausible assumptions about human behavior. However, in the long term, the PR assumption as a property of human motion leads to two discrepancies. First, in the model, the earlier a location is discovered, the more visits it will receive. This implies that the first visited location will most likely also be the most visited one. Second, if cumulative advantage indeed held true for human movements, people would never change their preferences, which is clearly not the case. In this work, we explore the visitation return patterns from a temporal perspective. We analyzed different ranking approaches and tested their respective correlations with the return probabilities. Our approach is based on the empirical evidence that the longer the time since the last visit to a location, the lower the probability of observing a user at that location [18, 15]. The proposed approach aims at overcoming the limitations of the PR mechanism. Results Data In this work, we used two mobility datasets: the first one ($D1$) corresponds to 6 months of anonymized mobile-phone traces from a large metropolitan area in Brazil. This dataset is composed of 8,898,108 records from 30,000 users between January 1 and June 30, 2014. The second dataset ($D2$) is composed of 23,736,435 check-ins from 51,406 Brightkite users in 772,966 different locations.
Brightkite was a location-based social networking service launched in 2007 and closed in 2011 [23, 24]. Unlike the mobile-phone dataset, the Brightkite data has a spatial resolution in the range of a few meters. Given that our interest here is in the individuals' trajectories, in this analysis we considered only the data that give us information about the users' displacements. Hence, we filtered out repeated observations at one place, resulting in a time series for each individual representing their trajectories over the observed period. Heterogeneities in human mobility The first analysis we performed was to measure the population-level heterogeneities represented by the different activity patterns. First we measured the number of observed displacements ($N$) per user during the period. Notice that this does not necessarily represent the actual number of displacements, but rather the number of jumps per user captured by the datasets. All the scaling parameters were estimated using the methods described by Clauset et al [25]. The $p(N)$ of $D1$ and $D2$ are better approximated by truncated power-law distributions, defined as $p(x)=Cx^{-\alpha}\mathrm{e}^{-x/\tau}$, with $\alpha_{D1}\approx 1.000$ and $\tau_{D1}\approx 783$ observations, whereas $\alpha_{D2}\approx 1.3$ and $\tau_{D2}\approx 923$ observations (see Figure 1). This means that in both datasets users tend not to move much, and highly mobile individuals are very rare. For instance, in $D1$ the daily average number of displacements is approximately 2.2, whereas in $D2$ it is approximately 1.7. The average number of jumps per month in $D1$ is 24.5, while in $D2$ it is 9.2. The lower average number of movements in $D2$ could be because Brightkite was a location-based social networking service; hence, movements related to social activities must be overrepresented in it. Nevertheless, given that our focus is on individuals' visitation preferences–rather than needs–this does not negatively affect our analysis.
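As a concrete illustration of the truncated power-law form used in these fits, the short sketch below (illustrative only, not the fitting procedure of Clauset et al) normalizes $p(x)=Cx^{-\alpha}\mathrm{e}^{-x/\tau}$ on a discrete support and computes the mean number of observations it implies, using the $D1$-like values $\alpha=1.0$ and $\tau=783$.

```python
import numpy as np

def truncated_power_law(alpha, tau, x_max=100_000):
    """Discrete truncated power law p(x) = C * x**(-alpha) * exp(-x/tau), x >= 1.
    The constant C is fixed by normalization."""
    x = np.arange(1, x_max + 1, dtype=float)
    w = x ** (-alpha) * np.exp(-x / tau)
    return x, w / w.sum()

x, p = truncated_power_law(alpha=1.0, tau=783.0)
mean_N = (x * p).sum()  # expected number of observed displacements per user
```

The exponential cutoff is what keeps the mean finite here: a pure $x^{-1}$ tail would not even be normalizable.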
From the human mobility perspective, we extracted the number of distinct locations users visited in the period (Figure 2), which depicts the probability $p(S)$ of a user having visited $S$ distinct locations by the end of the observational period. Both curves are better approximated by truncated power laws, with exponents $\alpha_{D1}\approx 1.00$ and $\tau_{D1}\approx 52.63$, whereas $D2$ has exponents $\alpha_{D2}\approx 1.22$ and $\tau_{D2}\approx 200.0$. When we look at the CCDF in linear scale (inset of Figure 2), the fact that we spend most of our time in very few locations becomes even more evident. To illustrate, about 30% of the time, users in $D1$ were found at just 2 locations, while in $D2$ this number was approximately 40%. Temporal patterns In a modern society, where most people have daily routines, part of our trajectories is constrained to a limited number of locations at regular time intervals. Human activity routines are responsible for part of the regularities human movements exhibit. From the empirical data, we extracted the time interval (in hours) between two consecutive visits to a location. The distribution of time intervals is depicted in Figure 3. The plot unveils two important features of human movements: first, one can observe the existence of peaks at 24h intervals, representing the users' daily routines; second, we can observe that the probability of returning to a location decreases as $p(\Delta_{t})\propto\Delta_{t}^{-\beta}\mathrm{e}^{-\Delta_{t}/\kappa}$, with $\beta_{D1}\approx 1.429$ and $\kappa_{D1}\approx 2,347$ hours, and $\beta_{D2}\approx 1.442$ with $\kappa_{D2}\approx 7,240$ hours–indeed a very rapid decay. A rank-based analysis of human visitation patterns As previously described, the PR mechanism suggests that the visitation probability of a particular location is proportional to the number of previous visits to it.
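This rich-get-richer dynamic is straightforward to reproduce. The toy simulation below (a hypothetical sketch of an EPR-style walk, not the authors' implementation) alternates exploration with probability $\rho S^{-\gamma}$ and preferential return with probability proportional to past visit counts; with cumulative advantage at work, the most visited location ends up being one of the earliest discovered.

```python
import random
from collections import Counter

def preferential_return_walk(steps, rho=0.6, gamma=0.6, seed=7):
    """Explore a brand-new location with probability rho * S**(-gamma),
    otherwise return to location i with probability f_i / sum_j f_j."""
    rng = random.Random(seed)
    visits = Counter({0: 1})                    # location 0: starting point
    for _ in range(steps):
        s = len(visits)                         # distinct locations so far
        if rng.random() < rho * s ** (-gamma):
            visits[s] += 1                      # labels record discovery order
        else:
            locations, counts = zip(*visits.items())
            visits[rng.choices(locations, weights=counts)[0]] += 1
    return visits

visits = preferential_return_walk(5_000)
top = max(visits, key=visits.get)               # most visited location
```

Because early locations compound their advantage at every step, `top` lands among the first handful of discovered locations, which is exactly the long-run discrepancy of the PR mechanism discussed in the Introduction.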
Our claim is that the Zipf's law observed in the visitation frequency distribution is influenced by our tendency to return to recently visited locations. To test this influence we compared the return probabilities obtained from two ranking approaches: one based on the visitation frequencies ($K_{f}$), and the other ($K_{s}$) based on the recency of the last visit to a location. Additionally, we performed the same analyses presented here on a set of randomized variations of the datasets (see Supplementary Information). In summary, the two ranks can be described as: • $\mathbf{K_{s}}$ - the recency-based rank. A location with $K_{s}=1$ at time $t$ is the previously visited location; $K_{s}=2$ means that the location was the second most recently visited up to time $t$, and so forth. • $\mathbf{K_{f}}$ - the frequency-based rank. A location with $K_{f}=1$ at time $t$ was the most visited location up to that point in time. First we analyze the probability of return as a function of $K_{s}$. This analysis shows that the probability decays very rapidly with $K_{s}$. More precisely, the probability $p(K_{s})$ follows a truncated power law, similarly to $p(\Delta_{t})$. In fact, when we compare the distribution exponents $\beta$ and $\alpha$, we see that the return probability decreases faster with $K_{s}$ than with $\Delta_{t}$ for both datasets (see Figure 4). Recency over frequency - the role of recent events in human mobility In this section we explore the two-dimensional density distribution of returns $p(K_{f},K_{s})$. The idea is to investigate the return probabilities as an outcome of the convolution between visitation frequencies and times, encoded in $K_{f}$ and $K_{s}$ simultaneously. If users have a stronger preference for recently visited locations we should observe: 1. lower values of $K_{s}$ must be frequently observed over a wider range of $K_{f}$.
This would suggest that we tend to return to recently visited locations even if we have not visited them many times before (i.e., a lower $K_{f}$ rank); 2. higher values of $K_{f}$ must deviate from lower $K_{f}$ values, suggesting that the probability of return to a location decays with time, even if it was a highly visited location. To test these hypotheses, we analyzed the frequency of returns with ranks $(K_{f},K_{s})$ for all $K_{f}$ and $K_{s}$. For example, a visit to a location with ranks $(10,3)$ means a return to the 10th most visited site after visiting three other locations. This return distribution is represented as a two-dimensional histogram for each of the datasets (Fig. 5). From the heatmaps, we can observe that returns to the most visited locations (e.g., $K_{f}\leq 7$) have shorter return trajectories. In other words, when it comes to our most visited locations, we tend to return to them after visiting very few locations. This can be seen in the rapid decrease in the return frequencies as $K_{s}$ grows. For instance, in $D1$, more than 86% of the returns to the most visited location occurred after visiting fewer than five other locations, while for $D2$ it was more than 91% (see Figure 6). We can also observe that recency increases the probability of return to less visited locations (e.g., $7\leq K_{f}\leq 40$), expressed by a broader distribution of $K_{f}$ when $K_{s}$ is low (e.g., $K_{s}\leq 3$). For instance, a closer look at the bottom rows of the plots in Figure 5 shows that a recent visit to a location can increase the probability of returning to it by up to 10 times in $D2$ (see Figure 5b). When we compare $D1$ and $D2$ we can observe a slightly different pattern between them. First, the effect of recency is much stronger in $D2$ than in $D1$. This difference can be rooted in the fact that the mobility data of $D1$ are coarse-grained to the cell-tower level.
$D2$, on the other hand, provides finer-grained mobility data, capturing changes in visitation preferences even when the locations are in the same cell area. Further analyses based on randomized versions of the datasets have shown that the recency effect can indeed be observed in both $D1$ and $D2$ (see Supplementary Information). The Recency-based model An alternative explanation for the anomalies observed in the rank distributions would be that they are simply a byproduct or artifact of the rank-based approach. To test to what extent the patterns we observed in the rank distribution correspond to an unforeseen mechanism of human mobility, we tested the hypothesis that they emerge from the data when we build the sequence-based ranks of frequency-driven trajectories. If the latter indeed holds true, the same patterns must be observed in the synthetic data produced by the EPR model. To test our hypothesis, we compared the purely frequentist mechanism of the EPR model against our new human mobility model, in which returns are biased toward recently visited locations. The recency-based model extends the Preferential Return mechanism, endowing it with a mechanism capable of capturing the visitation bias toward recently visited locations. All other ingredients of the EPR model were kept intact, except for the temporal dimension. The reason is that the waiting-time distribution of the EPR model determines only when an individual is going to move (i.e., how long it will wait before the next jump) but not where to go. It is important to emphasize that the recency bias underlying our model concerns the visitation path and is time-independent. The model can be described as follows: first, a population of $N$ agents is initialized and scattered randomly over a discrete lattice with $70\times 70$ cells, each one representing a possible location. The initial position of each agent is counted as its first visit.
At each time step, an agent visits a new location with probability $p_{new}=\rho S^{-\gamma}$, where $\rho=0.6$ and $\gamma=0.6$ are control parameters–whose values were derived by Song et al from empirical data–and $S$ corresponds to the number of distinct locations visited thus far. With complementary probability $1-p_{new}$, the agent returns to a previously visited location. If the movement is selected to be a return, with probability $1-\alpha$ the $i$th most recently visited location is selected from a Zipf's law with probability $p(i)\propto k_{s}(i)^{-\eta}$, where $k_{s}(i)$ is the recency-based rank of the location $i$. The parameter $\eta$ controls the number of previously visited locations a user remembers when deciding which location to visit. With probability $\alpha$, the destination is selected based on the visitation frequencies, with probability $\Pi_{i}\propto k_{f}(i)^{-1-\gamma}$, where $k_{f}(i)$ is the frequency rank of location $i$. Notice that when $\alpha=1$ we recover the original preferential return behavior of the EPR model, while when $\alpha=0$ visitation returns are based solely on recency. We experimentally tested different parameter configurations for the model. Our analyses show that when $\alpha=0$ the heavy tail of the visitation frequency distribution disappears, while for $\alpha=1$ the power law of the recency distribution vanishes. This suggests that both mechanisms must be present in order to reproduce these two features. In practice, different individuals could have different $\alpha$ values. However, extracting them from the empirical data is not an easy task, since it is hard to determine whether a movement was driven by recency or by frequency. Nevertheless, we determined that $\alpha=0.1$ (i.e., 10% of the movements influenced by the visitation frequencies) was enough to restore the recency and frequency rank distributions.
Also, for the Zipf's-law distribution of the recency rank, we used $\eta=1.6$, extracted from the empirical data (see Supplementary Information for the parameter estimation process). Visually, the synthetic data produced by the EPR model approximates the empirical data well (see Figure 8). However, when we compare the bottom-most rows of the histograms, it deviates from the empirical evidence by not capturing the broader distribution of $p(k_{f},k_{s})$ for recently visited locations. The recency-based mechanism (RM), on the other hand, reproduces the recency influence observed in the empirical data (Figure 8b). When we look at each variable individually, we notice that the $K_{s}$ distribution produced by the EPR model deviates from a power law, being better approximated by an exponential distribution. When we look at the $K_{f}$ distribution, the EPR model recovers its heavy tail, as expected (Figure 8d). Discussion When we look at an individual's trajectories over, say, one year, the visitation patterns and regularities become very evident, and radical changes in visitation patterns–such as during a long vacation abroad or after starting a new job in another city–are very unlikely. In a large population these events do occur, but their effects at the population scale are very diluted and sometimes transient. Within such a limited time window, individuals are indeed predictable, and guessing that a person will be at one of their most visited locations is reasonable. However, it is really unlikely that an individual's preferences remain the same for 10 or 20 years. A recently discovered restaurant is a more plausible destination than a former workplace. Some events in our lives have the potential to reshape not only our visitation patterns but also our preferences. In this work we explored this idea under a simple rank-based framework.
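The recency-based return step described above can be sketched in a few lines. The code below is an illustrative reimplementation from the text (not the authors' code): it keeps a frequency table and a most-recent-first list, explores with probability $\rho S^{-\gamma}$, and otherwise draws the return destination from the recency rank (probability $1-\alpha$) or the frequency rank (probability $\alpha$); spatial displacements and waiting times are omitted since they do not affect where returns go.

```python
import random

def recency_model(steps, rho=0.6, gamma=0.6, eta=1.6, alpha=0.1, seed=42):
    """Recency-based mobility model: explore with probability rho * S**(-gamma);
    otherwise return via the recency rank (prob 1 - alpha, p(i) ~ k_s(i)**(-eta))
    or the frequency rank (prob alpha, Pi_i ~ k_f(i)**(-1 - gamma))."""
    rng = random.Random(seed)
    freq = {0: 1}              # visit counts; location 0 is the starting point
    recency = [0]              # most recently visited first (index 0 -> k_s = 1)
    history = [0]
    for _ in range(steps):
        s = len(freq)
        if rng.random() < rho * s ** (-gamma):
            loc = s            # explore: new location labelled by discovery order
        elif rng.random() < alpha:
            by_freq = sorted(freq, key=freq.get, reverse=True)  # k_f = 1 first
            weights = [(k + 1) ** (-1 - gamma) for k in range(len(by_freq))]
            loc = rng.choices(by_freq, weights=weights)[0]
        else:
            weights = [(k + 1) ** (-eta) for k in range(len(recency))]
            loc = rng.choices(recency, weights=weights)[0]
        history.append(loc)
        freq[loc] = freq.get(loc, 0) + 1
        if loc in recency:
            recency.remove(loc)
        recency.insert(0, loc)  # the destination becomes the most recent location
    return history, freq

history, freq = recency_model(3_000)
```

Setting `alpha=1` recovers a purely frequency-driven (EPR-like) return step, and `alpha=0` a purely recency-driven one, so the two limiting behaviors reported above can be compared within the same sketch.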
We unveiled empirical evidence supporting the idea that human trajectories are biased toward recently visited locations. We also offered a different perspective for human mobility investigation, in which the temporal dimension plays a much more important role than the inter-event times. Methods Datasets Given that the coverage areas of mobile phone towers are far from uniform, the first step was to convert the antenna IDs in $D1$ into more reasonable estimates of the users' locations. In fact, coverage areas frequently overlap, and multiple towers can cover the same area–and they actually do, especially in densely populated regions where the density of antennas is also high. Moreover, for commercial purposes, multiple communication antennas can be at the same location. In order to reduce the influence of these factors, we truncated the cell towers' coordinates to the fourth decimal place–which corresponds to antennas less than about 11 meters apart–and merged those having the same (truncated) coordinates under the same id. A rank-based characterization of human trajectories In this work, we propose a rank-based approach to the analysis of human trajectories. For this, we defined two rank variables, namely the frequency rank ($K_{f}$) and the recency rank ($K_{s}$). Both ranks were measured on an expanding basis from the accumulated sub-trajectories. To illustrate, consider a particular user $x$ with a trajectory $T=[(l_{1},l_{2},\ldots,l_{n}),l_{i}\in[1,\ldots,N]]$ composed of $N$ steps to $S\leq N$ locations. For each step $j>0$, we have the partial trajectory $\mathcal{T}=[l_{1},l_{2},\ldots,l_{j-1}]$ composed of all the previous steps, with $l_{j-1}$ being the immediately preceding step. From the sub-trajectory $\mathcal{T}$ we compute the frequency-based ranks $K_{f}$ of all locations visited so far.
If the step $l_{j}$ is a return (i.e., $l_{j}\in\mathcal{T}$) we say that the frequency rank of the location $l_{j}$ is the rank $k_{f}(j)=K_{f}[l_{j}]$.

The Exploration and Preferential Return Model

The Exploration and Preferential Return (EPR) individual mobility model, proposed by Song et al. [18], is based on two mechanisms, exploration and preferential return. The exploration mechanism reproduces the scaling properties of the individuals’ jump lengths and of the radius of gyration distribution, whereas the preferential return mechanism reproduces the Zipf’s Law exhibited by the visitation frequencies of the locations. In the EPR model, at each time step an individual can either move or stay at the same location, according to the waiting time distribution $$P(\Delta_{t})\sim\Delta_{t}^{-1-\beta}\exp(-\Delta_{t}/\tau)$$ with $\beta=0.8$ and $\tau\approx 17$ hours. If the individual is going to move, it can either explore (i.e., visit a new location) with probability $\rho S^{-\gamma}$ or return to a previously visited location with the complementary probability $1-\rho S^{-\gamma}$, where $\rho=0.6$ and $\gamma=0.6$ are the values selected according to the method described in Ref. [18], and $S$ is the number of previously visited locations. If the step corresponds to a return, a previously visited location is selected with probability proportional to its visitation frequency $f$; this choice yields a visitation frequency distribution $P(f)\sim f^{-(1+1/\zeta)}$ with $\zeta\approx 1.2\pm 0.1$. If the next move corresponds to an exploration step, a new location is selected according to a Lévy flight $$P(\Delta_{r})\sim\Delta_{r}^{-1-\alpha}\exp(-\Delta_{r}/\kappa)$$ with $\alpha\approx 0.55\pm 0.05$ and $\kappa\approx 100\text{ km}$. Notice that the $\alpha$ parameter described here is not the same as the recency parameter of our model. In the context of this work, all the parameters proposed by Song et al. [18] were kept intact.

Author contributions

Developed the ideas, methods and analyses: HB and RM.
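The explore-or-return decision logic can be sketched as follows. This is a deliberately stripped-down illustration (waiting times and Lévy displacements omitted, helper names ours), not the reference implementation of Ref. [18]; note the negative exponent in the exploration probability $\rho S^{-\gamma}$, which must decay as the set of known locations grows:

```python
import random

RHO, GAMMA = 0.6, 0.6      # exploration parameters quoted in the text

def epr_visits(n_steps, seed=42):
    """Simulate the sequence of location choices of one EPR walker
    (spatial displacements and waiting times omitted for brevity)."""
    rng = random.Random(seed)
    freq = {0: 1}                        # location id -> visit count
    trajectory = [0]
    for _ in range(n_steps - 1):
        S = len(freq)
        if rng.random() < RHO * S ** -GAMMA:
            loc = S                      # explore: brand-new location id
        else:                            # preferential return: P(i) proportional to f_i
            locs = list(freq)
            weights = [freq[l] for l in locs]
            loc = rng.choices(locs, weights=weights)[0]
        freq[loc] = freq.get(loc, 0) + 1
        trajectory.append(loc)
    return trajectory, freq

traj, freq = epr_visits(10_000)
# A few locations accumulate a disproportionate share of visits, the
# qualitative signature of the Zipf's Law of visitation frequencies.
print(len(freq), max(freq.values()))
```

Reproducing the recency bias discussed in the text would require adding the recency-based mechanism (RM) on top of this skeleton.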
Empirical data analysis: HB and AE. Wrote the manuscript: HB, FBLN and RM.

Additional information

Competing financial interests: The authors declare no competing financial interests.

References

[1] Belik, V., Geisel, T. & Brockmann, D. Natural Human Mobility Patterns and Spatial Spread of Infectious Diseases. Physical Review X 1, 011001 (2011). [2] Colizza, V., Barrat, A., Barthelemy, M., Valleron, A.-J. & Vespignani, A. Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PLoS Medicine 4, e13 (2007). [3] Balcan, D. & Vespignani, A. Phase transitions in contagion processes mediated by recurrent mobility patterns. Nature Physics 7 (2011). [4] Toole, J. L., Ulm, M., González, M. C. & Bauer, D. Inferring land use from mobile phone activity. Proceedings of the ACM SIGKDD International Workshop on Urban Computing 1–8 (2012). arXiv:1207.1115v1. [5] Lenormand, M., Gonçalves, B., Tugores, A. & Ramasco, J. J. Human diffusion and city influence. arXiv:1501.07788 (2015). [6] Kitamura, R., Chen, C., Pendyala, R. & Narayanan, R. Micro-simulation of daily activity-travel patterns for travel demand forecasting. Transportation 25–51 (2000). [7] Jung, W., Wang, F. & Stanley, H. Gravity model in the Korean highway. EPL (Europhysics Letters) 1–13 (2008). arXiv:0710.1274v1. [8] Krajzewicz, D., Hertkorn, G., Wagner, P. & Rössel, C. SUMO (Simulation of Urban MObility): an open-source traffic simulation (2011). [9] Song, C., Qu, Z., Blumm, N. & Barabási, A.-L. Limits of predictability in human mobility. Science 327, 1018–21 (2010). [10] Wang, D., Pedreschi, D., Song, C., Giannotti, F. & Barabási, A.-L. Human mobility, social ties, and link prediction. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’11, 1100 (ACM Press, New York, 2011). [11] Yang, Y., Herrera, C., Eagle, N. & González, M. C.
Limits of predictability in commuting flows in the absence of data for calibration. Scientific Reports 4, 5662 (2014). URL http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=4092333&tool=pmcentrez&rendertype=abstract. arXiv:1407.6256v1. [12] Sadilek, A. & Krumm, J. Far Out: Predicting Long-Term Human Mobility. AAAI 814–820 (2012). [13] Krumme, C., Llorente, A., Cebrian, M., Pentland, A. S. & Moro, E. The predictability of consumer visitation patterns. Scientific Reports 3, 1645 (2013). [14] Lu, X., Wetter, E., Bharti, N., Tatem, A. A. J. & Bengtsson, L. Approaching the limit of predictability in human mobility. Scientific Reports 3, 2923 (2013). [15] González, M. C., Hidalgo, C. A. & Barabási, A.-L. Understanding individual human mobility patterns. Nature 453, 479–482 (2008). 0806.1256. [16] Brockmann, D., Hufnagel, L. & Geisel, T. The scaling laws of human travel. Nature 439, 462–5 (2006). [17] Hasan, S., Schneider, C. M., Ukkusuri, S. V. & González, M. C. Spatiotemporal Patterns of Urban Human Mobility. Journal of Statistical Physics 151, 304–318 (2012). [18] Song, C., Koren, T., Wang, P. & Barabási, A.-L. Modelling the scaling properties of human mobility. Nature Physics 6, 818–823 (2010). [19] Schneider, C. M. et al. Unravelling daily human mobility motifs. Journal of the Royal Society Interface 10, 20130246 (2013). [20] Merton, R. K. The Matthew Effect in Science. Science 159, 56–63 (1968). [21] Price, D. A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information … (1976). [22] Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 11 (1999). 9910332. [23] Grabowicz, P., Ramasco, J., Gonçalves, B. & Eguíluz, V. Entangling mobility and interactions in social media. PLoS ONE 1–16 (2014). 1307.5304v1. [24] Cho, E., Myers, S. A. & Leskovec, J. Friendship and mobility.
In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD ’11, 1082 (ACM Press, New York, 2011). [25] Clauset, A., Shalizi, C. R. & Newman, M. E. J. Power-Law Distributions in Empirical Data. SIAM Review 51, 661–703 (2009). 0706.1062v2.
Lattice collapse and quenching of magnetism in CaFe${}_{2}$As${}_{2}$ under pressure: A single crystal neutron and x-ray diffraction investigation

A. I. Goldman${}^{1,2}$, A. Kreyssig${}^{1,2}$, K. Prokeš${}^{3}$, D. K. Pratt${}^{1,2}$, D. N. Argyriou${}^{3}$, J. W. Lynn${}^{4}$, S. Nandi${}^{1,2}$, S. A. J. Kimber${}^{3}$, Y. Chen${}^{4,5}$, Y. B. Lee${}^{1,2}$, G. Samolyuk${}^{1,2}$, J. B. Leão${}^{4}$, S. J. Poulton${}^{4,5}$, S. L. Bud’ko${}^{1,2}$, N. Ni${}^{1,2}$, P. C. Canfield${}^{1,2}$, B. N. Harmon${}^{1,2}$ and R. J. McQueeney${}^{1,2}$

${}^{1}$Ames Laboratory, US DOE, Iowa State University, Ames, IA 50011, USA
${}^{2}$Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA
${}^{3}$Helmholtz-Zentrum Berlin für Materialien und Energie, Glienicker Str. 100, 14109 Berlin, Germany
${}^{4}$NIST Center for Neutron Research, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
${}^{5}$Department of Materials Science and Engineering, University of Maryland, College Park, MD 20742, USA

(November 25, 2020)

Abstract

Single crystal neutron and high-energy x-ray diffraction have identified the phase lines corresponding to transitions between the ambient-pressure tetragonal (T), the antiferromagnetic orthorhombic (O) and the non-magnetic collapsed tetragonal (cT) phases of CaFe${}_{2}$As${}_{2}$. We find no evidence of additional structures for pressures up to 2.5 GPa (at 300 K). Both the T-cT and O-cT transitions exhibit significant hysteresis effects, and we demonstrate that coexistence of the O and cT phases can occur if a non-hydrostatic component of pressure is present. Measurements of the magnetic diffraction peaks show no change in the magnetic structure or ordered moment as a function of pressure in the O phase, and we find no evidence of magnetic ordering in the cT phase.
Band structure calculations show that the transition results in a strong decrease of the iron 3d density of states at the Fermi energy, consistent with a loss of the magnetic moment.

PACS numbers: 61.50.Ks, 61.05.fm, 74.70.Dd

The discovery of pressure-induced superconductivity in CaFe${}_{2}$As${}_{2}$ (Torikachvili et al., 2008; Park et al., 2008) has opened an exciting new avenue for investigations of the relationship between magnetism, superconductivity, and lattice instabilities in the iron arsenide family of superconductors. Features found in the compositional phase diagrams of the iron arsenides (Norman, 2008), such as a superconducting region at low temperature and finite doping concentrations, are mirrored in the pressure-temperature phase diagrams. Superconductivity appears at either a critical doping, or above some critical pressure, in the AFe${}_{2}$As${}_{2}$ (A$=$Ba, Sr, Ca) or ’122’ family of compounds, raising questions regarding the role of both electronic doping and pressure, especially in light of the recent observation of pressure-induced superconductivity in the related compound LaFeAsO (Okada et al., 2008). Does doping simply add charge carriers, or are changes in the chemical pressure upon doping important as well? What subtle, or striking, modifications in structure or magnetism occur with doping or pressure, and how are they related to superconductivity? Similar to other members of the AFe${}_{2}$As${}_{2}$ (A$=$Ba, Sr) family (Huang et al., 2008; Jesche et al., 2008; Zhao et al., 2008; Su et al., 2008), at ambient pressure CaFe${}_{2}$As${}_{2}$ undergoes a transition from a non-magnetically ordered tetragonal (T) phase (a = 3.879(3) Å, c = 11.740(3) Å) to an antiferromagnetic (AF) orthorhombic (O) phase (a = 5.5312(2) Å, b = 5.4576(2) Å, c = 11.683(1) Å) below approximately 170 K (Ni et al., 2008; Goldman et al., 2008). In the O phase, the Fe moments order in the so-called AF2 structure (Yildirim, 2008a), with moments directed along the a-axis of the orthorhombic structure (Goldman et al., 2008). Neutron powder diffraction measurements (Kreyssig et al., 2008) of CaFe${}_{2}$As${}_{2}$ under hydrostatic pressure found that for p$>$0.35 GPa (at T$=$50 K), the antiferromagnetic O phase transforms to a new, non-magnetically ordered, collapsed tetragonal (cT) structure (a = 3.9792(1) Å, c = 10.6379(6) Å) with a dramatic decrease in both the unit cell volume (5%) and the c/a ratio (11%). The transition to the cT phase occurs in close proximity to the pressure at which superconductivity is first observed (Torikachvili et al., 2008). Total energy calculations based on this cT structure concluded that the Fe moment is quenched, consistent with the absence of magnetic neutron diffraction peaks (Kreyssig et al., 2008). Since the report of a cT phase in CaFe${}_{2}$As${}_{2}$ under pressure, a significant effort has been devoted to understanding the relationship between the cT structure, features observed in resistivity and susceptibility measurements (Torikachvili et al., 2008; Lee et al., 2008), and the results of local probe measurements such as $\mu$SR (Goko et al., 2008). Although the neutron powder diffraction measurements demonstrated the loss of magnetic order in the cT phase, subsequent theoretical work has proposed that the Fe moment in the cT phase is not quenched and orders in an alternative Néel state (the so-called AF1 structure) (Yildirim, 2008b). It has also been suggested, from $\mu$SR measurements, that a partial volume fraction of static magnetic order coexists with superconductivity over an intermediate pressure range (Goko et al., 2008). Most recently, the existence of a new structural phase above 0.75 GPa in CaFe${}_{2}$As${}_{2}$ (Phase III in Ref.  Lee et al., 2008) has been proposed based on resistivity measurements.
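The magnitude of the lattice collapse quoted above follows directly from the reported tetragonal and collapsed tetragonal lattice constants; a quick arithmetic check:

```python
# Tetragonal (T) and collapsed tetragonal (cT) lattice constants from the text, in Å
a_T, c_T = 3.879, 11.740
a_cT, c_cT = 3.9792, 10.6379

vol_T, vol_cT = a_T**2 * c_T, a_cT**2 * c_cT
vol_drop = 100 * (vol_T - vol_cT) / vol_T                 # unit-cell volume decrease, %
ca_drop = 100 * (c_T / a_T - c_cT / a_cT) / (c_T / a_T)   # c/a ratio decrease, %

print(f"volume decrease: {vol_drop:.1f}%")   # close to the quoted 5%
print(f"c/a decrease:    {ca_drop:.1f}%")    # close to the quoted 11%
```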
It is, therefore, important to clearly identify the chemical and magnetic structures of CaFe${}_{2}$As${}_{2}$, and the phase lines that separate them as a function of temperature and pressure, a task best accomplished by neutron and x-ray diffraction measurements. In this paper, we report on neutron and x-ray single crystal diffraction studies of the pressure-temperature ($p-$T) magnetic and structural phase diagram of CaFe${}_{2}$As${}_{2}$. We clearly identify the phase boundaries of the T, O and cT phases in the $p-$T phase diagram and find significant hysteresis associated with transitions to, and from, the cT phase. We find no evidence of additional structural phases (Lee et al., 2008) for pressures up to 2.5 GPa. We also demonstrate that the AF2 ordering is associated only with the O phase, and vanishes upon entering the cT phase, in agreement with previous powder diffraction measurements (Kreyssig et al., 2008). The apparent coexistence between magnetic order and superconductivity under pressure (Goko et al., 2008) most likely arises from coexistence between the O and cT phases under non-hydrostatic measurement conditions. Furthermore, measurements at reciprocal lattice points associated with one of the three-dimensional realizations of the proposed AF1 structure (Yildirim, 2008b) do not reveal any evidence of magnetic order in the cT phase. Finally, we show from band structure calculations that the collapse of the lattice leads to a strong reduction of the Fe 3d density of states at the Fermi energy, consistent with the loss of magnetism in the cT phase.

I Experimental Details

The single crystals of CaFe${}_{2}$As${}_{2}$ used for the diffraction measurements were grown either using a Sn flux, as described previously (Ni et al., 2008), or from an FeAs flux. The FeAs powder used for the self-flux growth was synthesized by reacting Fe and As powders after they were mixed and pressed into pellets.
The pellets were sealed inside a quartz tube under one-third of an atmosphere of Ar gas, slowly heated to 500${}^{\circ}$C, held at that temperature for ten hours, and then slowly heated to 900${}^{\circ}$C and held there for an additional ten hours. Single crystals of CaFe${}_{2}$As${}_{2}$ were grown from this self-flux using conventional high-temperature solution growth techniques. Small Ca chunks and the FeAs powder were mixed together in a 1:4 ratio. The mixture was placed into an alumina crucible, together with a second catch crucible containing quartz wool, and sealed in a quartz tube under one-third of an atmosphere of Ar gas. The sealed quartz tube was heated to 1180${}^{\circ}$C for 2 hours, cooled to 1020${}^{\circ}$C over 4 hours, and then slowly cooled to 970${}^{\circ}$C over 27 hours, at which point the FeAs was decanted from the plate-like single crystals. The as-grown crystals were annealed at 500${}^{\circ}$C for 24 hours. Neutron diffraction data were taken on the BT-7 spectrometer at the NIST Center for Neutron Research and on the E4 diffractometer at the Helmholtz-Zentrum Berlin für Materialien und Energie. High-energy x-ray diffraction data were acquired using station 6-ID-D in the MUCAT Sector at the Advanced Photon Source. Since much of the description below involves diffraction measurements in both the tetragonal and orthorhombic phases of CaFe${}_{2}$As${}_{2}$, it is useful to describe the indexing system employed in our discussions. For the orthorhombic structure we employ indices (HKL)${}_{\textup{O}}$ for the reflections, based on the relations H = h + k, K = h - k and L = l, where (hkl)${}_{\textup{T}}$ are the corresponding Miller indices for the tetragonal phase. For example, the (220)${}_{\textup{T}}$ tetragonal peak becomes the orthorhombic (400)${}_{\textup{O}}$ below the structural transition.
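The tetragonal-to-orthorhombic relabelling just described is a one-line mapping; a small sketch (function name ours):

```python
def tetra_to_ortho(h, k, l):
    """Map tetragonal Miller indices (hkl)_T to orthorhombic (HKL)_O
    via H = h + k, K = h - k, L = l, as defined in the text."""
    return (h + k, h - k, l)

print(tetra_to_ortho(2, 2, 0))  # (220)_T -> (4, 0, 0), i.e. (400)_O
print(tetra_to_ortho(1, 1, 2))  # (112)_T -> (2, 0, 2), an (H0L)_O reflection
```

The second example also shows why the (hhl)_T plane maps onto the (H0L)_O plane: any h = k input gives K = 0.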
The (hhl)${}_{\textup{T}}$ reciprocal lattice plane in the tetragonal structure becomes the (H0L)${}_{\textup{O}}$ plane in the orthorhombic phase. The indexing notation for the collapsed tetragonal phase is identical to that for the tetragonal structure. When referring to diffraction peaks, we will use subscripts T, O and cT to denote whether the peaks are indexed in the tetragonal, orthorhombic or collapsed tetragonal phase.

Neutron diffraction measurements on BT-7: Three sets of measurements were performed on the BT-7 diffractometer. The first data set focused on measurements of the phase boundaries separating the T, O and cT phases. These data were taken in double-axis mode, using a wavelength of $\lambda=$2.36 Å and two pyrolytic graphite filters to reduce the higher harmonic content of the beam. A 10 mg single crystal (3$\times$3$\times$0.2 mm${}^{3}$), wrapped in Al foil, was secured to a flat plate within the Al-alloy He-gas pressure cell and cooled using a closed-cycle refrigerator. The sample was oriented so that the (hhl)${}_{\textup{T}}$ reciprocal lattice plane was coincident with the scattering plane of the diffractometer. The second set of data focused on measurements of the magnetic scattering in the orthorhombic phase. To maximize the intensity of the magnetic scattering relative to the substantial background from the pressure cell, these data were taken on a composite of eight single crystals, attached to a support plate using Fomblin oil, with the diffractometer operated in triple-axis mode using a PG(002) analyzer. The single crystals (combined mass of approximately 60 mg) were co-aligned so that their common (hhl)${}_{\textup{T}}$ plane was aligned in the scattering plane. The mosaic of the composite sample with respect to the [hh0]${}_{\textup{T}}$ and [00l]${}_{\textup{T}}$ directions was approximately 1 deg full-width-at-half-maximum (FWHM).
The third data set focused on a survey of magnetic scattering vectors in the (h0l)${}_{\textup{cT}}$ diffraction plane of the cT phase associated with the proposed AF1 structure, using the same diffractometer conditions as for data set two. For these measurements, a 70 mg single crystal, grown in an FeAs flux, was wrapped in Al foil and secured to a flat plate within the Al-alloy He-gas pressure cell. The crystal mosaic of this sample was measured to be approximately 1 deg (FWHM). All of these measurements employed an Al-alloy He-gas pressure cell to ensure hydrostatic pressure conditions. The cell was connected to a pressurizing intensifier through a high pressure capillary that allowed continuous monitoring and adjusting of the pressure. Using this system, the pressure could be varied at fixed temperatures (above the He solidification line), or the temperature could be scanned at nearly constant pressures. A helium reservoir allowed the pressure to remain relatively constant as temperature was changed. The finite size of the reservoir, however, results in some change in pressure over the temperature range measured, on the order of 15% for the highest pressures ($\sim$0.6 GPa).

Neutron diffraction measurements on E4: Additional neutron diffraction data were measured up to a pressure of 1 GPa on the E4 double-axis diffractometer with $\lambda=$2.44 Å for the (hhl)${}_{\textup{T}}$ orientation of a 10 mg Sn-flux grown single crystal. Here we employed a Be-Cu clamp-type pressure cell and a 1:1 mixture of Fluorinert 77 and 70 as the pressure medium. The pressure was set ex-situ at room temperature and the cell was inserted into a standard He cryostat. The initial pressure was measured using a manganin pressure sensor and then monitored in-situ (since pressure decreases with decreasing temperature) by tracking the lattice constants of NaCl crystals placed above the sample.
High-energy x-ray diffraction measurements: In order to move beyond the limits of the He-gas and clamped pressure cells, we used a Merrill-Bassett diamond anvil pressure cell with an ethanol/methanol mixture as the pressure medium. This cell allowed us to collect diffraction data at 300 K up to 2.5 GPa. Here we employed 100 keV x-rays to ensure full penetration of the single crystal (Sn-flux grown) samples in the cell and recorded diffraction data over layers of reciprocal space using a 2D detector. At all pressures investigated the medium remained fluid.

II Results

II.1 Determination of the pressure-temperature phase diagram

We first describe how the temperature/pressure phase lines for CaFe${}_{2}$As${}_{2}$, shown in Fig. 1(a), were derived from our neutron and high-energy x-ray diffraction measurements. The T-O phase boundary was determined by monitoring the (220)${}_{\textup{T}}$ or (112)${}_{\textup{T}}$ diffraction peaks upon heating (solid circles) or cooling (open circles) at specified pressures. At the T-O transition, the orthorhombic distortion and associated twinning split both the (220)${}_{\textup{T}}$ and (112)${}_{\textup{T}}$ peaks, providing a clear signature of the transition. As indicated in Fig. 1(a), there is little hysteresis in the T-O transition ($\sim$2-3 K), consistent with our previous measurements at ambient pressure (Torikachvili et al., 2008; Ni et al., 2008; Goldman et al., 2008). In a similar fashion, the T-cT phase boundary in Fig. 1(a) was mapped by monitoring the intensity of the (004)${}_{\textup{cT}}$ peak while heating the sample from low temperature (solid squares), or the intensity of the (004)${}_{\textup{T}}$ diffraction peak while cooling the sample from higher temperature (open squares). The sizeable difference ($\sim$9%) in the c-lattice constants for these two structures is evident from the scattering angles for the two peaks in Figs. 1(b) and (c).
Figure 1(d) shows that, at a nominal pressure of 0.47 GPa, the cT phase transforms sharply to the T phase at 141 K (on heating) with no coexistence beyond the 1 K wide transition region. On decreasing temperature, the transition from the T to the cT phase is also very sharp but occurs at 112 K, nearly 30 K below the transition on heating, demonstrating the strong hysteresis (similar in magnitude to that noted in Refs.  Torikachvili et al., 2008 and  Lee et al., 2008) in the T-cT transition. We also point out that the strong volume changes associated with the T-cT transition irreversibly increase the sample mosaic. Indeed, the difference between the measured peak intensities for the (004) diffraction peaks in Fig. 1(d) arises from the doubling of the sample mosaic during this transition. We note, however, that all transitions in temperature and pressure were reproducible, even after several pressure/temperature cycles. The phase boundary between the O and the cT phases was determined by increasing/decreasing pressure at a fixed temperature while monitoring the (004)${}_{\textup{O}}$ diffraction peak (increasing pressure, solid triangles) and the (004)${}_{\textup{cT}}$ peak (decreasing pressure, open triangles). For example, at 92 K, on increasing pressure, a sharp transition from the O to cT phase, with a width smaller than our step size of 0.025 GPa, was found between 0.375 and 0.400 GPa (Fig. 1(e)). Upon decreasing pressure at 92 K, however, the transition to the O phase occurred at 0.225 GPa (Fig. 1(f)), demonstrating a striking pressure hysteresis in the O-cT transition as well. Nevertheless, the transitions themselves were sharp, with no phase coexistence in evidence. We note that above 0.4 GPa, the cT phase persists down to the lowest temperature (4 K) and highest pressure (0.6 GPa) attained for the BT-7 measurements. Based upon resistivity measurements performed in a clamp-type Be-Cu cell using a silicon fluid as a pressure medium, Lee et al. (2008) have proposed that there is a transition at approximately 0.75 GPa from the cT phase to another structure of unknown symmetry (Phase III in Ref.  Lee et al., 2008). In order to move beyond the limits of the He-gas pressure cell to investigate the possibility of additional phases at higher pressure, x-ray diffraction measurements were performed using a diamond-anvil pressure cell at 300 K on the 6-ID-D station in the MUCAT Sector at the Advanced Photon Source. Diffraction data over a wide range of reciprocal space in both the (hhl)${}_{\textup{T}}$ and (h0l)${}_{\textup{T}}$ planes were collected using a two-dimensional area detector. As described in earlier work (Kreyssig et al., 2007), entire reciprocal lattice planes can be imaged to identify any new reflections that signal the presence of additional structural phases. The T-cT transition at 300 K is evidenced by the strong change in lattice constants observed at 1.6 GPa, as shown in the inset of Fig. 1(a). No additional diffraction peaks were found above this transition, confirming the presence of only the cT phase for pressures up to 2.5 GPa. One of the most striking features of Fig. 1(a) is the large hysteresis regime represented by the shading. Within this area, the structure and physical properties measured at a particular pressure and temperature depend strongly upon the path taken to that point. Taken together, this large range of hysteresis and the strongly anisotropic response of the structure at the cT phase boundaries (see discussion in Section IIC) can easily lead to discrepancies in reports of magnetic order, electronic properties and superconductivity in CaFe${}_{2}$As${}_{2}$ under pressure.

II.2 The magnetic structure of CaFe${}_{2}$As${}_{2}$ under pressure

One of the most interesting results of the original report of a cT phase from neutron powder diffraction measurements (Kreyssig et al., 2008) was the disappearance of Fe magnetic ordering.
The antiferromagnetic AF2 ordering in the orthorhombic phase of CaFe${}_{2}$As${}_{2}$ is shown in Fig. 2(a). Magnetic diffraction peaks are found at positions (H0L)${}_{\textup{O}}$ with H and L odd, such as (101)${}_{\textup{O}}$ and (103)${}_{\textup{O}}$. During the course of our measurements of the single crystal on BT-7, the magnetic peaks were also monitored. The (101)${}_{\textup{O}}$ and (103)${}_{\textup{O}}$ magnetic peaks were observed (see, for example, Fig. 1(h)) at selected pressures and temperatures within the O phase, but no AF2 magnetic peaks were found at the corresponding ($\frac{1}{2}\frac{1}{2}1$) or ($\frac{1}{2}\frac{1}{2}3$) positions (see Figs. 1(g) and (i)) in either the T or cT phases, consistent with previous results (Kreyssig et al., 2008). In order to more closely track the evolution of the magnetic structure with pressure, a composite sample of eight single crystals, as described in Section I, was mounted in the He-gas pressure cell and cooled through the T-O transition at ambient pressure. At T = 75 K, pressure was increased in 0.05 GPa steps, through the O-cT transition, up to a maximum pressure of 0.45 GPa. The temperature was then lowered to 50 K and pressure was released in 0.05 GPa steps to ambient pressure. At each pressure step, the (004)${}_{\textup{O}}$ and (004)${}_{\textup{cT}}$ nuclear peaks were measured along with the (101)${}_{\textup{O}}$ and (103)${}_{\textup{O}}$ magnetic peaks. Fig. 3 plots the volume fraction of the O and cT phases as a function of pressure at these two temperatures, determined from the integrated intensities of the (004)${}_{\textup{O}}$ and (004)${}_{\textup{cT}}$ peaks. The integrated intensity of the (103)${}_{\textup{O}}$ magnetic peak is also plotted at each pressure value. Upon increasing pressure at 75 K, the magnetic peak intensity remains constant until the O-cT phase boundary is reached.
At 0.40 GPa we observe coexistence between the O and cT phases and a decrease in the magnetic intensity consistent with the decrease in the O phase volume fraction. For pressures greater than 0.45 GPa, the sample has completely transformed to the cT phase and there is no evidence of static antiferromagnetic order associated with the AF2 structure at either the (103)${}_{\textup{O}}$ or ($\frac{1}{2}\frac{1}{2}3$)${}_{\textup{cT}}$ reciprocal lattice positions. As pressure is decreased at 50 K, only the cT phase is present until the cT-O transition boundary at 0.075 GPa, consistent with the results shown in Fig. 1(a). Upon the appearance of the O phase, the AF2 structure is once again recovered. One interesting difference between the data in Fig. 3 and Fig. 1 is the finite range of coexistence between the O and cT phases. In particular, we see that for the composite sample the cT phase is evident down to ambient pressure, whereas for the single crystal sample the transition between the cT and O phases was sharp, exhibiting little in the way of phase coexistence. We attribute this to the fact that the composite sample was set on a sample holder using Fomblin oil, which solidifies well above the temperatures of these measurements. As described below, dramatic changes in the unit cell dimensions, as the sample transforms into (or out of) the cT phase, can introduce significant strain for constrained samples that “smears” the transition. Nevertheless, these data show that: (1) the AF2 magnetic structure is associated with the O phase and is absent in the cT phase; and (2) the magnetic moment associated with the AF2 structure is independent of pressure up to the O-cT transition.
Although the neutron powder diffraction measurements demonstrated the loss of magnetic order in the cT phase, subsequent first-principles calculations (Yildirim, 2008b) have claimed that the cT phase is magnetic, with Fe moments ordering in an AF1 Néel state (all nearest-neighbor interactions are AF) with a moment of 1.3 $\mu_{\textup{B}}$. There are two three-dimensional realizations of the AF1 structure, illustrated in Fig. 2, with either an antiferromagnetic (Fig. 2(b)) or ferromagnetic (Fig. 2(c)) alignment of adjacent Fe planes along the c-axis. For both cases, the magnetic unit cell is the same as the chemical unit cell. For the structure in Fig. 2(b), magnetic reflections will be found at positions (h0l)${}_{\textup{cT}}$ with h and l odd. Unfortunately, these positions also correspond to allowed nuclear reflections and, unless the moment is large, the magnetic contribution to the scattering is difficult to measure using unpolarized neutrons. For the structure illustrated in Fig. 2(c), magnetic reflections will be found at positions (h0l)${}_{\textup{cT}}$ with h odd and l even. These positions are forbidden for nuclear scattering from the I4/mmm tetragonal structure and were investigated using the 70 mg FeAs-flux grown sample. Measurements were done at ambient pressure and T = 167 K (in the O phase, for reference) and at p = 0.62 GPa and T = 50 K (in the cT phase). We found no evidence of AF1 magnetic order at the (100)${}_{\textup{cT}}$ and (102)${}_{\textup{cT}}$ positions, consistent with our previous neutron diffraction measurements (Kreyssig et al., 2008).

II.3 Coexistence of phases in CaFe${}_{2}$As${}_{2}$ under pressure

While our single crystal data (Section IIA) clearly show the presence of single phase fields in the p-T phase diagram, we do find circumstances in which extended ranges of coexistence between the cT and O or T phases occur.
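Why the (100)${}_{\textup{cT}}$ and (102)${}_{\textup{cT}}$ positions are nuclear-forbidden follows from the body-centering of the I4/mmm structure, which only permits reflections with h + k + l even; a quick check (helper name ours):

```python
def nuclear_allowed_I_centered(h, k, l):
    """Body-centered (I) lattices only produce nuclear reflections
    with h + k + l even; all other reflections are systematically extinct."""
    return (h + k + l) % 2 == 0

# The AF1 candidate positions probed in the text are nuclear-forbidden,
# so any intensity observed there would have to be magnetic in origin:
print(nuclear_allowed_I_centered(1, 0, 0))  # False
print(nuclear_allowed_I_centered(1, 0, 2))  # False
# An (h0l) position with h and l odd coincides with an allowed nuclear peak,
# which is why that AF1 realization is hard to test with unpolarized neutrons:
print(nuclear_allowed_I_centered(1, 0, 1))  # True
```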
We believe that this observation is key to understanding recent $\mu$SR measurements that suggest that a partial volume fraction of static magnetic order coexists with superconductivity over an intermediate pressure range (Goko et al., 2008). For example, in Fig. 4 we show the evolution of the (002)${}_{\textup{T}}$ diffraction peak as a function of temperature at 0.83 GPa (set at room temperature), measured on the E4 diffractometer using a Be-Cu clamp-type pressure cell and a 1:1 mixture of Fluorinert 77 and 70 as the pressure medium. On decreasing the temperature below approximately 120 K (P = 0.66(5) GPa), we find evidence of an extended coexistence regime between the T and O phases. As the temperature is further decreased below approximately 100 K (P = 0.60(5) GPa), Fig. 4 shows coexistence between the O and cT structures that persists down to the base temperature of 2 K. The coexistence of the antiferromagnetic O and the nonmagnetic cT phase under conditions similar to those for the $\mu$SR measurements (Goko et al., 2008) provides a compelling explanation for the observed coexistence between static magnetic order, with a volume fraction that decreases with increasing pressure, and a non-magnetic volume fraction that increases with increasing pressure. We attribute this behavior to the freezing of the pressure-transmitting liquid above 150 K (Daphne oil 7373 (Murata et al., 1997) in Ref.  Goko et al., 2008, and Fluorinert here). This freezing, coupled with the anisotropy of the structural change across the T-cT transition, can lead to significant pressure gradients through the sample(s), a problem that is not encountered in our He-gas cell apparatus for the temperature and pressure ranges investigated here. As noted in Section IIB, bonding the sample to a support can result in similar strains and inhomogeneities, smearing the transitions to and from the cT phase and extending the coexistence regime between phases.
III Discussion Although it appears, so far, that the related compounds BaFe${}_{2}$As${}_{2}$ and SrFe${}_{2}$As${}_{2}$ do not manifest a cT phase at elevated pressures Kimber et al. (2008), the transition to a cT phase in CaFe${}_{2}$As${}_{2}$ with applied pressure is not unique among systems that crystallize in the ThCr${}_{2}$Si${}_{2}$ structure. Other examples of this phenomenon are found among the phosphide compounds, including SrRh${}_{2}$P${}_{2}$ and EuRh${}_{2}$P${}_{2}$ Huhnt et al. (1997a), as well as SrNi${}_{2}$P${}_{2}$, EuCo${}_{2}$P${}_{2}$ Huhnt et al. (1997b) and EuFe${}_{2}$P${}_{2}$ Ni et al. (2001). For the Eu-compounds, the cT phase is accompanied by a valence transition from Eu${}^{2+}$ to the non-magnetic Eu${}^{3+}$ and, for EuCo${}_{2}$P${}_{2}$, a change from local moment Eu(4f) to itinerant Co(3d) magnetism associated with a strong modification of the 3d bands in the cT phase Chefki et al. (1998). In all of these phosphide compounds, the striking decrease in the c/a ratios in the cT phase has been described in terms of “bonding” transitions involving the formation of a P-P single bond between ions in neighboring planes along the c-axis, and has been discussed in some detail by Hoffmann and Zheng Hoffmann and Zheng (1985). Following this work on the small A-site cation limit for the isostructural phosphide compounds Hoffmann and Zheng (1985), we suggest that there is a transition in the bonding character of the As ions and the promotion of As-As bonds across the Fe${}_{2}$As${}_{2}$ layers under pressure. This was also suggested in recent theoretical work by Yildirim Yildirim (2008b). In the ambient pressure tetragonal phase, the As-As distance between neighboring Fe${}_{2}$As${}_{2}$ layers is approximately 3.15 Å, much larger than the As-As single bond distance of 2.52 Å in elemental As. In the cT phase, the As-As distance decreases to approximately 2.82 Å, still larger, but much closer to the range of an As-As single bond. 
The enhancement of the As-As bonding under pressure can have dramatic effects on the band structure and magnetism Hoffmann and Zheng (1985). In the case of the collapsed phase of EuCo${}_{2}$P${}_{2}$, for example, an increase in the Co 3d band-filling results in an enhancement of the density of states at the Fermi energy and the formation of a Co magnetic moment Ni et al. (2001). It has also been pointed out that even small changes in the arsenic position (z${}_{As}$) strongly affect the occupation of the Fe 3d${}_{x^{2}-y^{2}}$ orbitals and, therefore, the magnetic behavior Krellner et al. (2008). To further investigate the impact of the cT transition on the electronic density of states (DOS) and generalized susceptibility, $\chi$(q), of CaFe${}_{2}$As${}_{2}$, we have performed band structure calculations for both the T and cT phases. These calculations were performed using the full potential LAPW method, with R${}_{MT}$ * K${}_{\textrm{max}}$ = 8 and R${}_{MT}$ = 2.2, 2.0, 2.0 atomic units for Ca, Fe and As, respectively. The number of k-points in the irreducible Brillouin zone is 550 for the self-consistent charge, 828 for the DOS calculation, and 34501 for the $\chi$(q) calculation. For the local density functional, the Perdew-Wang 1992 functional Perdew and Wang (1992) was employed. The convergence criterion for the total energy was 0.01 mRyd/cell. The structural parameters for the T and cT phases were obtained from experiment Kreyssig et al. (2008). The density of states, obtained with the tetrahedron method, has been broadened with a Gaussian of width 3 mRyd. Fig. 5(a) shows the calculated density of states (DOS) within 2 eV of the Fermi energy for both non-magnetic tetragonal phases. The Fe states overwhelmingly ($>$ 95 percent) contribute in this region, with significant As hybridization (bonding) occurring at lower energies and Ca contributions appearing at higher energies. 
The most significant change in the collapsed phase is the dramatic lowering of the DOS at the Fermi energy, associated primarily with a shift to lower energies of the Fe $3d_{x^{2}-y^{2}}$ and Fe $3d_{xz+yz}$ orbitals. This is also demonstrated in the corresponding $\chi$(q) calculations shown in Fig. 5(b) for the two phases. The intraband contribution to $\chi$(q = 0) is exactly the density of states at $E_{F}$, with a large value favoring ferromagnetic ordering (Stoner criterion). We note that the DOS at $E_{F}$ of 2.8 states/eV-Fe is comparable to that for pure Fe (3.0 states/eV-Fe). The considerably larger peak in $\chi$(q) at the zone boundary for the T phase, however, is an indication that the magnetic instability is antiferromagnetic, with ordering observed upon a small orthorhombic distortion Goldman et al. (2008). The small DOS at $E_{F}$ for the cT phase shown in Fig. 5(a) is not sufficient to induce magnetic ordering, and the essentially featureless $\chi$(q) (note the offset) in Fig. 5(b) and along other high symmetry directions (not shown) for the cT phase indicates that magnetic ordering is unlikely at any q-vector. These calculations clearly support the notion of a depressed Fe $3d$ density of states at $E_{F}$, which is consistent with previous work Yildirim (2008b); Krellner et al. (2008); Samolyuk and Antropov (2008), as well as the suppression of magnetism in the cT phase. The suppression of the DOS realized in these calculations is, however, surprising in light of the strong reduction in the resistivity of CaFe${}_{2}$As${}_{2}$ found upon transformation into the cT phase Torikachvili et al. (2008); Park et al. (2008). This suggests that scattering effects are greatly reduced in the cT phase in comparison to the T phase, and calls for further investigation of the spin-fluctuation spectrum of the T phase in particular. 
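For reference, the ferromagnetic instability condition invoked above (the Stoner criterion) can be written in its standard textbook form; the exchange parameter $I$ below is not quoted in the source and is introduced only for illustration: $$I\,N(E_{F})>1\,,$$ where $N(E_{F})$ is the density of states per spin at the Fermi energy. A large $N(E_{F})$, as calculated for the T phase, pushes the system toward a magnetic instability, while the depressed DOS of the cT phase moves it away from one.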
IV Summary and Conclusions To summarize our results, we have identified the phase lines corresponding to transitions between the ambient-pressure tetragonal, the antiferromagnetic orthorhombic, and the non-magnetic collapsed tetragonal phases of CaFe${}_{2}$As${}_{2}$. No additional structures were observed for pressures up to 2.5 GPa (at 300 K), in contradiction to the proposal of Ref. Lee et al., 2008. For at least one of the two possible AF1 structures, we find no evidence of magnetic ordering in the cT phase as proposed in Ref. Yildirim, 2008b. Whereas the low-pressure T-O transition presents only slight hysteresis ($\sim$2 K), both the T-cT and O-cT transitions exhibit very significant hysteresis effects. The large temperature/pressure range of hysteresis, together with the strongly anisotropic changes in the CaFe${}_{2}$As${}_{2}$ lattice at the cT transition, must be considered in the interpretation of resistivity and susceptibility measurements in CaFe${}_{2}$As${}_{2}$, particularly for measurements where non-hydrostatic pressure effects are possible. We have demonstrated that the $\mu$SR measurements in Ref. Goko et al., 2008 are consistent with our neutron powder and single-crystal studies; the coexistence of non-magnetic and magnetically ordered fractions results from coexistence between the O and cT phases in the presence of a non-hydrostatic pressure component. We also note that pressure-induced superconductivity in CaFe${}_{2}$As${}_{2}$ has been observed close to the O-cT transition, and within the hysteretic region associated with that transition, and therefore should be studied carefully under hydrostatic conditions. The loss of magnetism in the cT phase is largely due to changes in the band structure that deplete the Fe 3$d$ DOS at E${}_{F}$, a behavior that is analogous to other 122-type compounds Huhnt et al. (1997a, b); Ni et al. 
(2001); Hoffmann and Zheng (1985) and qualitatively consistent with previous theoretical calculationsYildirim (2008b); Krellner et al. (2008); Samolyuk and Antropov (2008). The authors wish to acknowledge very useful discussions with Joerg Schmalian, the assistance of J.Q. Yan with sample preparation, and the assistance of Yejun Feng and Doug Robinson with the high energy x-ray measurements. The work at the Ames Laboratory and at the MUCAT sector was supported by the U.S. DOE under Contract No. DE-AC02-07CH11358. The use of the Advanced Photon Source was supported by U.S. DOE under Contract No. DE-AC02-06CH113. References Torikachvili et al. (2008) M. S. Torikachvili, S. L. Bud’ko, N. Ni, and P. C. Canfield, Phys. Rev. Lett. 101, 057006 (2008). Park et al. (2008) T. Park, E. Park, H. Lee, T. Klimczuk, E. D. Bauer, F. Ronning, and J. D. Thompson, J. Phys.: Condens. Matter 20, 322204 (2008). Norman (2008) M. R. Norman, Physics 1, 21 (2008), and references therein. Okada et al. (2008) H. Okada, K. Igawa, H. Takahashi, Y. Kamihara, M. Hirano, H. Hosono, K. Matsubayashi, and Y. Uwatoko, J. Phys. Soc. Jpn. 77, 113712 (2008). Huang et al. (2008) Q. Huang, Y. Qiu, W. Bao, M. A. Green, J. W. Lynn, Y. C. Gasparovic, T. Wu, G. Wu, and X. H. Chen, arXiv:0806.2776 (2008), unpublished. Jesche et al. (2008) A. Jesche, N. Caroca-Canales, H. Rosner, H. Borrmann, A. Ormeci, D. Kasinathan, K. Kaneko, H. H. Klauss, H. Luetkens, R. Khasanov, et al., arXiv:0807.0632 (2008), unpublished. Zhao et al. (2008) J. Zhao, W. Ratcliff, II, J. W. Lynn, G. F. Chen, J. L. Luo, N. L. Wang, J. Hu, and P. Dai, Phys. Rev. B 78, 140504(R) (2008). Su et al. (2008) Y. Su, P. Link, A. Schneidewind, T. Wolf, Y. Xiao, R. Mittal, M. Rotter, D. Johrendt, T. Brueckel, and M. Loewenhaupt, arXiv:0807.1743 (2008), unpublished. Ni et al. (2008) N. Ni, S. Nandi, A. Kreyssig, A. I. Goldman, E. D. Mun, S. L. Bud’ko, and P. C. Canfield, Phys. Rev. B 78, 014523 (2008). Goldman et al. (2008) A. I. Goldman, D. N. 
Argyriou, B. Ouladdiaf, T. Chatterji, A. Kreyssig, S. Nandi, N. Ni, S. L. Bud’ko, P. C. Canfield, and R. J. McQueeney, Phys. Rev. B 78, 100506(R) (2008). Yildirim (2008a) T. Yildirim, Phys. Rev. Lett. 101, 057010 (2008a). Kreyssig et al. (2008) A. Kreyssig, M. A. Green, Y. Lee, G. D. Samolyuk, P. Zajdel, J. W. Lynn, S. L. Bud’ko, M. S. Torikachvili, N. Ni, S. Nandi, et al., arXiv:0807.3032 (2008), unpublished. Lee et al. (2008) H. Lee, E. Park, T. Park, F. Ronning, E. D. Bauer, and J. D. Thompson, arXiv:0809.3550 (2008), unpublished. Goko et al. (2008) T. Goko, A. A. Aczel, E. Baggio-Saitovitch, S. L. Bud’ko, P. Canfield, J. P. Carlo, G. F. Chen, P. Dai, A. C. Hamann, W. Z. Hu, et al., arXiv:0808.1425 (2008), unpublished. Yildirim (2008b) T. Yildirim, arXiv:0807.3936 (2008b), unpublished. Kreyssig et al. (2007) A. Kreyssig, S. Chang, Y. Janssen, J. W. Kim, S. Nandi, J. Q. Yan, L. Tan, R. J. McQueeney, P. C. Canfield, and A. I. Goldman, Phys. Rev. B 76, 054421 (2007). Murata et al. (1997) K. Murata, H. Yoshino, H. O. Yadav, Y. Honda, and N. Shirakawa, Rev. Sci. Instrum. 68, 2490 (1997). Kimber et al. (2008) S. A. J. Kimber, A. Kreyssig, F. Yokaichiya, D. N. Argyriou, J. Q. Yan, T. Hansen, T. Chatterji, R. J. McQueeney, P. C. Canfield, and A. I. Goldman (2008), unpublished. Huhnt et al. (1997a) C. Huhnt, G. Michels, M. Roepke, W. Schlabitz, A. Wurth, D. Johrendt, and A. Mewis, Physica B 240, 26 (1997a). Huhnt et al. (1997b) C. Huhnt, W. Schlabitz, A. Wurth, A. Mewis, and M. Reehuis, Phys. Rev. B 56, 13796 (1997b). Ni et al. (2001) B. Ni, M. M. Abd-Elmeguid, H. Micklitz, J. P. Sanchez, P. Vulliet, and D. Johrendt, Phys. Rev. B 63, 100102(R) (2001). Chefki et al. (1998) M. Chefki, M. M. Abd-Elmeguid, H. Micklitz, C. Huhnt, W. Schlabitz, M. Reehuis, and W. Jeitschko, Phys. Rev. Lett. 80, 802 (1998). Hoffmann and Zheng (1985) R. Hoffmann and C. Zheng, J. Phys. Chem. 89, 4175 (1985). Krellner et al. (2008) C. Krellner, N. Caroca-Canales, A. Jesche, H. Rosner, A. 
Ormeci, and C. Geibel, Phys. Rev. B 78, 100504(R) (2008). Perdew and Wang (1992) J. P. Perdew and Y. Wang, Phys. Rev. B 45, 13244 (1992). Samolyuk and Antropov (2008) G. D. Samolyuk and V. P. Antropov, arXiv:0810.1445 (2008), unpublished.
A Low Density Lattice Decoder via Non-parametric Belief Propagation Danny Bickson IBM Haifa Research Lab Mount Carmel, Haifa 31905, Israel Email: danny.bickson@gmail.com    Alexander T. Ihler Bren School of Information and Computer Science University of California, Irvine Email: ihler@ics.uci.edu    Harel Avissar and Danny Dolev School of Computer Science and Engineering Hebrew University of Jerusalem Jerusalem 91904, Israel Email: {harela01,dolev}@cs.huji.ac.il Abstract The recent work of Sommer, Feder and Shalvi presented a new family of codes called low density lattice codes (LDLC) that can be decoded efficiently and approach the capacity of the AWGN channel. A linear-time iterative decoding scheme, based on a message-passing formulation on a factor graph, was given. In the current work we report our theoretical findings regarding the relation between the LDLC decoder and belief propagation. We show that the LDLC decoder is an instance of non-parametric belief propagation and further connect it to the Gaussian belief propagation algorithm. Our new results enable borrowing knowledge from the non-parametric and Gaussian belief propagation domains into the LDLC domain. Specifically, we give more general conditions for convergence of the LDLC decoder (under the same assumptions as the original LDLC convergence analysis). We discuss how to extend the LDLC decoder from Latin square to full rank, non-square matrices. We propose an efficient construction of a sparse generator matrix and its matching decoder. We report preliminary experimental results which show that our decoder has a symbol error rate comparable to that of the original LDLC decoder. I Introduction Lattice codes provide a continuous-alphabet encoding procedure, in which integer-valued information bits are converted to positions in Euclidean space. Motivated by the success of low-density parity check (LDPC) codes [1], recent work by Sommer et al. 
[2] presented low density lattice codes (LDLC). Like LDPC codes, an LDLC code has a sparse decoding matrix which can be decoded efficiently using an iterative message-passing algorithm defined over a factor graph. In the original paper, the lattice codes were limited to Latin squares, and some theoretical results were proven for this special case. The non-parametric belief propagation (NBP) algorithm is an efficient method for approximate inference on continuous graphical models. The NBP algorithm was originally introduced in [3], but has recently been rediscovered independently in several domains, among them compressive sensing [4, 5] and low density lattice decoding [2], demonstrating very good empirical performance in these systems. In this work, we investigate the theoretical relations between the LDLC decoder and belief propagation, and show that it is an instance of the NBP algorithm. This understanding has both theoretical and practical consequences. From the theory point of view, we provide a cleaner and more standard derivation of the LDLC update rules from the graphical models perspective. From the practical side, we propose to use the considerable body of research that exists in the NBP domain to allow the construction of efficient decoders. We further propose a new family of LDLC codes as well as a new LDLC decoder based on the NBP algorithm. By utilizing sparse generator matrices rather than the sparse parity check matrices used in the original LDLC work, we can obtain a more efficient encoder and decoder. We introduce the theoretical foundations which are the basis of our new decoder and give preliminary experimental results which show that our decoder has performance comparable to the LDLC decoder. The structure of this paper is as follows. Section II overviews LDLC codes, belief propagation on factor graphs, and the LDLC decoder algorithm. 
Section III rederives the original LDLC algorithm using standard graphical models terminology, and shows that it is an instance of the NBP algorithm. Section IV presents a new family of LDLC codes as well as our novel decoder. We further discuss the relation to the GaBP algorithm. In Section V we discuss convergence and give more general sufficient conditions for convergence, under the same assumptions used in the original LDLC work. Section VI presents preliminary experimental results comparing our NBP decoder with the LDLC decoder. We conclude in Section VII. II Background II-A Lattices and low-density lattice codes An $n$-dimensional lattice $\Lambda$ is defined by a generator matrix $G$ of size $n\times n$. The lattice consists of the discrete set of points $x=(x_{1},x_{2},...,x_{n})\in R^{n}$ with $x=Gb$, where $b\in Z^{n}$ ranges over all integer vectors. A low-density lattice code (LDLC) is a lattice with a non-singular generator matrix $G$, for which $H=G^{-1}$ is sparse. It is convenient to assume that $det(H)=1/det(G)=1$. An $(n,d)$ regular LDLC code has an $H$ matrix with constant row and column degree $d$. In a Latin square LDLC, the values of the $d$ non-zero coefficients in each row and each column are some permutation of the values $h_{1},h_{2},\cdots,h_{d}$. We assume a linear channel with additive white Gaussian noise (AWGN). For a vector of integer-valued information $b$ the transmitted codeword is $x=Gb$, where $G$ is the LDLC encoding matrix, and the received observation is $y=x+w$, where $w$ is a vector of i.i.d. AWGN with diagonal covariance $\sigma^{2}I$. The decoding problem is then to estimate $b$ given the observation vector $y$; for the AWGN channel, the maximum likelihood (ML) estimate is $$b^{*}=\arg\mathop{\min}\limits_{b\in\mathbb{Z}^{n}}||y-Gb||^{2}\,.$$ (1) II-B Factor graphs and belief propagation Factor graphs provide a convenient mechanism for representing structure among random variables. 
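As a concrete toy illustration of the encoding $x=Gb$ and the decoding objective (1), the following sketch uses a hypothetical dense $2\times 2$ generator (not a real sparse LDLC matrix) and decodes by brute-force enumeration over a small integer box. This is only feasible in trivially small dimension, which is exactly why a message-passing decoder is needed:

```python
import numpy as np
from itertools import product

# Hypothetical tiny generator matrix for illustration only.
G = np.array([[1.0, 0.3],
              [0.2, 1.0]])

b = np.array([1, -2])                    # integer information vector
x = G @ b                                # transmitted lattice point
rng = np.random.default_rng(0)
y = x + 0.01 * rng.standard_normal(2)    # AWGN observation

# Brute-force search over the objective in (1); the small noise level
# here guarantees the nearest lattice point recovers b.
candidates = product(range(-3, 4), repeat=2)
b_hat = min(candidates, key=lambda c: np.linalg.norm(y - G @ np.array(c)))
```

The cost of this search grows exponentially with the lattice dimension $n$, while the factor-graph decoder discussed next scales linearly per iteration in the number of non-zeros of $H$.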
Suppose a function or distribution $p(x)$ defined on a large set of variables $x=[x_{1},\ldots,x_{n}]$ factors into a collection of smaller functions $p(x)=\prod_{s}f_{s}(x_{s})$, where each $x_{s}$ is a vector composed of a smaller subset of the $x_{i}$. We represent this factorization as a bipartite graph with “factor nodes” $f_{s}$ and “variable nodes” $x_{i}$, where the neighbors $\Gamma_{s}$ of $f_{s}$ are the variables in $x_{s}$, and the neighbors of $x_{i}$ are the factor nodes which have $x_{i}$ as an argument ($f_{s}$ such that $x_{i}\in x_{s}$). For compactness, we use subscripts $s,t$ to indicate factor nodes and $i,j$ to indicate variable nodes, and will use $x$ and $x_{s}$ to indicate sets of variables, typically formed into a vector whose entries are the variables $x_{i}$ which are in the set. The belief propagation (BP) or sum-product algorithm [6] is a popular technique for estimating the marginal probabilities of each of the variables $x_{i}$. BP follows a message-passing formulation, in which at each iteration $\tau$, every variable passes a message (denoted $M_{is}^{\tau}$) to its neighboring factors, and factors to their neighboring variables. These messages are given by the general form $$\displaystyle M^{\tau+1}_{is}(x_{i})=f_{i}(x_{i})\prod_{t\in\Gamma_{i}\setminus s}M^{\tau}_{ti}(x_{i})\,,\qquad M^{\tau+1}_{si}(x_{i})=\int\limits_{x_{s}\setminus x_{i}}f_{s}(x_{s})\prod\limits_{j\in\Gamma_{s}\setminus i}M^{\tau}_{js}(x_{j})\,dx_{s}\,.$$ (2) Here we have included a “local factor” $f_{i}(x_{i})$ for each variable, to better parallel our development in the sequel. When the variables $x_{i}$ take on only a finite number of values, the messages may be represented as vectors; the resulting algorithm has proven effective in many coding applications including low-density parity check (LDPC) codes [7]. 
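As a minimal sketch of the update rules in (2) for the finite-alphabet case, consider a hypothetical model (not from the paper) with two binary variables joined by one pairwise factor. The messages are just length-2 vectors, and on this tree a single pass reproduces the exact marginal:

```python
import numpy as np

f1 = np.array([0.9, 0.1])       # local factor on x1
f2 = np.array([0.5, 0.5])       # local factor on x2
fs = np.array([[0.8, 0.2],      # pairwise factor f_s(x1, x2)
               [0.2, 0.8]])

m_1s = f1                       # variable-to-factor: x1 has no other neighbors
m_s2 = fs.T @ m_1s              # factor-to-variable: marginalize out x1
belief2 = f2 * m_s2
belief2 /= belief2.sum()

# Exact marginal for comparison (brute force over the joint).
joint = f1[:, None] * fs * f2[None, :]
exact = joint.sum(axis=0) / joint.sum()
```

On graphs with cycles the same updates are iterated and are only approximate, which is the regime the decoders below operate in.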
In keeping with our focus on continuous-alphabet codes, however, we will focus on implementations for continuous-valued random variables. II-B1 Gaussian Belief Propagation When the joint distribution $p(x)$ is Gaussian, $p(x)\propto\exp\{-\frac{1}{2}x^{T}Jx+h^{T}x\}$, the BP messages may also be compactly represented in the same form. Here we use the “information form” of the Gaussian distribution, ${\cal N}(x;\mu,\Sigma)={\cal N}^{-1}(h,J)$ where $J=\Sigma^{-1}$ and $h=J\mu$. In this case, the distribution’s factors can always be written in a pairwise form, so that each function involves at most two variables $x_{i}$, $x_{j}$, with $f_{ij}(x_{i},x_{j})=\exp\{-J_{ij}x_{i}x_{j}\}$, $j\neq i$, and $f_{i}(x_{i})=\exp\{-\frac{1}{2}J_{ii}x_{i}^{2}+h_{i}x_{i}\}$. Gaussian BP (GaBP) then has messages that are also conveniently represented as information-form Gaussian distributions. If $s$ refers to factor $f_{ij}$, we have $$\displaystyle M^{\tau+1}_{is}(x_{i})={\cal N}^{-1}(\beta_{i\setminus j},\alpha_{i\setminus j})\,,\qquad\alpha_{i\backslash j}=J_{ii}+\sum_{{k}\in\Gamma_{i}\setminus j}\alpha_{ki}\,,\qquad\beta_{i\backslash j}=h_{i}+\sum_{k\in\Gamma_{i}\setminus j}\beta_{ki}\,,$$ (3) $$\displaystyle M^{\tau+1}_{sj}(x_{j})={\cal N}^{-1}(\beta_{ij},\alpha_{ij})\,,\qquad\alpha_{ij}=-J_{ij}^{2}\alpha_{i\backslash j}^{-1}\,,\qquad\beta_{ij}=-J_{ij}\alpha_{i\backslash j}^{-1}\beta_{i\backslash j}\,.$$ (4) From the $\alpha$ and $\beta$ values we can compute the estimated marginal distributions, which are Gaussian with mean $\hat{\mu}_{i}=\hat{K}_{i}(h_{i}+\sum_{k\in\Gamma_{i}}\beta_{ki})$ and variance $\hat{K}_{i}=(J_{ii}+\sum_{{k}\in\Gamma_{i}}\alpha_{ki})^{-1}$. 
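A compact sketch of the updates (3)-(4) on a hypothetical 3-variable chain (a tree, where GaBP is exact, so the recovered means must match $J^{-1}h$); the matrix $J$ and vector $h$ below are illustrative, not from the paper:

```python
import numpy as np

# Pairwise Gaussian p(x) ∝ exp(-x^T J x / 2 + h^T x) on a chain.
J = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.3],
              [0.0, 0.3, 2.0]])
h = np.array([1.0, 0.0, -1.0])
n = len(h)

alpha = np.zeros((n, n))   # alpha[i, j]: precision message via factor (i, j) into j
beta = np.zeros((n, n))    # beta[i, j]: potential message via factor (i, j) into j
nbrs = [[j for j in range(n) if j != i and J[i, j] != 0] for i in range(n)]

for _ in range(20):        # iterate (3)-(4) to a fixed point
    for i in range(n):
        for j in nbrs[i]:
            a = J[i, i] + sum(alpha[k, i] for k in nbrs[i] if k != j)  # alpha_{i\j}
            b = h[i] + sum(beta[k, i] for k in nbrs[i] if k != j)      # beta_{i\j}
            alpha[i, j] = -J[i, j] ** 2 / a
            beta[i, j] = -J[i, j] * b / a

# Marginal means mu_i = K_i (h_i + sum beta), K_i = (J_ii + sum alpha)^{-1}.
mu = np.array([(h[i] + sum(beta[k, i] for k in nbrs[i])) /
               (J[i, i] + sum(alpha[k, i] for k in nbrs[i])) for i in range(n)])
```

On a tree the means (and here also the variances) are exact; on loopy graphs the means remain exact at convergence, while the variances are approximations, as noted below.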
It is known that if GaBP converges, it results in the exact MAP estimate $x^{*}$, although the variance estimates $\hat{K}_{i}$ computed by GaBP are only approximations to the correct variances [8]. II-B2 Nonparametric belief propagation In more general continuous-valued systems, the messages do not have a simple closed form and must be approximated. Nonparametric belief propagation, or NBP, extends the popular class of particle filtering algorithms, which assume variables are related by a Markov chain, to general graphs. In NBP, messages are represented by collections of weighted samples, smoothed by a Gaussian shape–in other words, Gaussian mixtures. NBP follows the same message update structure of (2). Notably, when the factors are all either Gaussian or mixtures of Gaussians, the messages will remain mixtures of Gaussians as well, since the product or marginalization of any mixture of Gaussians is also a mixture of Gaussians [3]. However, the product of $d$ Gaussian mixtures, each with $N$ components, produces a mixture of $N^{d}$ components; thus every message product creates an exponential increase in the size of the mixture. For this reason, one must approximate the mixture in some way. NBP typically relies on a stochastic sampling process to preserve only high-likelihood components, and a number of sampling algorithms have been designed to ensure that this process is as efficient as possible [9, 10, 11]. One may also apply various deterministic algorithms to reduce the number of Gaussian mixture components [12]; for example, in [13, 14], an $O(N)$ greedy algorithm (where $N$ is the number of components before reduction) is used to trade off representation size with approximation error under various measures. 
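The exponential component growth described above is easy to see in a small sketch (hypothetical random mixtures in information form, not from the paper; the top-weight truncation at the end is a crude stand-in for the stochastic or greedy reduction schemes cited):

```python
import numpy as np
from itertools import product

def gm_product(mixtures):
    """Exact product of 1-D Gaussian mixtures in information form.

    Each mixture is a list of (weight, beta, alpha) triples representing
    w * N^{-1}(x; beta, alpha). The product has one component per
    combination of inputs, i.e. N**d components in total.
    """
    out = []
    for combo in product(*mixtures):
        alpha = sum(a for _, _, a in combo)   # precisions add
        beta = sum(b for _, b, _ in combo)    # potential vectors add
        m = beta / alpha                      # evaluate weights at x* = mean
        logw = sum(np.log(w) - 0.5 * a * m**2 + b * m for w, b, a in combo)
        logw -= -0.5 * alpha * m**2 + beta * m
        out.append((np.exp(logw), beta, alpha))
    return out

N, d = 3, 4
rng = np.random.default_rng(1)
msgs = [[(1.0 / N, rng.normal(), 1.0 + rng.random()) for _ in range(N)]
        for _ in range(d)]
prod_mix = gm_product(msgs)                   # 3**4 = 81 components

# Crude deterministic reduction: keep the K highest-weight components.
K = 8
reduced = sorted(prod_mix, key=lambda c: -c[0])[:K]
```

Repeating such products over iterations without reduction would multiply the component count every round, which is why every practical NBP (and LDLC) implementation must approximate.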
II-C LDLC decoder The LDLC decoding algorithm is also described as a message-passing algorithm defined on a factor graph [6], whose factors represent the information and constraints on $x$ arising from our knowledge of $y$ and the fact that $b$ is integer-valued. Here, we rewrite the LDLC decoder update rules in the more standard graphical models notation. The factor graph used is a bipartite graph with variable nodes $\{x_{i}\}$, representing each element of the vector $x$, and factor nodes $\{f_{i},g_{s}\}$ corresponding to functions $$\displaystyle f_{i}(x_{i})=\mathcal{N}(x_{i};y_{i},\sigma^{2})\,,\qquad g_{s}(x_{s})=\begin{cases}1&H_{s}x\in\mathbb{Z}\\ 0&\mathrm{otherwise}\end{cases}\,,$$ where $H_{s}$ is the $s^{\mathrm{th}}$ row of the decoding matrix $H$. Each variable node $x_{i}$ is connected to those factors for which it is an argument; since $H$ is sparse, $H_{s}$ has few non-zero entries, making the resulting factor graph sparse as well. Notice that unlike the construction of [2], this formulation does not require that $H$ be square, and it may have arbitrary entries, rather than being restricted to a Latin square construction. Sparsity is preferred, both for computational efficiency and because belief propagation is typically better behaved on sparse systems with sufficiently long cycles [6]. We can now directly derive the belief propagation update equations as Gaussian mixture distributions, corresponding to an instance of the NBP algorithm. We suppress the iteration number $\tau$ to reduce clutter. Variable to factor messages. 
Suppose that our factor to variable messages $M_{si}(x_{i})$ are each described by a Gaussian mixture distribution, which we will write in both the moment and information form: $$M_{si}(x_{i})=\sum_{l}w_{si}^{l}\mathcal{N}(x_{i}\,;\,m_{si}^{l},\nu_{si}^{l})=\sum_{l}w_{si}^{l}\mathcal{N}^{-1}(x_{i}\,;\,\beta_{si}^{l},\alpha_{si}^{l})\,.$$ (5) Then, the variable to factor message $M_{is}(x_{s})$ is given by $$M_{is}(x_{s})=\sum_{\mathbf{l}}w_{is}^{\mathbf{l}}\mathcal{N}(x_{s}\,;\,m_{is}^{\mathbf{l}},\nu_{is}^{\mathbf{l}})=\sum_{\mathbf{l}}w_{is}^{\mathbf{l}}\mathcal{N}^{-1}(x_{s}\,;\,\beta_{is}^{\mathbf{l}},\alpha_{is}^{\mathbf{l}})\,,$$ (6) where ${\mathbf{l}}=[l_{t}]$ refers to a vector of indices, one for each neighboring factor $t$, and $$\displaystyle\alpha_{is}^{\mathbf{l}}=\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\alpha_{ti}^{l_{t}}\,,\qquad\beta_{is}^{\mathbf{l}}=y_{i}\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\beta_{ti}^{l_{t}}\,,$$ (7) $$\displaystyle w_{is}^{\mathbf{l}}=\frac{\mathcal{N}(x^{*};y_{i},\sigma^{2})\prod_{t\in\Gamma_{i}\setminus s}w_{ti}^{l_{t}}\mathcal{N}^{-1}(x^{*};\beta_{ti}^{l_{t}},\alpha_{ti}^{l_{t}})}{\mathcal{N}^{-1}(x^{*};\beta_{is}^{\mathbf{l}},\alpha_{is}^{\mathbf{l}})}\,.$$ The moment parameters are then given by $\nu_{is}^{\mathbf{l}}=(\alpha_{is}^{\mathbf{l}})^{-1}$, $m_{is}^{\mathbf{l}}=\beta_{is}^{\mathbf{l}}(\alpha_{is}^{\mathbf{l}})^{-1}$. The value $x^{*}$ is any arbitrarily chosen point, often taken to be the mean $m_{is}^{\mathbf{l}}$ for numerical reasons. Factor to variable messages. Assume that the incoming messages are of the form (6), and note that the factor $g_{s}(\cdot)$ can be rewritten in a summation form, $g_{s}(x_{s})=\sum_{b_{s}}\delta(H_{s}x=b_{s})$, which includes all possible integer values $b_{s}$. 
If we condition on the value of both the integer $b_{s}$ and the indices of the incoming messages, again formed into a vector ${\mathbf{l}}=[l_{j}]$ with an element for each variable $j$, we can see that $g_{s}$ enforces the linear equality $H_{si}x_{i}=b_{s}-\sum_{j}H_{sj}x_{j}$. Using standard Gaussian identities in the moment parameterization and summing over all possible $b_{s}\in\mathbb{Z}$ and ${\mathbf{l}}$, we obtain $$\displaystyle M_{si}(x_{i})=\sum_{b_{s}}\sum_{\mathbf{l}}w_{si}^{\mathbf{l}}\mathcal{N}(x_{i}\,;\,m_{si}^{\mathbf{l}},\nu_{si}^{\mathbf{l}})=\sum_{b_{s}}\sum_{\mathbf{l}}w_{si}^{\mathbf{l}}\mathcal{N}^{-1}(x_{i}\,;\,\beta_{si}^{\mathbf{l}},\alpha_{si}^{\mathbf{l}})\,,$$ (8) where $$\displaystyle\nu_{si}^{\mathbf{l}}=H_{si}^{-2}\Big(\sum_{j\in\Gamma_{s}\setminus i}H_{sj}^{2}\nu_{js}^{l_{j}}\Big)\,,\qquad m_{si}^{\mathbf{l}}=H_{si}^{-1}\Big(-b_{s}+\sum_{j\in\Gamma_{s}\setminus i}H_{sj}m_{js}^{l_{j}}\Big)\,,\qquad w_{si}^{\mathbf{l}}=\prod_{j\in\Gamma_{s}\setminus i}w_{js}^{l_{j}}\,,$$ (9) and the information parameters are given by $\alpha_{si}^{\mathbf{l}}=(\nu_{si}^{\mathbf{l}})^{-1}$ and $\beta_{si}^{\mathbf{l}}=m_{si}^{\mathbf{l}}(\nu_{si}^{\mathbf{l}})^{-1}$. Notice that (8) matches the initial assumption of a Gaussian mixture given in (5). At each iteration, the exact messages remain mixtures of Gaussians, and the algorithm itself corresponds to an instance of NBP. As in any NBP implementation, the number of components increases at each iteration, and we must eventually approximate the messages using some finite number of components. To date the work on LDLC decoders has focused on deterministic approximations [2, 15, 16, 17], often greedy in nature. However, the existing literature on NBP contains a large number of deterministic and stochastic approximation algorithms [9, 13, 12, 10, 11]. 
These algorithms can use spatial data structures such as KD-Trees to improve efficiency and avoid the pitfalls that come with greedy optimization. Estimating the codewords. The original codeword $x$ can be estimated using its belief, an approximation to its marginal distribution given the constraints and observations: $$B_{i}(x_{i})=f_{i}(x_{i})\prod_{s\in\Gamma_{i}}M_{si}(x_{i})\,.$$ (10) The value of each $x_{i}$ can then be estimated as either the mean or mode of the belief, e.g., $x_{i}^{*}=\arg\max B_{i}(x_{i})$, and the integer-valued information vector estimated as $b^{*}=\mathop{\mathrm{round}}(Hx^{*})$. III A Pairwise Construction of the LDLC decoder Before introducing our novel lattice code construction, we demonstrate that the LDLC decoder can be equivalently constructed using a pairwise graphical model. This construction will have important consequences when relating the LDLC decoder to Gaussian belief propagation (Section IV-B) and understanding convergence properties (Section V). Theorem 1 The LDLC decoder algorithm is an instance of the NBP algorithm executed on the following pairwise graphical model. Denote the number of LDLC variable nodes as $n$ and the number of check nodes as $k$ (our construction extends the square parity check matrix assumption to the general case). We construct a new graphical model with $n+k$ variables, $X=(x_{1},\cdots,x_{n+k})$, as follows. To match the LDLC notation we use the index letters $i,j,\ldots$ to denote variables $1,...,n$ and the letters $s,t,\ldots$ to denote new variables $n+1,...,n+k$ which will take the place of the check node factors in the original formulation. 
We further define the self and edge potentials: $$\displaystyle\psi_{i}(x_{i})\propto\mathcal{N}(x_{i};y_{i},\sigma^{2})\,,\qquad\psi_{s}(x_{s})\triangleq\sum_{b_{s}=-\infty}^{\infty}\mathcal{N}(x_{s};b_{s},0)\,,\qquad\psi_{i,s}(x_{i},x_{s})\triangleq\exp(-x_{i}H_{is}x_{s})\,.$$ (11) Proof: The proof is constructed by substituting the edge and self potentials (11) into the belief propagation update rules. Since we are using a pairwise graphical model, we do not have two separate update rules, from variables to factors and from factors to variables. However, to recover the LDLC update rules, we make the artificial distinction between the variable and factor nodes, where the nodes $x_{i}$ will be shown to be related to the variable nodes in the LDLC decoder, and the nodes $x_{s}$ will be shown to be related to the factor nodes in the LDLC decoder. LDLC variable to factor nodes We start with the integral-product rule computed in the $x_{i}$ nodes: $$M_{is}(x_{s})=\int\limits_{x_{i}}\psi_{i,s}(x_{i},x_{s})\psi_{i}(x_{i})\prod_{t\in\Gamma_{i}\setminus s}M_{ti}(x_{i})\,dx_{i}\,.$$ The product of Gaussian mixtures $\prod\limits_{t\in\Gamma_{i}\setminus s}M_{ti}(x_{i})$ is itself a mixture of Gaussians, where each component in the output mixture is the product of a single Gaussian selected from each input mixture $M_{ti}(x_{i})$. Lemma 2 (Gaussian product) [18, Claim 10], [2, Claim 2] Given $p$ Gaussians $\mathcal{N}(m_{1},v_{1}),\cdots,\mathcal{N}(m_{p},v_{p})$, their product is proportional to a Gaussian $\mathcal{N}(\bar{m},\bar{v})$ with $$\displaystyle\bar{v}^{-1}=\sum_{i=1}^{p}\frac{1}{v_{i}}=\sum_{i=1}^{p}\alpha_{i}\,,\qquad\bar{m}=\Big(\sum_{i=1}^{p}m_{i}/v_{i}\Big)\bar{v}=\Big(\sum_{i=1}^{p}\beta_{i}\Big)\bar{v}\,.$$ Proof: The proof is given in [18, Claim 10]. 
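A quick numerical check of the Gaussian product lemma (an illustrative sketch with arbitrary moments, not from the paper) verifies that the pointwise product of Gaussian densities is proportional to the Gaussian with the predicted parameters:

```python
import numpy as np

def gauss(x, m, v):
    """Normalized 1-D Gaussian density N(x; m, v)."""
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Three arbitrary component moments (m_i, v_i).
ms, vs = [0.5, -1.0, 2.0], [1.0, 0.5, 2.0]

# Lemma 2: 1/v_bar = sum 1/v_i, m_bar = v_bar * sum m_i / v_i.
v_bar = 1.0 / sum(1.0 / v for v in vs)
m_bar = v_bar * sum(m / v for m, v in zip(ms, vs))

xs = np.linspace(-3, 3, 7)
prod = np.prod([gauss(xs, m, v) for m, v in zip(ms, vs)], axis=0)
ratio = prod / gauss(xs, m_bar, v_bar)   # constant ratio => proportional
```

The constant ratio is the normalization factor absorbed by the proportionality sign in the lemma.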
∎ Using the Gaussian product lemma, the ${\mathbf{l}}$-th mixture component in the message from variable node $i$ to factor node $s$ is a single Gaussian given by $$M^{\mathbf{l}}_{is}(x_{s})=\int\limits_{x_{i}}\psi_{i,s}(x_{i},x_{s})\Big(\psi_{i}(x_{i})\prod_{t\in\Gamma_{i}\setminus s}M^{l_{t}}_{ti}(x_{i})\Big)dx_{i}=\int\limits_{x_{i}}\psi_{i,s}(x_{i},x_{s})\exp\Big\{-\tfrac{1}{2}x_{i}^{2}\Big(\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\alpha_{ti}^{l_{t}}\Big)+x_{i}\Big(y_{i}\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\beta_{ti}^{l_{t}}\Big)\Big\}\,dx_{i}\,,$$ where the second form combines the local potential $\psi_{i}(x_{i})\propto\exp(-\tfrac{1}{2}x_{i}^{2}\sigma^{-2}+x_{i}y_{i}\sigma^{-2})$ with the product of the selected incoming Gaussian components. This is equivalent to the LDLC variable node update rule given in (7). 
Now we use the following lemma for computing the integral: Lemma 3 (Gaussian integral): Given a (one dimensional) Gaussian $\phi_{i}(x_{i})\propto\mathcal{N}(x_{i};m,v)$ and the (two dimensional) edge potential $\psi_{i,s}(x_{i},x_{s})\triangleq\exp(-x_{i}H_{is}x_{s})$, the integral $\int_{x_{i}}\psi_{i,s}(x_{i},x_{s})\phi_{i}(x_{i})dx_{i}$ is proportional to a (one dimensional) Gaussian $\mathcal{N}^{-1}(H_{is}m,H_{is}^{2}v)$. Proof: $$\int\limits_{x_{i}}\psi_{i,s}(x_{i},x_{s})\phi_{i}(x_{i})dx_{i}\propto\int\limits_{x_{i}}\exp(-x_{i}H_{is}x_{s})\exp\{-\tfrac{1}{2}(x_{i}-m)^{2}/v\}dx_{i}=$$ $$=\int\limits_{x_{i}}\exp\Big(-\tfrac{1}{2}x_{i}^{2}/v+(m/v-H_{is}x_{s})x_{i}\Big)dx_{i}\propto\exp\big((m/v-H_{is}x_{s})^{2}/(-\tfrac{2}{v})\big)\,,$$ where the last transition was obtained by using the Gaussian integral $$\int\limits_{-\infty}^{\infty}\exp(-ax^{2}+bx)dx=\sqrt{\pi/a}\exp(b^{2}/4a)\,.$$ $$\exp\big((m/v-H_{is}x_{s})^{2}/(-\tfrac{2}{v})\big)=\exp\{-\tfrac{1}{2}v(m/v-H_{is}x_{s})^{2}\}=$$ $$=\exp\{-\tfrac{1}{2}(H_{is}^{2}v)x_{s}^{2}+(H_{is}m)x_{s}-\tfrac{1}{2}v(m/v)^{2}\}\propto\exp\{-\tfrac{1}{2}(H_{is}^{2}v)x_{s}^{2}+(H_{is}m)x_{s}\}\,.$$ ∎ Using the result of Lemma 3, the message sent from a variable node to a factor node is a mixture of Gaussians, where each Gaussian component $l_{s}$ is given by $$M^{l_{s}}_{is}(x_{s})=\mathcal{N}^{-1}(x_{s};H_{is}m_{is}^{l_{s}},H^{2}_{is}v_{is}^{l_{s}})\,.$$ Note that in the LDLC terminology the integral operation defined in Lemma 3 is called stretching. In the LDLC algorithm, the stretching is computed by the factor node as it receives the message from the variable node; in NBP, the integral operation is computed at the variable nodes. LDLC factor to variable nodes. We start again with the BP integral-product rule and handle the $x_{s}$ variables computed at the factor nodes.
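The scalar Gaussian integral identity invoked in the proof can be checked numerically. The sketch below is our own illustration (the values $a=1.5$, $b=0.7$ are arbitrary): it compares a midpoint Riemann sum against the closed form $\sqrt{\pi/a}\,\exp(b^{2}/4a)$.

```python
import math

def gauss_integral_numeric(a, b, n=200000, span=20.0):
    """Midpoint Riemann sum of the integral of exp(-a*x^2 + b*x) over x."""
    center = b / (2.0 * a)            # stationary point of the exponent
    width = span / math.sqrt(a)       # integrate over center +/- width
    lo = center - width
    dx = 2.0 * width / n
    return sum(math.exp(-a * x * x + b * x) * dx
               for x in (lo + (i + 0.5) * dx for i in range(n)))

def gauss_integral_closed(a, b):
    """Closed form sqrt(pi/a) * exp(b^2 / (4a)), valid for a > 0."""
    return math.sqrt(math.pi / a) * math.exp(b * b / (4.0 * a))

num = gauss_integral_numeric(1.5, 0.7)
exact = gauss_integral_closed(1.5, 0.7)
```

The completion-of-the-square behind this identity is exactly the mechanism by which the stretch operation turns a Gaussian in $x_{i}$ into a Gaussian in $x_{s}$.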
$$M_{si}(x_{i})=\int\limits_{x_{s}}\psi_{is}(x_{i},x_{s})\psi_{s}(x_{s})\prod\limits_{j\in\Gamma_{s}\setminus i}M_{js}(x_{s})\,dx_{s}\,.$$ Note that the product $\prod_{j\in\Gamma_{s}\setminus i}M^{\tau}_{js}(x_{s})$ is a mixture of Gaussians, where the $k$-th component is computed by selecting a single Gaussian from each message $M^{\tau}_{js}$, $j\in\Gamma_{s}\setminus i$, and applying the product lemma (Lemma 2). We get $$\int\limits_{x_{s}}\psi_{is}(x_{i},x_{s})\Big(\psi_{s}(x_{s})\exp\{-\tfrac{1}{2}x_{s}^{2}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)+x_{s}\big(\sum_{k\in\Gamma_{s}\setminus i}H_{ks}m_{ks}^{l_{i}}\big)\}\Big)dx_{s}\,.$$ (12) We continue by computing the product with the self potential $\psi_{s}(x_{s})$ to get $$=\int\limits_{x_{s}}\psi_{is}(x_{i},x_{s})\Big(\sum_{b_{s}=-\infty}^{\infty}\exp(b_{s}x_{s})\exp\{-\tfrac{1}{2}x_{s}^{2}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)+x_{s}\big(\sum_{k\in\Gamma_{s}\setminus i}H_{ks}m_{ks}^{l_{i}}\big)\}\Big)dx_{s}=$$ $$=\sum_{b_{s}=-\infty}^{\infty}\int\limits_{x_{s}}\psi_{is}(x_{i},x_{s})\Big(\exp(b_{s}x_{s})\exp\{-\tfrac{1}{2}x_{s}^{2}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)+x_{s}\big(\sum_{k\in\Gamma_{s}\setminus i}H_{ks}m_{ks}^{l_{i}}\big)\}\Big)dx_{s}=$$ $$=\sum_{b_{s}=-\infty}^{\infty}\int\limits_{x_{s}}\psi_{is}(x_{i},x_{s})\Big(\exp\{-\tfrac{1}{2}x_{s}^{2}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)+x_{s}\big(b_{s}+\sum_{k\in\Gamma_{s}\setminus i}H_{ks}m_{ks}^{l_{i}}\big)\}\Big)dx_{s}=$$ $$=\sum_{b_{s}=-\infty}^{\infty}{\int\limits_{x_{s}}}\psi_{is}(x_{i},x_{s})\big{(}\exp\{-\tfrac{1}{2}x_{s}^{2}(\sum_{k\in\Gamma_{s}\backslash i}H^{2}_{ks}v_{ks}^{l_{i}})+x_{s}(-b_{s}+\sum_{k\in\Gamma_{s}\setminus
i}H_{ks}m_{ks}^{l_{i}})\,\}\big{)}dx_{s}\,.$$ Here the last transition re-indexed $b_{s}\rightarrow-b_{s}$, which leaves the sum over all integers unchanged. Finally we use Lemma 3 to compute the integral and get $$=\sum_{b_{s}=-\infty}^{\infty}\exp\{-\tfrac{1}{2}x_{i}^{2}H_{si}^{2}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)^{-1}+x_{i}H_{si}\big(\sum_{k\in\Gamma_{s}\setminus i}H^{2}_{ks}v_{ks}^{l_{i}}\big)^{-1}\big(-b_{s}+\sum_{k\in\Gamma_{s}\setminus i}H_{ks}m_{ks}^{l_{i}}\big)\}\,.$$ It is easy to verify that this formulation is identical to the LDLC update rules (II-C). ∎ IV Using Sparse Generator Matrices We propose a new family of LDLC codes where the generator matrix $G$ is sparse, in contrast to the original LDLC codes where the parity check matrix $H$ is sparse. Table I outlines the properties of our proposed decoder. Our decoder is designed to be more efficient than the original LDLC decoder since, as we will soon show, the encoding, initialization and final operations are all cheaper in the NBP decoder. We are currently in the process of fully evaluating our decoder performance relative to the LDLC decoder. Initial results are reported in Section VI. IV-A The NBP decoder We use an undirected bipartite graph, with variable nodes $\{b_{i}\}$ representing the elements of the vector $b$, and observation nodes $\{z_{i}\}$ for the elements of the observation vector $y$. We define the self potentials $\psi_{i}(z_{i})$ and $\psi_{s}(b_{s})$ as follows: $$\psi_{i}(z_{i})\propto\mathcal{N}(z_{i};y_{i},\sigma^{2})\,,\qquad\psi_{s}(b_{s})=\begin{cases}1&b_{s}\in\mathbb{Z}\\ 0&\mathrm{otherwise}\end{cases}\,,$$ (13) and the edge potentials: $$\psi_{i,s}(z_{i},b_{s})\triangleq\exp(-z_{i}G_{is}b_{s})\,.$$ Each variable node $b_{s}$ is connected to the observation nodes as defined by the encoding matrix $G$. Since $G$ is sparse, the resulting bipartite graph is sparse as well.
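Because each row of a sparse $G$ holds only a few non-zeros, the product $Gb$ can be formed by touching the non-zero entries alone. A minimal Python sketch of such an encoder (the per-row list representation and the toy $3\times 3$ matrix are our own illustration):

```python
# Sparse generator matrix stored as per-row lists of (column, value) pairs.
def sparse_encode(G_rows, b):
    """Compute x = G b, touching only the non-zero entries of G."""
    return [sum(val * b[col] for col, val in row) for row in G_rows]

# A 3x3 sparse G with d = 2 non-zeros per row (entries chosen from {-1, +1}).
G_rows = [[(0, 1.0), (2, -1.0)],
          [(0, -1.0), (1, 1.0)],
          [(1, 1.0), (2, 1.0)]]
x = sparse_encode(G_rows, [1.0, -1.0, 1.0])   # [0.0, -2.0, 0.0]
```

Per row the work is proportional to the number of stored pairs, which is the origin of the $O(nd)$ encoding cost discussed for the NBP decoder.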
As with LDPC decoders [7], the belief propagation or sum-product algorithm [19, 6] provides a powerful approximate decoding scheme. For computing the MAP assignment of the transmitted vector $b$ using non-parametric belief propagation we perform the following relaxation, which is one of the main novel contributions of this paper. Recall that in the original problem the entries of $b$ are only allowed to be integers. We relax the function $\psi_{s}(b_{s})$ from a delta function to a mixture of Gaussians centered on the integers: $$\psi^{relax}_{s}(b_{s})\propto\sum_{i\in\mathbb{Z}}\mathcal{N}(i,v)\,.$$ The variance parameter $v$ controls the approximation quality; the approximation becomes tighter as $v\rightarrow 0$. Figure 2 plots an example relaxation of $\psi_{s}(b_{s})$ in the binary case. We have now defined the self and edge potentials which are the input to the NBP algorithm, so it is possible to run NBP using (2) and obtain an approximate MAP solution to (1). The derivation of the NBP decoder update rules is similar to the one done for the LDLC decoder and is thus omitted. However, there are several important differences that should be addressed. We start by analyzing the algorithm efficiency. Since the input to our decoder is the sparse matrix $G$ itself, there is no need to compute the encoding matrix $G=H^{-1}$ as done in the LDLC decoder, where this initialization naively costs $O(n^{3})$. The encoding in our scheme is done as in LDLC by computing the multiplication $Gb$. However, since $G$ is sparse in our case, the encoding cost is $O(nd)$, where $d\ll n$ is the average number of non-zero entries in each row. Encoding in the LDLC method costs $O(n^{2})$, since even if $H$ is sparse, $G$ is typically dense. After convergence, the LDLC decoder multiplies by the matrix $H$ and rounds the result to get $b$. This operation costs $O(nd)$, where $d$ is the average number of non-zero entries in $H$.
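The relaxation above can be exercised numerically. Assuming a truncated integer support and arbitrary example variances (both our own choices for illustration), the snippet below shows how shrinking $v$ concentrates $\psi^{relax}_{s}$ on integer points relative to half-integer points:

```python
import math

def psi_relax(b, v, support=range(-5, 6)):
    """Unnormalised relaxed prior: Gaussians of variance v centred on integers.
    The integer support is truncated to a finite range for illustration."""
    return sum(math.exp(-(b - i) ** 2 / (2.0 * v)) for i in support)

# Ratio of prior mass at an integer point vs. a half-integer point:
ratio_loose = psi_relax(1.0, 0.5) / psi_relax(0.5, 0.5)    # close to 1
ratio_tight = psi_relax(1.0, 0.01) / psi_relax(0.5, 0.01)  # very large
```

With $v=0.5$ the mixture is nearly flat and barely prefers integers, while with $v=0.01$ the half-integer point is suppressed by several orders of magnitude, illustrating the trade-off between approximation quality and smoothness of the relaxed potential.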
In contrast, in the NBP decoder, $b$ is computed directly in the variable nodes. Besides efficiency, there are several inherent differences between the two algorithms; a summary is given in Table II. We use a standard formulation of BP with pairwise potentials, which means there is a single update rule rather than two update rules from left to right and from right to left. We have shown that the convolution operation in the LDLC decoder corresponds to the product step of the BP algorithm. The stretch/unstretch operations in the LDLC decoder are implemented using the integral step of the BP algorithm. The periodic extension operation in the LDLC decoder is incorporated into our decoder algorithm through the self potentials. IV-B The relation of the NBP decoder to GaBP In this section we show that a simplified version of the NBP decoder coincides with the GaBP algorithm. The simplified version is obtained when, instead of using our proposed Gaussian mixture prior, we initialize the NBP algorithm with a prior composed of a single Gaussian. Theorem 4: By initializing $\psi_{s}(b_{s})\sim\mathcal{N}(0,1)$ to be a (single) Gaussian, the NBP decoder update rules are identical to the update rules of the GaBP algorithm. Lemma 5: By initializing $\psi_{s}(b_{s})$ to be a (single) Gaussian, the messages of the NBP decoder are single Gaussians. Proof: If both self potentials $\psi_{s}(b_{s}),\psi_{i}(z_{i})$ are initialized to single Gaussians, every message of the NBP decoder algorithm remains a Gaussian. This is because the product (3) of single Gaussians is a single Gaussian, and the integral (4) of a single Gaussian produces a single Gaussian as well. ∎ Now we are able to prove Theorem 4: Proof: We start by writing the update rules of the variable nodes. We initialize the self potentials of the variable nodes to $\psi_{i}(z_{i})=\mathcal{N}(z_{i};y_{i},\sigma^{2})$ and substitute, using the product lemma and Lemma 3.
$$M_{is}(b_{s})=\int\limits_{z_{i}}\psi_{i,s}(z_{i},b_{s})\Big(\psi_{i}(z_{i})\prod\limits_{t\in\Gamma_{i}\setminus s}M_{ti}(z_{i})\Big)dz_{i}=$$ $$\int\limits_{z_{i}}\psi_{i,s}(z_{i},b_{s})\Big(\exp(-\tfrac{1}{2}z_{i}^{2}\sigma^{-2}+y_{i}z_{i}\sigma^{-2})\prod\limits_{t\in\Gamma_{i}\setminus s}\exp(-\tfrac{1}{2}z_{i}^{2}\alpha_{ti}+z_{i}\beta_{ti})\Big)dz_{i}=$$ $$\int\limits_{z_{i}}\psi_{i,s}(z_{i},b_{s})\Big(\exp\big(-\tfrac{1}{2}z_{i}^{2}(\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\alpha_{ti})+z_{i}(\sigma^{-2}y_{i}+\sum_{t\in\Gamma_{i}\setminus s}\beta_{ti})\big)\Big)dz_{i}\propto$$ $$\propto\exp\Big(-\tfrac{1}{2}b_{s}^{2}G^{2}_{is}(\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\alpha_{ti})^{-1}+b_{s}G_{is}(\sigma^{-2}+\sum_{t\in\Gamma_{i}\setminus s}\alpha_{ti})^{-1}(\sigma^{-2}y_{i}+\sum_{t\in\Gamma_{i}\setminus s}\beta_{ti})\Big)\,.$$ Now we get the GaBP update rules by substituting $J_{ii}\triangleq\sigma^{-2},\ J_{is}\triangleq G_{is},\ h_{i}\triangleq\sigma^{-2}y_{i}$: $$\alpha_{is}=-J_{is}^{2}\alpha_{i\setminus s}^{-1}=-J_{is}^{2}(J_{ii}+\sum\limits_{t\in\Gamma_{i}\setminus s}\alpha_{ti})^{-1}\,,\qquad\beta_{is}=-J_{is}\alpha_{i\setminus s}^{-1}\beta_{i\setminus s}=-J_{is}\Big(\alpha_{i\setminus s}^{-1}(h_{i}+\sum\limits_{t\in\Gamma_{i}\setminus s}\beta_{ti})\Big)\,.$$ We continue by expanding $$M_{si}(z_{i})=\int\limits_{b_{s}}\psi_{i,s}(z_{i},b_{s})\Big(\psi_{s}(b_{s})\prod\limits_{k\in\Gamma_{s}\backslash i}M^{\tau}_{ks}(b_{s})\Big)db_{s}\,.$$ Similarly, using the initializations $\psi_{s}(b_{s})=\exp\{-\tfrac{1}{2}b_{s}^{2}\},\ \psi_{i,s}(z_{i},b_{s})\triangleq\exp(-z_{i}G_{is}b_{s})$: $$\int\limits_{b_{s}}\psi_{i,s}(z_{i},b_{s})\Big(\exp\{-\tfrac{1}{2}b_{s}^{2}\}\prod\limits_{k\in\Gamma_{s}\backslash i}\exp(-\tfrac{1}{2}b_{s}^{2}\alpha_{ks}+b_{s}\beta_{ks})\Big)db_{s}=$$ $${\int\limits_{b_{s}}}\psi_{i,s}(z_{i},b_{s})\Big{(}{\exp\{{-\tfrac{1}{2}}b_{s}
^{2}(1+\sum\limits_{k\in\Gamma_{s}\backslash i}\alpha_{ks})+b_{s}(\sum\limits_{k\in\Gamma_{s}\backslash i}\beta_{ks})\}}\Big{)}db_{s}=$$ $$=\exp\{-\tfrac{1}{2}z_{i}^{2}G_{is}^{2}(1+\sum\limits_{k\in\Gamma_{s}\backslash i}\alpha_{ks})^{-1}+z_{i}G_{is}(1+\sum\limits_{k\in\Gamma_{s}\backslash i}\alpha_{ks})^{-1}(\sum\limits_{k\in\Gamma_{s}\backslash i}\beta_{ks})\}\,.$$ Now we get the GaBP update rules by substituting $J_{ss}\triangleq 1,\ J_{si}\triangleq G_{is},\ h_{s}\triangleq 0$: $$\alpha_{si}=-J_{si}^{2}\alpha_{s\setminus i}^{-1}=-J_{si}^{2}(J_{ss}+\sum\limits_{k\in\Gamma_{s}\setminus i}\alpha_{ks})^{-1}\,,\qquad\beta_{si}=-J_{si}\alpha_{s\setminus i}^{-1}\beta_{s\setminus i}=-J_{si}\Big(\alpha_{s\setminus i}^{-1}(h_{s}+\sum\limits_{k\in\Gamma_{s}\setminus i}\beta_{ks})\Big)\,.$$ ∎ Tying the results together, in the case of a single Gaussian self potential, the NBP decoder is initialized using the following inverse covariance matrix: $$J\triangleq\begin{pmatrix}I&G\\ G^{T}&\ \mathop{\bf diag}(\sigma^{-2})\end{pmatrix}$$ We have shown that a simpler version of the NBP decoder, where the self potentials are initialized to single Gaussians, boils down to the GaBP algorithm. It is known [20] that the GaBP algorithm solves the least squares problem $\min_{b\in\mathbb{R}^{n}}\|Gb-y\|$; assuming a Gaussian prior on $b$, $p(b)\sim\mathcal{N}(0,1)$, we get the MMSE solution $b^{*}=(G^{T}G)^{-1}G^{T}y$. Note the relation to (1): the difference is that the LDLC decoder assumption $b\in\mathbb{Z}^{n}$ is relaxed to $b\in\mathbb{R}^{n}$. Getting back to the NBP decoder, Figure 2 compares the two different priors used in the NBP decoder and in the GaBP algorithm for the bipolar case. It is clear that the Gaussian prior assumption on $b$ is not accurate enough. In the NBP decoder, we instead relax the delta function (13) to a Gaussian mixture prior composed of Gaussians centered on the integers.
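For concreteness, the scalar GaBP message updates above can be sketched for solving a small symmetric system $Jx=h$. This is our own minimal illustration (in-place Gauss-Seidel-style sweeps on a hand-picked diagonally dominant $J$), not the paper's decoder implementation:

```python
def gabp(J, h, iters=50):
    """Scalar Gaussian BP for J x = h; J symmetric, here diagonally dominant."""
    n = len(h)
    # One (alpha, beta) message per directed edge of the graph of J.
    alpha = {(i, j): 0.0 for i in range(n) for j in range(n)
             if i != j and J[i][j] != 0.0}
    beta = dict(alpha)
    for _ in range(iters):
        for (i, j) in alpha:
            a = J[i][i] + sum(alpha[(k, i)] for k in range(n)
                              if (k, i) in alpha and k != j)
            b = h[i] + sum(beta[(k, i)] for k in range(n)
                           if (k, i) in beta and k != j)
            alpha[(i, j)] = -J[i][j] ** 2 / a
            beta[(i, j)] = -J[i][j] * b / a
    # Marginal means: mu_i = (h_i + sum beta) / (J_ii + sum alpha).
    return [(h[i] + sum(beta[(k, i)] for k in range(n) if (k, i) in beta)) /
            (J[i][i] + sum(alpha[(k, i)] for k in range(n) if (k, i) in alpha))
            for i in range(n)]

J = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
h = [1.0, 0.0, 1.0]
x = gabp(J, h)   # converges to the exact solution (2/7, -1/7, 2/7)
```

Since this $J$ is tree-structured and diagonally dominant, the iteration converges to the exact solution of $Jx=h$, matching the GaBP fixed point described in [20].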
Overall, the NBP decoder algorithm can be thought of as an extension of the GaBP algorithm with more accurate priors. V Convergence analysis The behavior of the belief propagation algorithm has been extensively studied in the literature, resulting in sufficient conditions for convergence in the discrete case [21] and in jointly Gaussian models [22]. However, little is known about the behavior of BP in more general continuous systems. The original LDLC paper [2] gives some characterization of its convergence properties under several simplifying assumptions. Relaxing some of these assumptions and using our pairwise factor formulation, we show that the conditions for GaBP convergence can also be applied to yield new convergence properties for the LDLC decoder. The most important assumption made in the LDLC convergence analysis [2] is that the system converges to a set of “consistent” Gaussians; specifically, that at all iterations $\tau$ beyond some number $\tau_{0}$, only a single integer $b_{s}$ contributes to the Gaussian mixture. Notionally, this corresponds to the idea that the decoded information values themselves are well resolved, and the convergence being analyzed is with respect to the transmitted values $x_{i}$. Under this (potentially strong) assumption, sufficient conditions are given for the decoder’s convergence. The authors also assume that $H$ is a Latin square in which each row and each column contains some permutation of the scalar values $h_{1}\geq\ldots\geq h_{d}$, up to arbitrary signs. Four conditions are given, all of which should hold to ensure convergence: • LDLC-I: $\det(H)=\det(G)=1$. • LDLC-II: $\alpha\leq 1$, where $\alpha\triangleq\frac{\sum_{i=2}^{d}h^{2}_{i}}{h^{2}_{1}}$.
• LDLC-III: The spectral radius satisfies $\rho(F)<1$, where $F$ is an $n\times n$ matrix defined by $$F_{k,l}=\begin{cases}\frac{H_{rk}}{H_{rl}}&\mbox{if $k\neq l$ and there exists a row $r$ of $H$}\\ &\mbox{for which $|H_{rl}|=h_{1}$ and $H_{rk}\neq 0$}\\ 0&\mbox{otherwise}\\ \end{cases}$$ • LDLC-IV: The spectral radius satisfies $\rho(\tilde{H})<1$, where $\tilde{H}$ is derived from $H$ by permuting the rows such that the $h_{1}$ elements are placed on the diagonal, dividing each row by the appropriate diagonal element ($+h_{1}$ or $-h_{1}$), and then nullifying the diagonal. Using our new results we are now able to provide new convergence conditions for the LDLC decoder. Corollary 6: The convergence of the LDLC decoder depends on the properties of the following matrix: $$J\triangleq\begin{pmatrix}{\mathbf{0}}&H\\ H^{T}&\ \mathop{\bf diag}(1/\sigma^{2})\end{pmatrix}$$ (14) Proof: In Theorem 1 we have shown the equivalence of the LDLC algorithm to NBP initialized with the following potentials: $$\psi_{i}(x_{i})\propto\mathcal{N}(x_{i};y_{i},\sigma^{2})\,,\qquad\psi_{s}(x_{s})\triangleq\sum_{b_{s}=-\infty}^{\infty}\mathcal{N}(x_{s};b_{s},0)\,,\qquad\psi_{i,s}(x_{i},x_{s})\triangleq\exp(-x_{i}H_{is}x_{s})\,.$$ (15) We have further discussed the relation between the self potential $\psi_{s}(x_{s})$ and the periodic extension operation. We have also shown in Theorem 4 that if $\psi_{s}(x_{s})$ is a single Gaussian (equivalent to the assumption of “consistent” behavior), the distribution is jointly Gaussian and, rather than NBP (with Gaussian mixture messages), we obtain GaBP (with Gaussian messages). Convergence of the GaBP algorithm depends on the inverse covariance matrix $J$ and not on the shift vector $h$. Now we are able to construct the appropriate inverse covariance matrix $J$ based on the pairwise factors given in Theorem 1.
The matrix $J$ is a $2\times 2$ block matrix, where the check variables $x_{s}$ are assigned the upper rows and the original variables the lower rows. The entries can be read off from the quadratic terms of the potentials (15), with the only non-zero entries corresponding to the pairs $(x_{i},x_{s})$ and the self potentials $(x_{i},x_{i})$. ∎ Based on Corollary 6 we can characterize the convergence of the LDLC decoder using the sufficient conditions for convergence of GaBP. Either one of the following two conditions is sufficient for convergence: [GaBP-I] (walk-summability [22]) $\rho(I-|D^{-1/2}JD^{-1/2}|)<1$, where $D\triangleq\mathop{\bf diag}(J)$. [GaBP-II] (diagonal dominance [8]) $J$ is diagonally dominant (i.e. $|J_{ii}|\geq\sum_{j\neq i}|J_{ij}|,\ \forall i$). A further difficulty arises from the fact that the upper diagonal block of (14) is zero, which means that both [GaBP-I,II] fail to hold. There are three possible ways to overcome this. 1. Create an approximation to the original problem by setting the upper left block matrix of (14) to $\mathop{\bf diag}(\epsilon)$, where $\epsilon>0$ is a small constant. The approximation becomes more accurate as $\epsilon$ gets smaller. In case either of [GaBP-I,II] holds for the fixed matrix, the “consistent Gaussians” converge to an approximate solution. 2. In case a permutation of $J$ (14) exists for which either of [GaBP-I,II] holds for the permuted matrix, the “consistent Gaussians” converge to the correct solution. 3. Use preconditioning to create a new graphical model where the edge potentials are determined by the information matrix $HH^{T}$, $\psi_{i,s}(x_{i},x_{s})\triangleq\exp(x_{i}\{HH^{T}\}_{is}x_{s})$, and the self potentials of the $x_{i}$ nodes are $\psi_{i}(x_{i})\triangleq\exp\{-\tfrac{1}{2}x_{i}^{2}\sigma^{-2}+x_{i}\{Hy\}_{i}\}$. The proof of the correctness of the above construction is given in [23]. The benefit of this preconditioning is that the main diagonal of $HH^{T}$ is surely non-zero.
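The walk-summability-style check can be tested mechanically. The sketch below is our own illustration (pure-Python power iteration; both test matrices are invented examples): it computes the spectral radius of the matrix of absolute normalized off-diagonal entries of $J$ and compares it to one.

```python
def spectral_radius(A, iters=200):
    """Power iteration for the dominant eigenvalue magnitude of a small
    nonnegative matrix (illustrative, not production numerics)."""
    n = len(A)
    v = [1.0] * n
    r = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        r = max(abs(x) for x in w)
        if r == 0.0:
            return 0.0
        v = [x / r for x in w]
    return r

def walk_summable(J):
    """Check rho(|R|) < 1, where R is the off-diagonal part of D^-1/2 J D^-1/2."""
    n = len(J)
    R = [[0.0 if i == j else abs(J[i][j]) / (J[i][i] * J[j][j]) ** 0.5
          for j in range(n)] for i in range(n)]
    return spectral_radius(R) < 1.0

J_dd = [[4.0, 1.0], [1.0, 4.0]]    # diagonally dominant: condition holds
J_bad = [[1.0, 2.0], [2.0, 1.0]]   # strong off-diagonal coupling: it fails
```

The second example also illustrates the difficulty noted above: a $J$ whose diagonal is weak relative to its couplings fails the sufficient condition, motivating the $\mathop{\bf diag}(\epsilon)$ fix and the $HH^{T}$ preconditioning.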
If either of [GaBP-I,II] holds for $HH^{T}$, the “consistent Gaussians” converge to the correct solution. However, the matrix $HH^{T}$ may no longer be sparse, so we pay in decoder efficiency. Overall, we have given two sufficient conditions for convergence of the means and variances of the LDLC decoder under the “consistent Gaussian” assumption. Our conditions are more general for two reasons. First, we present a single sufficient condition instead of four conditions that have to hold concurrently as in the original LDLC work. Second, our convergence analysis does not assume Latin squares, or even square matrices, and makes no assumption about the sparsity of $H$. This extends the applicability of the LDLC decoder to other types of codes. Note that our convergence analysis relates to the means and variances of the Gaussian mixture messages. A remaining open problem is the convergence of the amplitudes – the relative heights of the different consistent Gaussians. VI Experimental results In this section we report preliminary experimental results for our NBP-based decoder. Our implementation is general and not restricted to the LDLC domain. Specifically, recent work by Baron et al. [5] has extensively tested our NBP implementation in the context of the related compressive sensing domain. Our Matlab code is available on the web [24]. We have used code lengths of $n=100$ and $n=1000$, where the number of non-zeros in each row and each column is $d=3$. Unlike LDLC Latin squares, which are formed using a generator sequence $h_{i}$, we have selected the non-zero entries of the sparse encoding matrix $G$ randomly out of $\{-1,1\}$. This construction further speeds up decoding, since bipolar entries avoid the integral computation (stretch/unstretch operation). We have used bipolar signaling, $b\in\{-1,1\}$. We have calculated the maximal noise level $\sigma^{2}_{max}$ using Poltyrev’s generalized definition of channel capacity under the unrestricted power assumption [25].
For bipolar signaling, $\sigma^{2}_{max}=4\sqrt[n]{\det(G)^{2}}/2\pi e$. When applied to lattices, the generalized capacity implies that there exists a lattice $G$ of high enough dimension $n$ that enables transmission with arbitrarily small error probability, if and only if $\sigma^{2}<\sigma^{2}_{max}$. Figure 3 plots the SER (symbol error rate) of the NBP decoder vs. the LDLC decoder for code lengths $n=100$ and $n=1000$. The $x$-axis represents the distance from capacity in dB as calculated using Poltyrev’s equation. As can be seen, our novel NBP decoder has better SER for $n=100$ at all noise levels. For $n=1000$ we have better performance at high noise levels, and comparable performance up to 0.3dB from LDLC at low noise levels. We are currently in the process of extending our implementation to support code lengths of up to $n=100{,}000$. Initial performance results are very promising. VII Future work and open problems We have shown that the LDLC decoder is a variant of the NBP algorithm. This allowed us to use current research results from the non-parametric belief propagation domain to extend the decoder applicability in several directions. First, we have extended the algorithm’s applicability from Latin squares to full column rank matrices (possibly non-square). Second, we have extended the LDLC convergence analysis by finding simpler conditions for convergence. Third, we have presented a new family of LDLC codes which are based on sparse encoding matrices. We are currently working on an open source implementation of the NBP-based decoder, using an undirected graphical model, including a complete comparison of performance to the LDLC decoder. Another area of future work is to examine the practical performance of the efficient Gaussian mixture product sampling algorithms developed in the NBP domain when applied to the LDLC decoder. As little is known about the convergence of the NBP algorithm, we plan to continue examining its convergence in different settings.
Finally, we plan to investigate the applicability of the recent convergence fix algorithm [26] for supporting decoding matrices where the sufficient conditions for convergence do not hold. Acknowledgment D. Bickson would like to thank N. Sommer, M. Feder and Y. Yona from Tel Aviv University for interesting discussions and helpful insights regarding the LDLC algorithm and its implementation. D. Bickson was partially supported by grants NSF IIS-0803333, NSF NeTS-NBD CNS-0721591 and DARPA IPTO FA8750-09-1-0141. Danny Dolev is Incumbent of the Berthold Badler Chair in Computer Science. Danny Dolev was supported in part by the Israeli Science Foundation (ISF) Grant number 0397373. References [1] R. G. Gallager, “Low density parity check codes,” IRE Trans. Inform. Theory, vol. 8, pp. 21–28, 1962. [2] N. Sommer, M. Feder, and O. Shalvi, “Low-density lattice codes,” in IEEE Transactions on Information Theory, vol. 54, no. 4, 2008, pp. 1561–1585. [3] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky, “Nonparametric belief propagation,” in Conference on Computer Vision and Pattern Recognition (CVPR), June 2003. [4] S. Sarvotham, D. Baron, and R. G. Baraniuk, “Compressed sensing reconstruction via belief propagation,” Rice University, Houston, TX, Tech. Rep. TREE0601, July 2006. [5] D. Baron, S. Sarvotham, and R. G. Baraniuk, “Bayesian compressive sensing via belief propagation,” IEEE Trans. Signal Processing, to appear, 2009. [6] F. Kschischang, B. Frey, and H. A. Loeliger, “Factor graphs and the sum-product algorithm,” vol. 47, pp. 498–519, Feb. 2001. [7] R. J. McEliece, D. J. C. MacKay, and J. F. Cheng, “Turbo decoding as an instance of Pearl’s ’belief propagation’ algorithm,” vol. 16, pp. 140–152, Feb. 1998. [8] Y. Weiss and W. T. Freeman, “Correctness of belief propagation in Gaussian graphical models of arbitrary topology,” Neural Computation, vol. 13, no. 10, pp. 2173–2200, 2001. [9] A. Ihler, E. Sudderth, W. Freeman, and A. 
Willsky, “Efficient multiscale sampling from products of gaussian mixtures,” in Neural Information Processing Systems (NIPS), Dec. 2003. [10] M. Briers, A. Doucet, and S. S. Singh, “Sequential auxiliary particle belief propagation,” in International Conference on Information Fusion, 2005, pp. 705–711. [11] D. Rudoy and P. J. Wolf, “Multi-scale MCMC methods for sampling from products of Gaussian mixtures,” in IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 3, 2007, pp. III–1201–III–1204. [12] A. T. Ihler. Kernel Density Estimation Toolbox for MATLAB [online] http://www.ics.uci.edu/$\sim$ihler/code/. [13] A. T. Ihler, Fisher, R. L. Moses, and A. S. Willsky, “Nonparametric belief propagation for self-localization of sensor networks,” Selected Areas in Communications, IEEE Journal on, vol. 23, no. 4, pp. 809–819, 2005. [14] A. T. Ihler, J. W. Fisher, and A. S. Willsky, “Particle filtering under communications constraints,” in Statistical Signal Processing, 2005 IEEE/SP 13th Workshop on, 2005, pp. 89–94. [15] B. Kurkoski and J. Dauwels, “Message-passing decoding of lattices using Gaussian mixtures,” in IEEE Int. Symp. on Inform. Theory (ISIT), Toronto, Canada, July 2008. [16] Y. Yona and M. Feder, “Efficient parametric decoder of low density lattice codes,” in IEEE International Symposium on Information Theory (ISIT), Seoul, S. Korea, July 2009. [17] B. M. Kurkoski, K. Yamaguchi, and K. Kobayashi, “Single-Gaussian messages and noise thresholds for decoding low-density lattice codes,” in IEEE International Symposium on Information Theory (ISIT), Seoul, S. Korea, July 2009. [18] D. Bickson, “Gaussian belief propagation: Theory and application,” Ph.D. dissertation, The Hebrew University of Jerusalem, October 2008. [19] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.     San Francisco: Morgan Kaufmann, 1988. [20] O. Shental, D. Bickson, P. H. Siegel, J. K. Wolf, and D. 
Dolev, “Gaussian belief propagation solver for systems of linear equations,” in IEEE International Symposium on Information Theory (ISIT), Toronto, Canada, July 2008. [21] A. T. Ihler, J. W. F. III, and A. S. Willsky, “Loopy belief propagation: Convergence and effects of message errors,” Journal of Machine Learning Research, vol. 6, pp. 905–936, May 2005. [22] D. M. Malioutov, J. K. Johnson, and A. S. Willsky, “Walk-sums and belief propagation in Gaussian graphical models,” Journal of Machine Learning Research, vol. 7, Oct. 2006. [23] D. Bickson, O. Shental, P. H. Siegel, J. K. Wolf, and D. Dolev, “Gaussian belief propagation based multiuser detection,” in IEEE International Symposium on Information Theory (ISIT), Toronto, Canada, July 2008. [24] Gaussian Belief Propagation implementation in matlab [online] http://www.cs.huji.ac.il/labs/danss/p2p/gabp/. [25] G. Poltyrev, “On coding without restrictions for the AWGN channel,” in IEEE Trans. Inform. Theory, vol. 40, Mar. 1994, pp. 409–417. [26] J. K. Johnson, D. Bickson, and D. Dolev, “Fixing convergence of Gaussian belief propagation,” in IEEE International Symposium on Information Theory (ISIT), Seoul, South Korea, 2009.
DESY 05-228, Edinburgh 2005/20, LU-ITP 2005/023, LTH 683 Perturbative Renormalisation for Low Moments of Generalised Parton Distributions with Clover Fermions M. Göckeler, R. Horsley, H. Perlt, P. E. L. Rakow, A. Schäfer, G. Schierholz, and A. Schiller Talk given by A. Schiller at the Workshop on Computational Hadron Physics, Nicosia, September 2005. Institut für Theoretische Physik, Universität Regensburg, 93040 Regensburg, Germany School of Physics, University of Edinburgh, Edinburgh EH9 3JZ, UK Institut für Theoretische Physik, Universität Leipzig, 04109 Leipzig, Germany Theoretical Physics Division, Department of Mathematical Sciences, University of Liverpool, Liverpool L69 3BX, UK John von Neumann-Institut für Computing NIC, Deutsches Elektronen-Synchrotron DESY, 15738 Zeuthen, Germany Deutsches Elektronen-Synchrotron DESY, 22603 Hamburg, Germany Abstract We present the non-forward quark matrix elements of operators with one and two covariant derivatives needed for the renormalisation of the first and second moments of generalised parton distributions in one-loop lattice perturbation theory using clover fermions. For some representations of the hypercubic group commonly used in simulations we define the sets of possible mixing operators and compute the one-loop mixing matrices of renormalisation factors. Tadpole improvement is applied to the results and some numerical examples are presented. 1 INTRODUCTION Generalised parton distributions (GPDs) have become a focus of both experimental and theoretical studies in hadron physics (for an extensive up-to-date review see [1]). They allow a parametrisation of a large class of hadronic correlators, including e.g. form factors and the ordinary parton distribution functions. Thus GPDs provide a solid formal basis to connect information from various inclusive, semi-inclusive and exclusive reactions in an efficient, unambiguous manner.
Furthermore they give access to physical quantities which cannot be directly determined in experiments, such as the orbital angular momentum of quarks and gluons in a nucleon (for a chosen specific scheme) and the spatial distribution of the energy or spin density of a fast moving hadron in the transverse plane. Since the structure of GPDs is rather complicated, direct experimental access is limited. Therefore, complementary information channels have to be opened up. One major source is lattice QCD [2, 3, 4, 5, 6]. Recently [7] we calculated, in one-loop lattice perturbation theory for the Wilson fermion action, the non-forward matrix elements needed for the renormalisation of the second moments of GPDs. From these results the renormalisation factors for various representations of the hypercubic group have been derived. Here we present some new results for operators with two covariant derivatives using the Sheikholeslami-Wohlert (clover) action [8], which leads to $O(a)$ improved quark-quark-gluon and quark-quark-gluon-gluon vertices in the Feynman rules. Since in current numerical simulations the operators for the second moment of GPDs are not improved, we ignore such a possible additional improvement. Note that the $O(a)$ improvement for operators with one covariant derivative is known [9].
We consider the Wilson gauge action and clover fermions with the fermionic action $S_{\rm SW,F}$ [8] (for dimensionful massless fermion fields $\psi(x)$) $$\displaystyle S_{\rm SW,F}$$ $$\displaystyle=$$ $$\displaystyle 4ra^{3}\sum_{x}\bar{\psi}(x)\psi(x)$$ $$\displaystyle-$$ $$\displaystyle\frac{a^{3}}{2}\sum_{x,\mu}\left[\bar{\psi}(x)(r-\gamma_{\mu})U_{% x,\mu}\psi(x+a\hat{\mu})\right.$$ $$\displaystyle+$$ $$\displaystyle\left.\bar{\psi}(x+a\hat{\mu})(r+\gamma_{\mu})U^{\dagger}_{x,\mu}% \psi(x)\right]$$ $$\displaystyle-$$ $$\displaystyle\frac{a^{4}\,g\,c_{sw}}{4}\sum_{x,\mu,\nu}\,\bar{\psi}(x)\sigma_{% \mu\nu}F^{\rm clover}_{\mu\nu}\psi(x)\,.$$ Here $a$ denotes the lattice spacing and the sums run over all lattice sites $x$ and directions $\mu,\nu$ (all other indices are suppressed). $F^{\rm clover}_{\mu\nu}$ is the standard “clover-leaf” form of the lattice field strength and $\sigma_{\mu\nu}=i/2[\gamma_{\mu},\gamma_{\nu}]$. In a perturbative calculation the operators to be investigated are sandwiched between off-shell quark states with 4-momenta $p$ and $p^{\prime}$. Our calculations are performed in Feynman gauge, the final numbers are presented for the Wilson parameter $r=1$, leaving the value of $c_{sw}$ free. One-link quark operators with clover fermions have been discussed in [9] for forward matrix elements. The renormalisation constants found there for a given representation of the hypercubic group H(4) and charge conjugation parity $C$ can be used in the non-forward case as well. Additionally, in the case of GPDs “transversity” operators have to be taken into account. We will collect here all results for completeness. It is well known that operators with two or more covariant derivatives may mix under H(4): the one- and higher-loop structures differ in general from that of the Born term and multiplicative renormalisation may get lost. 
In addition, for non-forward matrix elements operators with ordinary (external) derivatives can also contribute, making the mixing problem more complicated. To find the possible candidates for mixing, one has to identify those operators which belong to the same irreducible representation of H(4) and have the same charge conjugation parity. We define renormalised operators $\mathcal{O}_{i}^{\cal S}$ by $${\cal O}_{i}^{\cal S}(\mu)=\sum_{k=1}^{N}Z_{ik}^{\cal S}(a,\mu)\,{\cal O}_{k}(a)$$ (2) where $\mathcal{S}$ denotes the renormalisation scheme and $N$ is the number of operators which mix at one loop. $Z_{ik}^{\mathcal{S}}(a,\mu)$ are the renormalisation constants connecting the lattice operators $\mathcal{O}_{k}(a)$ with the renormalised operator $\mathcal{O}_{i}^{\mathcal{S}}(\mu)$ at scale $\mu$. We present the renormalisation constants in the $\overline{MS}$ scheme following [7]. 2 OPERATORS AND MIXING We consider operators with up to two covariant symmetric lattice derivatives $\stackrel{{\scriptstyle\leftrightarrow}}{{D}}=\stackrel{{\scriptstyle\rightarrow}}{{D}}-\stackrel{{\scriptstyle\leftarrow}}{{D}}$ and external ordinary derivatives $\partial$, as needed for the representations of interest for the first and second moments of GPDs.
The standard realisation of the covariant derivatives acting to the right and to the left is used: $$\displaystyle\stackrel{{\scriptstyle\rightarrow}}{{D}}_{\mu}\psi(x)=\frac{1}{2a}\times$$ (3) $$\displaystyle\Big{[}U_{x,\mu}\,\psi(x+a\hat{\mu})-U^{\dagger}_{x-a\hat{\mu},\mu}\,\psi(x-a\hat{\mu})\Big{]}\,,$$ $$\displaystyle\bar{\psi}(x)\stackrel{{\scriptstyle\leftarrow}}{{D}}_{\mu}=\frac{1}{2a}\times$$ (4) $$\displaystyle\Big{[}\bar{\psi}(x+a\hat{\mu})\,U^{\dagger}_{x,\mu}-\bar{\psi}(x-a\hat{\mu})\,U_{x-a\hat{\mu},\mu}\Big{]}\,.$$ The external ordinary derivative is taken as $$\displaystyle\partial_{\mu}\left(\bar{\psi}\cdots\psi\right)\!(x)=\frac{1}{a}\times$$ (5) $$\displaystyle\Big{[}\left(\bar{\psi}\cdots\psi\right)\!(x+a\hat{\mu})-\left(\bar{\psi}\cdots\psi\right)\!(x)\Big{]}\,.$$ The number of derivatives appearing in the operators is indicated by superscripts $D$ and $\partial$, respectively. Quark operators with one derivative are given by $$\displaystyle\mathcal{O}^{D}_{\mu\nu}$$ $$\displaystyle=$$ $$\displaystyle-\frac{i}{2}\bar{\psi}\gamma_{\mu}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\nu}\psi\,,$$ (6) $$\displaystyle\mathcal{O}^{5,D}_{\mu\nu}$$ $$\displaystyle=$$ $$\displaystyle-\frac{i}{2}\bar{\psi}\gamma_{\mu}\gamma_{5}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\nu}\psi\,,$$ (7) $$\displaystyle\mathcal{O}^{T,D}_{\mu\nu\omega}$$ $$\displaystyle=$$ $$\displaystyle-\frac{i}{2}\bar{\psi}[\gamma_{\mu},\gamma_{\nu}]\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\omega}\psi\,,$$ (8) $$\displaystyle\mathcal{O}^{T,\partial}_{\mu\nu\omega}$$ $$\displaystyle=$$ $$\displaystyle-\frac{i}{2}\partial_{\omega}\left(\bar{\psi}[\gamma_{\mu},\gamma_{\nu}]\psi\right)\,.$$ (9) The operator (8) is a transversity operator antisymmetric in its first two indices and is of interest for GPDs; operators (8) and (9) contribute as lower dimensional operators to the mixing in certain representations of the second moment of GPDs.
As operators with two derivatives we consider here $$\displaystyle\mathcal{O}_{\mu\nu\omega}^{DD}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{4}\bar{\psi}\gamma_{\mu}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\nu}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\omega}\psi\,,$$ $$\displaystyle\mathcal{O}_{\mu\nu\omega}^{\partial D}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{4}\partial_{\nu}\left(\bar{\psi}\gamma_{\mu}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\omega}\psi\right)\,,$$ (10) $$\displaystyle\mathcal{O}_{\mu\nu\omega}^{\partial\partial}$$ $$\displaystyle=$$ $$\displaystyle-\frac{1}{4}\partial_{\nu}\partial_{\omega}\left(\bar{\psi}\gamma_{\mu}\psi\right)\,.$$ In addition, spin-dependent and “transversity” operators have to be considered when discussing all possible representations. They are roughly obtained by replacing $\gamma_{\mu}$ by $\gamma_{\mu}\gamma_{5}$ and $\sigma_{\mu\tau}$, respectively. To define the various representations with given $C$ we use the following short-hand notations $$\displaystyle\mathcal{O}_{\cdots\{\nu_{1}\nu_{2}\}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}\left(\mathcal{O}_{\cdots\nu_{1}\nu_{2}}+\mathcal{O}_{\cdots\nu_{2}\nu_{1}}\right)\,,$$ $$\displaystyle\mathcal{O}_{\{\nu_{1}\nu_{2}\nu_{3}\}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{6}\left(\mathcal{O}_{\nu_{1}\nu_{2}\nu_{3}}+\mathcal{O}_{\nu_{1}\nu_{3}\nu_{2}}+\mathcal{O}_{\nu_{2}\nu_{1}\nu_{3}}\right.$$ $$\displaystyle+$$ $$\displaystyle\left.\mathcal{O}_{\nu_{2}\nu_{3}\nu_{1}}+\mathcal{O}_{\nu_{3}\nu_{1}\nu_{2}}+\mathcal{O}_{\nu_{3}\nu_{2}\nu_{1}}\right)\,,$$ $$\displaystyle\mathcal{O}_{\|\nu_{1}\nu_{2}\nu_{3}\|}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{O}_{\nu_{1}\nu_{2}\nu_{3}}-\mathcal{O}_{\nu_{1}\nu_{3}\nu_{2}}$$ $$\displaystyle+$$ $$\displaystyle\mathcal{O}_{\nu_{3}\nu_{1}\nu_{2}}-\mathcal{O}_{\nu_{3}\nu_{2}\nu_{1}}-2\,\mathcal{O}_{\nu_{2}\nu_{3}\nu_{1}}$$ $$\displaystyle+$$ $$\displaystyle
2\,\mathcal{O}_{\nu_{2}\nu_{1}\nu_{3}}\,,$$ $$\displaystyle\mathcal{O}_{\langle\langle\nu_{1}\nu_{2}\nu_{3}\rangle\rangle}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{O}_{\nu_{1}\nu_{2}\nu_{3}}+\mathcal{O}_{\nu_{1}\nu_{3}\nu_{2}}$$ $$\displaystyle-$$ $$\displaystyle\mathcal{O}_{\nu_{3}\nu_{1}\nu_{2}}-\mathcal{O}_{\nu_{3}\nu_{2}\nu_{1}}\,.$$ Let us denote an irreducible representation of the hypercubic group $H(4)$ by $\tau_{k}^{(l)}$, where $l$ is the dimension and $k$ labels inequivalent representations of the same dimension, and the charge conjugation parity by $C=\pm 1$. For the first moments we choose the representations presented in Table 1 (for the notation and a detailed discussion of the transformation under H(4) see [10]). They are renormalised multiplicatively. For the second moments we consider in this contribution the following mixing cases (the details and additional operators will be presented elsewhere [11]): Representation $\tau_{2}^{(4)}$, $C=-1$ with operators $$\mathcal{O}_{\{124\}}^{DD}\,,\,\mathcal{O}_{\{124\}}^{\partial\partial}\,.$$ (11) Representation $\tau_{1}^{(8)}$, $C=-1$ with $$\displaystyle\mathcal{O}_{1}=\mathcal{O}^{DD}_{\{114\}}-\frac{1}{2}\left(\mathcal{O}^{DD}_{\{224\}}+\mathcal{O}^{DD}_{\{334\}}\right)\,,$$ $$\displaystyle\mathcal{O}_{2}=\mathcal{O}^{\partial\partial}_{\{114\}}-\frac{1}{2}\left(\mathcal{O}^{\partial\partial}_{\{224\}}+\mathcal{O}^{\partial\partial}_{\{334\}}\right)\,,$$ $$\displaystyle\mathcal{O}_{3}=\mathcal{O}^{DD}_{\langle\langle 114\rangle\rangle}-\frac{1}{2}\left(\mathcal{O}^{DD}_{\langle\langle 224\rangle\rangle}+\mathcal{O}^{DD}_{\langle\langle 334\rangle\rangle}\right)\,,$$ $$\displaystyle\mathcal{O}_{4}=\mathcal{O}^{\partial\partial}_{\langle\langle 114\rangle\rangle}-\frac{1}{2}\left(\mathcal{O}^{\partial\partial}_{\langle\langle 224\rangle\rangle}+\mathcal{O}^{\partial\partial}_{\langle\langle 334\rangle\rangle}\right)\,,$$
$$\displaystyle\mathcal{O}_{5}=\mathcal{O}^{5,\partial D}_{||213||}\,,\quad\mathcal{O}_{6}=\mathcal{O}^{5,\partial D}_{\langle\langle 213\rangle\rangle}\,,$$ (12) $$\displaystyle\mathcal{O}_{7}=\mathcal{O}^{5,DD}_{||213||}\,,$$ $$\displaystyle\mathcal{O}_{8}=\mathcal{O}^{T,\partial}_{411}-\frac{1}{2}\left(\mathcal{O}^{T,\partial}_{422}+\mathcal{O}^{T,\partial}_{433}\right)\,.$$ 3 ONE-LOOP CALCULATION We calculate the non-forward matrix elements of the operators in one-loop lattice perturbation theory in the infinite volume limit following Kawai et al. [12]. Details of the computational procedure are given in [7]. In lattice momentum space, operators with non-zero momentum transfer $q$ are realised by applying the momentum transfer at the lattice position $x$ or at the “position centre” $x+\frac{a}{2}\hat{\mu}$; e.g. for an operator with one covariant derivative we have the two possibilities $$\displaystyle\left(\bar{\psi}\stackrel{{\scriptstyle\leftrightarrow}}{{D}}_{\!\mu}\,\psi\right)\!(q)=\frac{1}{2a}\sum_{x}\left\{{\rm e}^{iq\cdot x}+{\rm e}^{iq\cdot(x+a\hat{\mu})}\atop 2\,{\rm e}^{iq\cdot(x+a\hat{\mu}/2)}\right\}\times$$ $$\displaystyle\left[\bar{\psi}(x)U_{x,\mu}\psi(x+a\hat{\mu})-\bar{\psi}(x+a\hat{\mu})U^{\dagger}_{x,\mu}\psi(x)\right]\,.$$ (13) Eq. (13) basically defines the Feynman rules for the operators in lattice perturbation theory. As an example we get for the operator $\mathcal{O}^{DD}_{\mu\nu\omega}$ to order $O(g^{0})$ ($g$ is the bare gauge coupling): $$\displaystyle{\cal O}^{DD}_{\mu\nu\omega}(p^{\prime},p)=\bar{\psi}(p^{\prime})\gamma_{\mu}\psi(p)\,\times$$ $$\displaystyle\frac{1}{a}\sin\frac{a(p+p^{\prime})_{\nu}}{2}\,\frac{1}{a}\sin\frac{a(p+p^{\prime})_{\omega}}{2}\times$$ (14) $$\displaystyle\left\{{\cos\frac{a(p-p^{\prime})_{\nu}}{2}\,\cos\frac{a(p-p^{\prime})_{\omega}}{2}\atop 1}\right\}\,.$$ In the following sections we denote the upper/lower realisations by superscripts $I/II$.
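As a sanity check on the Feynman rule (14), one can verify numerically that both realisations I and II reduce to the same continuum vertex $\bar{\psi}(p^{\prime})\gamma_{\mu}\psi(p)\,(p+p^{\prime})_{\nu}(p+p^{\prime})_{\omega}/4$ as $a\to 0$, differing only at $O(a^{2})$. A minimal sketch (the momentum components are arbitrary illustrative values):

```python
import math

def vertex_factor(a, pp_nu, pp_omega, pm_nu, pm_omega, realisation="I"):
    """Scalar part of the O(g^0) vertex (14) for O^{DD}_{mu nu omega}.
    pp_* = (p + p')_*, pm_* = (p - p')_*; realisation II drops the cosines."""
    s = (math.sin(a * pp_nu / 2) / a) * (math.sin(a * pp_omega / 2) / a)
    if realisation == "I":
        s *= math.cos(a * pm_nu / 2) * math.cos(a * pm_omega / 2)
    return s

# Continuum limit: both realisations reduce to (p+p')_nu (p+p')_omega / 4.
pp_nu, pp_omega, pm_nu, pm_omega = 0.7, 1.3, 0.4, -0.9
continuum = pp_nu * pp_omega / 4
for a in (1e-2, 1e-4):
    for real in ("I", "II"):
        v = vertex_factor(a, pp_nu, pp_omega, pm_nu, pm_omega, real)
        assert abs(v - continuum) < 10 * a**2  # O(a^2) discretisation error
```

The two realisations thus agree at tree level in the continuum; they differ only in the lattice artifacts, which is why the finite one-loop parts below carry the superscript $(m)=I,II$.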
The contributing one-loop diagrams for the self energy and the amputated Green (vertex) functions are shown in Figures 1 and 2 (filled black circles indicate the positions of the operator insertions). 3.1 First Moment Since mixing is absent for the first moments, we omit the matrix notation for the renormalisation constants and use the general form ($C_{F}=4/3$, $g_{R}$ is the renormalised coupling) $$Z(a\mu)=1-\frac{g_{R}^{2}\,C_{F}}{16\pi^{2}}\left[\gamma\,\ln(a^{2}\mu^{2})+B(c_{sw})\right]$$ (15) with the anomalous dimensions $\gamma=8/3$ for the first four operators and $\gamma=2$ for the last three operators in Table 1. For the operators of that Table we obtain the finite contributions shown in Table 2. 3.2 Second Moment We present the matrix of renormalisation constants in the generic form $$\displaystyle Z_{ij}^{(m)}(a\mu)$$ $$\displaystyle=$$ (16) $$\displaystyle\delta_{ij}-\frac{g_{R}^{2}\,C_{F}}{16\pi^{2}}\left[\gamma_{ij}\,\ln(a^{2}\mu^{2})+B_{ij}^{(m)}(c_{sw})\right]$$ with $$B_{ij}^{(m)}(c_{sw})=B_{ij}^{(0,m)}+B_{ij}^{(1,m)}\,c_{sw}+B_{ij}^{(2,m)}\,c_{sw}^{2}\,.$$ The superscript $(m)$ with $m=I,II$ distinguishes the realisations I and II in (13) of the covariant derivatives. Representation $\tau_{2}^{(4)}$, $C=-1$ For this representation the operators (11) mix. The corresponding $2\times 2$ mixing matrices are $$\gamma_{jk}=\left(\begin{array}[]{rr}\frac{25}{6}&-\frac{5}{6}\\ 0&0\end{array}\right)\,,$$ (17) $$\displaystyle B_{jk}^{(I,II)}(c_{sw})=\left(\begin{array}[]{rr}-11.563&0.024\\ 0&20.618\end{array}\right)$$ $$\displaystyle+\left(\begin{array}[]{rr}2.898&-0.255\\ 0&4.746\end{array}\right)\,c_{sw}$$ $$\displaystyle-\left(\begin{array}[]{rr}0.984&0.016\\ 0&0.543\end{array}\right)\,c_{sw}^{2}\,.$$ (18) In the matrix $B_{jk}^{(I,II)}$ the mixing between the operators $\mathcal{O}_{\{124\}}^{DD}$ and $\mathcal{O}^{\partial\partial}_{\{124\}}$ is very small.
Thus it may be justified to neglect this mixing in practical applications. Representation $\tau_{1}^{(8)}$, $C=-1$ We first consider the mixing (2) of the operators having the same dimension. These are the operators $\mathcal{O}_{1}\dots\mathcal{O}_{7}$ in (12). To one-loop accuracy the operator $\mathcal{O}_{7}$ does not contribute, and we have to consider the following mixing set: $$\{\mathcal{O}_{1},\mathcal{O}_{2},\mathcal{O}_{3},\mathcal{O}_{4},\mathcal{O}_{5},\mathcal{O}_{6}\}\,.$$ The anomalous dimension matrix is $$\gamma_{jk}=\left(\begin{array}[]{rrrrrr}\frac{25}{6}&-\frac{5}{6}&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&\frac{7}{6}&-\frac{5}{6}&1&-\frac{3}{2}\\ 0&0&0&0&0&0\\ 0&0&0&0&2&-2\\ 0&0&0&0&-\frac{2}{3}&\frac{2}{3}\end{array}\right)$$ (19) and the finite parts of the mixing matrix are given in Table 3 (where two numbers are given, the upper one belongs to realisation I and the lower one to realisation II of the lattice covariant derivative). In one-loop lattice perturbation theory, $1/a$ terms may appear when calculating the matrix elements of the operators with two covariant derivatives. Such terms are potentially dangerous because of their power-law divergence in the continuum limit. For the representation $\tau_{2}^{(4)}$ such a mixing is absent. By contrast, the operator ${\cal O}_{1}$ of $\tau_{1}^{(8)}$ mixes with the lower dimensional operator $\mathcal{O}_{8}$ given in (12). The perturbative mixing result is $$\displaystyle{\cal O}_{1}\big{|}_{1/a-{\rm part}}=\frac{g_{R}^{2}C_{F}}{16\pi^{2}}\,\frac{1}{a}\,{\cal O}_{8}^{{\rm tree}}\times$$ (20) $$\displaystyle(-0.518+0.0832\,c_{sw}-0.00983\,c_{sw}^{2})\,,$$ but a nonperturbative subtraction from the matrix element of ${\cal O}_{1}$ is required to obtain reliable numbers.
4 TADPOLE IMPROVEMENT AND SOME NUMERICAL EXAMPLES Since many results of (naive) lattice perturbation theory are in bad agreement with their numerical counterparts, it has been proposed [13] to rearrange the (naive) lattice perturbative series. This rearrangement is performed using the variable $u_{0}$ (the mean field value of the link), e.g. defined from the measured value of the plaquette at a given coupling: $$u_{0}=\langle\frac{1}{3}{\rm Tr}\,U_{\Box}\rangle^{\frac{1}{4}}\,.$$ (21) In the case of mixing the tadpole improvement procedure proceeds as follows. By scaling the link variables $U_{\mu}$ with $u_{0}$, $$U_{\mu}(x)=u_{0}\,\left(\frac{U_{\mu}(x)}{u_{0}}\right)=u_{0}\,\overline{U}_{\mu}(x)\,,$$ the amputated Green function $\Lambda^{(n)}_{\mathcal{O}}$ for an operator $\mathcal{O}$ with $n$ covariant derivatives takes the form $$\Lambda^{(n)}_{\mathcal{O}}=u_{0}^{n}\Lambda^{(n)}_{\mathcal{O}}(\overline{U}_{\mu}(x))\,.$$ (22) $\Lambda^{(n)}_{\mathcal{O}}(\overline{U}_{\mu}(x))$ is expected to have a better converging perturbative expansion. Up to order $g^{2}$ we obtain for the Wilson gauge action, labelling the operators by $i$ and the corresponding number of covariant derivatives by $n_{i}$, $$\displaystyle\Lambda^{(n_{i})}_{i}(\overline{U}_{\mu}(x))$$ $$\displaystyle=$$ $$\displaystyle\left(\frac{1}{u_{0}^{n_{i}}}\right)_{\rm pert}\,\Lambda^{(n_{i})}_{i,{\rm pert}}(U_{\mu}(x))$$ (23) $$\displaystyle=$$ $$\displaystyle\left(1+\frac{g^{2}C_{F}}{16\,\pi^{2}}\,n_{i}\,\pi^{2}+O(g^{4})\right)$$ $$\displaystyle\times\left(\Lambda^{(n_{i},{\rm tree})}_{i}+\frac{g^{2}C_{F}}{16\,\pi^{2}}\sum_{k=1}^{n}w_{ik}\Lambda^{(n_{k},{\rm tree})}_{k}\right.$$ $$\displaystyle\left.+O(g^{4})\right)\,,$$ where the $w_{ik}$ denote the mixing weights. From (23) it becomes clear that at one loop only the diagonal terms in the mixing matrix get a shift proportional to $n_{i}\,\pi^{2}$. An external ordinary derivative ($\partial$) does not provide a factor of $u_{0}$.
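The factor in (23) follows from the one-loop value of the mean link for the Wilson gauge action, $u_{0}=1-\frac{g^{2}C_{F}}{16\pi^{2}}\,\pi^{2}+O(g^{4})$ (i.e. $u_{0}=1-g^{2}/12$ for SU(3)), so that $(1/u_{0}^{n_{i}})_{\rm pert}=1+\frac{g^{2}C_{F}}{16\pi^{2}}\,n_{i}\,\pi^{2}+O(g^{4})$. A small numerical sketch of this expansion:

```python
import math

CF = 4.0 / 3.0
c1 = CF / (16 * math.pi**2) * math.pi**2   # = CF/16 = 1/12 for SU(3)

def u0(g2):
    """One-loop mean link for the Wilson gauge action."""
    return 1.0 - c1 * g2

# Check that 1/u0^n expands to 1 + n*c1*g^2 up to O(g^4) corrections.
for g2 in (1e-3, 1e-4):
    for n in (1, 2):
        expansion = 1 + c1 * n * g2
        assert abs(u0(g2)**(-n) - expansion) < 10 * (c1 * g2)**2
```

This makes explicit why, at one loop, only the diagonal entries of the mixing matrix are shifted, and by an amount proportional to the number of covariant derivatives $n_{i}$.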
Taking into account the mean field value for the wave function renormalisation constant for massless Wilson fermions, $Z_{\psi,{\rm Wilson}}^{MF}=u_{0}$, we get the tadpole improved matrix of renormalisation constants in the form $$\displaystyle Z_{ij}^{TI}$$ $$\displaystyle=$$ $$\displaystyle u_{0}^{1-n_{i}}\times$$ (24) $$\displaystyle\left(1-\frac{g^{2}C_{F}}{16\,\pi^{2}}(n_{i}-1)\,\pi^{2}\,\delta_{ij}+O(g^{4})\right)\,Z_{ij}\,.$$ Additionally, one has to replace the parameters $g$ and $c_{sw}$ by their boosted counterparts $$\displaystyle g_{TI}^{2}\equiv g^{2}\,u_{0}^{-4}\,,\quad c_{sw}^{TI}\equiv c_{sw}\,u_{0}^{3}\,.$$ (25) Putting (16), (24) and (25) together we obtain for the tadpole improved renormalisation mixing matrix in one-loop order $$\displaystyle Z_{ij}^{TI,(m)}$$ $$\displaystyle=$$ $$\displaystyle u_{0}^{1-n_{i}}\,\bigg{(}\delta_{ij}-\frac{g_{TI}^{2}\,C_{F}}{16\pi^{2}}\times$$ (26) $$\displaystyle\Big{(}\gamma_{ij}\,\ln(a^{2}\mu^{2})+B_{ij}^{TI,(m)}(c_{sw}^{TI})\Big{)}\bigg{)}$$ with $$B_{ij}^{TI,(m)}(c_{sw}^{TI})=B_{ij}^{(m)}(c_{sw}^{TI})+(n_{i}-1)\pi^{2}\,\delta_{ij}\,.$$ (27) Let us demonstrate the effect of tadpole improvement by some numerical examples. We choose $a=1/\mu$, $\beta=6$, $u_{0}=0.8778$ and $c_{sw}=1+O(g^{2})$ [9]. For the first moments the only effect consists in replacing $c_{sw}$ by $c_{sw}^{TI}$ and $g_{R}$ by $g_{TI}$ in (15) and in Table 2. For the representation $\tau_{3}^{(6)}$ we get $$Z=1.028\quad\rightarrow\quad Z^{TI}=1.023\,.$$ (28) For the second moments we consider the simple mixing $\mathcal{O}_{\{124\}}^{DD}\leftrightarrow\mathcal{O}_{\{124\}}^{\partial\partial}$ (11) first.
Without tadpole improvement we obtain the mixing matrix $$Z_{ij}=\left(\begin{array}[]{rr}1.081&0.002\\ 0&0.790\end{array}\right)\,.$$ (29) The tadpole improved result is $$Z_{ij}^{TI}=\left(\begin{array}[]{rr}1.142&0.002\\ 0&0.707\end{array}\right)\,.$$ (30) It is instructive to compare the one-loop corrections to the renormalisation constants: $B^{(m)}_{ij}(c_{sw})$ for the unimproved case (16) and $B_{ij}^{TI,(m)}(c_{sw}^{TI})$ for the tadpole improved case (26). We get $$B_{ij}=\left(\begin{array}[]{rr}-9.649&-0.247\\ 0&24.821\end{array}\right)$$ (31) and $$B_{ij}^{TI}=\left(\begin{array}[]{rr}-0.209&-0.177\\ 0&12.035\end{array}\right)\,.$$ (32) We observe that, in line with the aims of the improvement, the diagonal one-loop contributions are reduced. For the representation $\tau_{1}^{(8)}$, $C=-1$ with the mixing of the operators $\mathcal{O}_{1}\dots\mathcal{O}_{6}$, we obtain for the unimproved/improved mixing matrices (choosing $m=I$) the numbers given in Table 4. 5 SUMMARY Within the framework of lattice QCD with clover improved Wilson fermions and Wilson’s plaquette action for the gauge fields, we have calculated the one-loop quark matrix elements of the operators needed for the first two moments of GPDs and meson distribution amplitudes. From these we have determined the matrices of renormalisation and mixing coefficients in the $\overline{MS}$ scheme. For the first moments of GPDs we can use the results from the first moments of structure functions. The results for the second moments extend the numbers obtained with Wilson fermions [7]. The general conclusions concerning the mixing properties remain unchanged. All sets consisting of one operator with two covariant derivatives $D$ and one operator with two external derivatives $\partial$ show very small mixing. The set discussed here with seven potential candidates (12) shows a more significant mixing.
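The matrices (29) and (30) can be reproduced, up to rounding in the last quoted digit, from the coefficients in (18) together with (16), (26) and (27). The sketch below assumes the tree-level relation $g^{2}=6/\beta=1$ at $\beta=6$ and $n_{i}=(2,0)$ covariant derivatives for $(\mathcal{O}^{DD}_{\{124\}},\mathcal{O}^{\partial\partial}_{\{124\}})$:

```python
import math

CF = 4.0 / 3.0
PI2 = math.pi ** 2

# Coefficients of B_ij(c_sw) = B0 + B1*c_sw + B2*c_sw^2, read off from Eq. (18)
B0 = [[-11.563, 0.024], [0.0, 20.618]]
B1 = [[2.898, -0.255], [0.0, 4.746]]
B2 = [[-0.984, -0.016], [0.0, -0.543]]

def B(c):
    return [[B0[i][j] + B1[i][j] * c + B2[i][j] * c * c for j in range(2)]
            for i in range(2)]

def Z(g2, c, u0=1.0, n=(0, 0), tadpole=False):
    """Mixing matrix from Eqs. (16), (26), (27) at a = 1/mu (the log term drops)."""
    Bc = B(c)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            b = Bc[i][j]
            if tadpole and i == j:
                b += (n[i] - 1) * PI2                      # Eq. (27)
            z = (1.0 if i == j else 0.0) - g2 * CF / (16 * PI2) * b
            out[i][j] = u0 ** (1 - n[i]) * z if tadpole else z
    return out

# Assumption of this sketch: tree-level g^2 = 6/beta = 1 at beta = 6.
u0, c_sw = 0.8778, 1.0
Z_plain = Z(1.0, c_sw)                                          # compare Eq. (29)
Z_TI = Z(u0 ** -4, c_sw * u0 ** 3, u0, n=(2, 0), tadpole=True)  # compare Eq. (30)
```

Evaluating this reproduces the diagonal entries of (29) and (30) to about three decimal places, which also makes the reduction of the diagonal one-loop corrections by the improvement directly visible.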
Moreover, if $\mathcal{O}_{1}$ from (12) is taken as the operator to be measured in a numerical simulation, a mixing with the lower dimensional operator $\mathcal{O}_{8}$ appears. This requires a nonperturbative subtraction for Wilson or clover fermions. Using overlap fermions, such a mixing with a dangerous lower dimensional operator must be absent, since the mixing operators are of different chirality. Acknowledgements This work has been supported in part by the EU Integrated Infrastructure Initiative Hadron Physics (I3HP) under contract RII3-CT-2004-506078 and by the DFG under contract FOR 465 (Forschergruppe Gitter-Hadronen-Phänomenologie). References [1] M. Diehl, Phys. Rept. 388 (2003) 41 [arXiv:hep-ph/0307382]. [2] P. Hägler, J. Negele, D. B. Renner, W. Schroers, T. Lippert and K. Schilling [LHPC collaboration], Phys. Rev. D 68 (2003) 034505 [arXiv:hep-lat/0304018]. [3] M. Göckeler, R. Horsley, D. Pleiter, P. E. L. Rakow, A. Schäfer, G. Schierholz and W. Schroers [QCDSF Collaboration], Phys. Rev. Lett. 92 (2004) 042002 [arXiv:hep-ph/0304249]. [4] M. Göckeler, Ph. Hägler, R. Horsley, D. Pleiter, P.E.L. Rakow, A. Schäfer, G. Schierholz and J.M. Zanotti, Nucl. Phys. Proc. Suppl. 140 (2005) 399 [arXiv:hep-lat/0409162]. [5] M. Göckeler, Ph. Hägler, R. Horsley, D. Pleiter, P.E.L. Rakow, A. Schäfer, G. Schierholz and J.M. Zanotti, Nucl. Phys. A 755 (2005) 537 [arXiv:hep-lat/0501029]. [6] M. Göckeler, Ph. Hägler, R. Horsley, D. Pleiter, P.E.L. Rakow, A. Schäfer, G. Schierholz and J.M. Zanotti, Phys. Lett. B 627 (2005) 113 [arXiv:hep-lat/0507001]. [7] M. Göckeler, R. Horsley, H. Perlt, P. E. L. Rakow, A. Schäfer, G. Schierholz and A. Schiller, Nucl. Phys. B 717 (2005) 304 [arXiv:hep-lat/0410009]. [8] B. Sheikholeslami and R. Wohlert, Nucl. Phys. B 259 (1985) 572. [9] S. Capitani, M. Göckeler, R. Horsley, H. Perlt, P. E. L. Rakow, G. Schierholz and A. Schiller, Nucl. Phys. B 593 (2001) 183 [arXiv:hep-lat/0007004]. [10] M. Göckeler, R. Horsley, E.-M. Ilgenfritz, H. Perlt, P.
Rakow, G. Schierholz and A. Schiller, Phys. Rev. D 54 (1996) 5705 [arXiv:hep-lat/9602029]. [11] M. Göckeler, R. Horsley, H. Perlt, P. E. L. Rakow, A. Schäfer, G. Schierholz and A. Schiller, in preparation. [12] H. Kawai, R. Nakayama and K. Seo, Nucl. Phys. B 189 (1981) 40. [13] G. P. Lepage and P. B. Mackenzie, Phys. Rev. D 48 (1993) 2250.
Power-law cosmic expansion in $f(R)$ gravity models Naureen Goheer${}^{1}$, Julien Larena${}^{1}$ and Peter K. S. Dunsby${}^{1,2}$ 1. Department of Mathematics and Applied Mathematics, University of Cape Town, 7701 Rondebosch, Cape Town, South Africa 2. South African Astronomical Observatory, Observatory 7925, Cape Town, South Africa. (December 3, 2020) Abstract We show that within the class of $f(R)$ gravity theories, FLRW power-law perfect fluid solutions only exist for $R^{n}$ gravity. This significantly restricts the set of exact cosmological solutions which have properties similar to those found in standard General Relativity. pacs: 98.80.Cq I Introduction Currently, one of the most popular alternatives to the Concordance Model is based on modifications of the Einstein-Hilbert action. Such models first became popular in the 1980s, when it was shown that they naturally admit a phase of accelerated expansion which could be associated with an early universe inflationary phase star80 . The fact that the phenomenology of Dark Energy requires the presence of a similar phase (although only a late time one) has recently revived interest in these models. In particular, the idea that Dark Energy may have a geometrical origin, i.e., that there is a connection between Dark Energy and a non-standard behavior of gravitation on cosmological scales, is a very active area of research. One such modification is based on gravitational actions which are non-linear in the Ricci curvature $R$ and/or contain terms involving combinations of derivatives of $R$ DEfR ; kerner ; teyssandier ; magnanoff . Over the past few years, these theories have provided a number of very interesting results on both cosmological ccct-ijmpd ; review ; cct-jcap ; otha ; perts and astrophysical cct-jcap ; cct-mnras scales.
An important feature of these theories is that the field equations can be recast in such a way that the higher order corrections are written as an energy-momentum tensor of geometrical origin, describing an “effective” source term on the right hand side of the standard Einstein field equations ccct-ijmpd ; review . In this Curvature Quintessence scenario, the cosmic acceleration can be shown to result from such a new geometrical contribution to the cosmic energy density budget, due to higher order corrections of the Hilbert-Einstein Lagrangian. Of considerable importance to the study of the cosmology of these models is the existence of exact power–law solutions corresponding to phases of cosmic evolution when the energy density is dominated by a perfect fluid. The existence of such solutions is particularly relevant because in Friedmann–Lemaître–Robertson–Walker (FLRW) backgrounds, they typically represent asymptotic or intermediate states in the full phase–space of the dynamical system representing all possible cosmological evolutions. In this paper we investigate the implications for the gravitational action if exact FLRW power–law solutions in $f(R)$ gravity are assumed to exist. We discover that such solutions only occur for a very special class of $f(R)$ theories. This result is complementary to one recently found for $f(G)$ gravity models goheer09 . II Field equations for homogeneous and isotropic $f(R)$ models We consider the following action within the context of four–dimensional homogeneous and isotropic spacetimes, i.e., the (FLRW) universes with negligible spatial curvature: $$\mathcal{A}=\int d^{4}x\sqrt{-g}\left[f(R)+{\cal L}_{m}\right]\;,$$ (1) where $R$ is the Ricci scalar, $f$ is a general differentiable (at least $C^{2}$) function of the Ricci scalar and $\mathcal{L}_{m}$ corresponds to the matter Lagrangian. Units are chosen so that $c=16\pi G=1$.
It follows that the field equations for homogeneous and isotropic spacetimes are the Raychaudhuri equation $$\displaystyle\dot{\Theta}+\frac{1}{3}\Theta^{2}=-\frac{1}{2f^{\prime}}\left[\rho+3P+f-f^{\prime}R+\Theta f^{\prime\prime}\dot{R}+3f^{\prime\prime\prime}\dot{R}^{2}+3f^{\prime\prime}\ddot{R}\right]\;,$$ (2) where $\Theta$ is the volume expansion, which defines the scale factor $a(t)$ along the fluid flow lines via the standard relation $\Theta=3\dot{a}/{a}$, and $f^{(n)}$ abbreviates $\partial^{n}f/{(\partial R)^{n}}$ for $n=1,2,3$; the Friedmann equation $$\Theta^{2}=\frac{3}{f^{\prime}}\left[\rho+\frac{Rf^{\prime}-f}{2}-\Theta f^{\prime\prime}\dot{R}\right]\;;$$ (3) the trace equation $$3\ddot{R}f^{\prime\prime}=\rho-3P+f^{\prime}R-2f-3\Theta f^{\prime\prime}\dot{R}-3f^{\prime\prime\prime}\dot{R}^{2}\;;$$ (4) and the energy conservation equation for standard matter $$\dot{\rho}=-\Theta\left(\rho+P\right)\;.$$ (5) Combining the Friedmann and Raychaudhuri equations, we obtain $$R=2\dot{\Theta}+\frac{4}{3}\Theta^{2}\,.$$ (6) II.1 Requirements for the existence of power–law solutions Analogously to goheer09 , let us now assume that there exists an exact power–law solution to the field equations, i.e., that the scale factor behaves as $$a(t)=a_{0}t^{m}\;,$$ (7) where $m>0$ is a fixed real number. We further assume that the standard matter can be described by a barotropic perfect fluid such that $P=w\rho$ with $w\in[-1,1]$. From the energy conservation equation we obtain $$\rho(t)=\rho_{0}t^{-3m(1+w)}\;,$$ (8) and from (6) we see that the Ricci scalar becomes $$R=6m(2m-1)t^{-2}\equiv\alpha_{m}t^{-2}\;.$$ (9) Note that $R>0$ if $m>1/2$, and $R<0$ for $0<m<1/2$, so the value of $m$ fixes the sign of the Ricci scalar. Using the background solutions above, we can write the Friedmann, Raychaudhuri and trace equations in terms of functions of time $t$ only, assuming with no loss of generality that $t>0$.
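The background relation (9) follows directly from (6) with $\Theta=3\dot{a}/a=3m/t$ and can be checked numerically by finite differences. A minimal sketch (the sample values of $m$ and $t$ are chosen purely for illustration):

```python
# Finite-difference check of Eq. (9): for a(t) = a0 t^m one has Theta = 3m/t,
# and Eq. (6), R = 2*Theta_dot + (4/3)*Theta^2, gives R = 6m(2m-1)/t^2.
m, t, h = 0.8, 2.0, 1e-5   # illustrative sample point

theta = lambda tt: 3 * m / tt
theta_dot = (theta(t + h) - theta(t - h)) / (2 * h)   # central difference
R = 2 * theta_dot + (4.0 / 3.0) * theta(t) ** 2
R_closed = 6 * m * (2 * m - 1) / t ** 2               # Eq. (9)

assert abs(R - R_closed) < 1e-8
```

The sign statement below (9) is visible here as well: the factor $2m-1$ changes sign at $m=1/2$, where $R$ vanishes identically.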
Considering values of $m\neq 1/2$, we can then solve (9) for $t$ and re–write these equations in terms of the Ricci scalar $R$, $f(R)$ and its derivatives with respect to $R$. The Friedmann equation for example becomes $$\displaystyle f^{\prime\prime}R^{2}+\frac{m-1}{2}f^{\prime}R+\frac{1-2m}{2}f+(2m-1)K\left(\frac{R}{\alpha_{m}}\right)^{{\textstyle{3\over 2}}m(1+w)}=0\;,$$ (10) where $K=\rho_{0}a_{0}^{3(1+w)}$. Note that for the power–law solution (7), ${R}/{\alpha_{m}}$ is positive at all times, and therefore equation (10) is real-valued over the relevant range of $R$. Since we want (7) to be a solution at all times, i.e., $R$ spans an entire branch of the real axis, we can interpret (10) as a differential equation for the function $f$ in $R$ space. Solving this equation gives the following general solution $$\displaystyle f(R)=A_{mw}\left(\frac{R}{\alpha_{m}}\right)^{{\textstyle{3\over 2}}m(1+w)}+C_{1}R^{{\textstyle{3\over 4}}-{\textstyle{m\over 4}}+\frac{\sqrt{\beta_{m}}}{4}}+\frac{2}{\sqrt{\beta_{m}}}C_{2}R^{{\textstyle{3\over 4}}-{\textstyle{m\over 4}}-\frac{\sqrt{\beta_{m}}}{4}}\;,$$ (11) where we have abbreviated $$\displaystyle A_{mw}=-\frac{4(2m-1)\rho_{0}}{2-m(13+9w)+3m^{2}(4+7w+3w^{2})}\;,$$ (12) $$\displaystyle\beta_{m}$$ $$\displaystyle=$$ $$\displaystyle 1+10m+m^{2}$$ (13) and $C_{1,2}$ are arbitrary constants of integration. We note that the above form of $f$ identically satisfies the other field equations if we similarly convert them into differential equations in $R$ space. Note that $\beta_{m}>0$ for cosmologically viable solutions with $m>0$, so the exponents in the solution are all real valued in the case considered here. Also $A_{mw}$ is real-valued and non–zero unless $m=1/2$, but diverges if $m$ and $w$ satisfy the relationship $w=\left({3-7m\pm\sqrt{\beta_{m}}}\right)/{6m}$. In general, the function $f(R)$ is real–valued if $m$ and $w$ do not satisfy the above relationship, and if $R>0$ (i.e., $m>1/2$).
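One can check numerically that (11), with $A_{mw}$ from (12) and the exponents built from $\beta_{m}$, indeed solves the differential equation (10). The sketch below assumes $\rho_{0}=a_{0}=1$ (so $K=\rho_{0}$, matching (12)) together with illustrative values of $m$, $w$ and the integration constants:

```python
# Numerical check that f(R) from Eq. (11) solves the ODE (10).
# Illustrative parameters: dust (w = 0), m = 1.2, rho0 = a0 = 1 (so K = 1).
import math

m, w, rho0, K = 1.2, 0.0, 1.0, 1.0
alpha = 6 * m * (2 * m - 1)                      # alpha_m from Eq. (9)
q = 1.5 * m * (1 + w)                            # exponent of the particular solution
A = -4 * (2 * m - 1) * rho0 / (2 - m * (13 + 9 * w) + 3 * m**2 * (4 + 7 * w + 3 * w**2))
beta = 1 + 10 * m + m**2                         # Eq. (13)
p1 = 0.75 - m / 4 + math.sqrt(beta) / 4          # homogeneous exponents in Eq. (11)
p2 = 0.75 - m / 4 - math.sqrt(beta) / 4
C1, C2 = 0.3, -0.5                               # arbitrary integration constants

f = lambda R: A * (R / alpha)**q + C1 * R**p1 + (2 / math.sqrt(beta)) * C2 * R**p2
d = lambda g, R, h=1e-5: (g(R + h) - g(R - h)) / (2 * h)
d2 = lambda g, R, h=1e-4: (g(R + h) - 2 * g(R) + g(R - h)) / h**2

R = 2.7  # arbitrary positive test point
lhs = (d2(f, R) * R**2 + 0.5 * (m - 1) * d(f, R) * R
       + 0.5 * (1 - 2 * m) * f(R) + (2 * m - 1) * K * (R / alpha)**q)
assert abs(lhs) < 1e-4   # Eq. (10) is satisfied up to finite-difference error
```

For these sample values $\beta_{m}=14.44$ is a perfect square, so the homogeneous exponents come out as the simple numbers $p_{1}=1.4$ and $p_{2}=-1/2$.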
Furthermore, if we want to ensure that for $m=2/[3(1+w)]$ and $K=4/[3(1+w)^{2}]$ the theory reduces to GR, then we have to set $C_{1}=C_{2}=0$. In that case, the solution (11) is real-valued for all $R$ provided $R/\alpha_{m}>0$. If we write ${\textstyle{3\over 2}}m(1+w)\equiv n$, we recover the well-known result that in $R^{n}$-gravity there exists an exact Friedmann-like power-law solution $a\propto t^{2n/(3(1+w))}$. The GR limit can now be identified as the case $n=1$. II.2 Scalar field analogy It is interesting to use the solutions found above to reconstruct the effective scalar field often invoked to describe the dynamics of $f(R)$ gravity models. Using this analogy, it has been argued in Frolov that $f(R)$ theories suffer from a singularity problem, namely that in the past, at finite time, the dynamics drives the model towards infinite values of the curvature, corresponding to points in the scalar field potential reachable at finite values of the scalar field. Moreover, the effective potentials of the models studied in Frolov are multivalued, which is a very unnatural feature. In what follows we will show that the models (11) that lead to power-law solutions for the scale factor do not suffer from such pathological behaviors, but admit a well-defined scalar field representation with a single-valued potential and no curvature singularity. The fact that the curvature is well behaved can be directly inferred from Eq. (6), since the only divergence occurs for $t=0$, or equivalently for $a=0$, and this simply corresponds to a standard Big-Bang type singularity.
We adopt the representation in terms of a scalar field used in Frolov , by defining the scalar field $\phi$ and its potential $V(\phi)$ through the following equations: $$\displaystyle\phi$$ $$\displaystyle=$$ $$\displaystyle\frac{df(R)}{dR}-1\;,$$ (14) $$\displaystyle\frac{dV}{dR}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{3}\left(2f(R)-\frac{df}{dR}R\right)\frac{d^{2}f}{dR^{2}}\;.$$ (15) The shape of the potential is illustrated in Figs. 1 and 2 for various values of the equation of state and of the constants $C_{1}$ and $C_{2}$. As long as $m>2/[3(1+w)]$ (which corresponds to $f(R)\propto R^{n}$ with $n>1$ in the case $C_{1}=C_{2}=0$), the characteristic shape of the potential does not depend on the values of $C_{1}$, $C_{2}$ and $w$ (with $w$ in the physically relevant range $0<w<1$). In any case, the scalar field starts at high absolute values and rolls down its potential to asymptotically freeze at $\phi=-1$. In the case $m<2/[3(1+w)]$, the shape of the potential depends on the presence of non-zero $C_{1}$ and $C_{2}$, as illustrated in Fig. 2, but the dynamics nevertheless drives $\phi$ towards a constant value at late times. III Discussion and Conclusion We have shown here that exact power-law solutions in $f(R)$-gravity can only exist for the very specific form of $f(R)$ given in (11). If we ask for the theory to have the correct GR limit, then this $f(R)$ simply reduces to $R^{n}$. This makes $R^{n}$ gravity very special in the sense that it is the only $f(R)$ model that allows for exact power-law solutions. Models with actions containing terms of the form $R+\tilde{f}(R)$ for example, as studied in Hu , do not allow for exact power-law backgrounds unless we reduce to the GR background $\tilde{f}(R)=0$. We do not exclude the existence of cosmologically viable trajectories for a general $f(R)$, but suggest that these trajectories may correspond to more complicated exact solutions that asymptotically scale like power-law solutions.
We emphasize that one must make sure that such background solutions exist before performing perturbation theory. It is not enough to simply perturb around an exact power-law background, since this exact background solution does not exist unless $f(R)=R^{n}$. Furthermore, the above work suggests that in a dynamical systems analysis of any $f(R)$ theory other than $R^{n}$, one should not expect to find any equilibrium points corresponding to exact power-law solutions. We conclude that there is a qualitative difference between $R^{n}$ and any other $f(R)$-model in the sense that $R^{n}$ has exact power-law solutions even in the non-perturbative non-GR case (e.g. for $n=2$), while any other $f(R)$ can only allow these exact solutions in the GR-limit. Therefore, perturbations around these background solutions should be carried out with caution. Moreover, we have shown that the singularity that appears in other $f(R)$ models is not present for the class of models derived from the requirement of a power-law expansion, which is manifest in the scalar field analogy, in which the scalar field potential is a well behaved function of the value of the field $\phi$. Acknowledgements. The authors would like to thank the National Research Foundation (South Africa) for financial support. JL is supported by the Claude Leon Foundation. References (1) A. A. Starobinsky, Phys. Lett. B 91, 99 (1980); K. S. Stelle, Gen. Rel. Grav. 9 (1978) 353. (2) Carroll S M, Duvvuri V, Trodden M and Turner M S 2004 Phys. Rev. D70 043528; Nojiri S and Odintsov S D 2003 Phys. Rev. D68 123512; Capozziello S 2002 Int. Journ. Mod. Phys. D 11 483; Mustafa S arXiv: gr-qc/0607116; Faraoni V 2005 Phys. Rev. D72 124005; Ruggiero M L and Iorio L 2007 JCAP 0701 010; de la Cruz-Dombriz A and Dobado A 2006 Phys. Rev. D74 087501; Poplawski N J 2006 Phys. Rev. D74 084032; Poplawski N J 2007 Class. Quantum Grav. 24 3013; Brookfield A W, van de Bruck C and Hall L M H 2006 Phys. Rev.
D74 064028; Song, Y, Hu W and Sawicki I 2007 Phys. Rev. D75 044004; Li B, Chan K and Chu M 2007 Phys. Rev. D76 024002; Jin X, Liu D and Li X arXiv: astro-ph/0610854; Sotiriou T P and Liberati S 2007 Ann. Phys. (NY) 322 935; Sotiriou T P 2006 Class. Quantum Grav. 23 5117; Bean R, Bernat D, Pogosian L, Silvestri A and Trodden M 2007 Phys. Rev. D75 064020; Navarro I and Van Acoleyen K 2007 JCAP 0702 022; Bustelo A J and Barraco D E 2007 Class. Quantum Grav. 24 2333 Olmo G J 2007 Phys. Rev. D75 023511; Ford J, Giusto S and Saxena A arXiv: hep-th/0612227; Briscese F, Elizalde E, Nojiri S and Odintsov S D 2007 Phys. Lett. B646 105; Baghram S, Farhang M and Rahvar S 2007 75 044024; Bazeia D, Carneiro da Cunha B, Menezes R and Petrov A 2007 Phys. Lett. B649 445; Zhang P 2007 Phys. Rev. D76 024007; Li B and Barrow J D 2007 Phys. Rev. Dbf 75 084010; Rador T arXiv: hep-th/0702081; Rador T 2007 Phys. Rev. D75 064033; Sokolowski L M arXiv: gr-qc/0702097; Faraoni V 2007 Phys. Rev. D75 067302; Bertolami O, Boehmer C G, Harko T and Lobo F S N 2007 Phys. Rev. D75 104016; Srivastava S K arXiv:0706.0410 [hep-th]; Capozziello S, Cardone V F and Troisi A 2006 JCAP 08 001; Starobinsky A A arXiv: 0706.2041 [gr-qc] (3) Kerner R 1982 Gen. Rel. Grav. 14 453 ; Duruisseau J P, Kerner R 1986 Class. Quantum Grav. 3 817. (4) Teyssandier P 1989 Class. Quantum Grav. 6 219. (5) Magnano G, Ferraris M and Francaviglia M 1987 Gen. Rel. Grav. 19 465. (6) Capozziello S., Cardone V.F., Carloni S., Troisi A., 2003, Int. J. Mod. Phys.  D 12, 1969. (7) Capozziello S, Carloni S and Troisi A 2003 Recent Res. Devel.Astronomy & Astrophysics 1, 625, arXiv: astro-ph/0303041 (8) S. Capozziello, V.F. Cardone, A. Troisi, 2006, JCAP 0608, 001 (9) K. i. Maeda and N. Ohta, Phys. Lett.  B 597 (2004) 400 arXiv:hep-th/0405205, K. i. Maeda and N. Ohta, Phys. Rev.  D 71 (2005) 063520 arXiv:hep-th/0411093, N. Ohta, Int. J. Mod. Phys.  A 20 (2005) 1 arXiv:hep-th/0411230, K. Akune, K. i. Maeda and N. Ohta, Phys. Rev.  
D 73 (2006) 103506 arXiv:hep-th/0602242. (10) Kishore N. Ananda, Sante Carloni, Peter K. S. Dunsby, A characteristic signature of fourth order gravity, arXiv:0812.2028; Kishore N. Ananda, Sante Carloni, Peter K. S. Dunsby, A detailed analysis of structure growth in $f(R)$ theories of gravity, arXiv:0809.3673. (11) S. Capozziello, V.F. Cardone, A. Troisi, 2007, Mon. Not. Roy. Astron. Soc.  375, 1423. (12) N. Goheer, R. Goswami P. Dunsby & K. Ananda, Phys. Rev. D 79, 121301(R) (2009), arXiv:0904.2559. (13) A. V. Frolov, Phys. Rev. Lett.  101, 061103 (2008) [arXiv:0803.2500 [astro-ph]]. (14) A. De Felice, S. Tsujikawa, Construction of cosmologically viable $f(\mathcal{G})$ dark energy models, arXiv:0810.5712. (15) W. Hu & I. Sawicki, Phys. Rev. D 76, 064004 (2007), arXiv: 0705.1158; H. Oyaizu, M. Lima & W. Hu, Phys. Rev. D 78, 123524 (2008), arXiv: 0807.2462.
Abstract We present an exact black hole solution in a model containing, besides gravity, a dilaton and a monopole field. The solution has three free parameters, one of which can be identified with the monopole charge and another with the ADM mass. The metric is asymptotically flat, has two horizons, and has an irremovable singularity only at $r=0$. The dilaton field is singular only at $r=0$. The dominant and the strong energy conditions are satisfied outside and on the external horizon. According to a formulation of the no hair conjecture the solution is "hairy". Also the well-known GHS-GM solution is obtained from our solution for certain values of its parameters. PACS number(s): 04.20.Jb, 04.70.Bw, 04.20.Dw Black Hole in a Model with Dilaton and Monopole Fields E. Kyriakopoulos (E-mail: kyriakop@central.ntua.gr) Department of Physics, National Technical University, 157 80 Zografou, Athens, GREECE There are various formulations of the no hair conjecture [1-5]. According to the original one [1] a black hole is uniquely determined by its mass, its electric and magnetic charges and its angular momentum; there are no other conserved quantities, in the sense that any further conserved quantity can be expressed in terms of the above. This happens in the well-known solution of D. Garfinkle, G. T. Horowitz and A. Strominger (GHS) [6], found previously by G. W. Gibbons and by G. W. Gibbons and K. Maeda [7], which has two free parameters, namely the Arnowitt-Deser-Misner (ADM) mass and the magnetic charge, and in addition one more parameter, the dilaton charge, which however can be expressed in terms of the other two. Therefore the GHS solution does not violate the no hair theorem in its original formulation [8, 9].
Another formulation of the no hair conjecture is the following [5]: "We say that in a given theory there is black hole hair when the space-time metric and the configuration of the other fields of a stationary black hole solution are not completely specified by the conserved charges defined at asymptotic infinity". In this work we present, in a model having besides gravity a dilaton and a monopole field, an exact solution which is characterized by three independent quantities, namely the monopole charge, the ADM mass and another free parameter. The solution is simple and has two horizons. The scalar field is singular only at the origin and the metric has an essential singularity only at the origin. The metric is written in Eddington-Finkelstein type coordinates. Also one coordinate patch of Kruskal type coordinates is given. According to Ref. [5] our solution is "hairy". Consider the action $$\int d^{4}x\sqrt{-g}L=\int d^{4}x\sqrt{-g}\{R-\frac{1}{2}\partial_{\mu}\psi\partial^{\mu}\psi-(g_{1}e^{\psi}+g_{2}e^{-\psi})F_{\mu\nu}F^{\mu\nu}\}$$ (1) where $R$ is the Ricci scalar, $\psi$ is a dilaton field and $F_{\mu\nu}$ is a pure monopole field $$F=Q\sin\theta\,d\theta\wedge d\phi$$ (2) with $Q$ its magnetic charge. From this action we find the following equations of motion $$(\partial^{\rho}\psi)_{;\rho}-(g_{1}e^{\psi}-g_{2}e^{-\psi})F_{\mu\nu}F^{\mu\nu}=0$$ (3) $$((g_{1}e^{\psi}+g_{2}e^{-\psi})F^{\mu\nu})_{;\mu}=0$$ (4) $$R_{\mu\nu}=\frac{1}{2}\partial_{\mu}\psi\partial_{\nu}\psi+2(g_{1}e^{\psi}+g_{2}e^{-\psi})(F_{\mu\sigma}{F_{\nu}}^{\sigma}-\frac{1}{4}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma})$$ (5) We want to find static spherically symmetric solutions of the above equations which are asymptotically flat and have a regular horizon. We write the metric in the form [6] $$ds^{2}=-\lambda^{2}dt^{2}+\lambda^{-2}dr^{2}+\xi^{2}d\Omega$$ (6) where $\lambda$ and $\xi$ are functions of $r$ only and $d\Omega=d\theta^{2}+\sin^{2}\theta\,d\phi^{2}$.
From the above metric and Eq. (2) we get $F_{\mu\nu}F^{\mu\nu}=\frac{2Q^{2}}{\xi^{4}}$, and we can prove that Eq. (4) is satisfied. The dilaton Eq. (3) takes the form $$(\lambda^{2}\xi^{2}\psi^{\prime})^{\prime}=2(g_{1}e^{\psi}-g_{2}e^{-\psi})Q^{2}\xi^{-2}$$ (7) where prime denotes differentiation with respect to $r$. The non-vanishing components of the Ricci tensor of the metric (6) are $R_{00}$, $R_{11}$, $R_{22}$ and $R_{33}=\sin^{2}\theta\,R_{22}$, and for the first three components we get respectively from Eqs. (5) the relations $$(\lambda^{2})^{\prime\prime}+(\lambda^{2})^{\prime}(\xi^{2})^{\prime}\xi^{-2}=2(g_{1}e^{\psi}+g_{2}e^{-\psi})Q^{2}\xi^{-4}$$ (8) $$-(\lambda^{2})^{\prime\prime}\lambda^{-2}-2(\xi^{2})^{\prime\prime}\xi^{-2}-(\lambda^{2})^{\prime}(\xi^{2})^{\prime}\lambda^{-2}\xi^{-2}+[(\xi^{2})^{\prime}]^{2}\xi^{-4}=(\psi^{\prime})^{2}-2(g_{1}e^{\psi}+g_{2}e^{-\psi})Q^{2}\lambda^{-2}\xi^{-4}$$ (9) $$-[\lambda^{2}(\xi^{2})^{\prime}]^{\prime}+2=2(g_{1}e^{\psi}+g_{2}e^{-\psi})Q^{2}\xi^{-2}$$ (10) Eqs. (7)-(10) form a system of four equations for the three unknowns $\lambda^{2}$, $\xi^{2}$ and $\psi$. We found the following solution of this system $$\lambda^{2}=\frac{(r+A)(r+B)}{r(r+\alpha)},\quad\xi^{2}=r(r+\alpha)$$ (11) $$e^{\psi}=e^{\psi_{0}}(1+\frac{\alpha}{r})$$ (12) where $A$, $B$, $\alpha$ and $\psi_{0}$ are integration constants, provided that the following relations are satisfied $$g_{1}=\frac{AB}{2Q^{2}}e^{-\psi_{0}},\quad g_{2}=\frac{(\alpha-A)(\alpha-B)}{2Q^{2}}e^{\psi_{0}}$$ (13) From Eqs. (6) and (11) we get $$ds^{2}=-\frac{(r+A)(r+B)}{r(r+\alpha)}dt^{2}+\frac{r(r+\alpha)}{(r+A)(r+B)}dr^{2}+r(r+\alpha)d{\Omega}$$ (14) Our solution is given by Eqs. (2) and (12)-(14). From Eq. (14) we get asymptotically $-g_{00}=1-\frac{\alpha-A-B}{r}+O(r^{-2})$. Therefore the solution is asymptotically flat and its ADM mass $M$ is given by $$2M=\alpha-A-B$$ (15) It is obvious that $\psi_{0}$ is the asymptotic value of $\psi$.
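The asymptotic statement $-g_{00}=1-\frac{\alpha-A-B}{r}+O(r^{-2})$ is easy to verify symbolically; a minimal sympy check (variable names ours):

```python
import sympy as sp

r, A, B, alpha = sp.symbols('r A B alpha')

# -g00 read off from the metric (14)
minus_g00 = (r + A)*(r + B) / (r*(r + alpha))

# Coefficient of 1/r in the large-r expansion of -g00
coeff = sp.limit((minus_g00 - 1)*r, r, sp.oo)
# coeff equals A + B - alpha, i.e. -g00 = 1 - (alpha - A - B)/r + O(r^-2),
# reproducing the ADM mass relation 2M = alpha - A - B of Eq. (15).
```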
Also for the choice $$\alpha>0,\quad A<0,\quad B<0,$$ (16) we have $$g_{1}>0,\quad g_{2}>0,\quad M>0,$$ (17) and $\psi$ is singular only at $r=0$. We shall make this choice for $\alpha$, $A$ and $B$. The solution has the integration constants $\alpha$, $A$, $B$, $\psi_{0}$ and $Q$, which for given $g_{1}$ and $g_{2}$ must satisfy Eqs. (13). Therefore only three of them are independent. Introducing the ADM mass $M$ by the relation (15) we can take $M$, $Q$ and $\psi_{0}$ as independent parameters and express the parameters $\alpha$, $A$ and $B$ of the dilaton field and the metric in terms of them. This means that the solution has arbitrary mass, arbitrary magnetic charge and an additional arbitrary parameter. Therefore it is a "hairy" solution according to the definition of such a solution given in Ref. [5]. The metric coefficient $g_{rr}$ is singular at $r=-A$ and $r=-B$ while the coefficient $g_{tt}$ is singular at $r=0$ but not at $r=-\alpha$, since $\alpha>0$. However the singularities at $r=-A$ and $r=-B$ are coordinate singularities. Indeed the Ricci scalar $R$ and the curvature scalar $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ are given by $$R=\frac{\alpha^{2}(r+A)(r+B)}{2r^{3}(r+\alpha)^{3}},\quad R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=\frac{P(r,\alpha,A,B)}{4r^{6}(r+\alpha)^{6}}$$ (18) where $P(r,\alpha,A,B)$ is a complicated polynomial in $r$, $\alpha$, $A$ and $B$. Therefore $R$ and $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ are regular at $r=-A$ and $r=-B$, which means that we have a real, irremovable singularity only at $r=0$. If we make the Eddington-Finkelstein type transformation $$t=t^{\prime}\pm\{\alpha\ln r+\frac{(\alpha-A)A}{B-A}\ln|r+A|+\frac{(\alpha-B)B}{A-B}\ln|r+B|\},\quad r=r^{\prime},\quad\theta={\theta}^{\prime},\quad\phi={\phi}^{\prime}$$ (19) the metric $ds^{2}$ of Eq.
(14) takes the form, regular at $r=-A$ and $r=-B$, $$ds^{2}=-\frac{(r^{\prime}+A)(r^{\prime}+B)}{r^{\prime}(r^{\prime}+\alpha)}dt^{\prime 2}+\frac{r^{\prime}+\alpha}{r^{\prime 3}}\{r^{\prime 2}-(A+B)r^{\prime}-AB\}dr^{\prime 2}\mp\frac{2}{r^{\prime 2}}\{(A+B)r^{\prime}+AB\}dr^{\prime}dt^{\prime}+r^{\prime}(r^{\prime}+\alpha)d\Omega$$ (20) From the above expression we find that the radial null directions, i.e. the directions for which $ds^{2}=d\theta=d\phi=0$, are determined by the relations $$r^{\prime}dt^{\prime}\pm(r^{\prime}+\alpha)dr^{\prime}=0$$ (21) $$r^{\prime}(r^{\prime}+A)(r^{\prime}+B)dt^{\prime}\mp(r^{\prime}+\alpha)\{r^{\prime 2}-(A+B)r^{\prime}-AB\}dr^{\prime}=0$$ (22) Integrating the above relations we get $$t^{\prime}=\mp r^{\prime}\mp\alpha\ln r^{\prime}+const$$ (23) $$t^{\prime}=\pm r^{\prime}\mp\alpha\ln r^{\prime}\pm\frac{2A(\alpha-A)}{A-B}\ln|r^{\prime}+A|\pm\frac{2B(\alpha-B)}{B-A}\ln|r^{\prime}+B|+const$$ (24) which are the equations of the intersections of the light cone with the $t^{\prime}-r^{\prime}$ plane. Proceeding as in the Eddington-Finkelstein treatment of the Schwarzschild and Reissner-Nordström solutions we can draw a space-time diagram and show that the solution with the upper sign is a black hole solution. The solution has two horizons, at $r^{\prime}=r=-A$ and at $r^{\prime}=r=-B$. Assume that $|A|>|B|$ and call $I$, $II$ and $III$ the regions with $r>-A$, $-A>r>-B$ and $-B>r>0$ respectively. Then we can show that particles from region $I$ can cross the surface $r=-A$ and enter region $II$, while a particle in region $II$ will reach the surface $r=-B$ asymptotically or it will cross it and enter region $III$. A particle cannot move from region $III$ to region $II$. The solution with the lower sign is a white hole solution. Since we have two horizons we need two Kruskal type coordinate patches to cover the whole range of $r$, as in the Reissner-Nordström case [10].
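Equation (24) follows from Eq. (22) by a partial-fraction decomposition; the antiderivative can be checked by differentiation, e.g. with sympy (a sketch for the upper sign, variable names ours):

```python
import sympy as sp

r, A, B, alpha = sp.symbols('r A B alpha')

# dt'/dr' from the null direction (22), upper sign
integrand = (r + alpha)*(r**2 - (A + B)*r - A*B) / (r*(r + A)*(r + B))

# Claimed antiderivative from Eq. (24) (upper sign; absolute values and the
# integration constant do not affect the derivative)
t_prime = (r - alpha*sp.log(r)
           + 2*A*(alpha - A)/(A - B)*sp.log(r + A)
           + 2*B*(alpha - B)/(B - A)*sp.log(r + B))

check = sp.simplify(sp.diff(t_prime, r) - integrand)  # 0 if (24) is correct
```

The decomposition has residue $-\alpha$ at $r=0$ and residues $2A(\alpha-A)/(A-B)$ and $2B(\alpha-B)/(B-A)$ at the two horizons, which is where the logarithmic terms of (24) come from.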
To introduce the first patch assume as before that $-A>-B$, let $-A>r_{c}>-B$ and define new coordinates $u$ and $v$ for $r>-A$ by the relations $$u=\frac{2A(\alpha-A)}{A-B}e^{\frac{(A-B)r}{2A(\alpha-A)}}(r+A)^{\frac{1}{2}}(r+B)^{-\frac{B(\alpha-B)}{2A(\alpha-A)}}\cosh\{\frac{(A-B)t}{2A(\alpha-A)}\}$$ (25) $$v=\frac{2A(\alpha-A)}{A-B}e^{\frac{(A-B)r}{2A(\alpha-A)}}(r+A)^{\frac{1}{2}}(r+B)^{-\frac{B(\alpha-B)}{2A(\alpha-A)}}\sinh\{\frac{(A-B)t}{2A(\alpha-A)}\}$$ (26) and for $-A>r>r_{c}$ by the relations $$u=\frac{2A(\alpha-A)}{A-B}e^{\frac{(A-B)r}{2A(\alpha-A)}}|r+A|^{\frac{1}{2}}(r+B)^{-\frac{B(\alpha-B)}{2A(\alpha-A)}}\sinh\{\frac{(A-B)t}{2A(\alpha-A)}\}$$ (27) $$v=\frac{2A(\alpha-A)}{A-B}e^{\frac{(A-B)r}{2A(\alpha-A)}}|r+A|^{\frac{1}{2}}(r+B)^{-\frac{B(\alpha-B)}{2A(\alpha-A)}}\cosh\{\frac{(A-B)t}{2A(\alpha-A)}\}$$ (28) Then Eq. (14) for $r>r_{c}$ takes the form $$ds^{2}=\frac{1}{r(r+\alpha)}(r+B)^{1+\frac{B(\alpha-B)}{A(\alpha-A)}}e^{\frac{(B-A)r}{A(\alpha-A)}}(du^{2}-dv^{2})+r(r+\alpha)d{\Omega}$$ (29) where, if we know $u$ and $v$, $r$ and $t$ are determined for $r>-A$ by the relations $$u^{2}-v^{2}=\{\frac{2A(\alpha-A)}{A-B}\}^{2}e^{\frac{(A-B)r}{A(\alpha-A)}}(r+A)(r+B)^{-\frac{B(\alpha-B)}{A(\alpha-A)}}$$ (30) $$\frac{v}{u}=\tanh\{\frac{(A-B)t}{2A(\alpha-A)}\}$$ (31) and for $-A>r>r_{c}$ by analogous relations. The factor $r+A$ of Eq. (14) has been removed in Eq. (29). Introducing an analogous second coordinate patch we can remove the factor $r+B$. Of course we cannot eliminate both factors simultaneously.
The energy-momentum tensor $T_{\mu\nu}$ of our solution is given by $$T_{\mu\nu}=\partial_{\mu}\psi\partial_{\nu}\psi+4(g_{1}e^{\psi}+g_{2}e^{-\psi})F_{\mu\rho}{F_{\nu}}^{\rho}-g_{\mu\nu}\{\frac{1}{2}\partial_{\rho}\psi\partial^{\rho}\psi+(g_{1}e^{\psi}+g_{2}e^{-\psi})F_{\rho\sigma}F^{\rho\sigma}\}=\frac{\alpha^{2}}{r^{2}(r+\alpha)^{2}}\delta_{\mu r}\delta_{\nu r}+\{\frac{2AB}{r^{2}}+\frac{2(\alpha-A)(\alpha-B)}{(r+\alpha)^{2}}\}(\delta_{\mu\theta}\delta_{\nu\theta}+\sin^{2}\theta\,\delta_{\mu\phi}\delta_{\nu\phi})-g_{\mu\nu}\{\frac{{\alpha^{2}}(r+A)(r+B)}{2r^{3}(r+\alpha)^{3}}+\frac{AB}{r^{3}(r+\alpha)}+\frac{(\alpha-A)(\alpha-B)}{r(r+\alpha)^{3}}\}$$ (32) Calculating the eigenvalues of $T_{\mu\nu}$ we can show that it satisfies the dominant as well as the strong energy condition outside and on the external horizon. If $-A>-B>0$ and $\alpha>0$ the Hawking temperature $T_{H}$ of our solution is [8] $$T_{H}=\frac{|(\lambda^{2})^{\prime}(-A)|}{4\pi}=\frac{B-A}{4\pi A(A-\alpha)}$$ (33) Assume that $$\alpha=B=\frac{2Q^{2}}{A}e^{\psi_{0}}$$ (34) Then from Eqs. (13) and (15) we get $$g_{1}=1,\quad g_{2}=0,\quad 2M=-A$$ (35) and Eqs. (12) and (14) become $$e^{\psi}=e^{\psi_{0}}(1-\frac{Q^{2}}{Mr}e^{\psi_{0}})$$ (36) $$ds^{2}=-(1-\frac{2M}{r})dt^{2}+(1-\frac{2M}{r})^{-1}dr^{2}+r(r-\frac{Q^{2}}{M}e^{\psi_{0}})d\Omega$$ (37) If in Eqs. (36) and (37) we make the replacements $$\psi\rightarrow-2\phi,\quad\psi_{0}\rightarrow-2\phi_{0}$$ (38) we get the GHS-GM solution. Therefore the GHS-GM solution is a special case of our solution. I am very grateful to A. Kehagias for many illuminating discussions and suggestions. The computation of the Ricci scalar and the curvature scalar was done with the help of a program given to me by S. Bonanos, whom I thank. References [1] R. Ruffini and J. A. Wheeler, Phys. Today 24 (1), 30 (1971). [2] J. D. Bekenstein, Phys. Rev. D 5, 1239 (1972); 5, 2403 (1972); 51, R6608 (1995), gr-qc/9808028. [3] S. Coleman, J.
Preskill and F. Wilczek, Nucl. Phys. B 378, 175 (1992). [4] P. Bizon, Acta Phys. Pol. B 22, 877 (1994), gr-qc/9402016. [5] D. Nunez, H. Quevedo and D. Sudarsky, Phys. Rev. Lett. 76, 571 (1996). [6] D. Garfinkle, G. T. Horowitz and A. Strominger, Phys. Rev. D 43, 3140 (1991); 45, 3888(E) (1992). [7] G. W. Gibbons, Nucl. Phys. B 207, 337 (1982); G. W. Gibbons and K. Maeda, Nucl. Phys. B 298, 741 (1988). [8] G. T. Horowitz, hep-th/9210119. [9] D. Sudarsky and T. Zannias, Phys. Rev. D 58, 087502 (1998). [10] J. Graves and D. Brill, Phys. Rev. 120, 1507 (1960).
Antti Pöllänen
Locally Repairable Codes and Matroid Theory
Bachelor's thesis, Espoo, November 22, 2015
Supervisor: Associate Professor Camilla Hollanti
Instructor: Ph.D. Thomas Westerbäck
Aalto University School of Science, PL 11000, 00076 Aalto, http://sci.aalto.fi/en/
Degree programme: Engineering Physics and Mathematics
Major subject: Mathematics and Systems Analysis (code SCI3029)
Abstract: Locally repairable codes (LRCs) are error correcting codes used in distributed data storage. A traditional approach is to look for codes which simultaneously maximize error tolerance and minimize storage space consumption. However, this tends to yield codes for which error correction requires an unrealistic amount of communication between storage nodes. LRCs solve this problem by allowing errors to be corrected locally. This thesis reviews previous results on the subject presented in [1]. These include the result that every almost affine LRC induces a matroid such that the essential properties of the code are determined by the matroid, and that the generalized Singleton bound for LRCs can be extended to matroids. Matroid theory can then be used to find classes of matroids that either achieve the bound, meaning they are optimal in a certain sense, or at least come close to the bound. This thesis presents an improvement to the results of [1] in both of these cases.
Date: November 22, 2015 Language: English Number of pages: 4+28 Keywords: distributed storage, matroid, erasure channel, locally repairable code, almost affine code, generalized Singleton bound
Contents
1 Introduction
2 Locally repairable codes
2.1 Basics
2.2 Locally repairable codes
2.3 The Singleton bound
3 Matroids
4 Almost affine LRCs and their connection to matroids
4.1 Parameters $(n,k,d,r,\delta)$ as matroid invariants
4.2 The generalized Singleton bound for matroids
5 A structure theorem
6 Matroid constructions
7 The maximal value of $d$ for $(n,k,r,\delta)$-matroids
7.1 Achievability of the generalized Singleton bound
7.2 A general lower bound for $d_{max}$
8 Conclusions
1 Introduction In modern times, the need for large-scale data storage is swiftly increasing. This need is present for example in large data centers and in cloud storage. The large scale of these distributed data storage systems makes hardware failures common. However, the data must not be lost, and therefore means to recover corrupted data must be devised. Coding theory can be used as a tool for solving this problem. Coding refers to the process of converting the data into a longer, redundant form in such a way that errors introduced after the coding can be corrected. There are various different codes that could be used in the context of distributed storage. However, in this paper we are interested in a class of codes called locally repairable codes (LRCs). Using these codes we can optimize not only storage space consumption and global error tolerance, but also local error tolerance. Local error tolerance or correction is desirable because it reduces the need for communication between storage units. Every almost affine LRC induces a matroid such that the parameters $(n,k,d,r,\delta)$ of the LRC appear as matroid invariants. Consequently, the generalized Singleton bound for the parameters $(n,k,d,r,\delta)$ of an LRC can be extended to matroids.
Matroid theory can then be utilized to design LRCs that achieve the bound or at least come close to it. We review these results first introduced in [1] as well as present two improvements to them. 2 Locally repairable codes 2.1 Basics By a locally repairable code we mean a block code with certain local error correction properties. Let us start by reviewing the basic concept of a block code. The coding procedure using a block code can be defined as an injective mapping $\gamma:M\rightarrow\Sigma^{n}$, where $M$ is the set of symbols used to represent the non-coded information and $\Sigma$ is the alphabet used to represent the coded information. By the term code we mean the set $C=\gamma(M)\subsetneq\Sigma^{n}$, i.e. the image of $\gamma$. The code is a proper subset of $\Sigma^{n}$ because redundancy must be introduced by the coding process in order for error detection or correction to be possible. By a codeword we mean an element $c\in C$. The length of a codeword, i.e. the block length of the code, is denoted by $n$. The size of the alphabet $\Sigma$ is denoted by $q$ ($q=|\Sigma|$). The dimension $k$ of the code is defined by $k=\log_{q}(|C|)$. This means that if the original information is represented using the same alphabet $\Sigma$ as the coded information, $k$ symbols are needed to represent the non-coded information and we have $M=\Sigma^{k}$. The rate $R$ of a code is given by $R=\frac{k}{n}<1$. A high rate is desirable because then fewer code symbols are needed to convey the coded information. The Hamming distance $\Delta(\mathbf{x},\mathbf{y})$ of two codewords $\mathbf{x},\mathbf{y}\in C$ is defined as the number of positions at which the two codewords differ. The minimum distance $d$ of a code $C$ is defined by $d=\min_{\mathbf{x},\mathbf{y}\in C,\mathbf{x}\neq\mathbf{y}}(\Delta(\mathbf{x},\mathbf{y}))$. A large minimum distance is desirable because then more simultaneous errors can be corrected.
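The distance definitions above translate directly into code; a small Python sketch (the repetition-code example is our own):

```python
from itertools import combinations

def hamming(x, y):
    """Number of positions at which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(x, y))

def min_distance(C):
    """Minimum distance d: minimum Hamming distance over distinct codeword pairs."""
    return min(hamming(x, y) for x, y in combinations(C, 2))

# Toy example: the binary length-3 repetition code, q = 2, k = log_2(2) = 1,
# rate R = 1/3, minimum distance d = 3.
C = [(0, 0, 0), (1, 1, 1)]
```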
In this paper we use an erasure channel model, which means that the potential errors include only erasures: each element of the codeword either stays correct or is erased, and we know what the indices of the erased elements are. In this case, $d-1$ simultaneous errors can always be corrected. Finally, we need the concept of a projection of a code. For any subset $X=\{i_{1},...,i_{m}\}\subseteq[n]$, the projection of the code $C$ to $\Sigma^{|X|}$, denoted by $C_{X}$, is the set $$C_{X}=\{(c_{i_{1}},...,c_{i_{m}}):\mathbf{c}=(c_{1},...,c_{n})\in C\}.$$ The expression $[n]$ for $n\in\mathbb{Z}_{+}$ stands for the set of integers from 1 to $n$, i.e. $[n]=\{y\in\mathbb{Z}_{+}:y\leq n\}$. 2.2 Locally repairable codes A large rate and minimum distance are not always the only design criteria for a good code. With locally repairable codes we are also interested in a certain kind of locality of the error correction, which is described by the parameters $r$ and $\delta$. Let us next give the definition of an $(n,k,d,r,\delta)$-LRC (originally defined in [2]). First we need the notion of an $(r,\delta)$-locality set: Definition 1. When $1\leq r\leq k$ and $\delta\geq 2$, an $(r,\delta)$-locality set of $C$ is a subset $S\subseteq[n]$ such that (i) $$|S|\leq r+\delta-1,$$ (1) (ii) $$l\in S,\;L=\{i_{1},...,i_{|L|}\}\subseteq S\setminus\{l\}\textrm{ and }|L|=|S|-(\delta-1)\Rightarrow\exists f:C_{L}\rightarrow\Sigma\textrm{ such that }f((c_{i_{1}},...,c_{i_{|L|}}))=c_{l}\textrm{ for all }\mathbf{c}\in C.$$ This definition implies that any $\delta-1$ code symbols of a locality set can be recovered from the rest of the symbols of the locality set and that the locality set is at most of size $r+\delta-1$. This also means that any $|S|-(\delta-1)$ code symbols of the locality set can be used to determine the rest of the symbols in the locality set.
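For small codes, Definition 1 can be checked by brute force, using the fact that the recovery function $f$ in (ii) exists exactly when the projections satisfy $|C_{L\cup\{l\}}|=|C_{L}|$; a Python sketch (the parity-check toy code is our own, with 0-based coordinates):

```python
from itertools import combinations

def project(C, X):
    """Projection C_X of the code C to the coordinate set X."""
    idx = sorted(X)
    return {tuple(c[i] for i in idx) for c in C}

def is_locality_set(C, S, r, delta):
    """Brute-force check of Definition 1, using |C_{L+{l}}| == |C_L| for (ii)."""
    if len(S) > r + delta - 1:                      # condition (i)
        return False
    m = len(S) - (delta - 1)
    for l in S:                                     # condition (ii)
        for L in combinations(sorted(S - {l}), m):
            if len(project(C, set(L) | {l})) != len(project(C, set(L))):
                return False
    return True

# Toy code of length 4 over F_2 with one overall parity symbol c3 = c0+c1+c2,
# so the whole coordinate set is a (3,2)-locality set: any one erased symbol
# is recoverable from the other three.
C = {(a, b, c, (a + b + c) % 2) for a in (0, 1) for b in (0, 1) for c in (0, 1)}
```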
The condition (ii) could also be equivalently expressed as either of the following: (ii)' $$l\in S,\;L=\{i_{1},...,i_{|L|}\}\subseteq S\setminus\{l\}\textrm{ and }|L|=|S|-(\delta-1)\Rightarrow|C_{L\cup\{l\}}|=|C_{L}|,$$ (ii)'' $$d(C_{S})\geq\delta\textrm{, where $d(C_{S})$ is the minimum distance of $C_{S}$}.$$ We say that $C$ is a locally repairable code with all-symbol locality $(r,\delta)$ if every code symbol $l\in[n]$ is included in an $(r,\delta)$-locality set. 2.3 The Singleton bound Let us from now on continue our analysis in the context of linear codes. An $(n,k)$-linear code $C$ is a linear subspace of $\mathbb{F}_{q}^{n}$ with dimension $k$. Here $\mathbb{F}_{q}$ is a finite field of size $q$ (a prime power) and acts as the alphabet of the code. If we assume that the original information is presented using the same alphabet $\mathbb{F}_{q}$, we get a linear code via the following encoding function $\gamma:\mathbb{F}_{q}^{k}\rightarrow\mathbb{F}_{q}^{n}$: $$\gamma(\mathbf{x})=\mathbf{x}^{T}\mathbf{G},\quad\textrm{where }\mathbf{x}\in\mathbb{F}_{q}^{k}\textrm{ and }\mathbf{G}\in\mathbb{F}_{q}^{k\times n}.$$ Here $\mathbf{G}$ is called a generator matrix of the linear code. So far we have stated the following criteria that are desirable for any code: a large rate $R=\frac{k}{n}$ and a large minimum distance $d$. These objectives are clearly contradictory, since a good rate implies only little redundancy in the code, whereas having a large minimum distance forces the code to have a lot of redundancy. For linear codes, this tradeoff is described by the Singleton bound: $$d\leq n-k+1.$$ (2) For linear locally repairable codes, we obtain the generalized Singleton bound [2]: $$d\leq n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$ (3) The notation $\lceil\cdot\rceil$ denotes rounding up to the nearest integer.
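The right-hand side of the bound (3) is easy to tabulate; a short Python helper (ours):

```python
from math import ceil

def generalized_singleton(n, k, r, delta):
    """Right-hand side of the generalized Singleton bound (3)."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# With r = k the locality penalty vanishes and (3) reduces to the classical
# Singleton bound (2): d <= n - k + 1.
```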
Similarly we will use $\lfloor\cdot\rfloor$ to denote rounding down to the nearest integer. Note that this is a less strict bound than (2), since every linear code achieving the bound (2) (i.e. satisfying it as an equation) also achieves the bound (3), but a code achieving (3) only achieves (2) if $k=r$ (as we assume $\delta\geq 2$). From now on, we will refer to the generalized Singleton bound (3) merely as the Singleton bound or even as the bound when the meaning is clear from context. When a code satisfies (3) as an equation, we say that the code achieves the bound or that the code is optimal. By optimal we do not mean here optimality in any objective sense. Instead, we could use Pareto optimality as an objective criterion for the favorableness of an LRC. By a Pareto optimal LRC we mean an LRC for which it is impossible to improve one parameter without weakening some other, with a larger $d$, $k$ and $\delta$ as well as a smaller $n$ and $r$ always being more desirable. The Pareto optimal LRCs are a subset of the LRCs with maximal $d$, i.e. the $(n,k,d,r,\delta)$-LRCs for which there exists no LRC with parameters $(n,k,d^{\prime},r,\delta)$ such that $d^{\prime}>d$. The significance of LRCs achieving the bound is that they are Pareto optimal, except for some cases where decreasing $r$ would not alter the value of the right side of inequality (3), or where $\lceil\frac{k}{r}\rceil-1=0$, which enables a suboptimal $\delta$. 3 Matroids Matroids are abstract combinatorial structures that capture a certain mathematical notion of dependence that is common to a surprisingly large number of mathematical entities. For example, a set of vectors along with the concept of linear independence yields a matroid. One possibility to define a matroid is the following definition via independent sets [3]: Definition 2.
A matroid $M=(E,\mathcal{I})$ is a finite set $E$ with a collection of subsets $\mathcal{I}\subseteq\mathcal{P}(E)$ such that (i) $$\emptyset\in\mathcal{I},$$ (4) (ii) $$Y\in\mathcal{I}\textrm{ and }X\subseteq Y\Rightarrow X\in\mathcal{I},$$ (iii) $$X,Y\in\mathcal{I}\textrm{ and }|X|>|Y|\Rightarrow\exists x\in X\setminus Y:\{x\}\cup Y\in\mathcal{I}.$$ Here $\mathcal{P}(E)$ denotes the power set of $E$, i.e. $\mathcal{P}(E)=\{Y:Y\subseteq E\}$. We say that a set $X\subseteq E$ is independent if $X\in\mathcal{I}$, otherwise it is dependent. It is easily verifiable that a set of vectors $E$ along with its linearly independent subsets $\mathcal{I}$ satisfies this definition. We call a matroid that arises from the column vectors of a matrix a matric matroid. Another common class of matroids are those arising from undirected graphs, in which case $E$ is the set of all edges of the graph, and a subset of edges is independent if it does not contain a cycle. The properties in (4) are satisfied for such graphic matroids as well. For the definitions of graph theoretic concepts and additional information on graphs, we refer the reader to [4]. There are various mathematical concepts associated with matroids. These concepts are often analogous to concepts already familiar in the context of structures giving rise to matroids, e.g. the column vectors of a matrix or an undirected graph. Let us start by defining the rank function of a matroid $M$: Definition 3. The rank function $\rho$ of a matroid $M=(E,\mathcal{I})$ is a function $\rho:\mathcal{P}(E)\rightarrow\mathbb{Z}$ satisfying the following for $X\subseteq E$: $$\rho(X)=\max\{|Y|:Y\subseteq X\textrm{ and }Y\in\mathcal{I}\}.$$ For matrices, this concept is analogous to the rank of the matrix formed by the column vectors of $X$. For undirected graphs, the rank is the number of edges in a minimal spanning forest of the subgraph induced by $X$.
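Definition 2 can be verified exhaustively for a small matric matroid; the following Python sketch (construction ours) encodes GF(2) column vectors as integers and brute-forces the augmentation axiom (iii):

```python
from itertools import chain, combinations

def gf2_rank(vectors):
    """Rank over GF(2) of integer-encoded bit vectors (xor-basis elimination)."""
    basis = {}                      # leading bit position -> basis vector
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]    # eliminate the leading bit
            else:
                basis[lead] = v
                break
    return len(basis)

# Ground set E indexed 0..3: the GF(2) columns e1, e2, e1+e2, e3.
E = [0b001, 0b010, 0b011, 0b100]

def independent(X):
    return gf2_rank([E[i] for i in X]) == len(X)

subsets = [set(s) for s in chain.from_iterable(
    combinations(range(len(E)), m) for m in range(len(E) + 1))]
I = [X for X in subsets if independent(X)]

# Axiom (iii): every smaller independent set can be augmented from a larger one.
augmentation = all(any(independent(Y | {x}) for x in X - Y)
                   for X in I for Y in I if len(X) > len(Y))
```

Axioms (i) and (ii) hold by construction here (the empty set has rank 0, and subsets of linearly independent columns remain independent); (iii) is the one worth checking by machine.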
The rank function satisfies the following properties [3]: Theorem 1. Let $\rho$ be the rank function of a matroid $M=(E,\mathcal{I})$. Then for $X,Y\subseteq E$: (i) $$0\leq\rho(X)\leq|X|,$$ (5) (ii) $$X\subseteq Y\Rightarrow\rho(X)\leq\rho(Y),$$ (iii) $$X,Y\subseteq E\Rightarrow\rho(X)+\rho(Y)\geq\rho(X\cup Y)+\rho(X\cap Y),$$ (iv) $$X\in\mathcal{I}\Leftrightarrow\rho(X)=|X|.$$ Equation (5)(iii) is called the semimodular inequality, and it can for instance be viewed as an upper bound for the rank of a union of two sets. It also implies subadditivity for the rank function: we have $\rho(\bigcup_{X\in S}X)\leq\sum_{X\in S}\rho(X)$ for any collection of subsets $S\subseteq\mathcal{P}(E)$. Actually we can use these properties of the rank function as an alternative matroid definition [3]: Definition 4. A matroid $M=(E,\rho)$ is a finite set $E$ together with a function $\rho:\mathcal{P}(E)\rightarrow\mathbb{Z}$ such that it satisfies the conditions (i)-(iii) in (5). In this case, an independent set is defined by condition (iv), and we get the conditions of Definition 2 as a theorem. In this way, these two definitions are equivalent to each other and we can use them interchangeably. There is also a plethora of further ways to define a matroid. Many of these definitions are constructed using propositions that apply to matroid concepts defined below. This variety of definitions is a truly useful property of matroids: a matroid can be identified using any definition, after which we automatically know that the conditions in the other definitions are also satisfied. Let us next define some of these matroid concepts. The nullity of a set $X\subseteq E$ is defined by $\eta(X)=|X|-\rho(X)$. A circuit is a dependent set $X\subseteq E$ all of whose proper subsets are independent, i.e. $\rho(X\setminus\{x\})=\rho(X)=|X|-1$ for every $x\in X$.
This concept is perhaps most easily understood in the context of graphic matroids, since there a set of edges is a circuit if and only if it is a cycle of the undirected graph. We denote the set of circuits of a matroid by $\mathcal{C}(M)$. The closure of a set $X\subseteq E$ is defined by $\textrm{cl}(X)=\{x\in E:\rho(X\cup\{x\})=\rho(X)\}$. In terms of a matric matroid, the closure $\textrm{cl}(X)$ consists of all vectors of $E$ that are in the span of the vectors in $X$. For a graphic matroid, the closure is obtained by adding all the edges of $E$ whose endpoints are connected by a path using the edges in $X$, as well as the edges whose two endpoints are the same vertex. A set $X\subseteq E$ is cyclic if it is a union of circuits. Equivalently, a cyclic set is a set $X$ such that $\forall x\in X:\rho(X\setminus\{x\})=\rho(X)$. For matric matroids, this means that $X$ includes no vector that is not in the span of the rest of the vectors. We denote the set of cyclic sets of a matroid by $\mathcal{U}(M)$. A set $X\subseteq E$ is a flat if $X=\textrm{cl}(X)$. A cyclic flat is a flat that is also a cyclic set, i.e. a union of circuits. The cyclic flats of a matroid have the property that they form a finite lattice $\mathcal{Z}$ with the following meet and join for $X,Y\in\mathcal{Z}$ [5]: $$\displaystyle X\wedge Y=\bigcup_{C\in\mathcal{C}(M):\,C\subseteq X\cap Y}C,$$ $$\displaystyle X\vee Y=\textrm{cl}(X\cup Y).$$ The set of the atoms of the lattice is denoted by $A_{\mathcal{Z}}$ and the set of the coatoms by $coA_{\mathcal{Z}}$. We refer the reader unfamiliar with partial orders and order-theoretic lattices to [1] for a minimal background or to [6] for a more comprehensive exposition. Another way to define a matroid is via this lattice of cyclic flats. In fact, this viewpoint is very useful for us, since we will later use the lattice of cyclic flats as a tool for constructing and analyzing matroids that correspond to good LRCs.
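For a small ground set, closures, cyclic sets and cyclic flats can be enumerated by brute force straight from these definitions. Below is a toy sketch for a matric matroid over GF(2); the matrix and all names are our own illustration, not from the paper:

```python
from itertools import combinations

# Columns over GF(2), encoded as row bitmasks. In this toy example
# column 2 equals column 0 + column 1, and column 4 equals column 3,
# so {0, 1, 2} and {3, 4} are circuits.
cols = [0b001, 0b010, 0b011, 0b100, 0b100]
E = range(len(cols))

def rank(X):
    """GF(2) rank of the columns indexed by X (xor-basis elimination)."""
    basis = {}                      # pivot bit -> basis vector
    for i in X:
        v = cols[i]
        while v:
            p = v.bit_length() - 1
            if p not in basis:
                basis[p] = v
                break
            v ^= basis[p]
    return len(basis)

def closure(X):
    return {x for x in E if rank(set(X) | {x}) == rank(X)}

def is_cyclic(X):
    return all(rank(set(X) - {x}) == rank(X) for x in X)

cyclic_flats = [set(X) for r in range(len(cols) + 1)
                for X in combinations(E, r)
                if closure(X) == set(X) and is_cyclic(X)]
print(cyclic_flats)   # [set(), {3, 4}, {0, 1, 2}, {0, 1, 2, 3, 4}]
```

In this example $0_{\mathcal{Z}}=\emptyset$, $1_{\mathcal{Z}}=E$, and the two circuits are exactly the atoms $A_{\mathcal{Z}}$ of the lattice.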
The associated axioms are presented in the following theorem, where $0_{\mathcal{Z}}$ denotes the least element and $1_{\mathcal{Z}}$ the greatest element of the finite lattice $\mathcal{Z}$: Theorem 2. ([5]) Let $\mathcal{Z}\subseteq\mathcal{P}(E)$ and let $\rho$ be a function $\rho:\mathcal{Z}\rightarrow\mathbb{Z}$. There is a matroid $M$ on $E$ for which $\mathcal{Z}$ is the set of cyclic flats and $\rho$ is the rank function restricted to the sets in $\mathcal{Z}$ if and only if (Z0) $$\mathcal{Z}\textrm{ is a lattice under inclusion,}$$ (6) (Z1) $$\displaystyle\rho(0_{\mathcal{Z}})=0,$$ (Z2) $$\displaystyle X,Y\in\mathcal{Z}\textrm{ and }X\subsetneq Y\Rightarrow 0<\rho(Y)-\rho(X)<|Y|-|X|,$$ (Z3) $$\displaystyle X,Y\in\mathcal{Z}\Rightarrow\rho(X)+\rho(Y)\geq\rho(X\vee Y)+\rho(X\wedge Y)+|(X\cap Y)\setminus(X\wedge Y)|.$$ The restriction of $M$ to $X$ is the matroid $M|X=(X,\rho_{|X})$ with $\rho_{|X}(Y)=\rho(Y)$ for all subsets $Y\subseteq X$. This is clearly a matroid, since $\rho$ satisfying the conditions (i)-(iii) in (5) implies that $\rho_{|X}$ satisfies them as well. Moreover, a subset of $X$ is independent in $M|X$ exactly when it is independent in the original matroid $M$. For each matroid $M=(E,\rho)$ there is a dual matroid $M^{*}=(E,\rho^{*})$ defined by $\rho^{*}(X)=\rho(E\setminus X)+|X|-\rho(E)$ (which can be proved by checking the conditions (i)-(iii) in (5)). This means that a subset $X\subseteq E$ is independent in $M^{*}$ if and only if $\rho(E\setminus X)=\rho(E)$, i.e. $X\subseteq\textrm{cl}(E\setminus X)$ in $M$. The circuits of $M^{*}$ are the minimal sets $C^{*}\subseteq E$ such that $\rho(E\setminus C^{*})<\rho(E)$. Lastly, we need the concept of a uniform matroid. An $(n,k)$-uniform matroid is a matroid $M=(E,\mathcal{I})$ for which $|E|=n$ and a set $X\subseteq E$ is independent if and only if $|X|\leq k$.
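The dual rank formula and the description of the circuits of $M^{*}$ can be sanity-checked on a uniform matroid, whose rank function can be written down directly. A small sketch; the parameters $n=5$, $k=3$ are our own choice for illustration:

```python
from itertools import combinations

n, k = 5, 3
E = set(range(n))

def rho(X):
    """Rank function of the (n, k)-uniform matroid: min(|X|, k)."""
    return min(len(X), k)

def rho_dual(X):
    """Dual rank: rho*(X) = rho(E \\ X) + |X| - rho(E)."""
    return rho(E - set(X)) + len(X) - rho(E)

# Circuits of the dual: minimal sets C* with rho(E \ C*) < rho(E).
dual_circuits = [set(C) for r in range(n + 1) for C in combinations(E, r)
                 if rho(E - set(C)) < rho(E)
                 and all(rho(E - (set(C) - {x})) == rho(E) for x in C)]
print({len(C) for C in dual_circuits})   # {3}: every dual circuit has size n-k+1
```

Every dual circuit of the $(5,3)$-uniform matroid has size $n-k+1=3$, consistent with the dual being the $(5,2)$-uniform matroid.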
Defined via the rank function, a matroid is $(n,k)$-uniform if $|E|=n$ and $\rho(X)=\min\{|X|,k\}$ for all $X\subseteq E$. Note that $\rho(E)=k$. An $(n,k)$-uniform matroid is obtained for instance as the matric matroid of $n$ randomly chosen uniformly distributed vectors in $\{x\in\mathbb{R}^{k}:||x||<a\}$ with $0<a\in\mathbb{R}$, where $||\cdot||$ denotes the Euclidean norm. In this case, we get the desired matroid with probability 1. 4 Almost affine LRCs and their connection to matroids The results in this paper apply to a class of codes called almost affine codes, defined as follows: Definition 5. A code $C\subseteq\Sigma^{n}$, where $\Sigma$ is a finite set of size $s\geq 2$, is almost affine if for each $X\subseteq[n]$: $$\log_{s}(|C_{X}|)\in\mathbb{Z},$$ where $C_{X}$ denotes the projection of $C$ to the coordinates indexed by $X$. The projections $C_{X}$ of an almost affine code $C$ are also almost affine. Linear codes are a special case of almost affine codes. The following theorem is the basis for our application of matroid theory to find good LRCs: Theorem 3. ([7]) Every almost affine code $C\subseteq\Sigma^{n}$ with $s=|\Sigma|$ induces a matroid $M_{C}=([n],\rho_{C})$, where $$\rho_{C}(X)=\log_{s}(|C_{X}|).$$ (7) However, not every matroid is induced by an almost affine code. Examples of matroids not obtainable from almost affine codes are presented in [8]. Later in this paper (Theorem 9) we present a subclass of matroids for which it is shown that there exists a corresponding almost affine code. Note that according to this theorem, the matroid induced by a linear code is the matric matroid induced by the columns of its generator matrix. 4.1 Parameters $(n,k,d,r,\delta)$ as matroid invariants The remarkable theorem that allows us to analyze the parameters $(n,k,d,r,\delta)$ of an LRC via its associated matroid is the following [1]: Theorem 4. ([1]) Let $C$ be an almost affine LRC with the associated matroid $M_{C}=([n],\rho_{C})$.
Then the parameters $(n,k,d,r,\delta)$ of $C$ are matroid invariants, where (i) $$\displaystyle k=\rho_{C}([n]),$$ (8) (ii) $$\displaystyle d=\min\{|X|:X\in\mathcal{C}(M_{C}^{*})\},$$ (iii) $C$ has all-symbol locality $(r,\delta)$ if and only if for every $j\in[n]$ there exists a subset $S_{j}\subseteq[n]$ such that $$\displaystyle\textrm{a) }j\in S_{j},$$ $$\displaystyle\textrm{b) }|S_{j}|\leq r+\delta-1,$$ $$\displaystyle\textrm{c) }d(C_{S_{j}})=\min\{|X|:X\in\mathcal{C}((M_{C}|S_{j})^{*})\}\geq\delta,$$ or the equivalent condition to c) that $$\textrm{c') for all }L\subseteq S_{j}\textrm{ with }|L|=|S_{j}|-(\delta-1)\textrm{ and all }l\in S_{j}\setminus L\textrm{, we have }\rho_{C}(L\cup\{l\})=\rho_{C}(L).$$ We will not give a complete proof here, but only the main ideas of why this result holds. For $n$, the claims $[n]=E$ and $n=|E|$ follow directly from the definitions in Theorem 3. The result $k=\rho_{C}([n])$ follows straight from (7), since by choosing $X=[n]$ the right side of the equation is the same as the definition of $k$. In [7] it is proven that $d$ equals the minimum size of a circuit of the dual matroid $M_{C}^{*}$ and that $M_{C_{X}}=M_{C}|X$ for $X\subseteq[n]$. Because $C_{X}$ is also almost affine, it follows that $$d(C_{X})=\min\{|Y|:Y\in\mathcal{C}((M_{C}|X)^{*})\},$$ where $d(C_{X})$ denotes the minimum distance of $C_{X}$. An equivalent condition to condition (iii) c) is that for every set $X\subseteq S_{j}$ for which $|X|\leq\delta-1$, we have $\rho(S_{j}\setminus X)=\rho(S_{j})$, according to our considerations on dual matroids in Section 3. Condition c') is easily seen to be equivalent to this. Now we can view the above results as definitions of the parameters $(n,k,d,r,\delta)$ for matroids. From the viewpoint of the lattice of cyclic flats the parameters are obtained as follows: Theorem 5. ([1]) Let $M=(E,\rho)$ be a matroid with $0<\rho(E)$ and $1_{\mathcal{Z}}=E$.
Then $$\displaystyle(i)\quad n=|1_{\mathcal{Z}}|,$$ (9) $$\displaystyle(ii)\quad k=\rho(1_{\mathcal{Z}}),$$ $$\displaystyle(iii)\quad d=n-k+1-\max\{\eta(Z):Z\in coA_{\mathcal{Z}}\},$$ (iv) $M$ has locality $(r,\delta)$ if and only if for each $x\in E$ there exists a cyclic set $S_{x}\in\mathcal{U}(M)$ such that $$\displaystyle\textrm{a) }x\in S_{x},$$ $$\displaystyle\textrm{b) }|S_{x}|\leq r+\delta-1,$$ $$\displaystyle\textrm{c) }d(M|S_{x})=\eta(S_{x})+1-\max\{\eta(Z):Z\in coA_{\mathcal{Z}(M|S_{x})}\}\geq\delta.$$ The only non-trivial parts of this theorem are the expressions for $d$ in (iii) and (iv) c). From Theorem 4 we know that $d$ equals the size of a minimal circuit of the dual matroid, i.e. the size of a minimal set $X\subseteq E$ for which $\rho(E\setminus X)<\rho(E)$. The problem of finding $d$ is thus reduced to finding a maximal $Y=E\setminus X$ such that $Y$ does not have full rank. In [1] the following result is proved: Lemma 1. If $\rho(X)<\rho(E)$ and $1_{\mathcal{Z}}=E$, then $\eta(X)\leq\max\{\eta(Z):Z\in coA_{\mathcal{Z}}\}$. Let us examine the set $Y^{\prime}$ that we get by taking a coatom $Z_{max}$ of maximal nullity and adding elements to it until it reaches rank $\rho(E)-1$. Every element added to $Z_{max}$ increases the rank: adding an element always increases either the rank or the nullity by one, and by Lemma 1 the nullity cannot grow beyond $\eta(Z_{max})$ while the rank stays below $\rho(E)$. Thus we have $|Y^{\prime}|=\rho(Y^{\prime})+\eta(Y^{\prime})=\rho(E)-1+\eta(Z_{max})$. Using Lemma 1 again, we notice that $Y^{\prime}$ has the maximal size of a set with non-full rank. Thus we have $d=|E|-|Y^{\prime}|=n-k+1-\max\{\eta(Z):Z\in coA_{\mathcal{Z}}\}$. 4.2 The generalized Singleton bound for matroids It turns out that the generalized Singleton bound (3) applies to matroids as well [1]. Let us next present the main ideas of how this bound is obtained. First we need the two following lemmas: Lemma 2.
([1]) Let $M=(E,\rho)$ be a matroid with parameters $(n,k,d,r,\delta)$ and let $\{S_{x}\}_{x\in E}$ be a collection of cyclic sets of $M$ for which the conditions (a)-(c) in Theorem 5 are satisfied. Then there is a subset of cyclic sets $\{S_{j}\}_{j\in[m]}$ of $\{S_{x}\}_{x\in E}$ such that for $Y_{j}=\textrm{cl}(Y_{j-1}\cup S_{j})=Y_{j-1}\vee\textrm{cl}(S_{j})$, where $j=1,...,m$, we have (i) $$\displaystyle C:0_{\mathcal{Z}}=Y_{0}\subsetneq Y_{1}\subsetneq...\subsetneq Y_{m}=E\textrm{ is a chain in $(\mathcal{Z}(M),\subseteq)$},$$ (10) (ii) $$\displaystyle\rho(Y_{j})-\rho(Y_{j-1})\leq r,$$ (iii) $$\displaystyle\eta(Y_{j})-\eta(Y_{j-1})\geq\delta-1.$$ A sketch of the proof is the following: There is a subset $\{S_{j}\}_{j\in[m]}$ satisfying condition (i), as $\bigcup_{x\in E}S_{x}=E$. The semimodular inequality (5)(iii) implies that $\rho(Y_{j})\leq\rho(Y_{j-1})+\rho(S_{j})$, since $\rho(Y_{j})=\rho(Y_{j-1}\cup S_{j})$. Together with $\rho(S_{j})\leq r$ this gives condition (ii). We have $|S_{j}\setminus Y_{j-1}|\geq\delta$, since otherwise the set $X=S_{j}\setminus Y_{j-1}$ would satisfy $|X|\leq\delta-1$, while the locality property of $S_{j}$ gives $X\subseteq\textrm{cl}(S_{j}\setminus X)\subseteq\textrm{cl}(Y_{j-1})=Y_{j-1}$ (because $S_{j}\setminus X\subseteq Y_{j-1}$ and $Y_{j-1}$ is a flat), contradicting $X\cap Y_{j-1}=\emptyset$. Now condition (iii) follows because for any $X\subseteq S_{j}\setminus Y_{j-1}$ with $|X|=\delta-1$ we have $X\subseteq\textrm{cl}(Y_{j}\setminus X)$. Lemma 3. ([1]) Let $M=(E,\rho)$ be a matroid with parameters $(n,k,d,r,\delta)$ and let $C:0_{\mathcal{Z}}=Y_{0}\subsetneq Y_{1}\subsetneq...\subsetneq Y_{m}=E$ be any chain of $(\mathcal{Z}(M),\subseteq)$ as given in Lemma 2.
Then we have $$d\leq n-k+1-\eta(Y_{m-1})\quad\textrm{and}\quad m\geq\left\lceil\frac{k}{r}\right\rceil.$$ The second inequality follows from condition (ii) in Lemma 2, which, summed over $j$ together with $\rho(S_{i})\leq r$ for every locality set $S_{i}$, implies $k=\rho(E)\leq mr$. The first inequality follows from Theorem 5 (iii) and Lemma 1. Combining Lemma 2 and Lemma 3 we now get the generalized Singleton bound for matroids: Theorem 6. ([1]) Let $M=(E,\rho)$ be an $(n,k,d,r,\delta)$-matroid. Then $$d\leq n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$ (11) Proof. By Lemma 3 we have $d\leq n-k+1-\eta(Y_{m-1})$. From Lemma 2 (iii) and $m\geq\left\lceil\frac{k}{r}\right\rceil$ in Lemma 3 it now follows that $\eta(Y_{m-1})\geq(m-1)(\delta-1)\geq\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1)$, which yields the desired result. ∎ When a matroid satisfies (11) with equality, we say that the matroid achieves the bound, or that the matroid is optimal. 5 A structure theorem The rest of this paper focuses on finding matroids that achieve the generalized Singleton bound, or at least come close to it. A good starting point is the following theorem, which gives a set of necessary structural properties for a matroid to achieve the bound. We say that a collection of sets $X_{1},X_{2},...,X_{j}$ has a nontrivial union if $$X_{l}\nsubseteq\bigcup_{i\in[j]\setminus\{l\}}X_{i}\textrm{ for }l\in[j].$$ Theorem 7. Let $M=(E,\rho)$ be an $(n,k,d,r,\delta)$-matroid with $r<k$ and $$d=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$ Also, let $\{S_{x}:x\in E\}\subseteq\mathcal{U}(M)$ be a collection of cyclic sets for which the conditions (a)-(c) in Theorem 5 are satisfied.
Then (i) $$\displaystyle 0_{\mathcal{Z}}=\emptyset,$$ (ii) $$\displaystyle\textrm{for each }x\in E,$$ $$\displaystyle\textrm{a) }\eta(S_{x})=(\delta-1),$$ $$\displaystyle\textrm{b) }S_{x}\textrm{ is a cyclic flat and }\mathcal{Z}(M|S_{x})=\{X\in\mathcal{Z}(M):X\subseteq S_{x}\}=\{\emptyset,S_{x}\},$$ (iii) For each collection $F_{1},...,F_{j}$ of cyclic flats in $\{S_{x}:x\in E\}$ that has a nontrivial union, $$\displaystyle\textrm{c) }\eta(\bigvee_{i=1}^{j}F_{i})=\begin{cases}j(\delta-1)&\textrm{if }j<\lceil\frac{k}{r}\rceil,\\ n-k\geq\lceil\frac{k}{r}\rceil(\delta-1)&\textrm{if }j\geq\lceil\frac{k}{r}\rceil,\end{cases}$$ $$\displaystyle\textrm{d) }\bigvee_{i=1}^{j}F_{i}=\begin{cases}\bigcup_{i=1}^{j}F_{i}&\textrm{if }j<\lceil\frac{k}{r}\rceil,\\ E&\textrm{if }j\geq\lceil\frac{k}{r}\rceil,\end{cases}$$ $$\displaystyle\textrm{e) }\rho(\bigvee_{i=1}^{j}F_{i})=\begin{cases}|\bigcup_{i=1}^{j}F_{i}|-j(\delta-1)&\textrm{if }j<\lceil\frac{k}{r}\rceil,\\ k&\textrm{if }j\geq\lceil\frac{k}{r}\rceil,\end{cases}$$ $$\displaystyle\textrm{f) }|F_{j}\cap\bigcup_{i=1}^{j-1}F_{i}|\leq|F_{j}|-\delta\textrm{ if }j\leq\left\lceil\frac{k}{r}\right\rceil.$$ The following is an outline of the proof: Consider a chain $C:0_{\mathcal{Z}}=Y_{0}\subsetneq Y_{1}\subsetneq...\subsetneq Y_{m}=E$ in $(\mathcal{Z}(M),\subseteq)$ as earlier. In the process of proving Theorem 6 we obtained that for every such chain we have $$\displaystyle d\leq n-k+1-\eta(Y_{m-1})\leq n-k+1-(m-1)(\delta-1)\leq n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$ (12) Thus in order to achieve the bound we must have $\eta(Y_{m-1})=(m-1)(\delta-1)$ for every chain $C$, which together with Lemma 2 (iii) implies $\eta(\bigvee_{i=0}^{j}Y_{i})=j(\delta-1)$ for $j<m$.
This in turn implies $0_{\mathcal{Z}}=\emptyset$, $\eta(S_{x})=(\delta-1)$, and that $S_{x}$ is a cyclic flat for every $x\in E$, as well as condition (iii) c) together with $m=\lceil\frac{k}{r}\rceil$, which follows from equation (12). The result $\mathcal{Z}(M|S_{x})=\{\emptyset,S_{x}\}$ is required for $\eta(S_{x})=(\delta-1)$ to be possible, since otherwise not every $X\subseteq S_{x}$ with $|X|=\rho(S_{x})$ would have $\rho(X)=\rho(S_{x})$. Note that this also implies that $M|S_{x}$ is a uniform matroid and that $S_{x}$ is an atom of the lattice of cyclic flats. Conditions d) and e) are a direct consequence of c) and of the fact that always $m=\lceil\frac{k}{r}\rceil$. If (iii) f) were not true for a locality set $F_{j}$, we would have $\textrm{cl}(\bigcup_{i=1}^{j-1}F_{i})=\bigcup_{i=1}^{j}F_{i}$, which would contradict (iii) c). 6 Matroid constructions In this chapter we give some explicit matroid constructions first introduced in [1]. We later use these constructions to prove existence results for matroids with a large $d$. Construction 1 gives a class of matroids that is beneficial in the sense that the cyclic flats of its matroids have high rank and minimal size, which implies that the coatoms have small nullity. This in turn means that the matroids from this construction have a large $d$. They also have a simple structure, which makes analyzing them easier. Construction 1: Let $F_{1},...,F_{m}$ be a collection of subsets of a finite set $E$ and let us denote $F_{I}=\bigcup_{i\in I}F_{i}$ for $I\subseteq[m]$.
Let $k$ be a positive integer and let $\rho$ be a function $\rho:\{F_{i}\}_{i\in[m]}\rightarrow\mathbb{Z}$ such that (i) $$\{F_{i}\}_{i\in[m]}\textrm{ has a nontrivial union, with }F_{[m]}=E,$$ (13) (ii) $$\displaystyle 0<\rho(F_{i})<|F_{i}|\textrm{ for every }i\in[m],$$ (iii) There exists $I\subseteq[m]$ such that $|F_{I}|-\sum_{i\in I}\eta(F_{i})\geq k$, (iv) If $F_{I}\in\mathcal{Z}_{<k}$ and $j\in[m]\setminus I$, then $|F_{I}\cap F_{j}|<\rho(F_{j})$, (v) If $F_{I},F_{J}\in\mathcal{Z}_{<k}$ and $F_{I\cup J}\notin\mathcal{Z}_{<k}$, then $|F_{I\cup J}|-\sum_{t\in I\cup J}\eta(F_{t})\geq k$, where we define $\eta(F_{i})=|F_{i}|-\rho(F_{i})$ for $i\in[m]$ and $$\mathcal{Z}_{<k}=\{F_{J}:J\subseteq[m]\textrm{ and }|F_{I}|-\sum_{i\in I}\eta(F_{i})<k\textrm{ for all }I\subseteq J\}.$$ Now, given a collection of subsets $F_{1},...,F_{m}$ of $E$, an integer $k$ and a function $\rho$ that satisfy the conditions (i)-(v), we extend $\rho$ to a function on $\mathcal{Z}=\mathcal{Z}_{<k}\cup\{E\}$ by $$\rho(X)=\begin{cases}|F_{I}|-\sum_{i\in I}\eta(F_{i})&\textrm{ if }X=F_{I}\in\mathcal{Z}_{<k},\\ k&\textrm{ if }X=E.\end{cases}$$ (14) Theorem 8. ([1]) Let $F_{1},...,F_{m}$ be a collection of subsets of a finite set $E$, $k$ a positive integer and $\rho:\{F_{i}\}_{i\in[m]}\rightarrow\mathbb{Z}$ a function such that the conditions (i)-(v) of Construction 1 are satisfied.
Then $\mathcal{Z}$ and $\rho:\mathcal{Z}\rightarrow\mathbb{Z}$, defined in (14), define an $(n,k,d,r,\delta)$-matroid $M(F_{1},...,F_{m};k;\rho)$ on $E$ for which $\mathcal{Z}$ is the collection of cyclic flats, $\rho$ is the rank function restricted to the cyclic flats, and $$\displaystyle(i)\quad n=|E|,$$ $$\displaystyle(ii)\quad k=\rho(E),$$ $$\displaystyle(iii)\quad d=n-k+1-\max\{\sum_{i\in I}\eta(F_{i}):F_{I}\in\mathcal{Z}_{<k}\},$$ $$\displaystyle(iv)\quad\delta-1=\min_{i\in[m]}\{\eta(F_{i})\},$$ $$\displaystyle(v)\quad r=\max_{i\in[m]}\{\rho(F_{i})\}.$$ For each $i\in[m]$, any subset $S\subseteq F_{i}$ with $|S|=\rho(F_{i})+\delta-1$ is a locality set of the matroid. Theorem 9 gives a subclass of matroids obtainable from Construction 1, for which it is proved in [1] that the matroids correspond to almost affine LRCs. This result, given in Theorem 10, is required in order to prove existence results on almost affine LRCs using matroids. The only additional requirement in Theorem 9 compared to Construction 1 is that the manner in which the atoms $F_{i}$ can intersect with each other is more restricted, as determined by condition (iv). Theorem 9. ([1]) Let $F_{1},...,F_{m}$ be a collection of subsets of a finite set $E$, $k$ a positive integer and $\rho:\{F_{i}\}_{i\in[m]}\rightarrow\mathbb{Z}$ a function such that $$\displaystyle(i)\quad 0<\rho(F_{i})<|F_{i}|\textrm{ for }i\in[m],$$ (15) $$\displaystyle(ii)\quad F_{[m]}=E,$$ $$\displaystyle(iii)\quad k\leq|F_{[m]}|-\sum_{i\in[m]}\eta(F_{i}),$$ $$\displaystyle(iv)\quad|F_{[m]\setminus\{j\}}\cap F_{j}|<\rho(F_{j})\textrm{ for all }j\in[m].$$ Then $F_{1},...,F_{m}$, $k$ and $\rho$ define a matroid $M(F_{1},...,F_{m};k;\rho)$ as given in Theorem 8. Theorem 10. ([1]) Let $M(F_{1},...,F_{m};k;\rho)$ be an $(n,k,d,r,\delta)$-matroid that we get from Theorem 9.
Then for every large enough field there is a linear LRC over the field with parameters $(n,k,d,r,\delta)$. Theorem 10 is proved in [1] by first constructing a directed graph supporting a gammoid isomorphic to the matroid. (See for instance [3] for an exposition of gammoids.) The proof is completed by a result stating that every finite gammoid is representable over every large enough finite field, which is proved in [9]. The following graph construction was introduced in [1]. The matroids it yields form a subclass of those obtainable from Theorem 9. Its purpose is to give a tool for designing matroids with a large $d$. The main idea is to restrict the manner in which the atoms $F_{i}$ can share elements in such a way that the matroid can be unambiguously described by a weighted undirected graph, together with information on the rank and nullity of each atom. The vertices correspond to atoms, and the weights of the edges tell how many elements are shared by the corresponding atoms. Graph construction 1 Let $G=G(\alpha,\beta,\gamma;k,r,\delta)$ be a graph with vertices $[m]$ and edges $W$, where $\alpha,\beta$ are two functions $[m]\rightarrow\mathbb{Z}$, $\gamma$ is a function $W\rightarrow\mathbb{Z}$ and $(k,r,\delta)$ are three integers with $0<r<k$ and $\delta\geq 2$, such that $$\displaystyle\begin{split}&\displaystyle\textrm{(i)}\quad\textrm{$G$ is a graph with no 3-cycles,}\\ &\displaystyle\textrm{(ii)}\quad 0\leq\alpha(i)\leq r-1\textrm{ for }i\in[m],\\ &\displaystyle\textrm{(iii)}\quad\beta(i)\geq 0\textrm{ for }i\in[m],\\ &\displaystyle\textrm{(iv)}\quad\gamma(w)\geq 1\textrm{ for }w\in W,\\ &\displaystyle\textrm{(v)}\quad k\leq rm-\sum_{i\in[m]}\alpha(i)-\sum_{w\in W}\gamma(w),\\ &\displaystyle\textrm{(vi)}\quad r-\alpha(i)>\sum_{w=\{i,j\}\in W}\gamma(w)\textrm{ for }i\in[m].\end{split}$$ (16) Theorem 11. ([1]) Let $G(\alpha,\beta,\gamma;k,r,\delta)$ be a graph on $[m]$ such that the conditions (i)-(vi) given in (16) are satisfied.
Then there is an $(n,k,d,r,\delta)$-matroid $M(F_{1},...,F_{m};k;\rho)$ as given in Theorem 9 with $$\displaystyle\begin{split}&\displaystyle\textrm{(i)}\quad n=(r+\delta-1)m-\sum_{i\in[m]}\alpha(i)+\sum_{i\in[m]}\beta(i)-\sum_{w\in W}\gamma(w),\\ &\displaystyle\textrm{(ii)}\quad d=n-k+1-\max_{I\in V_{<k}}\{(\delta-1)|I|+\sum_{i\in I}\beta(i)\},\end{split}$$ (17) where $$V_{<k}=\{I\subseteq[m]:r|I|-\sum_{i\in I}\alpha(i)-\sum_{i,j\in I,w=\{i,j\}\in W}\gamma(w)<k\}.$$ 7 The maximal value of $d$ for $(n,k,r,\delta)$-matroids Let us denote the largest possible $d$ for a matroid with the parameters $(n,k,r,\delta)$ by $d_{max}=d_{max}(n,k,r,\delta)$. In this chapter, we review the results in [1] on $d_{max}$ and on matroid constructions that yield matroids with large $d$. We also present two improvements to these results. The complete function $d_{max}$ is unknown, but in [1] two kinds of results on it are presented: Firstly, it is proved that for some classes of parameters $(n,k,r,\delta)$ the generalized Singleton bound (11) can be reached and for some it cannot. Secondly, a general lower bound for $d_{max}$ is derived. Existence results are proved by using the matroid constructions from the previous chapter to construct matroids with a desired $d$. Non-existence results for optimal matroids are proved using Theorem 7. As new results, we will extend one class of parameters for which the bound can be achieved, as well as give an improved lower bound for $d_{max}$. First we need a result on which parameter sets are possible for an $(n,k,d,r,\delta)$-matroid: Theorem 12. ([1]) Let $M=(E,\rho)$ be an $(n,k,r,\delta)$-matroid. Then we have $$k\leq n-\left\lceil\frac{k}{r}\right\rceil(\delta-1).$$ (18) Proof. The inequality can be equivalently expressed as $\left\lceil\frac{k}{r}\right\rceil(\delta-1)\leq\eta(E)$. The left side can be seen to be a lower bound for $\eta(E)$ by Lemma 2 (iii) and $m\geq\left\lceil\frac{k}{r}\right\rceil$ in Lemma 3.
∎ We also have $\delta\geq 2$, since $\delta=1$ would allow independent locality sets, which would make local error correction impossible. Let us now review the results on $d_{max}$ in [1] and present the two improvements. We will start by stating the results on when the bound can be achieved, after which we will consider a general lower bound for $d_{max}$. 7.1 Achievability of the generalized Singleton bound When $r=k$, exactly the $(n,k)$-uniform matroid achieves the bound. In this case the bound simplifies to $d\leq n-k+1$. From Theorem 5 (iii) we see that the bound is achieved if and only if $\mathcal{Z}=\{\emptyset,E\}$, which is satisfied exactly by uniform matroids. Moreover, uniform matroids allow us to choose the required $(r,\delta)$-locality sets. The same uniform matroid that we used for $r=k$ is also valid for $r>k$ with the same $k$, although choosing such an $r$ has no practical use whatsoever. For the rest of the discussion, we will consider the case $r<k$. Let $m$ denote the number of atoms of a matroid. In order to achieve the bound, we must have $m\geq\lceil\frac{n}{r+\delta-1}\rceil$, since otherwise we would have an atom $F_{i}$ with $|F_{i}|>r+\delta-1$, and condition (ii) in Theorem 7 would not be satisfied. Let us now introduce two important constants for an $(n,k,d,r,\delta)$-matroid: $$\displaystyle a=\left\lceil\frac{k}{r}\right\rceil r-k,$$ (19) $$\displaystyle b=\left\lceil\frac{n}{r+\delta-1}\right\rceil(r+\delta-1)-n.$$ (20) We notice that $\lceil\frac{k}{r}\rceil r$, denoted by $k_{max}$, gives the largest possible rank of a union of $\lceil\frac{k}{r}\rceil$ atoms, i.e. of $F_{T}$ with $|T|=\lceil\frac{k}{r}\rceil$. Also, $\lceil\frac{n}{r+\delta-1}\rceil(r+\delta-1)$, denoted by $n_{max}$, gives the largest possible size of an optimal matroid with $\lceil\frac{n}{r+\delta-1}\rceil$ atoms.
We get this kind of a matroid from Theorem 9 by choosing $\lceil\frac{n}{r+\delta-1}\rceil$ disjoint sets $F_{i}$, each of size $r+\delta-1$, and the parameter $k$ as $k_{max}$. We call such a matroid a broad matroid. To any matroid with the parameters $(n,k,d,r,\delta)$ we can associate a broad matroid, which has the parameters $(n_{max},k_{max},d_{opt},r,\delta)$, where $d_{opt}$ is always such that the broad matroid achieves the bound. The constants $a$ and $b$ have illustrative interpretations in the context of optimal matroids: $b$ tells us how much smaller the matroid is compared to the size of the corresponding broad matroid. For an interpretation of $a$, remember that an optimal matroid must have $\rho(F_{T})=k$ for every $T\subseteq[m]$ with $|T|=\lceil\frac{k}{r}\rceil$. Thus $a$ tells how much smaller the rank of such a union of atoms may be, compared to that of a broad matroid, for the original matroid to still be optimal. For matroids from Theorem 9 there exists an even more useful interpretation of $a$, described by Lemma 4 below. Note that for these matroids, the rank function restricted to cyclic flats, $\rho:\mathcal{Z}\rightarrow\mathbb{Z}$, can be expressed as $\rho(X)=\min\{\rho^{\prime}(X),k\}$, where $\rho^{\prime}(F_{I})=|F_{I}|-\sum_{i\in I}\eta(F_{i})$ for $I\subseteq[m]$. Lemma 4. An $(n,k,d,r,\delta)$-matroid from Theorem 9 has $d=n-k+1-(\left\lceil\frac{k}{r}\right\rceil-1)(\delta-1)$ if and only if (i) $$\displaystyle|F_{T}|\geq\left\lceil\frac{k}{r}\right\rceil(r+\delta-1)-a\textrm{ for each }T\subseteq[m]\textrm{ with }|T|=\left\lceil\frac{k}{r}\right\rceil,$$ (21) (ii) $$\displaystyle\eta(F_{i})=\delta-1\textrm{ for each }i\in[m].$$ Proof. Let us first prove that the conditions are sufficient.
For each $T\subseteq[m]$ with $|T|=\left\lceil\frac{k}{r}\right\rceil$, we have $$\begin{split}&\displaystyle\left\lceil\frac{k}{r}\right\rceil r-k=a\\ &\displaystyle\geq\left\lceil\frac{k}{r}\right\rceil(r+\delta-1)-|F_{T}|\\ &\displaystyle=\left\lceil\frac{k}{r}\right\rceil r-(|F_{T}|-\sum_{i\in T}\eta(F_{i})).\end{split}$$ Thus we have $\rho^{\prime}(F_{T})=|F_{T}|-\sum_{i\in T}\eta(F_{i})\geq k$ and $\max_{F_{I}\in\mathcal{Z}_{<k}}\{|I|\}\leq\lceil\frac{k}{r}\rceil-1$. This in turn implies $\max\{\sum_{i\in I}\eta(F_{i}):F_{I}\in\mathcal{Z}_{<k}\}\leq(\lceil\frac{k}{r}\rceil-1)(\delta-1)$ and finally $d\geq n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1)$ via Theorem 8 (iii). Now we prove that the conditions in (21) are necessary. Condition (ii) not being satisfied would contradict Theorem 7 (ii) a). If (i) is false and (ii) true, we get $\rho^{\prime}(F_{T})=|F_{T}|-\sum_{i\in T}\eta(F_{i})<k$ for some $T$ with $|T|=\lceil\frac{k}{r}\rceil$ in the same manner as above. Together with the result in [1] that matroids from Theorem 9 have $\rho^{\prime}(F_{I})<\rho^{\prime}(F_{J})$ for $I\subsetneq J\subseteq[m]$, we now get $d\leq n-k+1-\lceil\frac{k}{r}\rceil(\delta-1)$, similarly as above. ∎ Thus for matroids from Theorem 9, the constant $a$ also determines how much smaller the unions of $\lceil\frac{k}{r}\rceil$ atoms are allowed to be compared to those of a corresponding broad matroid. There are two ways in which a set $F_{I}$ can be smaller than its broad matroid counterpart: 1. Its atoms $F_{i}$ can intersect with each other. 2. One or more of its atoms $F_{i}$ may have $|F_{i}|<r+\delta-1$, in which case $\rho(F_{i})<r$ but $\eta(F_{i})=\delta-1$. It is now easily seen that if $a\geq b$, any matroid from Theorem 9 satisfying (ii) in (21) and having $\lceil\frac{n}{r+\delta-1}\rceil$ atoms is optimal, since $a\geq b=\lceil\frac{n}{r+\delta-1}\rceil(r+\delta-1)-n\geq\lceil\frac{k}{r}\rceil(r+\delta-1)-|F_{T}|$.
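The generalized Singleton bound (11) and the constants $a$ and $b$ from (19) and (20) are straightforward to compute. A small helper sketch; the parameter values below are hypothetical, chosen only for illustration:

```python
from math import ceil

def singleton_bound(n, k, r, delta):
    """Generalized Singleton bound (11) on d for an (n,k,r,delta)-matroid."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

def ab_constants(n, k, r, delta):
    """The constants a and b from (19) and (20)."""
    a = ceil(k / r) * r - k
    b = ceil(n / (r + delta - 1)) * (r + delta - 1) - n
    return a, b

# Hypothetical parameters: locality sets have size at most r+delta-1 = 6.
n, k, r, delta = 12, 6, 4, 3
a, b = ab_constants(n, k, r, delta)
print(singleton_bound(n, k, r, delta))  # 5
print(a >= b)                           # True
```

Here $a=2\geq b=0$, i.e. the case discussed above in which matroids from Theorem 9 satisfying (21)(ii) with $\lceil\frac{n}{r+\delta-1}\rceil$ atoms are optimal.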
However, the situation $b>a$ is more difficult, the general problem being to maximize $\min\{|F_{I}|:|I|=\lceil\frac{k}{r}\rceil\}$ while having $|F_{[m]}|=n$ and $\eta(F_{i})=\delta-1$ for each $i\in[m]$. In a sense, the $b$ “losses” in the size of $F_{[m]}$ should be distributed as sparsely and evenly across the matroid as possible, so that a union of $\lceil\frac{k}{r}\rceil$ atoms can include as few of them as possible. In general it seems favorable to have $|F_{i}|=r+\delta-1$ for each $i\in[m]$ and to reduce $|F_{[m]}|$ by having the atoms $F_{i}$ intersect. Roughly speaking, this is because picking an atom of reduced size into a union of $\lceil\frac{k}{r}\rceil$ atoms automatically decreases the size of the union, whereas when two atoms intersect, both of them must be picked for the size to decrease. If we allow the atoms $F_{i}$ to intersect arbitrarily (but each having at least $\eta(F_{i})+1$ elements unique to it, as required by Theorem 9 (iv)), finding the optimal scheme is extremely difficult for most classes of parameters. This is why in [1] the scope of the problem is limited by only considering matroids from Graph construction 1, where no 3-cycles are allowed (implying that an element of $E$ belongs to at most two atoms). In this case, the problem is reduced to constructing graphs that minimize $\max_{G^{\prime}}(\sum_{w\in W_{G^{\prime}}}\gamma(w))$, where $G^{\prime}$ is any subgraph induced by $\lceil\frac{k}{r}\rceil$ vertices. This viewpoint yields optimal matroids for many classes of parameters $(n,k,r,\delta)$, but for some classes it does not, even though the Singleton bound could be reached by other matroids. As an example, the following bound was derived in [1], using a version of Graph construction 1, for the case $b>a\geq\lceil\frac{k}{r}\rceil-1$ and $\lceil\frac{k}{r}\rceil=2$.
Then we have $d_{max}=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1)$ if $$\left\lceil\frac{n}{r+\delta-1}\right\rceil\geq\begin{cases}\left\lceil\frac{b}{a}\right\rceil+1&\textrm{ if }2a\leq r-1,\\ \left\lceil\frac{b}{\left\lfloor\frac{r-1}{2}\right\rfloor}\right\rceil+1&\textrm{ if }2a>r-1.\end{cases}$$ (22) However, this bound can be improved. Previously we viewed our problem as maximizing the minimal union of $\lceil\frac{k}{r}\rceil$ atoms while having a certain $n=|E|$. Let us now adopt the reverse viewpoint: while having $\left\lceil\frac{k}{r}\right\rceil(r+\delta-1)-a$ as a minimum for the size of a union of $\lceil\frac{k}{r}\rceil$ atoms, we minimize $n$. The original bound was derived using a matroid from Graph construction 1, which means that no element can be part of three or more atoms. However, as $\lceil\frac{k}{r}\rceil=2$, the fulfilment of condition (i) in (21) only depends on the sizes of unions of two atoms. Thus we can “pack” the atoms into a smaller space by having a set $X$ with $|X|=a$ for which $X\subseteq F_{i}$ for as many atoms $F_{i}$ as needed. This is exactly what is done in our first result, Theorem 13, given below. However, we first need the following lemma: Lemma 5. ([1]) Let $M$ be an $(n,k,d,r,\delta)$-matroid. Then the following holds: $$\left\lceil\frac{n}{r+\delta-1}\right\rceil\geq\begin{cases}\left\lceil\frac{k}{r}\right\rceil&\textrm{if $b\leq a$},\\ \left\lceil\frac{k}{r}\right\rceil+1&\textrm{if $b>a$}.\end{cases}$$ Proof. The proof can be found in [1]. The result follows directly from the inequality (18). ∎ Now we can give the enlarged class of parameters for which the Singleton bound can be achieved: Theorem 13. Let $(n,k,r,\delta)$ be integers such that $0<r<k\leq n-\left\lceil\frac{k}{r}\right\rceil(\delta-1)$, $\delta\geq 2$, $b>a\geq\lceil\frac{k}{r}\rceil-1$ and $\lceil\frac{k}{r}\rceil=2$.
If $$\left\lceil\frac{n}{r+\delta-1}\right\rceil\geq\left\lceil\frac{b}{a}\right\rceil+1,$$ (23) then $d_{max}=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1)$. Proof. We prove our result by giving an explicit construction for matroids which achieve the bound when the conditions are satisfied. A matroid construction. Let $n^{\prime}$, $r^{\prime}$, $\delta^{\prime}$ and $k$ be integers such that $0<r^{\prime}<k\leq n^{\prime}-\left\lceil\frac{k}{r^{\prime}}\right\rceil(\delta^{\prime}-1)$, $\delta^{\prime}\geq 2$, $b^{\prime}>a^{\prime}$, $m\geq\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil+1$, where we define $$\displaystyle b^{\prime}=\left\lceil\frac{n^{\prime}}{r^{\prime}+\delta^{\prime}-1}\right\rceil(r^{\prime}+\delta^{\prime}-1)-n^{\prime},$$ $$\displaystyle a^{\prime}=\left\lceil\frac{k}{r^{\prime}}\right\rceil r^{\prime}-k,$$ $$\displaystyle m=\left\lceil\frac{n^{\prime}}{r^{\prime}+\delta^{\prime}-1}\right\rceil.$$ Let $\{F_{i}\}_{i\in[m]}$ be a collection of finite sets with $E=\bigcup_{i\in[m]}F_{i}$ and let $X\subseteq E$ be a set such that (i) $$\displaystyle F_{i}\cap F_{j}\subseteq X\quad\textrm{ for }i,j\in[m]\textrm{ with }i\neq j,$$ (24) (ii) $$\displaystyle|X|=a^{\prime},$$ (iii) $$\displaystyle|F_{i}|=r^{\prime}+\delta^{\prime}-1\quad\textrm{ for }i\in[m],$$ (iv) $$\displaystyle|F_{i}\cap X|=a^{\prime}\textrm{ for }1\leq i\leq\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil,$$ (v) $$\displaystyle|F_{i}\cap X|=b^{\prime}-\left(\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil-1\right)a^{\prime}\textrm{ for }i=\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil+1,$$ (vi) $$\displaystyle|F_{i}\cap X|=0\textrm{ for }i>\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil+1.$$ Let $\rho$ be a function $\rho:\{F_{i}\}_{i\in[m]}\rightarrow\mathbb{Z}$ such that $\rho(F_{i})=r^{\prime}$ for each $i\in[m]$. Now we prove that this gives a matroid obtainable from Theorem 9. 
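Before carrying out this verification, the construction can be instantiated concretely. The following sketch (our own helper names, not from [1]) builds the atoms for the sample parameters $(n,k,r,\delta)=(10,5,3,2)$, where $s=4$, $m=3$, $a=1$ and $b=2$, and checks that $|F_{[m]}|=n$ and that every pairwise union of atoms reaches rank $k$:

```python
# A minimal sketch (helper names are ours): instantiate the atoms F_1,...,F_m
# of the construction above over E = {0,...,n-1}, with X = {0,...,a-1}, and
# check its key properties for one admissible parameter set.
from math import ceil

def build_atoms(n, k, r, delta):
    """Build atoms following conditions (i)-(vi) of (24)."""
    s = r + delta - 1
    m = ceil(n / s)
    b = m * s - n
    a = ceil(k / r) * r - k
    assert b > a >= ceil(k / r) - 1 and ceil(k / r) == 2 and m >= ceil(b / a) + 1
    X = set(range(a))
    atoms, nxt = [], a                          # nxt = next fresh element label
    for i in range(1, m + 1):
        if i <= ceil(b / a):
            share = a                           # condition (iv)
        elif i == ceil(b / a) + 1:
            share = b - (ceil(b / a) - 1) * a   # condition (v)
        else:
            share = 0                           # condition (vi)
        atoms.append(set(range(share)) | set(range(nxt, nxt + s - share)))
        nxt += s - share
    return atoms, X

atoms, X = build_atoms(10, 5, 3, 2)
assert len(set().union(*atoms)) == 10           # |F_[m]| = n
for i, Fi in enumerate(atoms):
    for Fj in atoms[i + 1:]:
        assert Fi & Fj <= X                     # condition (i) of (24)
        assert len(Fi | Fj) - 2 * (2 - 1) >= 5  # rank of F_{i,j} reaches k
```

Here the rank of a pairwise union is computed as $|F_{i}\cup F_{j}|-2(\delta-1)$, in line with the rank computation used in the proof.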
The set $E$ is clearly finite and the conditions (i) and (ii) of (15) are trivially satisfied. Let us next calculate the size of $F_{[m]}=E$ by adding each of the sets $F_{i}$ to the union one at a time, in order according to their index $i$. Let us write $s=r^{\prime}+\delta^{\prime}-1$. The first set $F_{1}$ adds $|F_{1}|=s$ elements. As $X\subseteq F_{1}$ and the sets in $\{F_{i}\}_{i\in[m]\setminus\{1\}}$ only intersect with $X$, each subsequent set $F_{i}$ adds $|F_{i}|-|F_{i}\cap X|$ elements. Thus we get the following: the sets $F_{i}$ for $2\leq i\leq\lceil\frac{b^{\prime}}{a^{\prime}}\rceil$ each add $s-a^{\prime}$ new elements; the set $F_{\lceil\frac{b^{\prime}}{a^{\prime}}\rceil+1}$ adds $s-(b^{\prime}-(\lceil\frac{b^{\prime}}{a^{\prime}}\rceil-1)a^{\prime})$ new elements; the rest of the sets add $s$ elements each. Simple cancellation of terms then gives us $$\displaystyle|F_{[m]}|$$ $$\displaystyle=s+\left(\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil-1\right)(s-a^{\prime})+s-\left(b^{\prime}-\left(\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil-1\right)a^{\prime}\right)+\left(\left\lceil\frac{n^{\prime}}{s}\right\rceil-\left(\left\lceil\frac{b^{\prime}}{a^{\prime}}\right\rceil+1\right)\right)s$$ $$\displaystyle=\left\lceil\frac{n^{\prime}}{s}\right\rceil s-b^{\prime}$$ $$\displaystyle=n^{\prime}.$$ Using $s-a^{\prime}$ as a lower bound for the number of elements added by the sets $F_{i}$ with $3\leq i\leq\lceil\frac{n^{\prime}}{s}\rceil$ and recalling that $r^{\prime}>a^{\prime}$ and $\lceil\frac{k}{r^{\prime}}\rceil=2$, we get the following: $$\displaystyle|F_{[m]}|-\sum_{i\in[m]}\eta(F_{i})$$ $$\displaystyle\geq$$ $$\displaystyle\ s+s-a^{\prime}+\left(\left\lceil\frac{n^{\prime}}{s}\right\rceil-2\right)(s-a^{\prime})-\left\lceil\frac{n^{\prime}}{s}\right\rceil(\delta^{\prime}-1)$$ $$\displaystyle=$$ $$\displaystyle\ 2r^{\prime}-a^{\prime}+\left(\left\lceil\frac{n^{\prime}}{s}\right\rceil-2\right)(r^{\prime}-a^{\prime})$$ 
$$\displaystyle\geq$$ $$\displaystyle\left\lceil\frac{k}{r^{\prime}}\right\rceil r^{\prime}-a^{\prime}$$ $$\displaystyle=$$ $$\displaystyle\ k.$$ This shows that the construction satisfies (15) (iii). When $I\subseteq[m]$ and $j\in[m]\setminus I$, we have $|F_{I}\cap F_{j}|\leq|X\cap F_{j}|\leq a^{\prime}<r^{\prime}=\rho(F_{j})$, so (15) (iv) is also satisfied. Thus the construction gives a matroid obtainable from Theorem 9. According to Theorem 8 we now have (i) $$\displaystyle n=|E|=n^{\prime},$$ (ii) $$\displaystyle k=\rho(E),$$ (iii) $$\displaystyle d=n-k+1-(\delta-1)=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1),$$ (iv) $$\displaystyle\delta=\delta^{\prime},$$ (v) $$\displaystyle r=r^{\prime}.$$ Note that (i), (ii), (iv) and (v) imply that $a=a^{\prime}$ and $b=b^{\prime}$, so we can stop using the primed letters. For (iii), note that for every $F_{i},F_{j}\in\{F_{t}\}_{t\in[m]}$ we have $$\displaystyle\rho(F_{\{i,j\}})=|F_{\{i,j\}}|-\sum_{t\in\{i,j\}}\eta(F_{t})$$ $$\displaystyle=$$ $$\displaystyle|F_{\{i,j\}}|-2(\delta-1)$$ $$\displaystyle\geq$$ $$\displaystyle|F_{i}|+|F_{j}|-|X|-2(\delta-1)$$ $$\displaystyle=$$ $$\displaystyle\rho(F_{i})+\rho(F_{j})-a$$ $$\displaystyle=$$ $$\displaystyle r+r-(2r-k)$$ $$\displaystyle=$$ $$\displaystyle k,$$ which implies that for any $i,j$, $F_{\{i,j\}}\notin\mathcal{Z}_{<k}$. Due to (5) (ii), for any union $F_{I}$ with $|I|\geq 2$ we have $F_{I}\notin\mathcal{Z}_{<k}$. Thus $\mathcal{Z}_{<k}=\{F_{i}\}_{i\in[m]}$ as $r<k$. Now we have shown that the construction gives a matroid that achieves the generalized Singleton bound for every desired parameter set $(n,k,d,r,\delta)$. ∎ Even this scheme does not give the full class of matroids which are optimal and have $\lceil\frac{k}{r}\rceil=2$. To see how the atoms could be organized even more efficiently, notice that we may have $a<r-1$, in which case the atoms of our construction have more non-shared elements than would be necessary. 
Using these unnecessarily non-shared elements, the atoms $F_{i}$ could be packed even more efficiently. For example, if we denote by $F_{i}^{\prime}$ the set of elements the atom $F_{i}$ is allowed to share ($|F_{i}^{\prime}|=\rho(F_{i})-1$), the following setup of three atoms $F_{i}^{\prime}$ would be possible when $a=2$ and $|F_{i}^{\prime}|=4$: $$\displaystyle F_{1}^{\prime}=\{1,2,3,4\},$$ (25) $$\displaystyle F_{2}^{\prime}=\{1,2,5,6\},$$ $$\displaystyle F_{3}^{\prime}=\{3,4,5,6\}.$$ Here we have $n=6+3(\eta(F_{i})+1)$. For reference, the construction used in Theorem 13 in a similar case would be $$\displaystyle F_{1}^{\prime}=\{1,2,3,4\},$$ $$\displaystyle F_{2}^{\prime}=\{1,2,5,6\},$$ $$\displaystyle F_{3}^{\prime}=\{1,2,7,8\},$$ where $n=8+3(\eta(F_{i})+1)$. Here we have $2a\leq r-1$, so the construction for (22) would also have $n=8+3(\eta(F_{i})+1)$. Graph construction 1 is also used to derive classes of optimal matroids for $\lceil\frac{k}{r}\rceil\geq 3$. Improvements similar to Theorem 13 may be possible here as well, but they would be considerably more complicated. 7.2 A general lower bound for $d_{max}$ Besides finding the classes of parameters $(n,k,r,\delta)$ for which the bound can be achieved, we are also interested in finding $d_{max}$ for those classes for which the bound can not be achieved. The following partial result towards this goal was presented in [1], for $b>a$: $$d_{max}\geq\begin{cases}n-k+1-\left\lceil\frac{k}{r}\right\rceil(\delta-1)&\textrm{if $b\leq r-1$,}\\ n-k+1-\left\lceil\frac{k}{r}\right\rceil(\delta-1)+(b-r)&\textrm{if $b\geq r$.}\end{cases}$$ (26) If we do not require optimality, we can ignore some of the requirements in (21). If we let go of (i) and allow that we may reach full rank only with a union of $\lceil\frac{k}{r}\rceil+1$ atoms, we get the bound for $b\leq r-1$ in (26). 
If we let go of (ii), we can use $m=\left\lceil\frac{n}{r+\delta-1}\right\rceil-1$, which lets us have atoms of full rank $r$ that are pairwise disjoint. In order that the union of the atoms is large enough, we must however have at least one atom with $\eta(F_{i})>\delta-1$. This was done in [1] to obtain the bound for $b\geq r$ in (26). The corresponding matroid construction had one atom that contained all the extra nullity required. However, a better bound can be derived by spreading the extra nullity evenly among the atoms, since then not all of the extra nullity can be included in a coatom, i.e. a union of $\left\lceil\frac{k}{r}\right\rceil-1$ atoms. This is done in the following theorem: Theorem 14. Let $(n,k,r,\delta)$ be integers such that $0<r<k\leq n-\left\lceil\frac{k}{r}\right\rceil(\delta-1)$, $\delta\geq 2$ and $b>a$. Also let $m=\left\lceil\frac{n}{r+\delta-1}\right\rceil-1$ and $v=r+\delta-1-b-\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor m$. Then, if $\delta-1\leq\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor+\min\{v,\left\lceil\frac{k}{r}\right\rceil-1\}$, we have $$d_{max}\geq n-k+1-\left\lceil\frac{k}{r}\right\rceil(\delta-1).$$ (27) Otherwise, if $\delta-1>\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor+\min\{v,\left\lceil\frac{k}{r}\right\rceil-1\}$, then $$d_{max}\geq n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left(\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor+\delta-1\right)-\min\left\{v,\left\lceil\frac{k}{r}\right\rceil-1\right\}.$$ (28) This bound is always at least as good as the bound in (26). 
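This domination claim can also be checked numerically. The following sketch (function names are ours) computes the combined bound of (27) and (28) and the old bound (26) over a sweep of admissible parameters with $b>a$, and confirms that the new bound is never worse:

```python
# Numerical sanity check (a sketch; function names are ours): the combined
# bound of (27)/(28) is never worse than the bound (26) whenever b > a.
from math import ceil, floor

def old_bound(n, k, r, delta, b):
    # The two cases of (26).
    base = n - k + 1 - ceil(k / r) * (delta - 1)
    return base if b <= r - 1 else base + (b - r)

def new_bound(n, k, r, delta, b):
    K = ceil(k / r)
    m = ceil(n / (r + delta - 1)) - 1          # as in Theorem 14
    q = floor((r + delta - 1 - b) / m)
    v = r + delta - 1 - b - q * m
    if delta - 1 <= (K - 1) * q + min(v, K - 1):
        return n - k + 1 - K * (delta - 1)     # case (27)
    return n - k + 1 - (K - 1) * (q + delta - 1) - min(v, K - 1)  # case (28)

checked = 0
for r in range(2, 6):
    for k in range(r + 1, 4 * r):
        for delta in range(2, 5):
            # n must satisfy k <= n - ceil(k/r)(delta-1).
            for n in range(k + ceil(k / r) * (delta - 1), 60):
                s = r + delta - 1
                b = ceil(n / s) * s - n
                a = ceil(k / r) * r - k
                if b > a:
                    assert new_bound(n, k, r, delta, b) >= old_bound(n, k, r, delta, b)
                    checked += 1
assert checked > 0
```

For instance, $(n,k,r,\delta)=(9,3,2,3)$ has $b=3\geq r$ and both bounds give $4$, while the strict improvement of (29) shows up whenever $\lfloor\frac{r+\delta-1-b}{m}\rfloor(m-\lceil\frac{k}{r}\rceil+1)>0$.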
Moreover, denoting the bound for $b\geq r$ in (26) by $d_{old}=n-k+1-\left\lceil\frac{k}{r}\right\rceil(\delta-1)+(b-r)$ and similarly the bound in (28) by $$d_{new}=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left(\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor+\delta-1\right)-\min\left\{v,\left\lceil\frac{k}{r}\right\rceil-1\right\},$$ it follows that $$d_{new}-d_{old}\geq\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor\left(m-\left\lceil\frac{k}{r}\right\rceil+1\right)\geq 0.$$ (29) Proof. Let $n^{\prime}\in\mathbb{Z}$ be such that it satisfies the conditions for $n$ in Theorem 14. Then, analogously to the result in Lemma 5, $n^{\prime}$ also satisfies $\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil\geq\left\lceil\frac{k}{r}\right\rceil+1$. Graph construction. Let $G(\alpha,\beta,\gamma;k,r,\delta)$ be intended as an instance of Graph construction 1 with $$\displaystyle\begin{split}&\displaystyle\textrm{(a)}\quad m=\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil-1,\\ &\displaystyle\textrm{(b)}\quad W=\emptyset,\\ &\displaystyle\textrm{(c)}\quad\alpha(i)=0\textrm{ for }i\in[m],\\ &\displaystyle\textrm{(d)}\quad\beta(i)=\begin{cases}\left\lceil\frac{r+\delta-1-b^{\prime}}{m}\right\rceil\textrm{ for }1\leq i\leq v^{\prime},\\ \left\lfloor\frac{r+\delta-1-b^{\prime}}{m}\right\rfloor\textrm{ for }v^{\prime}<i\leq m,\end{cases}\end{split}$$ (30) where $b^{\prime}=\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil(r+\delta-1)-n^{\prime}$ and $v^{\prime}=r+\delta-1-b^{\prime}-\left\lfloor\frac{r+\delta-1-b^{\prime}}{m}\right\rfloor m$. Now we have to show that the conditions in equation (16) apply in order to prove that this is indeed an instance of Graph construction 1. Conditions (i), (ii) and (iv) are trivially satisfied. As $b^{\prime}<r+\delta-1$, (iii) is also true. 
For our construction $G$, requirement (v) simplifies to the form $k\leq rm$, which is true as $$k=\left\lceil\frac{k}{r}\right\rceil r-a\leq\left\lceil\frac{k}{r}\right\rceil r\leq\left(\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil-1\right)r=mr.$$ As $W=\emptyset$, $\alpha(i)=0$ for $i\in[m]$ and $r>0$, (vi) is also true. Thus $G$ is an instance of Graph construction 1. We have $\sum_{i\in[m]}\beta(i)=r+\delta-1-b^{\prime}$ because of the following: If $\frac{r+\delta-1-b^{\prime}}{m}\in\mathbb{Z}$: $$\sum_{i\in[m]}\beta(i)=m\cdot\frac{r+\delta-1-b^{\prime}}{m}=r+\delta-1-b^{\prime}.$$ If $\frac{r+\delta-1-b^{\prime}}{m}\notin\mathbb{Z}$: $$\begin{split}\displaystyle\sum_{i\in[m]}\beta(i)&\displaystyle=\left\lceil\frac{r+\delta-1-b^{\prime}}{m}\right\rceil v^{\prime}+\left\lfloor\frac{r+\delta-1-b^{\prime}}{m}\right\rfloor(m-v^{\prime})\\ &\displaystyle=v^{\prime}+\left\lfloor\frac{r+\delta-1-b^{\prime}}{m}\right\rfloor m\\ &\displaystyle=r+\delta-1-b^{\prime}.\end{split}$$ From Theorem 11 we thus get that $$\begin{split}\displaystyle n&\displaystyle=(r+\delta-1)m+\sum_{i\in[m]}\beta(i)\\ &\displaystyle=(r+\delta-1)(m+1)-b^{\prime}\\ &\displaystyle=(r+\delta-1)\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil-\left(\left\lceil\frac{n^{\prime}}{r+\delta-1}\right\rceil(r+\delta-1)-n^{\prime}\right)\\ &\displaystyle=n^{\prime}.\end{split}$$ This shows that we can use this construction for any desired parameter set $(n,k,r,\delta)$ satisfying the requirements in Theorem 14. As $n=n^{\prime}$, also $a=a^{\prime}$, $b=b^{\prime}$ and $v=v^{\prime}$, so from now on we only use the non-primed letters. Using Theorem 11 (ii), we next show that this construction gives a $d$ that is the desired lower bound for $d_{max}$. We have $\max_{I\in V_{<k}}(|I|)=\lceil\frac{k}{r}\rceil-1$, as for every $I\subseteq[m]$, $\sum_{i\in I}\alpha(i)=\sum_{w\in I\times I}\gamma(w)=0$. 
Clearly $(\delta-1)|I|+\sum_{i\in I}\beta(i)$ is maximized with a maximal $|I|$, so we get from Theorem 11 (ii) that $$d=n-k+1-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1)-\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor-\min\left\{v,\left\lceil\frac{k}{r}\right\rceil-1\right\}.$$ We now get the bound in (28) by rearranging the right side of this equation. Next we prove the result in (29). By simple cancellation of identical terms and some rearranging we get the following inequality, which is equivalent to the first inequality in (29): $$m\left\lfloor\frac{r+\delta-1-b}{m}\right\rfloor+\min\left\{v,\left\lceil\frac{k}{r}\right\rceil-1\right\}\leq r+\delta-1-b.$$ This inequality can be seen to be true by using $v$ as an upper bound for $\min\{v,\lceil\frac{k}{r}\rceil-1\}$ and substituting $v$ by its definition. Thus the first inequality in (29) is true. The second inequality in (29) is true because $m=\lceil\frac{n}{r+\delta-1}\rceil-1\geq\lceil\frac{k}{r}\rceil$ by Lemma 5. The combined bound of (27) and (28) is always at least as good as (26) because $d_{new}\geq d_{old}$ and we always use the stricter of the two bounds (27) and (28) in our result. ∎ Could this bound be improved even further? The following theorem gives a partial answer by stating that for matroids from Theorem 9, the bound is sharp for parameter sets $(n,k,r,\delta)$ for which there exists no optimal matroid from Theorem 9. Theorem 15. Let $(n,k,r,\delta)$ be integers such that there exists no $(n,k,d^{\prime},r,\delta)$-matroid from Theorem 9 with $d^{\prime}=n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1)$. Let $M$ be an $(n,k,d,r,\delta)$-matroid from Theorem 9 and let us denote the bound in Theorem 14 by $d_{b}=d_{b}(n,k,r,\delta)$. Then $d\leq d_{b}$. Proof. Let $M=M(F_{1},...,F_{m};k;\rho)$ be a matroid from Theorem 9 for which there exists no optimal matroid from Theorem 9 with the same parameters $(n,k,r,\delta)$. 
Assume that $\max\{|I|:F_{I}\in\mathcal{Z}_{<k}\}\geq\lceil\frac{k}{r}\rceil$. Using Theorem 8 (iii), we then obtain $d\leq n-k+1-\left\lceil\frac{k}{r}\right\rceil(\delta-1)$, as $\eta(F_{i})\geq\delta-1$ for every $i\in[m]$. Thus the theorem holds in this case and we are only left with the case $\max\{|I|:F_{I}\in\mathcal{Z}_{<k}\}=\lceil\frac{k}{r}\rceil-1$. Having $\max\{|I|:F_{I}\in\mathcal{Z}_{<k}\}<\lceil\frac{k}{r}\rceil-1$ is impossible, since it would imply $\rho(F_{J})=k$ for every $J$ with $|J|=\lceil\frac{k}{r}\rceil-1$. This in turn is impossible since $\rho(F_{i})\leq r$ and rank behaves subadditively, according to (5) (iii). There must be an atom $F_{i}$ with $\eta(F_{i})>\delta-1$, since otherwise the matroid would be optimal. Next we show that our current assumptions imply $m<\lceil\frac{n}{r+\delta-1}\rceil$. We do this by showing that $m\geq\lceil\frac{n}{r+\delta-1}\rceil$ would allow the existence of optimal matroids, which is a contradiction. The optimal matroids are constructed by repeatedly applying Algorithm 1, which takes a Theorem 9 matroid $M_{i}$ satisfying 1. $\max\{|I|:F_{I}\in\mathcal{Z}_{<k}\}=\lceil\frac{k}{r}\rceil-1$, 2. $m\geq\lceil\frac{n}{r+\delta-1}\rceil$, 3. $\exists F_{u}\in A_{\mathcal{Z}(M_{i})}:\ \eta(F_{u})>\delta-1$, and returns another Theorem 9 matroid $M_{i+1}$ still satisfying 1. and 2. but having the nullity of the atom $F_{u}$ with $\eta(F_{u})>\delta-1$ reduced by one. The definition of the algorithm is otherwise clearly sound, but we need to prove that the atoms $F_{k}$ and $F_{l}$ required on line 9 exist. Assume on the contrary that they do not. Then $$\left|\bigcup_{i\in[m]}F_{i}\right|>\left\lceil\frac{n}{r+\delta-1}\right\rceil(r+\delta-1)\geq n,$$ which is a contradiction. Thus the desired $F_{k}$ and $F_{l}$ exist. Next, we need to prove that $M_{i+1}$ is a matroid from Theorem 9. We do this by proving that the conditions (i)-(iv) of (15) are satisfied. 
Condition (i) stays satisfied after executing lines 1-3, as originally $|F_{u}|=\rho(F_{u})+\eta(F_{u})>\rho(F_{u})+1$. Lines 5-7 preserve the condition as both the rank and the size of the atom are increased by one. Lines 9-12 preserve the size and rank of $F_{k}$ and $F_{l}$. Condition (ii) is satisfied as $x$ is re-added and $y$ always remains in $F_{l}$. Condition (iii) is satisfied for $M_{i+1}$ as it is satisfied for $M_{i}$, the nullity of an atom is never increased, and $|F_{[m]}|=|E|$ stays constant. To prove (iv), first note that the condition is equivalent to every atom $F_{i}$ having at least $\eta(F_{i})+1$ elements that are not contained in any other atom, i.e. are non-shared. On lines 1-3, $F_{u}$ may lose one non-shared element, but its nullity also decreases by one. Also, $x$ can not be unique to any other atom besides $F_{u}$. On lines 5-6, $x$ does not belong to any atom and we do not increase the nullity or decrease the number of unique elements of $F_{j}$. Similarly on lines 10-11, $y$ is not unique to any atom and we do not alter the nullities or decrease the numbers of unique elements of $F_{k}$ or $F_{l}$. Thus $M_{i+1}$ satisfies (iv) and we have proved that $M_{i+1}$ is a matroid from Theorem 9. The new matroid $M_{i+1}$ also satisfies $\max\{|I|:F_{I}\in\mathcal{Z}_{<k}\}=\lceil\frac{k}{r}\rceil-1$. This is because the nullity of an atom is never increased, and $F_{u}$ is the only atom whose size is reduced, but this is compensated by correspondingly reducing its nullity. Thus $\rho^{\prime}_{i+1}(F_{I})\geq\rho^{\prime}_{i}(F_{I})$ for every $I\subseteq[m]$, where we denote $\rho^{\prime}(F_{I})=|F_{I}|-\sum_{i\in I}\eta(F_{i})$, and the subscript distinguishes between the matroids $M_{i}$ and $M_{i+1}$. Thus we can repeatedly use Algorithm 1 to decrease the nullity of some atom $F_{u}$ with $\eta(F_{u})>\delta-1$ until every atom has $\eta(F_{i})=\delta-1$. From Theorem 8 (iii) we then see that we have obtained an optimal matroid. 
However, this is a contradiction and therefore $m<\left\lceil\frac{n}{r+\delta-1}\right\rceil$. Let us denote $s=\sum_{i\in[m]}\eta(F_{i})$ and distribute this nullity evenly among the atoms $F_{i}$, i.e. set $$\eta(F_{i})=\begin{cases}\left\lceil\frac{s}{m}\right\rceil\textrm{ for }1\leq i\leq s-\left\lfloor\frac{s}{m}\right\rfloor m,\\ \left\lfloor\frac{s}{m}\right\rfloor\textrm{ for }s-\left\lfloor\frac{s}{m}\right\rfloor m<i\leq m.\end{cases}$$ (31) For minimizing $\max\left\{\sum_{i\in I}\eta(F_{i}):|I|=\left\lceil\frac{k}{r}\right\rceil-1\right\}$, this setup is optimal and yields the bound $$\begin{split}&\displaystyle\max\left\{\sum_{i\in I}\eta(F_{i}):|I|=\left\lceil\frac{k}{r}\right\rceil-1\right\}\\ &\displaystyle\geq\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left\lfloor\frac{s}{m}\right\rfloor+\min\left\{\left\lceil\frac{k}{r}\right\rceil-1,s-\left\lfloor\frac{s}{m}\right\rfloor m\right\}.\end{split}$$ (32) A proof can be given by contradiction: assume that there exists a set of $\lceil\frac{k}{r}\rceil-1$ atoms whose sum of nullities is maximal but lower than the bound in (32). This imposes such an upper bound on the nullities of the other atoms that $\sum_{i\in[m]}\eta(F_{i})$ must be lower than $s$, which is of course a contradiction. The bound in (32) is increasing as a function of $s$. Let us show this by considering how the value of the bound changes when the value of $s$ is increased by one. If $\lfloor\frac{s}{m}\rfloor$ remains unchanged, the value of the bound clearly does not decrease. If the value of $\lfloor\frac{s}{m}\rfloor$ is increased, the first term is increased by $\lceil\frac{k}{r}\rceil-1$, whereas the value of the second term is altered by at most $\lceil\frac{k}{r}\rceil-1$, since $0\leq\min\{\lceil\frac{k}{r}\rceil-1,s-\lfloor\frac{s}{m}\rfloor m\}\leq\lceil\frac{k}{r}\rceil-1$. 
Thus an increment of $s$ by one never decreases the value of the bound, so the bound is increasing as a function of $s$. We have $$\sum_{i\in[m]}|F_{i}|=\sum_{i\in[m]}\rho(F_{i})+\sum_{i\in[m]}\eta(F_{i})\geq|E|,$$ (33) so $s\geq n-rm$. As the bound in (32) is increasing as a function of $s$, we obtain the bound $$\begin{split}&\displaystyle\max\left\{\sum_{i\in I}\eta(F_{i}):|I|=\left\lceil\frac{k}{r}\right\rceil-1\right\}\\ &\displaystyle\geq\left(\left\lceil\frac{k}{r}\right\rceil-1\right)\left\lfloor\frac{n-rm}{m}\right\rfloor+\min\left\{\left\lceil\frac{k}{r}\right\rceil-1,n-rm-\left\lfloor\frac{n-rm}{m}\right\rfloor m\right\}.\end{split}$$ (34) By a similar consideration as above, we note that this bound is decreasing as a function of $m$. Thus we can obtain a new bound by substituting $m=\left\lceil\frac{n}{r+\delta-1}\right\rceil-1$. This is also the definition of $m$ in Theorem 14. By additionally substituting $v$ and $b$ by their definitions in (28), we can see that the bounds (28) and (34) are equal. We have thus proved that the value of $d$ for non-optimal matroids is always bounded from above by either the bound (27) or the bound (28). This proves the theorem. ∎ 8 Conclusions In this paper, we reviewed several results first established in [1]: We first demonstrated how almost affine codes induce a matroid in such a way that the key parameters $(n,k,d,r,\delta)$ of a locally repairable code appear as matroid invariants. We then discussed how this enables us to use matroid theory to study properties of almost affine locally repairable codes. We extended the generalized Singleton bound to matroids, after which we derived the structure theorem stating a list of requirements for a matroid to achieve the bound. We reviewed results on $d_{max}$ for different classes of parameters $(n,k,r,\delta)$ and gave the matroid constructions used to prove these results. 
Lastly, we presented two improvements to previous results: we extended the class of parameters for which the bound can be achieved when $\lceil\frac{k}{r}\rceil=2$, and we presented an improved general lower bound for $d_{max}$. There still remain significant gaps in our knowledge of the complete function $d_{max}(n,k,r,\delta)$. We particularly lack results on when the Singleton bound can not be achieved, except for the class of parameters $a<\left\lceil\frac{k}{r}\right\rceil-1$, for which the reachability of the bound is completely solved in [1]. Further such non-existence results could be derived using Theorem 7. The classes of parameters for which the bound can be achieved could be extended as well, using matroids from Theorem 9 without restricting ourselves to Graph construction 1, as demonstrated in the example in the previous chapter. However, designing such matroid constructions probably becomes increasingly complicated as we progress in finding improvements, and finding a general optimal scheme would probably be exceedingly difficult. Some related problems are studied by a branch of mathematics called extremal set theory; see for instance [10]. Whether there exists a better general lower bound than the one derived here is unknown. We proved that to have a chance at finding such a bound, one needs to use a class of matroids more general than the class given by Theorem 9. However, it seems plausible that our lower bound actually gives the best possible $d$ for the whole class of parameters $(n,k,r,\delta)$ for which the generalized Singleton bound can not be achieved. Many interesting areas of research remain open in the wider context of almost affine codes and locally repairable codes. For instance, the class of matroids induced by almost affine codes could possibly be expanded from that given in Theorem 9. Also, the exact size of the finite field required to represent matroids from Theorem 9 is unknown. References [1] T. Westerbäck, R. Freij, T. Ernvall and C. 
Hollanti, “On the Combinatorics of Locally Repairable Codes via Matroid Theory”, arXiv:1501.00153 [cs.IT], 2014. [2] N. Prakash, G. M. Kamath, V. Lalitha and P. V. Kumar, “Optimal linear codes with a local-error-correction property”, 2012 IEEE Int. Symp. Inf. Theory (ISIT), pp. 2776-2780, 2012. [3] J. G. Oxley, “Matroid Theory”, Oxford Graduate Texts in Mathematics, Oxford University Press, 1992. [4] R. Diestel, “Graph Theory”, New York, Springer-Verlag, 1997. [5] J. E. Bonin and A. de Mier, “The lattice of cyclic flats of a matroid”, Annals of Combinatorics, 12, pp. 155-170, 2008. [6] P. Crawley and R. P. Dilworth, “Algebraic Theory of Lattices”, Englewood Cliffs, N.J., Prentice-Hall, 1973. [7] J. Simonis and A. Ashikhmin, “Almost affine codes”, Designs, Codes and Cryptography, 14(2), pp. 179-197, 1998. [8] F. Matúš, “Matroid representation by partitions”, Discrete Math., 203(1-3), pp. 169-194, 1999. [9] B. Lindström, “On the vector representation of induced matroids”, Bull. London Math. Soc., 5(1), pp. 85-90, 1973. [10] P. Erdős and D. J. Kleitman, “Extremal problems among subsets of a set”, Discrete Math., 8(3), pp. 281-294, 1974.
Coagulation-transport equations and the nested coalescents Amaury Lambert ${}^{\dagger\star}$ Emmanuel Schertzer${}^{\dagger\star}$ (Date: January 18, 2021) ${}^{\dagger}$Laboratoire de Probabilités, Statistiques et Modélisation (LPSM), Sorbonne Université, CNRS UMR 8001, Paris, France ${}^{\star}$Center for Interdisciplinary Research in Biology (CIRB), Collège de France, CNRS UMR 7241, PSL Research University, Paris, France Abstract. The nested Kingman coalescent describes the dynamics of particles (called genes) contained in larger components (called species), where pairs of species coalesce at constant rate and pairs of genes coalesce at constant rate provided they lie within the same species. We prove that starting from $rn$ species, the empirical distribution of species masses (numbers of genes$/n$) at time $t/n$ converges as $n\to\infty$ to a solution of the deterministic coagulation-transport equation $$\partial_{t}d\ =\ \partial_{x}(\psi d)\ +\ a(t)\left(d\star d-d\right),$$ where $\psi(x)=cx^{2}$, $\star$ denotes convolution and $a(t)=1/(t+\delta)$ with $\delta=2/r$. The most interesting case, $\delta=0$, corresponds to an infinite initial number of species. This equation describes the evolution of the distribution of species of mass $x$, where pairs of species can coalesce and each species’ mass evolves like $\dot{x}=-\psi(x)$. We provide two natural probabilistic solutions of the latter IPDE and address in detail the case $\delta=0$. The first solution is expressed in terms of a branching particle system where particles carry masses behaving as independent continuous-state branching processes. The second one is the law of the solution to the following McKean-Vlasov equation $$dx_{t}\ =\ -\psi(x_{t})\,dt\ +\ v_{t}\,\Delta J_{t}$$ where $J$ is an inhomogeneous Poisson process with rate $1/(t+\delta)$ and $(v_{t};t\geq 0)$ is a sequence of independent rvs such that ${\mathcal{L}}(v_{t})={\mathcal{L}}(x_{t})$. 
We show that there is a unique solution to this equation and we construct this solution with the help of a marked Brownian coalescent point process. When $\psi(x)=x^{\gamma}$, we show the existence of a self-similar solution of the PDE which, when $\gamma=2$, relates to the speed of coming down from infinity of the nested Kingman coalescent. Keywords and phrases. Kingman coalescent; Smoluchowski equation; McKean-Vlasov equation; degenerate PDE; PDE probabilistic solution; hydrodynamic limit; entrance boundary; empirical measure; coalescent point process; continuous-state branching process; phylogenetics. MSC2010 Classification. Primary 60K35; secondary 35Q91; 35R09; 60G09; 60B10; 60G55; 60G57; 60J25; 60J75; 60J80; 62G30; 92D15. Acknowledgements. The authors thank the Center for Interdisciplinary Research in Biology (CIRB, Collège de France) for funding. Contents 1 Introduction and informal description of the main results 2 Weak solutions and branching CSBP 3 Finite population McKean–Vlasov equation 4 $\infty$-population McKean-Vlasov equation 5 Main convergence results 6 Some useful estimates 7 Convergence of the empirical measure 8 Coming down from infinity in the nested Kingman coalescent A 1. Introduction and informal description of the main results 1.1. The nested Kingman coalescent The Kingman coalescent [16] is a stochastic process describing the dynamics of a system of coalescing particles, where each pair of particles independently merges at constant rate. It originates from population genetics, where it is used to model the dynamics of gene lineages in the backward direction of time, thus generating a random genealogy. This model can be enriched by embedding gene lineages into species (each gene belongs to a living organism which in turn belongs to some species). Nested coalescents were recently introduced [8] to model jointly the genealogy of the genes and the genealogy of the species. 
The nested Kingman coalescent is the simplest example of a nested coalescent, where both the gene tree and the species tree are given by (non-independent) Kingman coalescents: • Each pair of species independently coalesces at rate $1$, and when two species coalesce into one so-called mother species, all the genes they harbor are pooled together into the mother species (but do not merge). See Fig. 1. • Conditional on the species tree, each pair of gene lineages lying in the same species independently coalesces at rate $c$. It is easy to see (and well-known) that $\infty$ is an entrance boundary for the number of particles in the Kingman coalescent; one says that the Kingman coalescent comes down from infinity. It is further known that for the Kingman coalescent started at $\infty$, called the standard coalescent, the number of particles $K_{t}$ behaves as $t\to 0$ like the solution to (1.1) $$\dot{x}=-x^{2}/2,\ \ x_{0}=+\infty$$ so that $tK_{t}$ converges to 2 a.s. [5]. In the nested Kingman coalescent starting from infinitely many species containing infinitely many genes, the number of species behaves like $2/t$ and the mass of each species decreases at the same speed, but is constantly replenished with the genes of other species upon species coalescences. However, because of the domination by the standard coalescent, it is reasonable to conjecture that each species at time $t$ still carries of the order of $1/t$ genes, so that the total number of genes at time $t$ scales like $1/t^{2}$. This conjecture was confirmed in a recent work [9]. The starting point of this paper was the study of the distribution (rather than the total mass) of species masses at small times in the nested Kingman coalescent, for arbitrary initial conditions. 
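The small-time scaling in (1.1) can be illustrated with the explicit solution of the ODE: started from a finite $x_{0}$, $\dot{x}=-x^{2}/2$ gives $x_{t}=2x_{0}/(2+x_{0}t)$, which converges to $2/t$ as $x_{0}\to\infty$. A quick numerical sketch (ours, not from the paper):

```python
# A minimal illustration (a sketch, not from the paper): the closed-form
# solution of dx/dt = -x^2/2 with x(0) = x0 approaches the scaling limit 2/t
# as x0 grows, matching t*K_t -> 2 for the standard Kingman coalescent.
def kingman_ode(t, x0):
    """Closed-form solution of dx/dt = -x^2/2, x(0) = x0."""
    return 2.0 * x0 / (2.0 + x0 * t)

for t in (0.01, 0.1, 1.0):
    # For a very large initial value, the solution is close to 2/t.
    assert abs(kingman_ode(t, 1e9) - 2.0 / t) < 1e-4

# Sanity check that the formula solves the ODE (finite-difference derivative):
h, t0, x0 = 1e-6, 0.5, 10.0
lhs = (kingman_ode(t0 + h, x0) - kingman_ode(t0 - h, x0)) / (2 * h)
rhs = -kingman_ode(t0, x0) ** 2 / 2
assert abs(lhs - rhs) < 1e-4
```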
This study led us to a journey through Smoluchowski coagulation-transport PDEs and McKean-Vlasov equations, in which we develop new techniques that are interesting per se and could presumably be applied to more general nested coalescents than the nested Kingman coalescent. We now describe informally the main results of this work. For a random variable (rv) $X$, ${\mathcal{L}}(X)$ denotes the law of $X$. We let $M_{F}({\mathbb{R}}^{+})$ (resp., $M_{P}({\mathbb{R}}^{+})$) denote the set of finite measures (resp. probability measures) on ${\mathbb{R}}^{+}$. Definition 1.1. We denote by $\mathscr{H}$ the set of increasing homeomorphisms $\psi:{\mathbb{R}}^{+}\rightarrow{\mathbb{R}}^{+}$. In particular, for any $\psi\in\mathscr{H}$, $\psi$ is continuous, $\psi(0)=0$, $\psi(x)>0$ for all $x>0$ and $\lim_{x\to\infty}\psi(x)=+\infty$. The following (optional) condition $$\int_{1}^{\infty}\frac{dx}{\psi(x)}<\infty$$ will be called Grey’s condition and sometimes abbreviated as $\int^{\infty}1/\psi<\infty$. 1.2. Convergence to the Smoluchowski equation We first consider a nested coalescent with a large but finite number of initial species. More formally, we consider a sequence of nested Kingman coalescents indexed by $n$. Let $s_{t}^{n}$ be the number of species at time $t$ (in the model indexed by $n$). Let $\Pi_{t}^{n}$ be the vector of size $s_{t}^{n}$ recording the number of gene lineages in each species. We call this vector the genetic composition vector. We wish to investigate the dynamics of the distribution of species masses on a time scale $O(1/n)$, rescaling the number of gene lineages by $1/n$. Namely, we define (1.2) $$g_{t}^{n}=\frac{1}{s_{t}^{n}}\sum_{i=1}^{s_{t}^{n}}\delta_{\Pi_{t}^{n}(i)/n}\quad\mbox{ and }\quad\tilde{g}^{n}_{t}\ =\ g_{t/n}^{n}$$ that is, $g^{n}$ is the empirical distribution of the number of gene lineages per species renormalized by $n$, and $\tilde{g}^{n}$ is obtained from $g^{n}$ by rescaling time by $n$. Result 1 (Theorem 5.1). 
Assume that there exist two deterministic quantities $r\in(0,\infty)$ and $\nu\in M_{P}({\mathbb{R}}^{+})$ such that (i) $\frac{s_{0}^{n}}{n}\to r$ in $L^{2+\epsilon}$ (ii) $\tilde{g}^{n}_{0}\Longrightarrow\nu$ as $n\to\infty$ in the weak topology for finite measures. Then the sequence of rescaled empirical measures $\left(\tilde{g}^{n}_{t};t\geq 0\right)$ converges to the unique solution of the following IPDE (Integro Partial Differential Equation) (1.3) $$\partial_{t}d(t,x)\ =\ \partial_{x}(\psi d)(t,x)\ +\ \frac{1}{t+\delta}\left(d\star d(t,x)\ -\ d(t,x)\right)\quad t,x\geq 0$$ with initial condition $d(0,x)\,dx=\nu(dx)$. Here, $d\star d(t,x)=\int_{0}^{x}d(t,x-y)d(t,y)\,dy$ denotes the convolution product, $\psi(x)=\frac{c}{2}x^{2}$ and $\delta=2/r$. Remark 1.2. The solution displayed in Result 1 is the unique “weak” solution of (1.3). The notion of weak solution will be made precise in forthcoming Definition 1.6. The latter result provides a natural interpretation of the two terms on the RHS of (1.3). The transport term is interpreted as the number of gene lineages inside each species obeying the asymptotic dynamics (1.1), whereas the coagulation term is due to the coalescence of species lineages. Finally, $\delta/2=1/r$ is the inverse population size. By a slight abuse of language, the parameter $\delta$ will be referred to as the inverse population size in the rest of this manuscript. Remark 1.3. Consider the more usual coagulation-transport equation $$\partial_{t}n\ =\ \partial_{x}(\psi n)\ +\ \frac{1}{2}n\star n\ -\ \hat{n}(t)n,$$ where $\hat{n}(t)=\int_{0}^{\infty}n(t,x)\,dx$. Here, pairs of clusters coalesce at rate $1$ and $n(t,x)$ is the amount (rather than the density) of clusters of mass $x$ at time $t$.
Then informal computations yield that $\partial_{t}\hat{n}=-\frac{1}{2}\hat{n}^{2}$, so that $$\hat{n}(t)\ =\ \frac{2}{t+\delta}\ \ \mbox{ with $\delta:=2/\hat{n}(0)$.}$$ Motivated by the previous heuristics, define $d(t,x)=n(t,x)\frac{(t+\delta)}{2}$, so that the function $d$ is interpreted as the density of clusters of mass $x$ at time $t$. Straightforward computations then show that $d$ satisfies (1.3). This gives additional motivation for studying (1.3) and further justifies calling $\delta$ the inverse population size. 1.3. Coming down from infinity Let us now motivate Equation (1.3) when $\delta=0$. Since $\delta$ is interpreted as the inverse population size, this equation will be referred to as the infinite population ($\infty$-pop.) Smoluchowski equation. In the previous section, we started with a finite but large population. Not surprisingly, the $\infty$-pop. Smoluchowski equation arises when the initial number of species is infinite. Result 2 (Theorem 5.4, Theorem 5.6). Consider a nested coalescent with the following two properties. (i) $s_{0}=\infty$ (ii) Each species contains at least one gene lineage. Then the sequence of rescaled empirical measures $\left(\tilde{g}^{n}_{t};t\geq 0\right)$ converges to the unique weak $\infty$-pop. solution of (1.3). As an application of this result, we can derive the speed of coming down from infinity in the nested coalescent. If $\rho_{t}$ denotes the number of gene lineages at time $t$, (1.4) $$\frac{1}{n^{2}}\rho_{t/n}\ \Longrightarrow\ \frac{2}{t}\int_{0}^{\infty}x\mu_{t}^{(0)}(dx)=\frac{2{\mathbb{E}}(\Upsilon)}{t^{2}}<\infty,\ \ \mbox{as $n\to\infty$},$$ where $\Upsilon$ is a random variable which is characterized in Result 4 below. Remark 1.4. The $\infty$-pop.
solution displayed in Result 2 is the unique weak solution of (1.3) with $\delta=0$ ($\infty$-population) which is “proper”, in the sense that it has a “non-degenerate” initial condition; this will be made precise in forthcoming Definition 1.6. Remark 1.5. Result 1 required us to “tune” the initial number of species and the number of gene lineages per species in order to get a non-degenerate scaling limit at time $t/n$ as $n\to\infty$. Indeed, according to Result 1(i)(ii), both quantities must be of order $n$. In contrast, a fact that stands out in Result 2 is that the $\infty$-pop. equation arises without any delicate scaling. (Compare conditions (i)(ii) in Result 1 and in Result 2.) 1.4. General coagulation-transport equation and solution classes Motivated by the previous convergence results, we will consider (1.3) with a general depletion term $\psi\in\mathscr{H}$. As already discussed, Equation (1.3) describes the density of clusters of mass $x$, where pairs of clusters coalesce at rate $1$ (coagulation) and each cluster’s mass evolves like $\dot{x}=-\psi(x)$ (transport). It will be referred to as the Smoluchowski equation with (initial) inverse population size $\delta$ (and depletion term $\psi$). The case $\delta=0$ corresponds to an infinite initial number of species. Note that this specific case raises some important issues, since (1.3) becomes degenerate at $t=0$. (One of the main contributions of the present work is to make sense of this degeneracy – see below.) In the next two sections, we will define two types of solution of the IPDE (1.3) (with a general transport term). Namely, (1) The notion of weak solution, which is the usual framework in the PDE literature and which arose in our previous convergence results. See Definition 1.6; (2) A more restrictive class of solutions that we call McKean-Vlasov solutions, and that relates (1.3) to a natural McKean-Vlasov process describing the evolution of a ‘typical cluster’ in the population.
See Definition 1.10. 1.5. Weak solutions In this section, we assume that $\psi$ is the Laplace exponent of a spectrally positive and (sub)critical Lévy process $Y$. Note that in particular $\psi\in\mathscr{H}$. This choice of $\psi$ should encompass cases of interest regarding the descent from infinity of nested coalescents where the intra-species coalescence mechanism is more general than the Kingman coalescent. See Section 1.7(3) for a discussion of a natural conjecture extending the convergence results of the previous two sections to general nested $\Lambda$-coalescents. Let us now proceed with the definition of weak solutions in the sense of measures. We are interested in solutions to (1.3) which have total mass 1 at all times, which implies that $\int_{0}^{\infty}\partial_{x}(\psi d)(t,x)\,dx=0$, or equivalently that $\lim_{x\to\infty}\psi(x)\,d(t,x)=0$ (since $\psi(0)=0$). To be more specific, we will say that $f$ is a test-function iff $f\in{\mathcal{C}}^{1}({\mathbb{R}}^{+})$, and further $f$ and $\psi f^{\prime}$ are bounded. (Think of $f(x)=\exp(-\lambda x)$ for $\lambda\geq 0$.) Integrating both sides of (1.3) with respect to such a test-function $f$ and performing an integration by parts yields the following definition in the spirit of [20], which also follows the usual framework of the PDE literature. Definition 1.6. Let $\delta>0$ and $\nu\in M_{P}({\mathbb{R}}^{+})$. We say that a probability-valued process $(\mu_{t};t\geq 0)$ is a weak solution of (1.3) with initial condition $\nu$ if for every test-function $f$ and every $t\geq 0$ (1.5) $$\left<\mu_{t},f\right>=\left<\nu,f\right>-\int_{0}^{t}\left<\mu_{s},\psi f^{\prime}\right>\,ds+\int_{0}^{t}\frac{1}{s+\delta}\left(\left<\mu_{s}\star\mu_{s},f\right>-\left<\mu_{s},f\right>\right)\,ds,$$ where we used the notation $\left<\mu,f\right>=\int_{{\mathbb{R}}^{+}}f(x)\,\mu(dx)$ for any finite measure $\mu$. Let $\delta=0$.
We say that a probability-valued process $(\mu_{t};t>0)$ is a weak solution of (1.3) if for every test-function $f$ and every $s,t>0$: (1.6) $$\left<\mu_{t},f\right>=\left<\mu_{s},f\right>-\int_{s}^{t}\left<\mu_{u},\psi f^{\prime}\right>\,du\ +\ \int_{s}^{t}\frac{1}{u}\left(\left<\mu_{u}\star\mu_{u},f\right>-\left<\mu_{u},f\right>\right)\,du,$$ • We say that the solution is a dust solution iff $\mu_{t}\to\delta_{0}$ in the weak topology as $t\downarrow 0$. • We say that the solution is proper otherwise. Remark 1.7. The previous terminology is borrowed from fragmentation theory [6]. Let $(\mu_{t};t\geq 0)$ be a weak solution to (1.3). In view of the convolution product in (1.3) and (1.6), it is natural to consider the Laplace transform $u(t,\lambda)$ of $\mu_{t}$, namely $$u(t,\lambda)\ =\ \int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\mu_{t}(dx)\qquad\lambda,t\geq 0.$$ Taking $f(x)=\exp(-\lambda x)$ in (1.6) yields that the Laplace transform $u(t,\lambda)$ satisfies the non-linear IPDE (1.7) $$\partial_{t}u\ =\ \lambda A^{\psi}u+a(t)(u^{2}-u),$$ with initial condition $u(0,\lambda)=\int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\nu(dx)$, where $A^{\psi}$ is the generator of $Y$ and $a(t)=1/(t+\delta)$. A crucial observation is that $\lambda A^{\psi}$ is the generator of the Continuous-State Branching Process (CSBP) $Z$ with branching mechanism $\psi$ [19]. When $\psi(\lambda)=c\lambda^{2}$ (the case $\gamma=2$ below), $A^{\psi}$ acting on ${\mathcal{C}}^{2}({\mathbb{R}}^{+})$ functions coincides, up to a multiplicative constant, with the differential operator $\partial^{2}/\partial\lambda^{2}$, $Y$ is Brownian motion and $Z$ is the Feller diffusion [12]. By a martingale approach, we can prove the uniqueness (under mild, natural conditions) of the solution of (1.7) when $\lambda A^{\psi}$ is replaced with any generator $A$ of a non-negative Feller process and $a$ is a general continuous function. Result 3 (Theorem 2.4, Theorem 2.7, Theorem 4.2). Assume that $\psi$ is the Laplace exponent of a spectrally positive and (sub)critical Lévy process.
($\delta>0$) There exists a unique weak solution to (1.3) with initial condition $\nu$. ($\delta=0$) Assume that $\psi$ satisfies Grey’s condition, i.e., that $\int^{\infty}1/\psi<\infty$. Then • There exists a unique proper solution. See Theorem 2.7(ii) for a probabilistic expression of the solution. • In the stable case (when $\psi(x)=cx^{\gamma}$), there exist infinitely many dust solutions. Remark 1.8. The existence of infinitely many dust solutions is reminiscent of a similar behavior for the Boltzmann equation [26]. Result 4 (Theorem 2.8). Let $\psi(x)=cx^{\gamma}$ with $\gamma\in(1,2]$. (The case $\gamma=2$ corresponds to the nested Kingman coalescent.) The proper solution $(\mu_{t};t>0)$ is self-similar in the sense that there exists a r.v. $\Upsilon$ such that $$\forall t>0,\ \mu_{t}={\mathcal{L}}\left(t^{-\beta}\Upsilon\right),\ \ \mbox{where}\ \ \beta:=\frac{1}{\gamma-1}.$$ Further, (1) With $h(x)={\mathbb{E}}(\exp(-x\Upsilon))$, $h$ is the unique solution of the ODE described in (2.19). (2) $-h^{\prime}(0)={\mathbb{E}}(\Upsilon)<\infty$ can be expressed as the measure of mass extinction under the $0$-entrance measure of a branching CSBP with branching mechanism $\psi$. See also Theorem 2.7(ii) for a detailed description of $\Upsilon$ in terms of a branching CSBP. Remark 1.9. The variable $\Upsilon$ also arises in the study [9] of the speed of coming down from infinity of the nested Kingman coalescent. However, there $\Upsilon$ is not characterized in terms of an IDE or an excursion measure (as in items (1) and (2) of Result 4). Instead, it is characterized as the fixed point of a distributional transformation (see Theorem 1 and Eq. (1) in [9]). 1.6. McKean-Vlasov (MK-V) equation Recall that in Section 1.5 we restricted our attention to the case where $\psi$ is the Laplace exponent of a Lévy process. In this section, we only assume that $\psi\in\mathscr{H}$. (Additional assumptions will be needed when considering uniqueness in the $\infty$-pop. regime.)
Let us now consider the McKean–Vlasov equation (1.8) $$dx_{t}\ =\ -\psi(x_{t})\,dt\ +\ v_{t}\,\Delta J^{(\delta)}_{t},\qquad\mathcal{L}(x_{0})=\nu,$$ where $J^{(\delta)}$ is an inhomogeneous Poisson process with rate $1/(t+\delta)$ at time $t$, $(v_{t};t\geq 0)$ is a family of independent random variables (indexed by ${\mathbb{R}}^{+}$) with the property that ${\mathcal{L}}(v_{t})={\mathcal{L}}(x_{t})$, and $\nu\in M_{P}({\mathbb{R}}^{+})$. Result 5 (Theorem 3.4, Corollary 3.5). For any $\psi\in\mathscr{H}$ and $\delta>0$, there exists a unique solution to the MK-V equation (1.8) with initial condition $\nu\in M_{P}({\mathbb{R}}^{+})$. Informally, one can think of $(x_{t};t>0)$ as the evolution of the mass of a focal cluster in the population: mass is depleted at rate $\psi(x_{t})$ and upon coalescence, occurring at rate $1/(t+\delta)$, the cluster gains a mass with law ${\mathcal{L}}(x_{t})$, i.e., the cluster (or species) gains a random mass whose law is that of another ‘typical’ and independent cluster at time $t$. It is straightforward to check from Itô’s formula that $(\mu_{t}:={\mathcal{L}}(x_{t});t\geq 0)$ is also a weak solution of (1.3) with inverse population size $\delta$ and initial measure $\nu$ (in the sense of Definition 1.6). See Lemma 3.2 for more details. This motivates the following definition. Definition 1.10. Let $\delta>0$ and $\nu\in M_{P}({\mathbb{R}}^{+})$. We say that $\left(\mu_{t};t\geq 0\right)$ is a MK-V solution of the Smoluchowski equation (1.3) iff $(\mu_{t};t\geq 0)$ is the law of a solution to the MK-V equation (1.8) (with the same parameters). Note that any MK-V solution is also a weak solution of the IPDE, and as a consequence, the class of MK-V solutions is smaller than its weak counterpart. In Section 3, we study the solutions to the MK-V equation when $\delta>0$ under minimal assumptions on $\psi$. In particular, we show that the solution can be constructed naturally from the Brownian Coalescent Point Process (CPP) of Popovic [21].
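The ‘focal cluster’ description lends itself to a propagation-of-chaos particle approximation of (1.8): $N$ particles deplete mass along $\dot{x}=-\psi(x)$, and each jumps at rate $1/(t+\delta)$ by absorbing the mass of an independently sampled particle, the empirical distribution standing in for ${\mathcal{L}}(x_{t})$. The sketch below is our own illustration (not a construction used in the paper); it takes the linear depletion $\psi(x)=x\in\mathscr{H}$, for which the mean of $\mu_{t}$ solves $m^{\prime}=(1/(t+\delta)-1)m$, i.e., $m(t)=(1+t/\delta)e^{-t}$, and all numerical parameters are arbitrary.

```python
import random

def simulate_mkv(n, t_end, dt, delta, psi, rng):
    """Euler scheme for the McKean-Vlasov jump dynamics (1.8): each
    particle's mass is depleted along x' = -psi(x), and at rate
    1/(t+delta) it absorbs the mass of an independently sampled particle
    (the empirical law stands in for L(x_t))."""
    xs = [1.0] * n            # initial condition nu = delta_1 (unit masses)
    t = 0.0
    while t < t_end - 1e-12:
        snapshot = xs[:]      # masses before the step, used as jump targets
        rate = 1.0 / (t + delta)
        for i in range(n):
            x = xs[i] - psi(xs[i]) * dt           # transport term
            if rng.random() < rate * dt:          # coalescence event
                x += snapshot[rng.randrange(n)]   # absorb a 'typical' mass
            xs[i] = x
        t += dt
    return xs

rng = random.Random(7)
delta = 1.0
xs = simulate_mkv(5000, 1.0, 0.002, delta, lambda x: x, rng)
mean = sum(xs) / len(xs)
# For psi(x) = x the mean solves m' = (1/(t+delta) - 1) m, so that
# m(1) = (1 + 1/delta) * exp(-1) = 2/e ~ 0.736.
print(round(mean, 2))
```

Resampling from the pre-step snapshot keeps the jump target independent of the particle's own update within the step, mimicking the independence of $v_{t}$ and $x_{t-}$ in (1.8).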
This CPP construction is reminiscent of the construction of the solution to the classical Smoluchowski equation when the coagulation kernel $K$ is equal to $1$ [2]. This explicit representation will allow us to develop some coupling techniques to investigate the $\infty$-pop. regime ($\delta=0$) and the long-time behavior of the solution to (1.3) when $\delta>0$. (We also note that, at the intuitive level, our representation of MK-V solutions in terms of the CPP can be understood in the light of [18], where it is shown that the Kingman coalescent at small scales can be described in terms of the CPP. In our setting the CPP describes the limiting species coalescent.) Let us now consider the $\infty$-pop. MK-V equation in more detail. More precisely, we consider (1.8) when $\delta=0$ and with no prescription of the initial condition. Analogously to Definition 1.6, we can define proper and dust solutions for the $\infty$-pop. MK-V equation. By arguing as before, if $(x_{t};t>0)$ is a proper (resp., dust) solution to the MK-V equation, then $(\mu_{t}:={\mathcal{L}}(x_{t});t>0)$ is a proper (resp., dust) solution to the Smoluchowski equation (1.3). In the same vein as Definition 1.10, the law of such processes will be referred to as solutions of the IPDE in the MK-V sense. Result 6 (Proposition 4.4, Theorem 4.1, Theorem 4.2). Assume that $\delta=0$ and assume that $\psi\in\mathscr{H}$ satisfies Grey’s condition and is convex. Then (1) There exists at least one proper solution to the $\infty$-pop. MK-V equation. (2) In the stable case $\psi(x)=cx^{\gamma}$ with $\gamma>1$, there is a unique proper solution $(x^{(0)}_{t};t>0)$. Further, this process is self-similar in the sense that $${\mathcal{L}}\left(x_{t}^{(0)}\right)\ =\ {\mathcal{L}}\left(t^{-\beta}\Upsilon\right),\ \ \ \ \mbox{where}\ \ \beta:=\frac{1}{\gamma-1},$$ where $\Upsilon$ is a positive rv (depending implicitly on $c$ and $\gamma$). (3) In the stable case, there exist infinitely many dust solutions. Remark 1.11.
Note that in the finite population case ($\delta>0$) the uniqueness of the MK-V solution advertised in Result 5 holds for any $\psi\in\mathscr{H}$, while in the $\infty$-pop. case ($\delta=0$), we have only been able to show uniqueness of the proper solution for $\psi(x)=cx^{\gamma}$ with $\gamma>1$ (Result 6). We nevertheless conjecture that it holds in the general case. Remark 1.12. When $\gamma\in(1,2]$, the definitions of $\Upsilon$ in Results 4 and 6 must coincide. This follows from the fact that any solution in the MK-V sense must be a solution in the weak sense, together with the uniqueness of the weak proper solution when $\gamma\in(1,2]$. See Result 3. Result 7 (Theorem 4.13). Let $\gamma>1$ and recall $\beta=1/(\gamma-1)$. Assume that $\psi(x)=cx^{\gamma}$. Let $\nu\in M_{P}({\mathbb{R}}^{+})$ such that $\nu\neq\delta_{0}$ and let $x^{(\delta)}$ be the MK-V solution with inverse population size $\delta>0$ and initial probability measure $\nu$. Then $$\lim_{t\to\infty}t^{\beta}\ x_{t}^{(\delta)}\ =\ \Upsilon,\ \ \mbox{in law,}$$ where $\Upsilon$ is defined in Result 6. In particular, this shows that as $t\to\infty$, the typical mass of a cluster goes to $0$ as $O(t^{-\beta})$. 1.7. Discussion and conjectures We have called ‘infinite population regime’ the case where the coagulation term becomes degenerate at $t=0$. The uniqueness of solutions to the Smoluchowski equation that we obtain in these cases, in the apparent absence of an initial condition, is actually due to the fact that this initial condition can be seen as a Dirac mass at $\infty$, which ‘comes down from infinity’, in the sense that $\mu_{t}$ is a probability measure on $[0,\infty)$ at any positive time $t$. In this regard, Grey’s condition $\int^{\infty}1/\psi<\infty$ is not incidental.
It ensures that $\dot{x}=-\psi(x)$ comes down from infinity, so that in the Smoluchowski equation, the antagonism between mass transport towards 0 and the increase of mass by coagulation is dominated by the transport term, in such a way that the mass distribution over species converges to the Dirac mass at 0 as $t\to\infty$. We conjecture that when $1/\psi$ is not integrable at $\infty$, coagulation overwhelms transport, in such a way that the mass distribution densifies around large masses as $t\to\infty$. We further conjecture that in this case, our results concerning uniqueness of solutions in the infinite population regime will no longer hold. In addition to the previous remarks, and in view of our results, it is natural to make the following additional conjectures. (1) The notions of weak and MK-V solutions coincide. (We only showed that if $(x_{t};t\geq 0)$ is a solution to MK-V, then $({\mathcal{L}}(x_{t});t\geq 0)$ is a weak solution.) (2) There exists a unique proper weak and MK-V solution for any convex $\psi\in\mathscr{H}$ satisfying Grey’s condition. (3) Let $\Lambda$ be a finite measure on $[0,1]$. Results 1 and 2, concerning the convergence of the rescaled distribution of species masses at small times to the solution of the Smoluchowski equation, extend to the case when the genes undergo a $\Lambda$-coalescent process coming down from infinity (and the species still undergo a Kingman coalescent), with the same Smoluchowski equation except that $\psi(\lambda)=c\lambda^{2}$ is replaced with (1.9) $$\psi(\lambda)=\int_{(0,1]}(e^{-\lambda r}-1+\lambda r)\,r^{-2}\Lambda(dr),$$ which is the Laplace exponent of a spectrally positive Lévy process. This is due to the fact that in the standard $\Lambda$-coalescent coming down from infinity, the number of lineages obeys the asymptotic dynamics $\dot{x}=-\psi(x)$ [4, 3].
(4) The entrance law at $\infty$ (in the sense that the number of species at time $t$ goes to $\infty$ as $t\downarrow 0$) of the nested Kingman coalescent is unique. (5) Recall the notion of dust solution of Definition 1.6. For a given dust solution, we conjecture the existence of a scaling such that the nested Kingman coalescent converges to this solution. (6) For the sake of simplicity, we have only considered a Kingman species coalescent in this manuscript. For a general $\Lambda$-coalescent, we expect the same type of results to hold. More precisely, the coagulation term in the IPDE should be replaced by the coagulation kernel described in [7, Proposition 3]. 2. Weak solutions and branching CSBP 2.1. Main assumptions In this section, we consider the case where $\psi$ is of the form $$\psi(x)\ =\ ax+bx^{2}\ +\ \int_{(0,\infty)}(\exp(-rx)-1+rx)\,\pi(dr)$$ where $b\geq 0$, $\pi$ is a $\sigma$-finite measure on $(0,\infty)$ such that $\int_{{\mathbb{R}}^{+}}(r\wedge r^{2})\pi(dr)<\infty$, and $a=\psi^{\prime}(0^{+})\geq 0$, so that $\psi\in\mathscr{H}$ is the Laplace exponent of a spectrally positive (sub)critical Lévy process $Y$ with $$\mathbb{E}_{0}(\exp(-\lambda Y_{t}))=e^{t\psi(\lambda)}\qquad t,\lambda\geq 0.$$ In particular, we can recover the case $\psi(x)=cx^{2}$ by taking $\pi=0$, and we can recover the cases $\psi(x)=cx^{\gamma}$ for $\gamma\in(1,2)$ by taking $b=0$ and a jump measure of the form $\pi(dx)=\frac{\bar{c}}{x^{\gamma+1}}\,dx$, where $\bar{c}$ is some positive constant. Note that under our assumptions, for any $t_{0}\in{\mathbb{R}}$ and $x_{0}>0$, there exists a unique solution $(v(t);t\geq t_{0})$ to the ODE $$\dot{v}\ =\ -\psi(v),\ v(t_{0})=x_{0},$$ and that $\lim_{t\to\infty}v(t)=0$. 2.2. Laplace transform of weak solutions We let $A^{\psi}$ denote the generator of the Lévy process $Y$.
There is a Feller process with generator $A$ given by $Af(\lambda)=\lambda A^{\psi}f(\lambda)$, known as the CSBP (Continuous State Branching Process) with branching mechanism $\psi$. For any $\mu\in M_{F}({\mathbb{R}}^{+})$, $f(\lambda)\ =\ \int_{{\mathbb{R}}^{+}}\exp(-\lambda x)\mu(dx)$ is in the domain of the generator $A$, and further $$\forall\lambda\geq 0,\ \ Af(\lambda)\ =\ \lambda\int_{{\mathbb{R}}^{+}}\psi(x)\exp(-\lambda x)\mu(dx).$$ Let us now consider $(\mu_{t};t\geq 0)$ a weak solution of the Smoluchowski equation and set $$u(t,\lambda)\ =\ \int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\mu_{t}(dx)\qquad\lambda,t\geq 0.$$ Since $f(x)=\exp(-\lambda x)$ with $\lambda>0$ is a test-function, plugging this choice of $f$ into (1.6) shows, after differentiation with respect to $t$, that $u$ satisfies (1.7) for all $\lambda>0$ and $t\geq 0$, that is, $$\partial_{t}u\ =\ Au+a(t)(u^{2}-u),$$ with initial condition $u(0,\lambda)=\int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\nu(dx)$, where $a(t)=1/(t+\delta)$. In the next subsection, we recall some well-known facts on the CSBP with branching mechanism $\psi$. Then we introduce a closely related object: the branching CSBP. In Subsection 2.5, we prove that there exists a unique weak solution to (1.3), which can be expressed in terms of a branching particle system. In Subsection 2.6, we focus on proper weak solutions in the infinite population regime of the Smoluchowski equation (1.3). In Subsection 2.7, we provide additional results in the stable case $\psi(\lambda)=c\lambda^{\gamma}$ for $\gamma\in(1,2]$. 2.3. Continuous-state branching processes (CSBP) We collect here known results about CSBPs. See e.g., [11] or Section 2.2.3. in [17]. A CSBP $Z=(Z_{t};t\geq 0)$ is a Feller process with values in ${\mathbb{R}}^{+}$ with the branching property, namely if $P_{x}$ denotes the law of $Z$ started at $x\geq 0$, then $P_{x}\star P_{y}=P_{x+y}$.
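Throughout this section, the deterministic flow $\dot{u}=-\psi(u)$ does much of the analytic work: it governs both the ODE of Section 2.1 and the CSBP Laplace transforms recalled below. For the quadratic mechanism $\psi(u)=u^{2}$ the flow is explicit, $u_{t}=\lambda/(1+\lambda t)$, which gives a quick numerical sanity check (our own sketch; the step size and test values are arbitrary):

```python
def rk4_flow(psi, lam, t_end, dt):
    """Integrate the flow u' = -psi(u), u(0) = lam, with classical RK4."""
    u, t = lam, 0.0
    while t < t_end - 1e-12:
        h = min(dt, t_end - t)       # shorten the last step if needed
        k1 = -psi(u)
        k2 = -psi(u + h * k1 / 2)
        k3 = -psi(u + h * k2 / 2)
        k4 = -psi(u + h * k3)
        u += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return u

# Quadratic branching mechanism psi(u) = u^2: the flow is explicit,
# u_t = lam / (1 + lam * t), giving an exact benchmark.
lam, t_end = 3.0, 2.0
u = rk4_flow(lambda x: x * x, lam, t_end, 1e-3)
exact = lam / (1 + lam * t_end)
print(abs(u - exact) < 1e-8)
```

For a general (sub)critical mechanism of the form displayed in Section 2.1, the same integrator applies; only the explicit benchmark is specific to the quadratic case.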
It is well-known [19, 10] that CSBPs are in one-to-one correspondence with Lévy processes with no negative jumps via several different bijections, including a random time-change known as Lamperti’s transform. If $\psi$ is the Laplace exponent of such a Lévy process $Y$ (assumed to be (sub)critical), then the CSBP $Z$ associated to $Y$ is called the CSBP with branching mechanism $\psi$ and the generator $A$ of $Z$ is given by $$Af(\lambda)=\lambda A^{\psi}f(\lambda)\qquad\lambda\geq 0,$$ for any $f$ in the domain of the generator $A^{\psi}$ of $Y$. Further, (2.10) $$E_{x}(\exp(-\lambda Z_{t}))=\ \exp(-xu_{t}),\ \mbox{where}\ \dot{u}=-\psi(u),\ \ u_{0}=\lambda.$$ Note that by the branching property, $0$ is an absorbing state for $Z$. It is accessible iff $1/\psi$ is integrable at $\infty$ (Grey’s condition [14]). We then denote by $T_{0}$ the first hitting time of 0 by $Z$. By the branching property, the law of $Z_{t}$ is infinitely divisible. More specifically, if Grey’s condition is fulfilled, then under $P_{x}$, $Z_{t}$ is equal to the sum $\sum_{i}Z_{t}^{(i)}$, where $(Z_{t}^{(i)})$ are the atoms of a Poisson point process with intensity measure $xN$, where $N$ is a $\sigma$-finite measure on càdlàg processes started at 0 and with non-negative values. We will call $N$ the entrance measure at 0 of the CSBP. In particular, for any $\lambda\geq 0$, (2.11) $$E_{x}(\exp(-\lambda Z_{t}))=\exp\left(-xN\left(1-e^{-\lambda Z_{t}}\right)\right),$$ and so for any measurable functional of paths $G$ such that $|G|\leq KT_{0}$ for some $K$, (2.12) $$\lim_{x\downarrow 0}x^{-1}E_{x}(G)=N(G)<\infty.$$ Note in particular that $N(T_{0}>t)<\infty$ for all $t>0$. 2.4. Duality between a branching and a coalescing particle system In this section, we fix an ultrametric binary tree $\mathbbm{t}$ with $n$ labelled leaves and depth $T$ (i.e., the distance from the root to each leaf is $T$).
We let $N_{t}(\mathbbm{t})$ denote the number of points in $\mathbbm{t}$ at time $t$, i.e., at distance $t$ from the root. We will now introduce two particle systems, a coalescing particle system initialized at the leaves of $\mathbbm{t}$ and a branching particle system initialized at the root of $\mathbbm{t}$. Let us first introduce the coalescing particle system. We start by assigning a mark $\lambda_{i}\geq 0$ to the leaf of the tree $\mathbbm{t}$ labelled by $i$. Then we let the marks propagate from the leaves to the root according to the following rules: (i) along each branch, the marking evolves according to the deterministic dynamics $\dot{x}=-\psi(x)$, and (ii) when two branches merge, we add the corresponding two marks. We call $F(\mathbbm{t},\lambda)$ the resulting mark at the root. Now, we consider a system of branching particles with random mass running along the branches of the tree, from the root to the leaves, according to the following rules: (i) we start with one particle at the root, (ii) at each branching point the incoming particle, carrying say mass $x$, duplicates into two copies of itself, one copy for each branch, each with mass $x$, and (iii) along each branch, the mass of the particle on that branch evolves independently according to a CSBP with branching mechanism $\psi$. See Fig. 2. Let $Q_{x}^{\mathbbm{t}}$ denote the law of the branching particle system started with one particle with mass $x$ at time 0 and let $\mathcal{Z}^{\mathbbm{t}}_{t}=(Z_{t}^{i})_{1\leq i\leq N_{t}(\mathbbm{t})}$ denote the masses carried by the particles at time $t$. 
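In the quadratic case $\psi(u)=u^{2}$, the flow $\dot{x}=-\psi(x)$ is explicit (a mark $x$ becomes $x/(1+x\ell)$ over a branch of length $\ell$), so the leaf-to-root mark propagation defining $F(\mathbbm{t},\lambda)$ reduces to a short recursion. The sketch below is our own illustration; the tuple encoding of the tree (each node storing its distance from the root) is an ad hoc choice.

```python
def quad_flow(u, ell):
    # Mark transport along a branch of length ell for psi(u) = u^2:
    # the solution of u' = -u^2 started at u is u / (1 + u * ell).
    return u / (1 + u * ell)

def root_mark(node, depth_of_parent=0.0):
    """Propagate leaf marks toward the root of an ultrametric tree.

    A node is either ('leaf', depth, mark) or ('node', depth, left, right).
    Along each branch the mark follows u' = -psi(u); when two branches
    merge, the incoming marks add.  The value at the root is F(t, lambda)."""
    kind, depth = node[0], node[1]
    if kind == 'leaf':
        mark = node[2]
    else:
        mark = root_mark(node[2], depth) + root_mark(node[3], depth)
    return quad_flow(mark, depth - depth_of_parent)

# Two leaves at depth T = 1 merging at depth 0.5, both carrying mark 1:
# each mark becomes 1/(1 + 0.5) = 2/3, they add to 4/3, and the root edge
# maps 4/3 to (4/3)/(1 + (4/3)*0.5) = 0.8.
tree = ('node', 0.5, ('leaf', 1.0, 1.0), ('leaf', 1.0, 1.0))
print(root_mark(tree))
```

For a general branching mechanism one would replace `quad_flow` by a numerical integration of $\dot{u}=-\psi(u)$ over each branch; the recursion itself is unchanged.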
Finally, as a direct consequence of the branching property, it is not hard to see that $\mathcal{Z}^{\mathbbm{t}}$ is infinitely divisible, and as for a simple CSBP, under Grey’s condition, $\mathcal{Z}^{\mathbbm{t}}$ under $Q_{x}^{\mathbbm{t}}$ can be decomposed into a Poisson sum of elementary processes starting from $0$ with intensity measure $xM^{\mathbbm{t}}$, where $M^{\mathbbm{t}}$ is the entrance measure at 0 of ${\mathcal{Z}}^{\mathbbm{t}}$. In view of (2.12), we have $$M^{\mathbbm{t}}(G)=\lim_{x\downarrow 0}x^{-1}Q_{x}^{\mathbbm{t}}(G)$$ for any measurable functional of paths $G$ such that $|G|\leq KT_{0}$ for some $K$ (where here $T_{0}$ is the extinction time of the particle on the root edge of $\mathbbm{t}$, set to $+\infty$ if the particle splits before going extinct). Now, analogously to (2.10), there exists a nice characterization of the Laplace transform of $\mathcal{Z}^{\mathbbm{t}}_{T}$. Proposition 2.1. For any non-negative numbers $x$ and $(\lambda_{i})_{1\leq i\leq n}$, (2.13) $${\mathbb{E}}_{x}\left(\exp(-\lambda\cdot{\mathcal{Z}}^{\mathbbm{t}}_{T})\right)=\exp(-xF(\mathbbm{t},\lambda)),$$ with the notation $\lambda\cdot{\mathcal{Z}}^{\mathbbm{t}}_{T}=\sum_{i=1}^{n}\lambda_{i}Z_{T}^{i}$. Under Grey’s condition, we additionally have (2.14) $$M^{\mathbbm{t}}(1-\exp(-\lambda\cdot{\mathcal{Z}}_{T}^{\mathbbm{t}}))\ =\ F(\mathbbm{t},\lambda).$$ Proof. Using (2.10), it is straightforward to get (2.13) by induction on the number of nodes of the tree. Under Grey’s condition, analogously to (2.11), we have $${\mathbb{E}}_{x}\left(\exp(-\lambda\cdot{\mathcal{Z}}^{\mathbbm{t}}_{T})\right)=\exp\left\{-xM^{\mathbbm{t}}(1-\exp(-\lambda\cdot{\mathcal{Z}}^{\mathbbm{t}}_{T}))\right\},$$ which yields (2.14). ∎ 2.5. Smoluchowski equation in finite population Let $P_{t}$ denote the semigroup and $A$ the infinitesimal generator of a time-homogeneous Feller process $Z$ with values in $[0,\infty)$.
We are interested in the case when $Af(\lambda)=\lambda A^{\psi}f(\lambda)$ but do not immediately restrict our study to this case. Let $\mathcal{E}_{A}$ be the space of continuous functions $f:[0,\infty)\to[0,1]$ such that $Af$ is a well-defined continuous, bounded function on $(0,\infty)$ and $f(Z_{t})-\int_{0}^{t}Af(Z_{s})\,ds$ is a martingale (or equivalently $t\mapsto P_{t}f(x)$ is differentiable at 0 with derivative $Af(x)$ for all $x\geq 0$). Further, let $\mathcal{E}_{A}^{\prime}$ be the space of two-variable continuous functions $f:[0,\infty)\times[0,\infty)\to[0,1]$ such that $f(t,\cdot)\in\mathcal{E}_{A}$ and $f(\cdot,x)\in{\mathcal{C}}^{1}$. Let $a:[0,\infty)\to[0,\infty)$ be a continuous map and fix $T>0$. Let us consider a time-inhomogeneous system of branching particles, where each particle carries an individual mass evolving like the process $Z$ and each particle independently gives birth at rate $\tilde{a}(t)=a(T-t)$ (at time $t$) to a copy of itself (i.e., a branching particle with mass $x$ splits into two particles, each with mass $x$). Let $\mathcal{Z}_{t}$ denote the state of this system (e.g., its empirical measure) at time $t$, assumed to be càdlàg. We assume that the system starts at time 0 with one particle carrying mass $x$, and we then let $Q_{x}^{T}$ denote the law of $\mathcal{Z}$ up to time $T$. We also let $N_{t}$ denote the number of particles at time $t$ and $(Z_{t}^{i})_{1\leq i\leq N_{t}}$ the masses carried by these particles. Remark 2.2. Let $\mathbbm{t}$ be an ultrametric tree with depth $T$. When $Af(\lambda)=\lambda A^{\psi}f(\lambda)$, and if we condition on the genealogy of the process up to time $T$ being $\mathbbm{t}$, the process $\mathcal{Z}$ coincides with $\mathcal{Z}^{\mathbbm{t}}$ as defined in the previous section. Lemma 2.3.
For any $g\in\mathcal{E}_{A}$ there is at most one solution $u\in\mathcal{E}_{A}^{\prime}$ to the PDE (or IPDE) (2.15) $$\partial_{t}u\ =\ Au\ +\ a(t)(u^{2}-u),$$ with initial condition $u(0,x)=g(x)$. If $v$, defined by $$v(T,x)=Q_{x}^{T}\left(\prod_{i=1}^{N_{T}}g(Z_{T}^{i})\right)\qquad T>0,x\geq 0,$$ is in $\mathcal{E}_{A}^{\prime}$, then $u=v$. Before proving this lemma, we wish to state the relevant corollary regarding weak solutions to (1.3), taking $A$ in the previous lemma equal to the generator of the CSBP $Z$ with branching mechanism $\psi$. Theorem 2.4. Assume that $\psi$ is the Laplace exponent of a (sub)critical spectrally positive Lévy process. (i) There exists a unique weak solution $(\mu_{t};t\geq 0)$ to the Smoluchowski equation (1.3) with initial distribution $\nu$ and inverse population $\delta>0$. (ii) For any $T\geq 0$, $\mu_{T}$ is equal to the law of $F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))$, where $\bf T$ is the time-inhomogeneous tree started at 0 with one particle, stopped at time $T$ and with birth rate $\tilde{a}(t)=1/(T-t+\delta)$, and the $(W_{i})$ are iid with law $\nu$. Proof. Let us apply Lemma 2.3 with $A$ defined by $Af(\lambda)=\lambda A^{\psi}f(\lambda)$, the generator of the CSBP $Z$ with branching mechanism $\psi$, and $a(t)=\frac{1}{t+\delta}$. Then Equation (2.15) is the same equation as (1.7), with initial condition $g(\lambda)=\int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\nu(dx)$, which we can write $$g(\lambda)={\mathbb{E}}(\exp(-\lambda W)),$$ where $W$ denotes a rv with law $\nu$. Note that $g$ takes values in $[0,1]$. Let us check that $g\in\mathcal{E}_{A}$. Recall that for any $f$ of the form $f(\lambda)\ =\ \int_{{\mathbb{R}}^{+}}\exp(-\lambda x)\mu(dx)$, $f$ is in the domain of $A$, and further $Af(\lambda)=\lambda\int_{{\mathbb{R}}^{+}}\psi(x)\exp(-\lambda x)\mu(dx)$.
This shows that $$Ag(\lambda)=\lambda\int_{{\mathbb{R}}^{+}}\psi(x)\exp(-\lambda x)\nu(dx),$$ so that $Ag$ is a well-defined continuous function on $(0,\infty)$ (since $\psi$ increases at most polynomially at $\infty$). In addition, it is well-known (see e.g. [10]) that $$e^{-xZ_{t}}-\psi(x)\int_{0}^{t}Z_{s}\,e^{-xZ_{s}}\,ds$$ is a martingale for any $x\geq 0$, so by integrating $x$ wrt the probability measure $\nu$, we get that $g(Z_{t})-\int_{0}^{t}Ag(Z_{s})\,ds$ is also a martingale. By dominated convergence, $Ag$ vanishes at $\infty$ and so is bounded, and we conclude that $g\in\mathcal{E}_{A}$. So by Lemma 2.3 there is at most one solution $v\in\mathcal{E}_{A}^{\prime}$ to (1.7), given by $$v(T,\lambda)=Q_{\lambda}^{T}\left(\prod_{i=1}^{N_{T}}g(Z_{T}^{i})\right).$$ Now let $(\mu_{t};t\geq 0)$ be a weak solution to the Smoluchowski equation (1.3) and set $$u(t,\lambda)\ =\ \int_{{\mathbb{R}}^{+}}e^{-\lambda x}\,\mu_{t}(dx)\qquad\lambda,t\geq 0.$$ Recall that $u$ satisfies (1.7) with initial condition $u(0,\lambda)=g(\lambda)$. The exact same reasoning used to prove that $g\in\mathcal{E}_{A}$ shows that $u(t,\cdot)\in\mathcal{E}_{A}$ for all $t$, and because $u$ satisfies (1.7), $$\partial_{t}u(t,\lambda)-a(t)(u^{2}-u)(t,\lambda)=\lambda A^{\psi}u(t,\lambda)=\lambda\int_{{\mathbb{R}}^{+}}\psi(x)\exp(-\lambda x)\mu_{t}(dx),$$ which is continuous in $t$. Since $a$ is continuous, we get that $u(\cdot,\lambda)$ is of class ${\mathcal{C}}^{1}$ and so $u\in\mathcal{E}_{A}^{\prime}$. This shows that $u=v$, so that $$u(T,\lambda)=Q_{\lambda}^{T}\left(\prod_{i=1}^{N_{T}}{\mathbb{E}}(\exp(-Z_{T}^{i}W_{i})|Z_{T}^{i})\right)={\mathbb{E}}\left[Q_{\lambda}^{T}\left(\exp\left(-\sum_{i=1}^{N_{T}}W_{i}Z_{T}^{i}\right)\right)\right],$$ where ${\mathbb{E}}$ is the expectation taken wrt the $(W_{i})$, which are independent copies of $W$ (and we have applied the Fubini–Tonelli theorem).
From (2.13), we get $$u(T,\lambda)={\mathbb{E}}\left[Q_{\lambda}^{T}\left(\exp\left(-\sum_{i=1}^{N_{T}}W_{i}Z_{T}^{i}\right)\right)\right]=\mathbb{E}_{T}\left[\exp\left(-\lambda F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))\right)\right],$$ where now ${\mathbb{E}}_{T}$ is the expectation taken wrt the Yule tree $\bf T$ with branching parameter $\tilde{a}(t)=1/(T-t+\delta)$ stopped at time $T$ and the iid rvs $(W_{i})$. It follows that $$\int_{{\mathbb{R}}^{+}}e^{-\lambda x}\mu_{T}(dx)=u(T,\lambda)=\mathbb{E}\left[\exp\left(-\lambda F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))\right)\right],$$ and by the injectivity of the Laplace transform, $\mu_{T}$ is the law of $F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))$. For the sake of completeness, in Appendix A we check that $\mu_{T}$, defined as the law of $F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))$, is indeed a solution to (1.3). (As a matter of fact, the existence of a weak solution will be proved in the MK-V section (see Theorem 3.4), and thus checking that $\mu$ is indeed a weak solution is not formally needed.) ∎ Proof of Lemma 2.3. Recall that $P_{t}$ denotes the semigroup of $Z$. We will use the notation $E_{x}$ to denote the expectation associated with its law when started from $x$. We extend the semigroup and generator by defining, for any $f\in\mathcal{E}_{A}^{\prime}$, $$\bar{P}_{s}f(t,x)=E_{x}(f(t+s,Z_{s}))\quad\mbox{ and }\quad\bar{A}f(t,x)=Af(t,x)+\partial_{t}f(t,x),$$ so that in particular, $$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}\left(\bar{P}_{\varepsilon}f(t,x)-f(t,x)\right)=\bar{A}f(t,x).$$ Let $u\in\mathcal{E}_{A}^{\prime}$ be a solution to (2.16) with initial condition $g$. Fix $T>0$ and recall the system of branching particles defined before the statement of the lemma. Because the dynamics of the system are time-inhomogeneous, we will need to denote by $Q^{T}_{t,x}$ the law of $\mathcal{Z}$ when started with a single particle with mass $x$ at time $t$.
In particular, $Q^{T}_{x}=Q^{T}_{0,x}$. Let $\mathcal{F}_{t}$ denote the $\sigma$-field generated by $\mathcal{Z}_{t}$. Let $\mathcal{Z}^{u}$ be the $(\mathcal{F}_{t})$-adapted process given by $$\forall t\in[0,T],\ \ \ \mathcal{Z}^{u}_{t}=\prod_{i=1}^{N_{t}}\tilde{u}(t,Z_{t}^{i}),\ \ \mbox{where }\tilde{u}(t,x):=u(T-t,x),$$ $N_{t}$ is the number of particles present at time $t$ and $Z_{t}^{i}$ is the mass of particle $i$ (note from the definition of $\mathcal{Z}_{t}^{u}$ that it does not depend on the labelling chosen). We aim to prove that $Q^{T}_{x}(\mathcal{Z}^{u}_{t})$ is constant as a function of $t$. Let $t,\varepsilon$ be such that $0\leq t\leq t+\varepsilon\leq T$. Conditional on $\mathcal{F}_{t}$, denote by $\tau_{i}$ the time when the $i$-th particle splits, $i=1,\ldots,N_{t}$. Then by the branching property, $$\displaystyle Q^{T}(\mathcal{Z}^{u}_{t+\varepsilon}\mid\mathcal{F}_{t})=\mathbb{P}(\tau_{i}>t+\varepsilon,\forall i)\prod_{i=1}^{N_{t}}E_{Z_{t}^{i}}(\tilde{u}(t+\varepsilon,Z_{\varepsilon}))\\ \displaystyle+\sum_{j=1}^{N_{t}}\mathbb{P}(\tau_{i}>t+\varepsilon,\forall i\not=j)\int_{t}^{t+\varepsilon}\mathbb{P}(\tau_{j}\in dv)\,\,E_{Z_{t}^{j}}(Q^{T}_{v,Z_{v}}(\mathcal{Z}^{u}_{t+\varepsilon})^{2})\,\prod_{i\not=j}E_{Z_{t}^{i}}(\tilde{u}(t+\varepsilon,Z_{\varepsilon}))+C_{\varepsilon},$$ where $C_{\varepsilon}\leq\mathbb{P}(B_{\varepsilon}\geq 2)$, with $$B_{\varepsilon}:=\#\{i\leq N_{t}:t\leq\tau_{i}\leq t+\varepsilon\}.$$ Now $B_{\varepsilon}$ is a binomial rv with parameters $N_{t}$ and $u_{\varepsilon}:=1-e^{-\int_{t}^{t+\varepsilon}\tilde{a}(u)\,du}$, so $$\mathbb{P}(B_{\varepsilon}\geq 2)\leq\frac{N_{t}(N_{t}-1)}{2}\,u_{\varepsilon}^{2}\leq N_{t}^{2}M^{2}\varepsilon^{2},$$ where $M:=\sup_{t\in[0,T]}a(t)$.
Re-arranging, we get $$\displaystyle\frac{1}{\varepsilon}(Q^{T}(\mathcal{Z}^{u}_{t+\varepsilon}\mid\mathcal{F}_{t})-\mathcal{Z}^{u}_{t})=\frac{1}{\varepsilon}\left(\prod_{i=1}^{N_{t}}E_{Z_{t}^{i}}(\tilde{u}(t+\varepsilon,Z_{\varepsilon}))-\mathcal{Z}^{u}_{t}\right)\\ \displaystyle-\frac{1}{\varepsilon}(1-e^{-N_{t}\int_{t}^{t+\varepsilon}\tilde{a}(u)\,du})\prod_{i=1}^{N_{t}}E_{Z_{t}^{i}}(\tilde{u}(t+\varepsilon,Z_{\varepsilon}))\\ \displaystyle+\frac{e^{-(N_{t}-1)\int_{t}^{t+\varepsilon}\tilde{a}(u)\,du}}{\varepsilon}\sum_{j=1}^{N_{t}}\int_{t}^{t+\varepsilon}\tilde{a}(v)dv\,e^{-\int_{t}^{v}\tilde{a}(u)\,du}\,E_{Z_{t}^{j}}(Q^{T}_{v,Z_{v}}(\mathcal{Z}^{u}_{t+\varepsilon})^{2})\,\prod_{i\not=j}E_{Z_{t}^{i}}(\tilde{u}(t+\varepsilon,Z_{\varepsilon}))\\ \displaystyle+\frac{1}{\varepsilon}C_{\varepsilon}.$$ Because $u\in\mathcal{E}_{A}^{\prime}$ and $a$ is continuous, the right-hand side of the last equality converges as $\varepsilon\downarrow 0$ to $$\displaystyle\sum_{i=1}^{N_{t}}(A\tilde{u}(t,Z_{t}^{i}))\prod_{j\not=i}\tilde{u}(t,Z_{t}^{j})-\tilde{a}(t)\,N_{t}\mathcal{Z}_{t}^{u}+\tilde{a}(t)\,\sum_{j=1}^{N_{t}}\tilde{u}(t,Z_{t}^{j})\,\mathcal{Z}_{t}^{u}$$ $$\displaystyle=\sum_{i=1}^{N_{t}}\left(\prod_{j\not=i}\tilde{u}(t,Z_{t}^{j})\right)\,\left[A\tilde{u}(t,Z_{t}^{i})-\tilde{a}(t)\,\tilde{u}(t,Z_{t}^{i})(1-\tilde{u}(t,Z_{t}^{i}))\right].$$ Now the last quantity is zero because for any $x$ $$\displaystyle A\tilde{u}(t,x)-\tilde{a}(t)\,\tilde{u}(t,x)(1-\tilde{u}(t,x))\\ \displaystyle=Au(T-t,x)-\partial_{t}u(T-t,x)-a(T-t)\,u(T-t,x)(1-u(T-t,x)),$$ which is zero by (2.16). So we have proved $$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}(Q^{T}(\mathcal{Z}^{u}_{t+\varepsilon}\mid\mathcal{F}_{t})-\mathcal{Z}^{u}_{t})=0.$$ We would now like to take expectations inside the limit.
Since $u$, and so $\mathcal{Z}^{u}$, take values in $[0,1]$, we first have $$\displaystyle\left|\frac{1}{\varepsilon}(Q^{T}(\mathcal{Z}^{u}_{t+\varepsilon}\mid\mathcal{F}_{t})-\mathcal{Z}^{u}_{t})\right|$$ $$\displaystyle\leq$$ $$\displaystyle\left|\frac{1}{\varepsilon}\left(\prod_{i=1}^{N_{t}}\bar{P}_{\varepsilon}\tilde{u}(t,Z_{t}^{i})-\mathcal{Z}^{u}_{t}\right)\right|+2MN_{t}+N_{t}^{2}M^{2}\varepsilon.$$ Now because $\bar{A}\tilde{u}=\tilde{a}(t)(\tilde{u}-\tilde{u}^{2})$, $\bar{A}\tilde{u}$ takes values in $[0,M]$, and since $$\frac{\bar{P}_{\varepsilon}\tilde{u}(t,x)-\tilde{u}(t,x)}{\varepsilon}=\frac{1}{\varepsilon}\int_{0}^{\varepsilon}\bar{P}_{s}\bar{A}\tilde{u}(t,x)\,ds,$$ we get $$0\leq\frac{\bar{P}_{\varepsilon}\tilde{u}(t,x)-\tilde{u}(t,x)}{\varepsilon}\leq M.$$ So we can write $$\frac{1}{\varepsilon}\left(\prod_{i=1}^{N_{t}}\bar{P}_{\varepsilon}\tilde{u}(t,Z_{t}^{i})-\mathcal{Z}^{u}_{t}\right)=\frac{H(\varepsilon)-H(0)}{\varepsilon},$$ where $$H(\varepsilon)=\prod_{i=1}^{N_{t}}(x_{i}+\varepsilon y_{i}),$$ with $x_{i}=\tilde{u}(t,Z_{t}^{i})$ and $y_{i}=\frac{\bar{P}_{\varepsilon}\tilde{u}(t,Z_{t}^{i})-\tilde{u}(t,Z_{t}^{i})}{\varepsilon}$, so that $0\leq x_{i}\leq 1$ and $0\leq y_{i}\leq M$.
This shows that for any $z\in[0,\varepsilon]$ $$0\leq H^{\prime}(z)\leq H^{\prime}(\varepsilon)=\sum_{i=1}^{N_{t}}y_{i}\prod_{j\not=i}(x_{j}+\varepsilon y_{j})\leq N_{t}M(1+\varepsilon M)^{N_{t}-1}.$$ Then by the Mean Value Theorem $$\left|\frac{1}{\varepsilon}\left(\prod_{i=1}^{N_{t}}\bar{P}_{\varepsilon}\tilde{u}(t,Z_{t}^{i})-\mathcal{Z}^{u}_{t}\right)\right|=\left|\frac{H(\varepsilon)-H(0)}{\varepsilon}\right|\leq N_{t}M(1+\varepsilon M)^{N_{t}-1}.$$ Finally we get $$\displaystyle\left|\frac{1}{\varepsilon}(Q^{T}(\mathcal{Z}^{u}_{t+\varepsilon}\mid\mathcal{F}_{t})-\mathcal{Z}^{u}_{t})\right|$$ $$\displaystyle\leq$$ $$\displaystyle N_{t}M(1+\varepsilon M)^{N_{t}-1}+2MN_{t}+N_{t}^{2}M^{2}\varepsilon=:S_{t}(\varepsilon).$$ Since under $Q^{T}_{x}$, $N_{t}$ is dominated by the number of lineages at time $t$ in a Yule process with birth rate $M$ started at 1, it is geometrically distributed and so there is $\varepsilon_{0}$ such that for any $\varepsilon\in[0,\varepsilon_{0}]$, $S_{t}(\varepsilon)\leq S_{t}(\varepsilon_{0})$ and $\mathbb{E}(S_{t}(\varepsilon_{0}))<\infty$. Then the Dominated Convergence Theorem ensures that $$\lim_{\varepsilon\downarrow 0}\frac{1}{\varepsilon}(Q^{T}_{x}(\mathcal{Z}^{u}_{t+\varepsilon})-Q^{T}_{x}(\mathcal{Z}^{u}_{t}))=0.$$ This proves that $Q^{T}_{x}(\mathcal{Z}^{u}_{t})$ is constant as a function of $t$, so that $$u(T,x)=Q^{T}_{x}(\mathcal{Z}^{u}_{0})=Q^{T}_{x}(\mathcal{Z}^{u}_{T})=Q^{T}_{x}\left(\prod_{i=1}^{N_{T}}u(0,Z_{T}^{i})\right)=Q^{T}_{x}\left(\prod_{i=1}^{N_{T}}g(Z_{T}^{i})\right)=v(T,x),$$ which yields the announced result. ∎ Remark 2.5.
By the branching property, $$\displaystyle v(T+\varepsilon,x)-v(T,x)$$ $$\displaystyle=$$ $$\displaystyle-v(T,x)+Q^{T+\varepsilon}_{x}\left(\prod_{i=1}^{N_{T+\varepsilon}}g(Z_{T+\varepsilon}^{i}),N_{\varepsilon}=1\right)+a(T)\varepsilon\,v(T,x)^{2}+o(\varepsilon)$$ $$\displaystyle=$$ $$\displaystyle-v(T,x)+(1-a(T)\varepsilon)\,E_{x}(v(T,Z_{\varepsilon}))+a(T)\varepsilon\,v(T,x)^{2}+o(\varepsilon)$$ $$\displaystyle=$$ $$\displaystyle E_{x}(v(T,Z_{\varepsilon}))-v(T,x)+a(T)\varepsilon\,(v(T,x)^{2}-v(T,x))+o(\varepsilon).$$ So for any $t,x\geq 0$, $$\lim_{\varepsilon\downarrow 0}\frac{(v(t+\varepsilon,x)-v(t,x))-(P_{\varepsilon}v(t,x)-v(t,x))}{\varepsilon}=a(t)\,(v(t,x)^{2}-v(t,x)).$$ If we could prove that $v(\cdot,x)$ is of class ${\mathcal{C}}^{1}$ or that $v(t,\cdot)\in\mathcal{E}_{A}$, then the left-hand side would equal $\partial_{t}v-Av$ and $v$ would indeed be a solution to (2.16). 2.6. Proper solutions for the $\infty$-pop. Smoluchowski equation In addition to the assumptions of Theorem 2.4, we now assume Grey’s condition. Under this assumption, $0$ is accessible, and since $Y$ is assumed to be (sub)critical, $$P_{x}(T_{0}<\infty)=1\qquad x\geq 0,$$ where we recall that $T_{0}=\inf\{t\geq 0:Z_{t}=0\}$. As in the proof of Theorem 2.4, we start with a general lemma, which is the $\infty$-pop. analog of Lemma 2.3. We make the same general assumptions, with the notable difference that $a$ is now only defined on $(0,T)$. We denote by $T_{\mathscr{M}}$ the time of mass extinction of the branching particle system with birth rate $\tilde{a}$ defined on $[0,T)$, i.e., the first time when all particles carry zero mass. Under $Q_{x}^{T}$, we denote by ${\mathscr{M}}$ the event $\{T_{\mathscr{M}}<T\}$. Lemma 2.6. Assume that $\int_{(0,T)}a(t)\,dt=\infty$.
Then there exists at most one solution $u\in\mathcal{E}_{A}^{\prime}$ to the PDE (or IPDE) (2.16) $$\displaystyle\partial_{t}u\ =\ Au\ +\ a(t)(u^{2}-u)$$ defined for $t>0$, such that $\limsup_{t\downarrow 0}\sup_{y\in[x,\infty)}u(t,y)<1$ for all $x>0$ and $u(t,0)=1$ for all $t>0$. In addition, this solution is given for any $T>0$ by $$u(T,x)=Q^{T}_{x}({\mathscr{M}}).$$ Before proving this lemma, we wish to state the relevant corollary regarding proper (weak) solutions to (1.3) when $A=\lambda A^{\psi}$. Recall that $M^{\mathbbm{t}}$ is the entrance measure at 0 of the branching particle system conditional on the genealogy $\mathbbm{t}$, where particle masses evolve like the CSBP $Z$ with branching mechanism $\psi$ (and at each branching event in $\mathbbm{t}$, the particle with mass $x$ undergoing division splits into two particles, each with mass $x$). Theorem 2.7. Let $\psi$ be the Laplace exponent of a spectrally positive and (sub)critical Lévy process such that $1/\psi$ is integrable at $\infty$. Then (i) There exists a unique proper weak solution $(\mu_{t};t>0)$ to the Smoluchowski equation (1.3). (ii) For any $T>0$, $\mu_{T}$ is the law of $M^{\bf T}(\mathscr{M}^{c})$, where $\bf T$ is the time-inhomogeneous binary tree started at 0 with one particle, stopped at time $T$ and with birth rate $\tilde{a}(t)=1/(T-t)$. Proof. The proof follows the same lines as the proof of Theorem 2.4, but here in the absence of an initial condition. If $(\mu_{t};t>0)$ is a weak solution to the Smoluchowski equation (1.3), then its Laplace transform $u(t,\lambda)$ is a solution of (2.16). Further, the fact that the solution is proper implies that $$\limsup_{t\downarrow 0}\sup_{y\in[x,\infty)}u(t,y)=\limsup_{t\downarrow 0}u(t,x)\ <\ 1,$$ and as a consequence of the previous lemma $u(T,x)=Q^{T}_{x}({\mathscr{M}})$.
The same application of the branching property as in the proof of Theorem 2.4 shows that $$Q^{T}_{x}({\mathscr{M}})=\mathbb{E}\left[\exp\left(-xM^{\bf T}(\mathscr{M}^{c})\right)\right],$$ where the expectation is taken wrt the binary tree $\bf T$ with branching rate $\tilde{a}(t)=1/(T-t)$. More specifically, conditionally on ${\bf T}=\mathbbm{t}$ we can write $$\mathbbm{1}_{\mathscr{M}}=\exp(-\sum_{i}\chi_{i}),$$ where the sum is taken over the atoms of the Poisson process of branching particles with intensity $xM^{\mathbbm{t}}$, and $\chi_{i}=0$ if the $i$-th particle has zero mass in its descendance at time $T$, and $+\infty$ otherwise. The result is obtained by an application of the exponential formula and by taking the expectation wrt $\bf T$. As a consequence, we get that $\mu_{T}$ is the law of $M^{\bf T}(\mathscr{M}^{c})$. Finally, it remains to show the existence of a proper solution. One option consists in checking that ${\mathcal{L}}(M^{\bf T}(\mathscr{M}^{c}))$ is a solution. Alternatively, we will provide a construction through the MK-V approach in the next section. ∎ Proof of Lemma 2.6. Let us first prove the uniqueness part of the statement. Let $u$ be a solution to (2.16) with the requested properties. Following the proof of Lemma 2.3, $Q^{T}_{x}(\mathcal{Z}^{u}_{t})$ is constant as a function of $t\in[0,T-\varepsilon]$ for any $\varepsilon\in(0,T)$, so that (2.17) $$u(T,x)=Q^{T}_{x}(\mathcal{Z}^{u}_{0})=Q^{T}_{x}(\mathcal{Z}^{u}_{T-\varepsilon})=Q^{T}_{x}\left(\prod_{i=1}^{N_{T-\varepsilon}}u(\varepsilon,Z_{T-\varepsilon}^{i})\right).$$ If we set $X_{\varepsilon}:=\prod_{i=1}^{N_{T-\varepsilon}}u(\varepsilon,Z_{T-\varepsilon}^{i})$, we can write $$u(T,x)=Q^{T}_{x}(X_{\varepsilon}\mathbbm{1}_{T_{\mathscr{M}}<T})+Q^{T}_{x}(X_{\varepsilon}\mathbbm{1}_{T_{\mathscr{M}}\geq T}).$$ On $\mathscr{M}$, because $u(\varepsilon,0)=1$, we have $X_{\varepsilon}=1$ for any $\varepsilon$ such that $T_{\mathscr{M}}<T-\varepsilon$.
By dominated convergence, $$\lim_{\varepsilon\downarrow 0}Q^{T}_{x}(X_{\varepsilon}\mathbbm{1}_{\mathscr{M}})=Q^{T}_{x}(\mathscr{M}).$$ Then it only remains to show that $$\lim_{\varepsilon\downarrow 0}Q^{T}_{x}(X_{\varepsilon}\mathbbm{1}_{\mathscr{M}^{c}})=0.$$ First observe that because $a$ is not integrable in the neighborhood of $0+$, the birth rate $\tilde{a}$ is not integrable in the neighborhood of $T-$, so that a.s. $\lim_{\varepsilon\downarrow 0}N_{T-\varepsilon}=+\infty$. On $\{T_{\mathscr{M}}\geq T\}$, there is at least one particle born before $T$ which carries positive mass up until time $T$. The probability that the mass of this particle vanishes exactly at time $T$ is zero. As a consequence, there are $\eta,\epsilon>0$ such that this focal particle has mass larger than $\eta$ on $[T-\epsilon,T]$. Let $t_{n}$ be the times at which the focal particle gives birth, where $(t_{n})$ is an increasing sequence converging to $T$. Let $\eta_{n}\geq\eta$ be the mass carried by the focal particle at time $t_{n}$. Then conditional on $(t_{n})$ and $(\eta_{n})$, let $\alpha_{n}$ denote the probability that the particle born at time $t_{n}$ with mass $\eta_{n}$ carries mass always larger than $\eta/2$ between $t_{n}$ and $T$. Since $Z$ is Feller, the sequence $(\alpha_{n})$ is bounded away from 0, and by independence of these particles conditional on $(\eta_{n})$, infinitely many of them carry mass larger than $\eta/2$ on $[t_{n},T]$. As a consequence, a.s.
on $\{T_{\mathscr{M}}\geq T\}$, there is $\eta>0$ such that $$X_{\varepsilon}\leq\prod_{i=1}^{N^{\eta}_{T-\varepsilon}}u(\varepsilon,Z^{\eta,i}_{T-\varepsilon}),$$ where $N^{\eta}_{t}$ is the number of particles at time $t$ which carry mass larger than $\eta/2$ on $[t,T]$ and $(Z^{\eta,i}_{t})$ are their masses, which satisfy $$\lim_{\varepsilon\downarrow 0}N^{\eta}_{T-\varepsilon}=+\infty\quad\mbox{ and }\quad Z^{\eta,i}_{T-\varepsilon}\geq\eta/2.$$ Since $\limsup_{\varepsilon\downarrow 0}\sup_{x\in[\eta/2,\infty)}u(\varepsilon,x)<1$, $\lim_{\varepsilon\downarrow 0}X_{\varepsilon}=0$ a.s. on ${\mathscr{M}}^{c}$. The result follows from dominated convergence. The fact that $u$ thus defined satisfies (2.16) follows from the same reasoning as in Remark 2.5, except that here $u(\cdot,x)$ is of class ${\mathcal{C}}^{1}$ (since $T_{0}$, and so $T_{\mathscr{M}}$, has a continuous density), so we can conclude that $u(t,\cdot)\in\mathcal{E}_{A}$ and that indeed $\partial_{t}u=Au+a(t)\,(u^{2}-u)$. ∎ 2.7. The self-similar case Here, we assume that $A$ is the generator of the stable CSBP $Z$ with Laplace exponent $\psi(x)=cx^{\gamma}$, for $\gamma\in(1,2]$ and $c>0$. For any real number $r$, we denote by $Z^{(r)}$ the CSBP with branching mechanism $\psi(\lambda)-r\lambda$, which is the Feller process with generator $A_{r}$ defined by $$A_{r}f(x)=Af(x)+rxf^{\prime}(x)\qquad x\geq 0.$$ Set $$\beta=\frac{1}{\gamma-1}.$$ Here we denote by $Q_{x}^{(\beta)}$ the law of a branching particle system started with a single particle with mass $x$, where particles branch at rate 1 and masses follow independent copies, not of the original CSBP (with Laplace exponent $cx^{\gamma}$), but of the CSBP $Z^{(\beta)}$.
Similarly, for any infinite binary tree $\mathbbm{t}$ embedded in continuous time, $Q_{x}^{(\beta),\mathbbm{t}}$ now denotes the law of the particle system started with one particle with mass $x$ at time 0, where particle masses evolve like independent copies of $Z^{(\beta)}$ and where the genealogy of particles is given by $\mathbbm{t}$. Consistently, the entrance measure at 0 of the branching particle system with genealogy $\mathbbm{t}$ is $$M^{(\beta),\mathbbm{t}}=\lim_{x\downarrow 0}x^{-1}Q_{x}^{(\beta),\mathbbm{t}}.$$ Here $\mathscr{M}$ denotes the event of total mass extinction, i.e., the event $\mathscr{M}=\{T_{\mathscr{M}}<\infty\}$ that after some finite time all particles have mass $0$. Theorem 2.8. If $(\mu_{t};t\geq 0)$ is the unique proper solution of the Smoluchowski equation (1.3) advertised in Theorem 2.7, then (i) $\mu_{t}$ is the law of $t^{-\beta}\Upsilon$, where (2.18) $$\Upsilon=M^{(\beta),\bf T}(\mathscr{M}^{c}),$$ where $\bf T$ denotes the Yule tree, i.e., the pure-birth tree with unit birth rate. (ii) Let $h(x)={\mathbb{E}}(\exp(-x\Upsilon))$. Then $h(x)\leq\exp(-x(\beta/c)^{\beta})$ and $h$ is the unique solution in $\mathcal{E}_{A_{\beta}}$ to $$\displaystyle A_{\beta}h\ +h^{2}-h\ =\ 0$$ (2.19) $$\displaystyle h(0)=1,\ \ \limsup_{x\to\infty}h(x)<1.$$ Proof. We first recall some known facts about $Z^{(r)}$ (see for example [17]). For any $x\geq 0$, the law of $Z^{(r)}$ started at $x$ is denoted $P^{(r)}_{x}$.
It is well-known that $0$ is absorbing for $Z^{(r)}$ and that if $T_{0}$ denotes the absorption time of $Z^{(r)}$ at 0, then $$P^{(r)}_{x}(T_{0}<t)=e^{-x\varphi_{r}(t)}\qquad x,t\geq 0,$$ where $\varphi_{r}$ is the inverse of $$\phi_{r}(\lambda)=\int_{\lambda}^{\infty}\frac{dx}{\psi(x)-rx}=-\frac{\beta}{r}\ln\left(1-\frac{r}{c\lambda^{1/\beta}}\right)$$ when $r\not=0$, and if $r=0$, $$\phi_{0}(\lambda)=\frac{\beta/c}{\lambda^{1/\beta}}.$$ This yields $$\varphi_{r}(t)=\left(\frac{r/c}{1-e^{-rt/\beta}}\right)^{\beta}$$ when $r\not=0$, and if $r=0$, $$\varphi_{0}(t)=\left(\frac{\beta/c}{t}\right)^{\beta}.$$ In particular, the probability of extinction (of a non-branching particle) is $$P^{(r)}_{x}(T_{0}<\infty)=\exp(-x(r/c)^{\beta})$$ when $r>0$, and is 1 if $r\leq 0$. More specifically, $$E^{(r)}_{x}(e^{-\lambda Z^{(r)}_{t}})=e^{-x\varphi_{r}(t+\phi_{r}(\lambda))},$$ where (2.20) $$\varphi_{r}(t+\phi_{r}(\lambda))=\left(\frac{r/c}{1-e^{-rt/\beta}\left(1-\frac{r}{c\lambda^{1/\beta}}\right)}\right)^{\beta}$$ when $r\not=0$. Now we wish to compare the two branching particle systems. We will refer to the $(0,T)$-system as the time-inhomogeneous branching particle system with inhomogeneous branching rate $\tilde{a}(t)=a(T-t)=1/(T-t)$ blowing up at time $T$ and masses evolving as independent copies of $Z^{(0)}$. We will refer to the $(\beta,\infty)$-system as the time-homogeneous particle system where particles branch at rate 1 and masses evolve as independent copies of $Z^{(\beta)}$. For either system, the genealogy of particles can be represented by the infinite binary tree, and for each node $v$ of the infinite binary tree, we record the corresponding branching time $U_{0}(v)$ (resp. $U_{\beta}(v)$) and the mass of the corresponding particle just before it splits $M_{0}(v)$ (resp. $M_{\beta}(v)$) in the $(0,T)$-system (resp. the $(\beta,\infty)$-system). Recall from Lemma 2.6 that the probability that all masses go extinct in the $(0,T)$-system is $u(T,x)$.
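These closed forms can be checked numerically; the sketch below (the helper names are ours, with the illustrative choice $\gamma=2$, i.e. $\beta=1$, and $c=1$) verifies that $\varphi_{r}$ inverts $\phi_{r}$ and that $\varphi_{r}(t)\to(r/c)^{\beta}$ as $t\to\infty$, recovering the extinction probability stated above.

```python
import math

def phi(lam, r, c=1.0, beta=1.0):
    """phi_r(lambda) = ∫_lambda^∞ dx/(psi(x) - r x) for psi(x) = c x^gamma,
    beta = 1/(gamma - 1); closed form from the text (needs c*lam^(1/beta) > r)."""
    if r == 0.0:
        return (beta / c) / lam ** (1.0 / beta)
    return -(beta / r) * math.log(1.0 - r / (c * lam ** (1.0 / beta)))

def varphi(t, r, c=1.0, beta=1.0):
    """Inverse of phi_r; varphi_r(t) -> (r/c)^beta as t -> infinity (r > 0)."""
    if r == 0.0:
        return (beta / (c * t)) ** beta
    return ((r / c) / (1.0 - math.exp(-r * t / beta))) ** beta

lam = 3.0
for r in (0.0, 0.5):
    print(varphi(phi(lam, r), r))  # both recover lam = 3.0
print(varphi(50.0, 0.5))           # close to the extinction exponent (0.5/1)^1 = 0.5
```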
We denote by $h(x)$ this probability in the $(\beta,\infty)$-system. We claim that $u(T,x)=h(x/T^{\beta})$. First, it is straightforward to check that the first branching time $U_{0}$ of the first particle in the $(0,T)$-system is uniformly distributed in $(0,T)$, so that using (2.20) with $r=\beta$, we get $$\displaystyle E_{T^{\beta}x}(e^{-\lambda Z^{(0)}_{U_{0}}/(T-U_{0})^{\beta}})$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{T}\int_{0}^{T}ds\,\exp\left\{-T^{\beta}x\,\varphi_{0}(s+\phi_{0}(\lambda/(T-s)^{\beta}))\right\}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}du\,e^{-u}\,\exp\left\{-T^{\beta}x\,\varphi_{0}(T(1-e^{-u})+\phi_{0}(\lambda/(T^{\beta}e^{-\beta u})))\right\}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}du\,e^{-u}\,\exp\left\{-T^{\beta}x\left(\frac{\beta/c}{T(1-e^{-u})+\frac{\beta/c}{\lambda^{1/\beta}/(Te^{-u})}}\right)^{\beta}\right\}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}du\,e^{-u}\,\exp\left\{-x\left(\frac{\beta/c}{1-e^{-u}+e^{-u}\frac{\beta/c}{\lambda^{1/\beta}}}\right)^{\beta}\right\}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}du\,e^{-u}\,e^{-x\varphi_{\beta}(u+\phi_{\beta}(\lambda))}$$ $$\displaystyle=$$ $$\displaystyle E_{x}(e^{-\lambda Z^{(\beta)}_{U_{\beta}}}),$$ where $U_{\beta}$ is an independent exponential variable with parameter 1. By an immediate induction, we see that if the $(0,T)$-system starts with mass $T^{\beta}x$ and the $(\beta,\infty)$-system starts with mass $x$, then the sequence of rescaled masses $(M_{0}(v)/(T-U_{0}(v))^{\beta})_{v}$ indexed by the binary tree is distributed as the sequence $(M_{\beta}(v))_{v}$. Now, by a similar argument to the one used in the proof of Lemma 2.6, it can be seen that in both cases there is extinction of mass iff all masses are zero except at a finite number of nodes $v$ of the genealogy. This shows that $u(T,T^{\beta}x)=h(x)$, so that $u(T,x)=h(x/T^{\beta})$, as claimed earlier.
Finally, the same application of the branching property as in the proof of Theorem 2.7 shows (2.18). Now let us show the properties of $h$ stated in (ii) of the theorem. Let $\mathcal{Z}_{t}$ denote the state (e.g., the empirical measure) at time $t$ of the $(\beta,\infty)$-system. Let $(P_{t})$ denote the semigroup of $Z^{(\beta)}$, which we now simply denote by $Z$ (more generally, we will omit the $\beta$ superscript until the end of the proof). Since the first branching time $\tau$ of the initial particle is independent of the dynamics of its mass, by the branching property $$h(x)=E_{x}(h(Z_{t}))\,\mathbb{P}(\tau>t)+\int_{0}^{t}\mathbb{P}(\tau\in ds)\,E_{x}(h(Z_{s})^{2}).$$ Re-arranging, we get for any $t>0$ $$\frac{1}{t}(P_{t}h(x)-h(x))=\frac{1-e^{-t}}{t}\,P_{t}h(x)-\frac{1}{t}\int_{0}^{t}ds\,e^{-s}\,P_{s}(h^{2})(x).$$ Since $h$ is bounded continuous and $Z$ is a Feller process, the right-hand side converges as $t\downarrow 0$ to $h(x)-h(x)^{2}$. Then $h$ is in the domain of $A_{\beta}$ and we have $$A_{\beta}h(x)=h(x)-h(x)^{2}\qquad x\geq 0.$$ Note that $h(x)\in[0,1]$ and $h(0)=1$. Also notice that $\mathscr{M}\subset\mathscr{E}$, where $\mathscr{E}$ is the event that the mass of a single (randomly chosen, say) lineage goes to 0. Now $Q_{x}(\mathscr{E})=P^{(\beta)}_{x}(T_{0}<\infty)=\exp(-x(\beta/c)^{\beta})$, which yields $$h(x)=Q_{x}(\mathscr{M})\leq Q_{x}(\mathscr{E})=\exp(-x(\beta/c)^{\beta}).$$ As a consequence, $\lim_{x\to\infty}h(x)=0$, so $h$ is a solution to (2.19), which yields the existence part of the statement. For the uniqueness part, it is sufficient to note that if $h$ is a solution of the ODE, then $u(t,x)=h(x/t^{\beta})$ is a solution of the IPDE (2.16). Since the solution of this equation is unique, the result follows. ∎ 3.
Finite population McKean–Vlasov equation In this section, we only assume that $\psi\in\mathscr{H}$, which notably encompasses the case studied in the previous section, i.e., when $\psi$ is the Laplace exponent of a spectrally positive (sub)critical Lévy process. For any $u>0$, $\theta_{u}$ will denote the time shift operator by $u$, so that $\theta_{u}\circ f(t)\ =f(t+u)$ for any generic function $f$ of time. We fix $\delta>0$ and $\nu\in M_{P}({\mathbb{R}}^{+})$. (Recall that we think of $\delta$ as the inverse population size.) We consider the McKean–Vlasov equation (1.8). As already mentioned in the introduction, one may think of $(x_{t};t\geq 0)$ as the evolution of the mass of a typical cluster in the population described by the Smoluchowski equation (1.3). We start by giving a more formal definition in terms of a fixed point problem (see Proposition 3.1). Let us consider the Skorohod space $D({\mathbb{R}}^{+},{\mathbb{R}}^{+})$ (i.e., the space of càdlàg functions equipped with the Skorohod topology on every finite interval $[0,T]$). For every probability measure $m$ on $D({\mathbb{R}}^{+},{\mathbb{R}}^{+})$, define $\phi(m)$ as the law of the process $$dy_{t}\ =\ -\psi(y_{t})dt\ +\ \Delta J_{t}^{(\delta)}v_{t},\ \ {\mathcal{L}}(y_{0})=\nu,$$ where $(v_{t};t\geq 0)$ is a family of independent random variables with $v_{t}$ distributed as $z_{t}$ – the process with law $m$ evaluated at time $t$ – and $J^{(\delta)}$ is an inhomogeneous Poisson process with rate $1/(t+\delta)$. (More precisely, conditional on the jump times $\{s_{i}\}$ of $J^{(\delta)}$, $\{v_{s_{i}}\}_{i}$ is a sequence of independent rv’s with respective laws ${\mathcal{L}}(z_{s_{i}})$.) We will say that $\left(x_{t};t\geq 0\right)$ is a solution of the McKean–Vlasov equation (1.8) iff the law of the process $x$ is a fixed point for the map $\phi$. Proposition 3.1 (Uniqueness for MK-V). The operator $\phi$ has a unique fixed point.
As a consequence, there is at most one solution to the MK-V equation (1.8). Proof. We give a contraction argument analogous to Theorem 1.1 in [23]. For every pair of measures $m^{1},m^{2}$ on $D({\mathbb{R}}^{+},{\mathbb{R}}^{+})$ and every $T\geq 0$, we define the Wasserstein distance $$D_{T}\left(m^{1},m^{2}\right)\ =\ \inf\left\{{\mathbb{E}}(\sup_{s\in[0,T]}|y_{s}^{1}-y_{s}^{2}|)\ :\ {\mathcal{L}}(y^{1})=m^{1},\ {\mathcal{L}}(y^{2})=m^{2}\right\},$$ where the infimum is taken over every possible coupling between $y^{1},y^{2}$ under the constraint ${\mathcal{L}}(y^{1})=m^{1}$ and ${\mathcal{L}}(y^{2})=m^{2}$. Let $z^{1},z^{2}$ be two processes in $D({\mathbb{R}}^{+},{\mathbb{R}}^{+})$ with respective laws $m^{1}$ and $m^{2}$, and define (3.21) $$\displaystyle dx^{i}_{t}\ =\ -\psi(x^{i}_{t})dt\ +\ \Delta J^{(\delta)}_{t}v^{i}_{t},\ \ {\mathcal{L}}(x_{0}^{i})\ =\ \nu,$$ where $x_{0}^{1}=x_{0}^{2}$, the $(v^{1}_{s},v^{2}_{s})$ are independent random variables with ${\mathcal{L}}(v^{i}_{s})\ =\ {\mathcal{L}}(z_{s}^{i}),i=1,2$, and $(v^{1}_{s},v^{2}_{s})$ are coupled in a minimal way, i.e., for every $s\geq 0$ $${\mathbb{E}}(|v^{1}_{s}-v^{2}_{s}|)\ =\ \inf\left\{\ {\mathbb{E}}(|a-b|)\ :\ {\mathcal{L}}(a)\ =\ {\mathcal{L}}(z_{s}^{1}),\ {\mathcal{L}}(b)={\mathcal{L}}(z_{s}^{2})\right\}.$$ (One can show that the infimum is attained by considering an approximating subsequence and using a standard tightness argument.) Write $\Delta x_{t}=x_{t}^{2}-x_{t}^{1}$ and note that $\Delta x_{0}=0$.
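For real-valued laws, the minimal $L^{1}$-coupling appearing here is attained by the quantile (comonotone) coupling, i.e., by pairing order statistics. A small illustration on empirical measures (our own sketch, not part of the proof):

```python
def minimal_l1_coupling(xs, ys):
    """inf E|a - b| over couplings of two equal-size empirical laws:
    for real-valued laws it is attained by pairing order statistics
    (the quantile, or comonotone, coupling)."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(minimal_l1_coupling([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0 (a unit shift)
# any other pairing costs at least as much, e.g. the reversed one:
print(sum(abs(a - b) for a, b in zip([0.0, 1.0, 2.0], [3.0, 2.0, 1.0])) / 3)  # 5/3
```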
We have $$\Delta x_{t}\ =\ \Delta x_{0}\ -\ \int_{0}^{t}\left(\psi(x^{2}_{s})-\psi(x^{1}_{s})\right)ds\ +\ \sum_{s_{i}\leq t\ :\ \Delta J^{(\delta)}_{s_{i}}=1}(v_{s_{i}}^{2}-v_{s_{i}}^{1}).$$ Since $\psi$ is positive and non-decreasing, the drift term $-\left(\psi(x^{2}_{s})-\psi(x^{1}_{s})\right)ds$ can only reduce the distance between $x^{1}$ and $x^{2}$, so it is not hard to see that $$\sup_{s\leq t}|\Delta x_{s}|\leq\sum_{s_{i}\leq t:\Delta J^{(\delta)}_{s_{i}}=1}|v^{2}_{s_{i}}-v^{1}_{s_{i}}|$$ and thus $$\displaystyle{\mathbb{E}}(\sup_{s\leq t}|\Delta x_{s}|)$$ $$\displaystyle\leq$$ $$\displaystyle\int_{0}^{t}\frac{1}{\delta+s}{\mathbb{E}}\left(|v_{s}^{2}-v_{s}^{1}|\right)ds$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\delta}\int_{0}^{t}D_{s}(m^{1},m^{2})ds,$$ where the last inequality follows from the choice of our coupling $(v_{s}^{1},v_{s}^{2})$. This implies that $$\displaystyle D_{t}(\phi(m^{1}),\phi(m^{2}))$$ $$\displaystyle\leq$$ $$\displaystyle\frac{1}{\delta}\ \int_{0}^{t}D_{s}(m^{1},m^{2})ds.$$ By a simple induction, this implies that $$\displaystyle D_{t}(\phi^{k}(m^{1}),\phi^{k}(m^{2}))$$ $$\displaystyle\leq$$ $$\displaystyle\frac{t^{k}}{\delta^{k}k!}D_{t}(m^{1},m^{2}).$$ Thus if $m^{1}$ and $m^{2}$ are two fixed points for $\phi$, letting $k\to\infty$ yields $m^{1}=m^{2}$. ∎ Lemma 3.2. Let $(x_{t};t\geq 0)$ be a solution of (1.8), so that $(\mu_{t}:={\mathcal{L}}(x_{t});t\geq 0)$ is a (MK–V) solution of the Smoluchowski equation (1.3). Then $\left(\mu_{t};t\geq 0\right)$ is also a weak solution to the Smoluchowski equation (1.3) with initial condition $\nu$ and inverse population size $\delta$. Proof.
By definition of the process $x$, for any test function $f$, a direct application of Itô’s formula yields $${\mathbb{E}}\left(f\left(x_{t}\right)\right)\ =\ {\mathbb{E}}\left(f\left(x_{0}\right)\right)-\int_{0}^{t}{\mathbb{E}}(\psi(x_{s})f^{\prime}(x_{s}))ds\ +\int_{0}^{t}\frac{1}{\delta+s}\int{\mathbb{E}}\left(f(x_{s}+u)-f(x_{s})\right)\mu_{s}(du)\,ds,$$ which can be rewritten as (1.6). ∎ 3.1. The Brownian CPP Let us denote by ${\mathcal{P}}$ a Poisson point process with intensity measure $dl\times\frac{dt}{t^{2}}$ on ${\mathbb{R}}^{+}_{*}\times{\mathbb{R}}^{+}_{*}$. We call Brownian Coalescent Point Process the ultrametric tree ${\mathcal{T}}$ associated with ${\mathcal{P}}$, $${\mathcal{T}}\ =\ \{(l,s)\in{\mathbb{R}}^{+}\times{\mathbb{R}}^{+}\ :\ \exists t\geq s\ \mbox{s.t. }(l,t)\in{\mathcal{P}}\}\cup\{(0,t);t\in{\mathbb{R}}^{+}\},$$ equipped with the distance $$d_{{\mathcal{T}}}\left((l^{\prime},s^{\prime}),(l,s)\right)\ =\left\{\begin{array}[]{cc}\ 2\max\{\bar{t}\ :\ (\bar{l},\bar{t})\in{\mathcal{P}},\ \mbox{s.t.}\ \bar{l}\in[l\wedge l^{\prime},l^{\prime}\vee l]\}-({s+s^{\prime}})&\mbox{if $l\neq l^{\prime}$},\\ |s-s^{\prime}|&\mbox{otherwise.}\end{array}\right.$$ $\{(0,t);t\in{\mathbb{R}}^{+}\}\subset{\mathcal{T}}$ will be referred to as the eternal branch of the tree. See Fig. 3 for a pictorial representation of the tree ${\mathcal{T}}$ above level $\delta>0$. 3.2. Marking the CPP and construction of a solution to MK-V In this section, we describe a marking of the tree which will provide a solution to MK-V (Theorem 3.4). Fix $\delta>0$ and let $\{\zeta^{(\delta)}_{i}\}_{i}$ be a (possibly random) sequence in ${\mathbb{R}}^{+}$. Let $\{(l_{i},\delta)\}_{i}$ be the elements in ${\mathcal{T}}$ with time coordinate $\delta$, and assume that the $l_{i}$’s are listed in increasing order.
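For intuition, the points of ${\mathcal{P}}$ lying above level $\delta$ can be sampled directly: along the $l$-axis they form a Poisson process with rate $\int_{\delta}^{\infty}dt/t^{2}=1/\delta$, and each depth has tail $P(T>t)=\delta/t$ on $(\delta,\infty)$, hence can be drawn as $\delta/U$ with $U$ uniform on $(0,1]$. A minimal sampling sketch (the helper name and window length are ours, purely illustrative):

```python
import random

def cpp_points_above(L, delta, rng):
    """Points (l, t) of a Poisson process with intensity dl x dt/t^2,
    restricted to depths t > delta, on the window l in [0, L]."""
    pts = []
    l = rng.expovariate(1.0 / delta)      # successive l-gaps: Exp(rate 1/delta)
    while l < L:
        t = delta / (1.0 - rng.random())  # inverse-CDF sample: P(T > t) = delta/t
        pts.append((l, t))
        l += rng.expovariate(1.0 / delta)
    return pts

rng = random.Random(1)
pts = cpp_points_above(L=5000.0, delta=1.0, rng=rng)
depths = sorted(t for _, t in pts)
print(len(pts) / 5000.0)           # close to 1/delta = 1
print(depths[len(depths) // 2])    # median depth close to 2*delta = 2
```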
For each point $(l,s)$ of the tree ${\mathcal{T}}$ with time coordinate $s\geq\delta$, we assign a mark $m_{l}(s)$ such that (1) $m_{l_{i}}(\delta)=\zeta_{i}^{(\delta)}$ for every $i\in{\mathbb{N}}$, and (2) the marks with higher time coordinates are deduced deterministically (conditional on ${\mathcal{T}}$ and the initial marking) from the differential relation (3.22) $$\displaystyle\forall t\geq\delta,\ dm_{l}^{(\delta)}(t)\ =\ -\psi(m_{l}^{(\delta)}(t))dt\ +\ \sum_{(l^{\prime},t)\in{\mathcal{P}}:l^{\prime}>l}\sigma_{(l,l^{\prime},t)}m^{(\delta)}_{l^{\prime}}(t),$$ $$\displaystyle\ \mbox{where }\ \sigma_{(l,l^{\prime},t)}\ =\ 1\mbox{ if $\sup\{\bar{t}\ :\ \bar{l}\in(l,l^{\prime}],(\bar{l},\bar{t})\in{\mathcal{P}}\}\ =\ t$},\ 0\ \mbox{otherwise.}$$ In words, we start by marking each point at level $\delta$ from left to right with the sequence $\zeta_{i}^{(\delta)}$; then the marks evolve according to the ODE $\dot{x}\ =\ -\psi(x)$ along each branch, and when two branches merge, we simply add up the values of the marks. The previous procedure defines a marking of the tree ${\mathcal{T}}$ above time horizon $\delta$. Remark 3.3. Let us assume here that $\psi$ is again the Laplace exponent of a (sub)critical spectrally positive Lévy process. The previous marking is completely identical to the one involved in the definition of the function $F$ in (2.13). This is obviously not coincidental. Let $(\mu_{t};t\geq 0)$ be the weak solution of the Smoluchowski equation. Recall from Theorem 2.4 that $\mu_{T}$ is the law of $F({\bf T},(W_{i});1\leq i\leq N_{T}({\bf T}))$, where $\bf T$ is the time-inhomogeneous tree started at 0 with one particle, stopped at time $T$ and with birth rate $\tilde{a}(t)=1/(T-t+\delta)$, and where the $(W_{i})$ are iid with law $\nu$. Now, it is not hard to see that the tree originated from $(0,T)$ and stopped at time horizon $\delta$ in the CPP is identical in law to ${\bf T}$.
From Theorem 2.4, this implies that $\left({\mathcal{L}}(\theta_{\delta}\circ m_{0}^{(\delta)}(t));t\geq 0\right)$ is a weak solution of (1.3). We will actually show more, namely that $\left(\theta_{\delta}\circ m_{0}^{(\delta)}(t);t\geq 0\right)$ is a solution to MK-V (and with no restriction on $\psi$). See Theorem 3.4. This provides a natural interpretation for the expression of $\mu_{t}$ in terms of the MK-V equation. The marks $(m_{l}^{(\delta)}(t);t\geq\delta,(l,t)\in{{\mathcal{T}}})$ will be referred to as the partial marking of ${\mathcal{T}}$ above level $\delta$ with initial condition $\{\zeta_{i}^{(\delta)}\}_{i}$. When the initial marks $\{\zeta_{i}^{(\delta)}\}_{i}$ are i.i.d. with law $\nu$, the partial marking will be referred to as the partial marking above level $\delta$ with initial condition $\nu$. Finally, a full marking of ${\mathcal{T}}$ will refer to marks $(m_{l}(t);(l,t)\in{{\mathcal{T}}})$ defined on the whole tree ${\mathcal{T}}$ such that for every $\delta>0$, $(m_{l}(t);t\geq\delta,(l,t)\in{{\mathcal{T}}})$ is a partial marking above level $\delta$ (with initial marking $\{m_{l_{i}}(\delta),(l_{i},\delta)\in{\mathcal{T}}\}$). Theorem 3.4. Let $m^{(\delta)}$ be the partial marking above level $\delta$ with initial condition $\nu$. Then $(\theta_{\delta}\circ m_{0}^{(\delta)}(t);t\geq 0)$ is a solution to the MK-V equation (1.8) with initial condition $\nu$ and inverse population $\delta$. Proof.
It is enough to show that $\left(m_{0}^{(\delta)}(t);t\geq\delta\right)$ solves the McKean–Vlasov equation (3.23) $$\forall t\geq\delta,\ \ \bar{x}_{t}-\bar{x}_{\delta}\ =\ \underbrace{-\int_{\delta}^{t}\psi(\bar{x}_{s})ds}_{\mbox{term I}}\ +\ \underbrace{\sum_{\delta\leq s_{i}\leq t\ :\ \Delta\bar{J}^{(0)}_{s_{i}}=1}\bar{v}_{s_{i}}}_{\mbox{term II}},\ \ \mbox{and}\ \ {\mathcal{L}}(\bar{x}_{\delta})\ =\ \nu,$$ where $\bar{J}^{(0)}$ is identical in law to a Poisson point process with rate $1/t$ and where, conditional on $\bar{J}^{(0)}$, $\{\bar{v}_{s_{i}}\}$ is a collection of independent random variables with ${\mathcal{L}}(\bar{v}_{s})\ =\ {\mathcal{L}}(\bar{x}_{s})$. First, the initial condition is obviously satisfied. Secondly, we note that in the absence of a coalescence event, $m^{(\delta)}_{0}(t)$ decreases at rate $\psi(m^{(\delta)}_{0}(t))$, which exactly corresponds to term I in the dynamics (3.23). For term II, we note that $m_{0}^{(\delta)}(t)$ experiences a jump upon a coalescence event (see the second term on the RHS of (3.22)). Recall that the Brownian CPP is defined through a Poisson point process with intensity $dl\times dt/t^{2}$, and by definition of $m^{(\delta)}_{0}$ such a coalescence event occurs whenever the left-most branch “alive” at time $t$ with a strictly positive $l$–coordinate dies out (see Fig. 3). This occurs at a rate $$\frac{1}{t^{2}}/\int_{t}^{\infty}\frac{ds}{s^{2}}\ =\ \frac{1}{t}\ \ \mbox{at time $t$,}$$ which exactly corresponds to the rate of $\bar{J}^{(0)}$ in (3.23). Finally, by translation invariance and the independence structure of the Brownian CPP, the branch coalescing with the eternal branch $\{0\}\times{\mathbb{R}}^{+}_{*}$ carries a mark that is identical in law to $m^{(\delta)}_{0}(t)$, and independent of $m^{(\delta)}_{0}(t)$. See again Fig. 3. ∎ As a corollary of Proposition 3.1 and Theorem 3.4, we get the following existence and uniqueness result. Corollary 3.5.
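The $1/t$ rate can be probed by simulation (a sketch of ours; names hypothetical). As in the proof, a point $(l,t)\in{\mathcal{P}}$ makes the mark on the eternal branch jump exactly when every point strictly to its left has a smaller height, i.e. when $t$ is a running maximum in the $l$-ordering; averaging the number of such record points with heights in a window $[a,b]$ over many realizations should then give $\int_{a}^{b}dt/t=\log(b/a)$:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplicative method; adequate for the moderate means used here
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def count_jumps(points, a, b):
    """Number of jump times of the eternal-branch mark in [a, b]: a point (l, t)
    triggers a jump iff t exceeds the heights of all points strictly to its left."""
    best, count = 0.0, 0
    for l, t in sorted(points):
        if t > best:
            best = t
            if a <= t <= b:
                count += 1
    return count

rng = random.Random(3)
a, b, width, trials = 1.0, 4.0, 100.0, 3000
total = 0
for _ in range(trials):
    n = poisson(width / a, rng)
    pts = [(rng.uniform(0.0, width), a / (1.0 - rng.random())) for _ in range(n)]
    total += count_jumps(pts, a, b)
# the empirical mean total / trials should be close to log(b/a) = log 4
```

Restricting the sample to heights $\geq a$ is enough, since a blocker of a candidate jump in $[a,b]$ must itself have height $\geq a$; the horizontal truncation at `width` loses records with probability at most $e^{-\mathrm{width}/b}$.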
For $\delta>0$ and $\nu\in M_{P}({\mathbb{R}}^{+})$, there exists a unique solution to the MK-V equation (1.8). 3.3. Scaling For every $\tau>0$, define the scaling map $$F_{\tau}:(l,t)\mapsto(\tau l,\tau t).$$ Fix $\gamma>1$. We set $$\beta:=\frac{1}{\gamma-1}$$ and we define, for any $\nu\in M_{F}({\mathbb{R}}^{+})$, ${\mathcal{S}}^{\tau,\gamma}(\nu)$ as the push-forward of the measure $\nu$ by the map $$x\mapsto\tau^{-\beta}x.$$ Proposition 3.6 (scaling). For every $\tau>0$, (i) $\tilde{\mathcal{T}}=F_{\tau}({\mathcal{T}})$ is identical in law with ${\mathcal{T}}$. (ii) Assume that $\psi(x)=cx^{\gamma}$ for $c>0$ and $\gamma>1$. Let $m^{(\delta)}$ be a partial marking above level $\delta$ with initial measure $\nu$. Define, for $(l,t)\in\tilde{\mathcal{T}}$ with $t\geq\delta\tau$, $$\tilde{m}^{(\delta\tau)}_{l}(t):\ =\ \frac{1}{\tau^{\beta}}m^{(\delta)}_{l/\tau}(t/\tau).$$ Then $\tilde{m}^{(\delta\tau)}$ is a partial marking of $\tilde{\mathcal{T}}$ above level $\tau\delta$ with initial measure $\mathcal{S}^{\tau,\gamma}(\nu)$. Proof. (i) is a direct consequence of the definition of the Brownian CPP. (ii) is a consequence of the observations that (a) for every $(l,\delta\tau)\in F_{\tau}({\mathcal{T}})$, we have ${\mathcal{L}}\left(\tilde{m}_{l}^{(\tau\delta)}(\tau\delta)\right)=\mathcal{S}^{\tau,\gamma}(\nu)$, and (b) along each branch of the tree $\tilde{\mathcal{T}}$, the marks evolve according to the dynamics $$dx_{t}\ =\ -cx_{t}^{\gamma}dt$$ (because of the pre-factor $\frac{1}{\tau^{\beta}}$ in the definition of $\tilde{m}$), and at a coalescence point, marks add up. (So that $\tilde{m}$ defines a partial marking of $\tilde{\mathcal{T}}$ with initial marking $\mathcal{S}^{\tau,\gamma}(\nu)$.) ∎ 4. $\infty$-population McKean-Vlasov equation In this section, we assume that $\psi\in\mathscr{H}$ and that Grey's condition holds. Then we can define the homeomorphism $q:(0,\infty)\to(0,V)$ as $$q:x\mapsto\int^{\infty}_{x}\frac{1}{\psi(s)}ds,$$ with $V=q(0+)\in(0,+\infty]$.
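Proposition 3.6(ii) can be checked directly on the closed-form flow of $\dot{x}=-cx^{\gamma}$ (cf. (4.30)): rescaling time by $\tau$ and mass by $\tau^{-\beta}$ commutes with the flow, and the addition of marks at coalescences is trivially homogeneous under $x\mapsto\tau^{-\beta}x$. A quick numerical confirmation (our own sketch):

```python
def flow(x0, t, c, gamma):
    # closed-form solution at time t of dx/dt = -c x**gamma started from x0 (gamma > 1)
    beta = 1.0 / (gamma - 1.0)
    return (c * (gamma - 1.0) * t + x0 ** (1.0 - gamma)) ** (-beta)

c, gamma = 0.7, 2.5
beta = 1.0 / (gamma - 1.0)
for tau in (0.5, 2.0, 10.0):
    for t in (0.1, 1.0, 3.0):
        for x0 in (0.3, 4.0):
            # rescale mass first, then flow ...
            scaled_then_flowed = flow(x0 / tau ** beta, t, c, gamma)
            # ... or flow in rescaled time, then rescale mass: both agree
            flowed_then_scaled = flow(x0, t / tau, c, gamma) / tau ** beta
            assert abs(scaled_then_flowed - flowed_then_scaled) < 1e-9 * scaled_then_flowed
```

Algebraically, both expressions equal $(c(\gamma-1)t+\tau x_{0}^{1-\gamma})^{-\beta}$, which is exactly the invariance used in the proof.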
Define $\phi:(0,\infty)\to[0,\infty)$ as $$\phi(t)=\left\{\begin{array}[]{cl}q^{-1}(t)&\mbox{ if }t<V\\ 0&\mbox{ if }t\geq V.\end{array}\right.$$ Then for any $x_{0}\in(0,+\infty]$, the ODE $$\dot{u}=-\psi(u),\ \ u(0)=x_{0}$$ has a unique solution on ${\mathbb{R}}^{+}$, given by $$u(t)=\phi(t+q(x_{0})),$$ with the convention $q(x_{0})=0$ if $x_{0}=\infty$. Notice that the flow $x_{0}\mapsto\phi(t+q(x_{0}))$ is continuous, and keep in mind that $\phi$ is the unique solution to $$\dot{u}=-\psi(u),\ \ u_{0}=\infty.$$ In this section, we will make the extra assumption that $\psi$ is convex (so that $V=\infty$). Note that if $\psi$ is the Laplace exponent of a spectrally positive Lévy process, the latter condition holds. We will say that $(x_{t};t>0)$ is an $\infty$–pop. solution of (1.8) if it is a solution of (1.8) for $\delta=0$ (without prescribing the initial condition at $t=0$). More precisely, $x$ will be an $\infty$-pop. solution iff for every $\tau>0$, conditional on $x_{\tau}$, the process $(x_{t};t\geq\tau)$ is identical in law to the solution of (4.24) $$d\bar{x}_{t}\ =\ -\psi(\bar{x}_{t})dt\ +\ \Delta J^{(0)}_{t}\bar{v}_{t};\ t\geq\tau;\ \ \bar{x}_{\tau}=x_{\tau},$$ where $(\bar{v}_{t})_{t\geq 0}$ is a family of independent rv's with ${\mathcal{L}}(\bar{v}_{t})={\mathcal{L}}(\bar{x}_{t})$. Recall the definition of dust and proper solutions for the $\infty$-pop. Smoluchowski equation (see Definition 1.6). There is a natural extension of this definition to the MK-V equation. We will say that $(x_{t};t>0)$ is a dust solution if it satisfies the $\infty$-pop. MK-V equation with $\delta=0$ and $\lim_{t\to 0}x_{t}=0$ in probability. Solutions are said to be proper otherwise. By a direct application of Itô's formula (analogous to Lemma 3.2), if $x$ is a dust (resp., proper) solution, then $({\mathcal{L}}(x_{t});t>0)$ is a dust (resp., proper) solution of the Smoluchowski equation. We state the two main results of this section. Theorem 4.1 (Proper solutions).
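In the stable case $\psi(x)=cx^{\gamma}$ with $\gamma>1$, both objects are explicit, $q(x)=x^{1-\gamma}/(c(\gamma-1))$ and $\phi(t)=(c(\gamma-1)t)^{-1/(\gamma-1)}$, and the representation $u(t)=\phi(t+q(x_{0}))$ can be cross-checked against a brute-force integration of $\dot{u}=-\psi(u)$ (a sketch of ours; the Runge–Kutta routine below stands in for any ODE solver):

```python
C, GAMMA = 2.0, 3.0

def psi(x):
    return C * x ** GAMMA

def q(x):
    # q(x) = int_x^oo ds / psi(s), explicit for psi(s) = C s**GAMMA
    return x ** (1.0 - GAMMA) / (C * (GAMMA - 1.0))

def phi(t):
    # inverse of q; here V = q(0+) = +oo, so no truncation at V is needed
    return (C * (GAMMA - 1.0) * t) ** (-1.0 / (GAMMA - 1.0))

def rk4(x0, t, steps=20000):
    # direct numerical integration of du/dt = -psi(u), for comparison only
    h, u = t / steps, x0
    for _ in range(steps):
        k1 = -psi(u)
        k2 = -psi(u + 0.5 * h * k1)
        k3 = -psi(u + 0.5 * h * k2)
        k4 = -psi(u + h * k3)
        u += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return u

x0, t = 5.0, 1.0
closed_form = phi(t + q(x0))   # u(t) = phi(t + q(x0))
assert abs(closed_form - rk4(x0, t)) < 1e-6 * closed_form
```

Starting the integration from larger and larger $x_{0}$ makes the numerical solution converge to $\phi(t)$ itself, in line with the "started from $\infty$" characterization of $\phi$.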
(i) There exists at least one proper solution to the $\infty$-pop. MK-V equation, and any such solution satisfies (4.25) $$\forall t>0,\ x_{t}\geq\phi(t)\ \ \mbox{a.s.}\qquad\mbox{\rm(Growth condition)}$$ (ii) In the stable case $$\psi(x)=cx^{\gamma}\ \ \ \mbox{with $\gamma>1$},$$ there exists a unique proper solution $(x_{t}^{(0)};t>0)$. Further, (Self-similarity) For every $t>0$, ${\mathcal{L}}(x_{t}^{(0)})\ =\ {\mathcal{L}}(\Upsilon/t^{\beta})$, where $\Upsilon:=x_{1}^{(0)}$. (Integrability) For every $t>0$, ${\mathbb{E}}(x^{(0)}_{t})<\infty$. (Measurability) $\left(x^{(0)}_{t};t>0\right)$ can be constructed on the same space as the Brownian CPP, and under this coupling, it is measurable with respect to the $\sigma$-field generated by the CPP. Theorem 4.2 (Dust solutions). In the stable case $\psi(x)=cx^{\gamma}$ ($c>0$ and $\gamma>1$), there exist infinitely many dust solutions to the $\infty$-pop. MK-V equation. Since the law of a dust solution to MK-V is also a weak dust solution, there exist infinitely many dust solutions to the Smoluchowski equation. The rest of the section is mainly dedicated to the proof of those two theorems. In Section 4.1, we prove the existence of a proper solution and derive some of its properties (growth condition, measurability, self-similarity, etc.). In Section 4.2, we show that any proper solution satisfies the growth condition (4.25). This is shown by introducing what we call the marking associated to an $\infty$-pop. solution. In Section 4.3, we use this full marking to prove uniqueness of a proper solution in the stable case, and we show all the required properties of the solution. (This proves part (ii) of Theorem 4.1.) In Section 4.4, we show that the long-term behavior of finite-pop. solutions can be described in terms of the $\infty$-pop. solution. Finally, we close this section with the proof of Theorem 4.2. 4.1. Existence and construction of an $\infty$-pop.
proper solution We start with an $\infty$-pop. analog of Theorem 3.4 that will be exploited repeatedly throughout this section. Theorem 4.3. Let $m$ be a full marking of the CPP. Assume that for every $t>0$, $\{m_{l}(t):(l,t)\in{\mathcal{T}}\}$ (with the points ranked according to the values of $l$ in increasing order) is a sequence of i.i.d. random variables distributed as $m_{0}(t)$. Then $(m_{0}(t);t>0)$ is an $\infty$-pop. solution to MK-V. Proof. By Theorem 3.4, for every $t>0$, $(\theta_{t}\circ m_{0}(u);u\geq 0)$ is a solution of $$\forall u\geq 0,\ dy_{u}\ =\ -\psi(y_{u})du\ +\ \Delta J^{t}_{u}v_{u},\ \ \mbox{with}\ {\mathcal{L}}(y_{0})={\mathcal{L}}(m_{0}(t)),$$ where $\{v_{u}\}_{u>0}$ is an infinite collection of r.v.'s with ${\mathcal{L}}(v_{u})={\mathcal{L}}(y_{u})$ for every $u\geq 0$. Equivalently, $(m_{0}(u);u\geq t)$ is identical in law to the solution of $$\forall u\geq t,\ dz_{u}\ =\ -\psi(z_{u})du\ +\ \Delta J^{0}_{u}v_{u},\ \ \mbox{with}\ {\mathcal{L}}(z_{t})={\mathcal{L}}(m_{0}(t)),$$ where $\{v_{u}\}_{u>t}$ is an infinite collection of r.v.'s with ${\mathcal{L}}(v_{u})={\mathcal{L}}(z_{u})$ for every $u\geq t$. Since this holds for any $t>0$, $m_{0}$ is an $\infty$-pop. solution to MK-V. ∎ In order to construct a non-trivial $\infty$-pop. solution out of the CPP, we will consider $(m_{l}^{(\delta),+}(t);(l,t)\in{\mathcal{T}},\ t\geq\delta)$, the partial marking starting from level $\delta>0$ with initial measure $\nu^{+}(dx)\ =\ \delta_{\infty}(dx)$, i.e., we start with the initial condition $+\infty$ at level $\{t=\delta\}$. Proposition 4.4. Let us consider a positive non-increasing sequence $(\delta_{n})$ going to $0$.
(i) For almost every realization of the CPP ${\mathcal{T}}$, for every $(l,t)\in{\mathcal{T}}$, the sequence $(m_{l}^{(\delta_{n}),+}(t))$ is non-increasing, and if we define $m_{l}^{+}(t)$ as its limit, then $$0<\phi(t)\leq m_{l}^{+}(t)<\infty.$$ (ii) $m^{+}$ is a full marking of ${\mathcal{T}}$ which is measurable with respect to the $\sigma$-field generated by ${\mathcal{T}}$ and does not depend on the sequence $(\delta_{n})$. (iii) This marking is maximal in the sense that for every full marking $m^{-}$ defined on ${\mathcal{T}}$, for every $(l,t)\in{\mathcal{T}},\ m_{l}^{+}(t)\geq m_{l}^{-}(t)$. (iv) $(m^{+}_{0}(t);t>0)$ is an $\infty$-pop. proper solution to MK-V. (v) In the stable case $\psi(x)=cx^{\gamma}$ (for $c>0$, $\gamma>1$), for every $u>0$, $${\mathcal{L}}\left(u^{\beta}m_{0}^{+}(u)\right)\ =\ {\mathcal{L}}\left(m_{0}^{+}(1)\right).$$ Proof. We start with a monotonicity property of our marking of the Brownian CPP that we will use repeatedly throughout this proof. Let us first consider two partial markings $m,\bar{m}$ of the CPP above a given level $\delta>0$. It is clear that if for every $(l,\delta)\in{\mathcal{T}}$ we have $m_{l}(\delta)\geq\bar{m}_{l}(\delta)$, then for every $(l,t)\in{\mathcal{T}}$ with $t\geq\delta$, we must have $$m_{l}(t)\geq\bar{m}_{l}(t).$$ Now, since $(\delta_{n})$ is non-increasing, it follows that for every $(l_{n},\delta_{n})\in{\mathcal{T}}$ $$m_{l_{n}}^{(\delta_{n+1}),+}(\delta_{n})<m_{l_{n}}^{(\delta_{n}),+}(\delta_{n})=\infty,$$ which implies that the sequence of marks $(m^{(\delta_{n}),+}_{l}(t))$ (for any $(l,t)\in{\mathcal{T}}$) is non-increasing and converges to a mark $m_{l}^{+}(t)<\infty$. (Note that since there are only finitely many coalescences in each compact time interval of $(0,\infty)$, Grey's condition ensures that the limiting marking is finite.)
Next, in the absence of coalescence along the vertical branch $[(0,t),(0,\delta_{n})]$ in ${\mathcal{T}}$ (for $\delta_{n}<t$), we have $$m_{0}^{(\delta_{n}),+}(t)\ =\ \phi(t-\delta_{n}).$$ Thus, ignoring all the coalescence events along the vertical branch $[(0,t),(0,\delta_{n})]$ ensures that $m_{0}^{(\delta_{n}),+}(t)$ is bounded from below by the RHS of the latter identity. Since coalescence events can only add extra mass to the eternal branch (and by uniqueness of the solution to the ODE), the inequality in (i) follows after taking the limit $n\to\infty$. Let us now show (ii) and (iii). By continuity of the flow, it is not hard to check that $m^{+}$ defines a full marking of the tree ${\mathcal{T}}$ in the sense prescribed at the beginning of Section 3.2. (In other words, the property “marks evolve according to $\dot{x}_{t}=-\psi(x_{t})$ along branches and marks add up upon coalescence” passes to the limit.) For details, we refer the reader to the proof of Lemma 4.15 below, where we develop a similar argument in more detail. Let us show that $m^{+}$ is independent of the choice of the sequence $(\delta_{n})$. Let $\bar{\delta}_{n}$ be another non-increasing sequence going to $0$, and let $\bar{m}^{+}$ be the limit of $m^{(\bar{\delta}_{n}),+}$ as $n\to\infty$. Passing to a subsequence of $(\bar{\delta}_{n})$ if necessary, one can always assume that $\bar{\delta}_{n}\leq\delta_{n}$ for every $n$. Under this assumption, one gets $\bar{m}^{(\bar{\delta}_{n}),+}_{l_{n}}(\delta_{n})\leq m^{(\delta_{n}),+}_{l_{n}}(\delta_{n})$ for every $(l_{n},\delta_{n})\in{\mathcal{T}}$, and by monotonicity arguments similar to those above this ensures that $\bar{m}^{+}\leq m^{+}$. Finally, assuming that $\bar{\delta}_{n}\geq\delta_{n}$ yields the reverse inequality. This shows that the limiting marking $m^{+}$ does not depend on the choice of the sequence $(\delta_{n})$.
Further, by construction, for any $\delta_{n}\leq t$ and any $(l,t)\in{\mathcal{T}}$, we must have $m^{-}_{l}(t)\leq m_{l}^{(\delta_{n}),+}(t)$, since at time $\delta_{n}$ the initial condition of the marking $m^{(\delta_{n}),+}$ dominates that of $m^{-}$. This completes the proof of the first part of (ii) and of (iii), i.e., that $m^{+}$ is a maximal full marking of the CPP. Finally, since the initial condition of $m_{0}^{(\delta_{n}),+}$ is deterministic and the marking $m^{(\delta_{n}),+}$ above level $\delta_{n}$ is determined deterministically from the tree ${\mathcal{T}}$ above level $\delta_{n}$, we deduce that $(m_{0}^{+}(t);t>0)$ is measurable with respect to $\sigma({\mathcal{T}})$. Let us now proceed with the proof of (iv). By the branching structure of the CPP, at every $t>0$, all the marks at level $t$ are i.i.d. with law ${\mathcal{L}}(m^{+}_{0}(t))$. By Theorem 4.3, $m_{0}^{+}$ is an $\infty$-pop. solution to MK-V. By the inequality in (i), the solution is proper (and goes to $\infty$ as $t\to 0$). Let us now show the scaling identity (v). From Proposition 3.6 and the invariance of the initial condition $\delta_{\infty}(dx)$ under rescaling (i.e., $\mathcal{S}^{\tau,\gamma}(\delta_{\infty})=\delta_{\infty}$ for every $\tau>0$), we get for every $t\geq\delta_{n}\tau$ (4.26) $$m_{0}^{(\delta_{n}\tau),+}(t)\ =_{{\mathcal{L}}}\ \frac{1}{\tau^{\beta}}m^{(\delta_{n}),+}_{0}(t/\tau).$$ Let $\tilde{m}^{+}$ be the limit of $m^{(\delta_{n}\tau),+}$. Since $m^{+}$ does not depend on the choice of the sequence $(\delta_{n})$ (in particular if we replace $(\delta_{n})$ by $(\tau\delta_{n})$), $$m_{0}^{+}(t)\ =\ \tilde{m}^{+}_{0}(t)\ \ =_{{\mathcal{L}}}\ \frac{1}{\tau^{\beta}}m^{+}_{0}(t/\tau),$$ where the latter identity follows from (4.26). This completes the proof of the scaling identity after taking $\tau=t$. ∎ 4.2. Full marking associated to an $\infty$-pop. solution In this subsection, we construct a full marking of the CPP using an $\infty$-pop.
solution to MK-V. Proposition 4.5. Let $(x_{t};t>0)$ be an $\infty$-pop. solution to MK-V. There exists a unique full marking $({\mathcal{T}},m)$ such that (4.27) $${\mathcal{L}}(\{m_{0}(t);t>0\})\ =\ {\mathcal{L}}(\{x_{t};t>0\})$$ and, at every level $s$, $\{m_{l}(s)\ :\ (l,s)\in{\mathcal{T}}\}$ is a sequence of i.i.d. random variables with law ${\mathcal{L}}(x_{s})$. This marking $m$ will be referred to as the marking associated to the solution $x$. Proof. We start by proving existence. The proof goes along similar lines as that of Theorem 4.3. Let $(\delta_{n})$ be a sequence decreasing to $0$. For every $n$, we can consider the partial marking $m^{(\delta_{n})}$ above level $\delta_{n}$ with initial marking ${\mathcal{L}}(x_{\delta_{n}})$. By Theorem 3.4, $(\theta_{\delta_{n}}\circ m^{(\delta_{n})}_{0}(t);t\geq 0)$ is identical in law with the solution of the MK-V equation with inverse population $\delta_{n}$ and initial measure ${\mathcal{L}}(x_{\delta_{n}})$. Equivalently, this amounts to saying that $(m^{(\delta_{n})}_{0}(t);t\geq\delta_{n})$ is identical in law to the solution of $$t\geq\delta_{n},\ dy_{t}=-\psi(y_{t})dt\ +\ \Delta J^{0}_{t}v_{t},\ y_{\delta_{n}}=x_{\delta_{n}},\ \ \mbox{with ${\mathcal{L}}(v_{t})={\mathcal{L}}(y_{t})$},$$ and by uniqueness of the solution to the latter equation, $(m^{(\delta_{n})}_{0}(t);t\geq\delta_{n})$ is identical in law to $(x_{t};t\geq\delta_{n})$. Further, the independence of the marks at level $\delta_{n}$ easily implies (by the branching structure of the CPP) that for every $s\geq\delta_{n}$, the set of marks $\{m^{(\delta_{n})}_{l}(s)\ :\ (l,s)\in{\mathcal{T}}\}$ (labelled by increasing values of $l$) is a sequence of i.i.d. marks with law ${\mathcal{L}}(x_{s})$. Let ${\mathcal{T}}^{\delta_{n}}$ be the subset of the tree ${\mathcal{T}}$ consisting of all the points above time level $\delta_{n}$.
By the previous argument, the sequence $\{({\mathcal{T}}^{\delta_{n}},m^{(\delta_{n})})\}_{n}$ is consistent, in the sense that if we consider the marked tree $({\mathcal{T}}^{\delta_{n+1}},m^{(\delta_{n+1})})$ above level $\delta_{n}$, the resulting object is distributed as $({\mathcal{T}}^{\delta_{n}},m^{(\delta_{n})})$. By the Kolmogorov extension theorem, we can then construct a unique full marking $({\mathcal{T}},m)$ whose restriction above level $\delta_{n}$ is distributed as $({\mathcal{T}}^{\delta_{n}},m^{(\delta_{n})})$; thus $m$ is such that $${\mathcal{L}}(\{m_{0}(t);t>0\})\ =\ {\mathcal{L}}(\{x_{t};t>0\})$$ and, at every level $s$, $\{m_{l}(s)\ :\ (l,s)\in{\mathcal{T}}\}$ is a sequence of i.i.d. marks with law ${\mathcal{L}}(x_{s})$. For uniqueness, it is enough to note that the second property (i.e., the distribution of the marks at level $s$) implies that any two full markings satisfying this property must be identical in law above level $s$, for every $s>0$. ∎ Lemma 4.6. Let $(x_{t};t>0)$ be a proper solution and let $m$ be the associated full marking. Then $(m_{0}(t);t>0)$ (and thus $(x_{t};t>0)$) satisfies the growth condition (4.25). Proof. We will first need the following preliminary step, which will also be used later in this manuscript. Step 1. Let us first consider a general rooted ultrametric tree ${\bf t}$ whose leaves are denoted by $l_{1},\cdots,l_{n}$ and such that the distance between the root and the leaves is given by $\tau$. Consider a marking of the leaves $M_{l_{1}},\cdots,M_{l_{n}}\in{\mathbb{R}}^{+}$, and let us consider the mark $M_{0}$ of the root obtained by propagating the marks according to the dynamics $\dot{x}=-\psi(x)$ along the branches, and by adding the marks upon coalescence (as in the Brownian CPP). Recall that $t\mapsto\phi(t+q(x_{0}))$ is the unique solution to $\dot{u}=-\psi(u)$ with initial condition $x_{0}$ at time 0.
We claim that (4.28) $$\phi\left(\tau+q\left(\sum_{i=1}^{n}M_{i}\right)\right)\ \leq\ M_{0}\ \leq\ \sum_{i=1}^{n}\phi(\tau+q(M_{i})).$$ On the one hand, the RHS of the inequality corresponds to the extreme case of the star tree, i.e., when all the branches coalesce simultaneously at the root of the tree (in this case the marks evolve independently along each branch, and then add up at the root). On the other hand, the LHS corresponds to the degenerate situation where all the leaves coalesce instantaneously (in which case the marks add up to $\sum_{i=1}^{n}M_{i}$ and then evolve along a single branch). Since $\psi\in\mathscr{H}$ and is convex, it is super-linear in the sense that for $x,y>0$ $$\psi(x+y)\geq\psi(x)+\psi(y).$$ Recall that marks evolve according to the dynamics $\dot{x}=-\psi(x)$ along each branch, so that the latter super-linearity property implies that the marking decreases faster if we collapse two branches into a single branch: if we consider $$\dot{r}_{t}\ =\ -\psi(r_{t}),\ r_{0}=a_{1}+a_{2},\ \ \mbox{and }\forall i=1,2,\ \ \dot{r}^{i}_{t}\ =\ -\psi(r_{t}^{i}),\ r^{i}_{0}=a_{i},$$ then $r_{t}\leq r_{t}^{1}+r_{t}^{2}$. (4.28) can easily be deduced from there by a simple induction on the number of nodes of the ultrametric tree. Step 2. Let $D^{(\delta_{k})}_{{\mathcal{T}}}(l,t)$ be the set of descendants of $(l,t)$ in the tree ${\mathcal{T}}$ with time coordinate $\delta_{k}$. See Fig 3. $|D^{(\delta_{k})}_{{\mathcal{T}}}(l,t)|$ denotes the cardinality of the set $D^{(\delta_{k})}_{{\mathcal{T}}}(l,t)$. Let $m$ be the full marking constructed from the solution $x$.
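In the stable case, $\phi(\tau+q(x))$ is just the explicit flow of $\dot{x}=-cx^{\gamma}$, so both sides of (4.28) and the collapsing inequality $r_{t}\leq r^{1}_{t}+r^{2}_{t}$ can be tested numerically. A two-leaf sketch of ours, with the merge placed at an intermediate depth $s$:

```python
def flow(x0, t, c=1.0, gamma=2.0):
    # closed-form solution of du/dt = -c u**gamma, i.e. phi(t + q(x0)), cf. (4.30)
    return (c * (gamma - 1.0) * t + x0 ** (1.0 - gamma)) ** (-1.0 / (gamma - 1.0))

M1, M2, tau = 3.0, 0.5, 2.0
lo = flow(M1 + M2, tau)                # chain tree: merge first, then flow
hi = flow(M1, tau) + flow(M2, tau)     # star tree: flow separately, add at the root
for s in (0.25, 1.0, 1.75):            # merge at intermediate depth s in (0, tau)
    root = flow(flow(M1, s) + flow(M2, s), tau - s)
    assert lo <= root + 1e-12 and root <= hi + 1e-12
```

Merging immediately ($s\to 0$) saturates the lower (chain) bound, while merging only at the root ($s\to\tau$) saturates the upper (star) bound.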
By construction, at a given level $\delta_{k}<t$, the set of marks $\{m_{l}(\delta_{k})\}_{(l,\delta_{k})\in D^{(\delta_{k})}_{{\mathcal{T}}}(0,t)}$ is identical in law with $\{y^{k}_{n}\}_{n\leq|D^{(\delta_{k})}_{{\mathcal{T}}}(0,t)|}$, where $\{y_{n}^{k}\}_{n}$ is a sequence of independent random variables with law ${\mathcal{L}}(x_{\delta_{k}})$, independent of $|D^{(\delta_{k})}_{{\mathcal{T}}}({0,t})|$. Since $|D^{(\delta_{k})}_{{\mathcal{T}}}(0,t)|\to\infty$ a.s. as $k\to\infty$, we have $$M^{k}(0,t)\ :=\ \sum_{(l,\delta_{k})\in D^{(\delta_{k})}_{{\mathcal{T}}}(0,t)}m_{l}(\delta_{k})\ \to\infty\ \ \mbox{in probability.}$$ Here, we used the fact that ${\mathcal{L}}(m_{l}(\delta_{k}))={\mathcal{L}}(x_{\delta_{k}})$ and that the solution is proper. Indeed, since the solution is proper, there exist $\varepsilon>0$ and a sequence $\delta_{k}^{\prime}>0$ converging to 0 such that $x_{\delta_{k}^{\prime}}\geq\varepsilon$ with probability larger than $\varepsilon$. Replacing $\delta_{k}$ by $\delta_{k}^{\prime}$ yields the result. Finally, by Step 1, we have (4.29) $$m_{0}(t)\geq\phi(t-\delta_{k}+q(M^{k}(0,t))),$$ and the result follows by letting $k\to\infty$. ∎ Remark 4.7. Note that we only used the LHS of (4.28) in the proof of the lemma. The RHS will be useful later. 4.3. Uniqueness of a proper solution in the stable case In this section, we restrict our study to the stable case $\psi(x)=cx^{\gamma}$ with $\gamma>1$. Then $q(x)=\beta x^{1-\gamma}/c$ and $\phi(t)=(ct/\beta)^{-\beta}$. As a consequence, the ODE $\dot{z}_{t}=\ -\psi(z_{t})$ with initial condition $z_{0}$ at time 0 has the solution (4.30) $$\forall t\geq 0,\ \ z_{t}\ =\ \left(c(\gamma-1)t+z_{0}^{1-\gamma}\right)^{-\beta}.$$ Proposition 4.8. Let $(x_{t};t>0)$ be a proper solution to MK-V. Let $m^{-}$ be the full marking of the CPP ${\mathcal{T}}$ associated to $x$ (as defined in Proposition 4.5). Let $m^{+}$ be the maximal full marking of ${\mathcal{T}}$ (as defined in Proposition 4.4).
Then $(m_{0}^{+}(t);t>0)=(m_{0}^{-}(t);t>0)$ a.s. As a consequence, $x$ is identical in law to $m_{0}^{+}$. Proof. Let $T>0$. By right continuity of $m_{0}^{\pm}$, it is enough to show that (4.31) $$m^{+}_{0}(T)=m_{0}^{-}(T)\ \ \mbox{a.s.}$$ The rest of the proof is dedicated to this result. We note that in the following, we will use repeatedly the fact that $m^{\pm}$ satisfy the growth condition (4.32) $$\forall t>0,\ \ m_{0}^{\pm}(t)\ \geq\ (c(\gamma-1)t)^{-\beta}$$ (as a consequence of Lemma 4.6 and Proposition 4.4). Let us consider the set of descendants of $(0,T)$ belonging to the PPP ${\mathcal{P}}\subset{\mathcal{T}}$ (i.e., the branching points in the progeny of $(0,T)$), and let us denote this set by $\mathcal{D}_{0,T}$. ${\mathcal{D}}_{0,T}$ is a set of points endowed with a natural (a.s.) binary tree structure; see Fig. 3. We can index every point in $\mathcal{D}_{0,T}$ by a point in ${\bf t}=\cup_{n\in{\mathbb{N}}}\{0\}\otimes\{0,1\}^{n}$, i.e., we construct a bijection $G$ from ${\bf t}$ to $\mathcal{D}_{0,T}$ in such a way that • $G(0)\ =\ (0,T)$ • If $\kappa\in{\bf t}$, $G(\kappa,0)$ (resp., $G(\kappa,1)$) is the left child (resp., right child) of $G(\kappa)$ in ${\mathcal{D}}_{0,T}$. The binary (planar) tree ${\bf t}$ is naturally equipped with a triplet $(g_{\kappa}^{+},g^{-}_{\kappa},d_{\kappa})_{\kappa\in{\bf t}}$, where $g_{\kappa}^{\pm}$ is the mark $m^{\pm}_{\bar{l}}(\bar{t}-)$ with $(\bar{l},\bar{t})=G(\kappa)$, and $d_{\kappa}$ (the depth of the point $\kappa$) is the time coordinate of the point $G(\kappa)$ in ${\mathcal{P}}$. Note that for $\kappa\neq 0$, the point $G(\kappa)$ is a branching point of the CPP and corresponds to a discontinuity point of the marking. In our notation, the marks $g_{\kappa}^{\pm}$ are taken right before the occurrence of the discontinuity (i.e., before we add up the marks at the branching point). We now fix $\kappa\in{\bf t}$.
Our first goal is to show the “passage formula” (4.41) below, which will be achieved through Lemmas 4.9 and 4.10. This formula will allow us to go from an arbitrary mark $\kappa$ to the marks of its children. The desired formula (4.31) will be achieved from there by an induction on the nodes of the tree. First, from the definition of the marking $m^{\pm}$, we can deduce the marking at $\kappa$ from the marking of its two children $\left((\kappa,0),(\kappa,1)\right)$, namely, if we consider the dynamics (4.33) $$\dot{z}^{\pm}_{u}\ =\ -\psi(z^{\pm}_{u}),\ \ \ z^{\pm}_{0}=\sum_{i=0}^{1}g_{(% \kappa,i)}^{\pm}$$ then $g^{\pm}_{\kappa}\ =\ z^{\pm}_{d_{\kappa}-d_{\kappa,0}}\ =\ z^{\pm}_{d_{\kappa}% -d_{\kappa,1}}$ (since $d_{\kappa,0}=d_{\kappa,1}$). In other words, we sum up the marks carried by the two children of $G(\kappa)$ and let the marking evolve according to the dynamics $\dot{x}=-\psi(x)$ along the branch connecting $\kappa$ to its children $(\kappa,0)$ and $(\kappa,1)$. Alternatively, we will consider the dynamics (4.34) $$\mbox{for $i=0,1$ }\ \ \ \dot{z}^{\pm,i}_{u}\ =\ -\psi(z^{\pm,i}_{u}),\ \ z^{% \pm,i}_{0}\ =\ g^{\pm}_{(\kappa,i)}$$ In words, instead of merging the two branches at $G(\kappa,0)=G(\kappa,1)$, we treat the two branches as if the merging had not occurred. Lemma 4.9. Let $z^{\pm}$ and $z^{\pm,i}$ be defined as above, then for every $u\geq 0$, (4.35) $$\Delta z_{u}\ \leq\ \sum_{i=0}^{1}\ \Delta z_{u}^{i},\ \ \mbox{where }\ \Delta z% _{u}\ :=\ z^{+}_{u}-z^{-}_{u},\ \ \mbox{and for $i=0,1$, }\ \Delta z^{i}_{u}\ % :=\ z^{+,i}_{u}-z^{-,i}_{u}.$$ Proof. Define $$f_{t}(x,y)\ :=\ \left(c(\gamma-1)t+(x+y)^{1-\gamma}\right)^{-\beta}\ -\ \left(% c(\gamma-1)t+x^{1-\gamma}\right)^{-\beta}\ -\ \left(c(\gamma-1)t+y^{1-\gamma}% \right)^{-\beta}$$ so that according to (4.30) $$z_{t}^{\pm}-\sum_{i=0}^{1}z_{t}^{\pm,i}\ =\ f_{t}(z_{0}^{\pm,0},z_{0}^{\pm,1}).$$ We need to prove that $f_{t}(z_{0}^{+,0},z_{0}^{+,1})\leq f_{t}(z_{0}^{-,0},z_{0}^{-,1})$. 
Since $z^{+,i}_{0}\geq z^{-,i}_{0}$ for $i=0,1$, the problem boils down to proving that the coordinates of the gradient of $f_{t}$ are non-positive for $x,y>0$. We have $$\partial_{x}f_{t}(x,y)\ =\ g_{t}(1/(x+y))-g_{t}(1/x),\ \mbox{where }\ g_{t}(u)=\left(c(\gamma-1)t+u^{\gamma-1}\right)^{-\frac{\gamma}{\gamma-1}}u^{\gamma},$$ and one can easily check that $g_{t}$ is increasing in $u$ (for $\gamma>1$), thus showing that $\partial_{x}f_{t}(x,y)\leq 0$. An analogous argument shows that the gradient is non-positive along the $y$ coordinate. This completes the proof of the lemma. ∎ Lemma 4.10. For any $\kappa\in{\bf t}$, and $i=0,1$, (4.36) $$\Delta z^{i}_{d_{\kappa}-d_{(\kappa,i)}}\leq\Delta g_{(\kappa,i)}\left(\frac{d_{(\kappa,i)}}{d_{\kappa}}\right)^{\frac{\gamma}{\gamma-1}\left(1+F(\frac{d_{\kappa,i,*}}{d_{\kappa}})\right)},$$ where $*=0$ or $1$ (the value of $d_{\kappa,i,*}$ does not depend on the choice of $*$) and $$\forall x\in[0,1],\ \ F(x)\ =\ x\frac{(1-2^{1-\gamma})x}{1-(1-2^{1-\gamma})x}\ \geq\ 0.$$ Proof. Step 1. Recall from (4.34) that $\Delta z_{0}^{i}=\Delta g_{(\kappa,i)}$, where $\Delta g_{(\kappa,i)}=g_{(\kappa,i)}^{+}-g_{(\kappa,i)}^{-}$, and (4.37) $$d\Delta z_{u}^{i}\ =\ -(\psi(z_{u}^{+,i})-\psi(z_{u}^{-,i}))du.$$ The strategy consists in bounding the RHS of this differential relation from above and then applying Grönwall's lemma. First, by convexity of $\psi$ and since $z^{+,i}_{u}\geq z^{-,i}_{u}$, $$\psi(z_{u}^{+,i})-\psi(z_{u}^{-,i})\ \geq\ \psi^{\prime}(z_{u}^{-,i})\Delta z^{i}_{u}\ =\ c\gamma(z_{u}^{-,i})^{\gamma-1}\Delta z^{i}_{u}.$$ We will now bound $z_{u}^{-,i}$ from below on the interval $[0,d_{\kappa}-d_{(\kappa,i)}]$. Let $v$ be such that $$z_{0}^{-,i}\ =\ g^{-}_{(\kappa,i)}\ =\ \left(\frac{v}{c(\gamma-1)d_{(\kappa,i)}}\right)^{\beta}.$$ Note that since the depth of $g_{(\kappa,i)}$ is given by $d_{(\kappa,i)}$ and since we have $m^{\pm}_{l}(u)\geq\left(\frac{1}{c(\gamma-1)u}\right)^{\beta}$ (see (4.32)), we have $v\geq 1$.
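The monotonicity of $g_{t}$ invoked above can be seen from $(\log g_{t})^{\prime}(u)=\gamma A/\big(u(A+u^{\gamma-1})\big)>0$ with $A=c(\gamma-1)t$; a quick grid check (a sketch of ours, with illustrative parameter values):

```python
C, GAMMA, T = 1.0, 2.5, 0.3
A = C * (GAMMA - 1.0) * T        # the constant c(gamma-1)t appearing in f_t and g_t
BETA = 1.0 / (GAMMA - 1.0)

def g(u):
    # g_t(u) = (c(gamma-1)t + u**(gamma-1))**(-gamma/(gamma-1)) * u**gamma
    return (A + u ** (GAMMA - 1.0)) ** (-GAMMA * BETA) * u ** GAMMA

def f(x, y):
    # the function f_t from the proof of Lemma 4.9
    flow = lambda w: (A + w ** (1.0 - GAMMA)) ** (-BETA)
    return flow(x + y) - flow(x) - flow(y)

grid = [0.01 * k for k in range(1, 1001)]
vals = [g(u) for u in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))   # g_t strictly increasing
assert f(3.0, 1.0) <= f(2.0, 1.0) <= f(1.0, 1.0)    # hence f_t non-increasing in x
```

The second assertion is the conclusion of the lemma in action: increasing either initial mass can only make the collapsed flow fall further below the sum of the separate flows.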
Solving the equation (4.34), we find that for any $u\in[0,d_{\kappa}-d_{\kappa,i}]$ $$z_{u}^{-,i}\ =\ \left(c(\gamma-1)u+c(\gamma-1)\frac{d_{\kappa,i}}{v}\right)^{-\beta}\ =\ \left(c(\gamma-1)(u+d_{\kappa,i})\right)^{-\beta}\left(1+\frac{d_{\kappa,i}(1-1/v)}{(u+d_{\kappa,i})-d_{\kappa,i}(1-\frac{1}{v})}\right)^{\beta}\ \geq\ \left(c(\gamma-1)(u+d_{\kappa,i})\right)^{-\beta}\left(1+\frac{1-\frac{1}{v}}{\frac{d_{\kappa}}{d_{\kappa,i}}-(1-\frac{1}{v})}\right)^{\beta},$$ where the two sides of the inequality are equal when $u=d_{\kappa}-d_{\kappa,i}$. Putting everything together, we get (4.38) $$\forall u\in[0,d_{\kappa}-d_{\kappa,i}],\ \ \psi(z_{u}^{+,i})-\psi(z_{u}^{-,i})\geq\frac{\Delta z_{u}^{i}}{u+d_{\kappa,i}}\ \frac{\gamma}{\gamma-1}\left(1+\frac{1-\frac{1}{v}}{\frac{d_{\kappa}}{d_{\kappa,i}}-(1-\frac{1}{v})}\right).$$ Step 2. Since the RHS of the latter inequality is increasing in $v$, we now produce a lower bound for $v$. By construction, the mark $g^{-}_{\kappa,i}$ is obtained by considering the dynamics (4.39) $$dy_{t}\ =\ -\psi(y_{t})dt,\ \ y_{0}\ =\ \sum_{j=0}^{1}g_{\kappa,i,j}^{-},$$ evaluated at time $d_{\kappa,i}-d_{\kappa,i,*}$, where $*=0$ or $1$. Since (4.32) implies that $g_{\kappa,i,j}^{-}\geq(c(\gamma-1)d_{\kappa,i,*})^{-\beta}$, we get $$g^{-}_{\kappa,i}\ \geq\ \left(c(\gamma-1)(d_{\kappa,i}-d_{\kappa,i,*})+\frac{1}{2^{\gamma-1}}c(\gamma-1)d_{\kappa,i,*}\right)^{-\beta}\ =\ \left(c(\gamma-1)d_{\kappa,i}\right)^{-\beta}\left(1-(1-\frac{1}{2^{\gamma-1}})\frac{d_{\kappa,i,*}}{d_{\kappa,i}}\right)^{-\beta},$$ where the two sides of the inequality are equal when the initial condition of $(y_{t};t\geq 0)$ is taken to be $2(c(\gamma-1)d_{\kappa,i,*})^{-\beta}$ (the factor $2$ coming from the sum in the initial condition of (4.39)).
This implies that (4.40) $$v\geq\frac{1}{1-(1-\frac{1}{2^{\gamma-1}})\frac{d_{\kappa,i,*}}{d_{\kappa,i}}}.$$ Step 3. Combining the two previous steps yields $$\forall u\in[0,d_{\kappa}-d_{\kappa,i}],\ \ \psi(z_{u}^{+,i})-\psi(z_{u}^{-,i})\geq\frac{\Delta z_{u}^{i}}{u+d_{\kappa,i}}\ \frac{\gamma}{\gamma-1}\left(1+G(\frac{d_{\kappa,i}}{d_{\kappa}},\frac{d_{\kappa,i,*}}{d_{\kappa,i}})\right),$$ where $G(x,y)\ =\ x\frac{(1-2^{1-\gamma})y}{1-(1-2^{1-\gamma})y}$. Further, since $G$ is increasing in $x$ and $y$ (recall $\gamma>1$), and since $\frac{d_{\kappa,i,*}}{d_{\kappa}}\leq\frac{d_{\kappa,i}}{d_{\kappa}}$ and $\frac{d_{\kappa,i,*}}{d_{\kappa}}\leq\frac{d_{\kappa,i,*}}{d_{\kappa,i}}$, we have $$\forall u\in[0,d_{\kappa}-d_{\kappa,i}],\ \ \psi(z_{u}^{+,i})-\psi(z_{u}^{-,i})\geq\frac{\Delta z_{u}^{i}}{u+d_{\kappa,i}}\ \frac{\gamma}{\gamma-1}\left(1+F(\frac{d_{\kappa,i,*}}{d_{\kappa}})\right).$$ Next, if we solve $$d\bar{y}_{u}\ =\ -\frac{\bar{y}_{u}}{u+d_{\kappa,i}}\ \frac{\gamma}{\gamma-1}\left(1+F(\frac{d_{\kappa,i,*}}{d_{\kappa}})\right)du,\ \ \bar{y}_{0}=\Delta g_{(\kappa,i)},$$ one finds the RHS of (4.36) at $u=d_{\kappa}-d_{\kappa,i}$. The proof of the lemma is achieved by a direct application of Grönwall's lemma. ∎ Recall that $\Delta g_{\kappa}=\Delta z_{d_{\kappa}-d_{\kappa,*}}$ and $\Delta z=z^{+}-z^{-}$, where $z^{+}$ and $z^{-}$ follow the dynamics defined in (4.33). From Lemma 4.9, we have $\Delta g_{\kappa}\leq\sum_{i=0}^{1}\Delta z^{i}_{d_{\kappa}-d_{(\kappa,i)}}$. By the previous lemma, this implies that $$\Delta g_{\kappa}\leq\sum_{i=0}^{1}\Delta g_{(\kappa,i)}\left(\frac{d_{(\kappa,i)}}{d_{\kappa}}\right)^{\frac{\gamma}{\gamma-1}\left(1+F(\frac{d_{\kappa,i,*}}{d_{\kappa}})\right)},$$ or equivalently that (4.41) $$d_{\kappa}^{\beta}\Delta g_{\kappa}\leq\sum_{i=0}^{1}d_{(\kappa,i)}^{\beta}\Delta g_{(\kappa,i)}\left(\frac{d_{(\kappa,i)}}{d_{\kappa}}\right)^{\left(1+\frac{\gamma}{\gamma-1}F(\frac{d_{\kappa,i,*}}{d_{\kappa}})\right)},$$ where $F$ is defined in the previous lemma. Let us now proceed with the rest of the proof.
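The final step is a linear comparison: if $d\Delta z^{i}_{u}\leq-\frac{C}{u+d}\,\Delta z^{i}_{u}\,du$ with $C=\frac{\gamma}{\gamma-1}(1+F(\cdot))$ and $d=d_{\kappa,i}$, then Grönwall's lemma gives $\Delta z^{i}_{u}\leq\Delta z^{i}_{0}\big(\frac{d}{u+d}\big)^{C}$, and at $u=d_{\kappa}-d_{\kappa,i}$ the decay factor is $(d_{\kappa,i}/d_{\kappa})^{C}$, as in (4.36). A numerical sanity check of this closed form (ours, with arbitrary illustrative values):

```python
C_EXP, D, Y0 = 3.2, 0.4, 1.0   # C_EXP plays the role of gamma/(gamma-1) * (1 + F(...))

def closed(u):
    # closed-form solution of dy/du = -C_EXP * y / (u + D), y(0) = Y0
    return Y0 * (D / (u + D)) ** C_EXP

# explicit Euler integration of the same linear ODE, for comparison
U, steps = 1.2, 200000
h, y = U / steps, Y0
for k in range(steps):
    y -= h * C_EXP * y / (k * h + D)
assert abs(y - closed(U)) < 1e-3 * closed(U)
```

Writing $d_{\kappa}=d+u$, the factor $(D/(U+D))^{C}$ computed here is exactly $(d_{\kappa,i}/d_{\kappa})^{C}$.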
For $n\geq 1$, let ${\bf t}_{n}=\{0\}\otimes\{0,1\}^{n}$, which can be thought of as the set of vertices in ${\bf t}$ at distance $n$ from the root. For any $v\in{\bf t}_{n}$, let $v(i)\in{\bf t}_{i}$ be the vertex obtained by only considering the first $i$ coordinates of $v$ (i.e., $v(i)$ is the ancestor of $v$ at distance $i$ from the root). By a simple induction, we can generalize the previous inequality to the following one $$\forall n\geq 1,\ \ T^{\beta}\Delta g_{0}\leq\sum_{v\in{\bf t}_{2n}}d_{v}^{\beta}\Delta g_{v}\ \Pi_{i=0}^{2n-1}\left(\frac{d_{v(i+1)}}{d_{v(i)}}\right)^{1+\frac{\gamma}{\gamma-1}F(\frac{d_{v(i+2)}}{d_{v(i)}})}$$ (using the fact that $d_{v(0)}=d_{0}=T$), which implies (using the fact that $F\geq 0$ on $[0,1]$) that (4.42) $$\displaystyle\forall n\geq 1,\ \ T^{\beta}\Delta g_{0}$$ $$\displaystyle\leq$$ $$\displaystyle\sum_{v\in{\bf t}_{2n}}\ d_{v}^{\beta}\Delta g_{v}\Pi_{i=0}^{n-1}\left(\frac{d_{v(2i+1)}}{d_{v(2i)}}\right)^{\left(1+\frac{\gamma}{\gamma-1}F(\frac{d_{v(2i+2)}}{d_{v(2i)}})\right)}\frac{d_{v(2i+2)}}{d_{v(2i+1)}}$$ $$\displaystyle=$$ $$\displaystyle\sum_{v\in{\bf t}_{2n}}d_{v}^{\beta}\Delta g_{v}\ \Pi_{i=0}^{n-1}\ \epsilon(\frac{d_{v(2i+1)}}{d_{v(2i)}},\frac{d_{v(2i+2)}}{d_{v(2i)}})$$ $$\displaystyle\leq$$ $$\displaystyle 2\sum_{v\in{\bf t}_{2n}}d_{v}^{\beta}g^{+}_{v}\ \Pi_{i=0}^{n-1}\ \epsilon(\frac{d_{v(2i+1)}}{d_{v(2i)}},\frac{d_{v(2i+2)}}{d_{v(2i)}})$$ where $\epsilon(x,y)=yx^{\gamma/(\gamma-1)F(y)}$. We will now take advantage of the self-similarity in the Brownian CPP. Lemma 4.11. Let $v\in{\bf t}_{n}$. Then (1) $\{d_{v(i+1)}/d_{v(i)}\}_{i=0}^{n-1}$ is a sequence of i.i.d. random variables uniformly distributed on $[0,1]$. (2) $d_{v}^{\beta}g^{+}_{v}=T^{\beta}g^{+}_{0}=T^{\beta}m^{+}_{0}(T)$ in law. (3) $\{d_{v(i+1)}/d_{v(i)}\}_{i=0}^{n-1}$ and $d_{v}^{\beta}g^{+}_{v}$ are independent. Proof. W.l.o.g. we take $v=0_{n}$ and $T=1$, where $0_{n}$ is the vector of length $n$ filled with $0$'s. 
Let $\tau>0$ be random or deterministic. Define the scaling operator $F_{1/\tau}(l,t)\ =\ (l/\tau,t/\tau)$. Finally, define $$l_{\tau}\ :=\ \inf\{l>0\ :\ (l,t)\in{\mathcal{P}},\ \ t\geq\tau\},\ \ {\mathcal{P}}_{\tau}:=\{(l,t)\in{\mathcal{P}}\ :\ l\leq l_{\tau}\},$$ so that $\{l_{\tau}\}\times{\mathbb{R}}_{*}^{+}$ is the left-most branch in the tree ${\mathcal{T}}$ alive at time $\tau$. In particular, we note that $m_{0}(\tau)$ is measurable with respect to ${\mathcal{P}}_{\tau}$, since the vertical branches of the CPP with $l$-coordinates such that $l\geq l_{\tau}$ will not coalesce with the branch $\{0\}\times{\mathbb{R}}^{+}$ before time $\tau$. From the scale invariance properties of the CPP, it is not hard to show that (i) $\{d_{0_{n}(i+1)}/d_{0_{n}(i)}\}_{i=0}^{n-1}$ is a sequence of uniform random variables on $[0,1]$, and that if we take $\tau=d_{0_{n}}$ then (ii) $F_{1/d_{\tau}}({\mathcal{P}}_{d_{\tau}})$ is identical in law to ${\mathcal{P}}_{1}$, and (iii) that $\{d_{0_{n}(i+1)}/d_{0_{n}(i)}\}_{i=0}^{n-1}$ and $F_{1/d_{\tau}}({\mathcal{P}}_{d_{\tau}})$ are independent. Further, by reasoning along the exact same lines as Proposition 4.4(v), one can show that $d_{\tau}^{\beta}m^{+}_{\cdot}(\cdot d_{\tau})$ coincides with the marking $m^{+}$ on the rescaled CPP $F_{1/d_{\tau}}({\mathcal{P}}_{d_{\tau}})$. This completes the proof of the lemma. 
∎ Passing to the expectation on both sides of (4.42) and using the previous lemma, we get that (4.43) $$\displaystyle\forall n\geq 1,\ \ {\mathbb{E}}\left(T^{\beta}\Delta g_{0}\right)$$ $$\displaystyle\leq$$ $$\displaystyle 2\sum_{v\in{\bf t}_{2n}}{\mathbb{E}}\left(d_{v}^{\beta}g^{+}_{v}\right)\Pi_{i=0}^{n-1}{\mathbb{E}}\left(\ \epsilon(\frac{d_{v(2i+1)}}{d_{v(2i)}},\frac{d_{v(2i+2)}}{d_{v(2i)}})\right)$$ $$\displaystyle=$$ $$\displaystyle 2\times 4^{n}\times T^{\beta}{\mathbb{E}}\left(g^{+}_{0}\right)\ \left({\mathbb{E}}\left(\epsilon(U_{1},U_{1}U_{2})\right)\right)^{n}$$ where $U_{1}$ and $U_{2}$ are two uniform random variables on $[0,1]$. From the definition of $\epsilon$, we have $$\epsilon(U_{1},U_{1}U_{2})<U_{1}U_{2}\ \mbox{a.s.}$$ Since ${\mathbb{E}}(U_{1}U_{2})=1/4$, we have ${\mathbb{E}}(\epsilon(U_{1},U_{1}U_{2}))<1/4$, and thus the RHS of (4.43) goes to $0$ as $n\to\infty$. Finally, the proof of (4.31) (and thus of Proposition 4.8) is completed by the following lemma. Lemma 4.12. ${\mathbb{E}}(g_{0}^{+})\ =\ {\mathbb{E}}\left(m^{+}_{0}(T)\right)<\infty.$ Proof. Let $n\geq 1$. For every $v\in{\bf t}_{n}$ we consider the dynamics (4.44) $$\dot{r}_{t}^{v}\ =\ -\psi(r_{t}^{v}),\ \ \ r_{0}^{v}=g_{v}.$$ From the RHS of (4.28), we get that $${\mathbb{E}}\left(m_{0}^{+}(T)\right)\ \leq 2^{n}{\mathbb{E}}\left(r^{0_{n}}_{d_{0}-d_{0_{n}}}\right)\ \leq 2^{n}{\mathbb{E}}(\bar{r}_{T-d_{0_{n}}})$$ where $\bar{r}$ follows the dynamics $d\bar{r}=-\psi(\bar{r})dt$ with initial condition $+\infty$. 
Solving for $\bar{r}$ we get $$\bar{r}_{T-d_{0_{n}}}=\left(c(\gamma-1)(d_{0}-d_{0_{n}})\right)^{-\beta}\ =\ \left(c(\gamma-1)T(1-\Pi_{i=0}^{n-1}\frac{d_{0_{n}(i+1)}}{d_{0_{n}(i)}})\right)^{-\beta}$$ and from Lemma 4.11, it remains to show that the following integral is finite for $n$ large enough $$I_{n}^{\prime}\ =\ \int_{[0,1]^{n}}\left(1-\Pi_{i=1}^{n}u_{i}\right)^{-\beta}du_{1}\cdots du_{n}.$$ Since the singularity of the integrand is at $(1,\cdots,1)$, it suffices to consider the integral $$I_{n}=\ \int_{[\frac{1}{2},1]^{n}}\left(1-\Pi_{i=1}^{n}u_{i}\right)^{-\beta}du_{1}\cdots du_{n}.$$ Let us now make the change of variable $\forall i\in[n],\ w_{i}\ =\ \Pi_{j=i}^{n}u_{j}$ so that $$\displaystyle I_{n}$$ $$\displaystyle=$$ $$\displaystyle\int_{w_{1}\leq\cdots\leq w_{n},\forall i\in[n],w_{i}\in[\frac{1}{2^{n-i+1}},1]}\left(1-w_{1}\right)^{-\beta}\frac{1}{\Pi_{i=2}^{n}w_{i}}dw_{1}\cdots dw_{n}$$ $$\displaystyle\leq$$ $$\displaystyle 2^{\frac{n(n-1)}{2}}\int_{w_{1}\leq\cdots\leq w_{n},\forall i\in[n],w_{i}\in[\frac{1}{2^{n-i+1}},1]}\left(1-w_{1}\right)^{-\beta}dw_{1}\cdots dw_{n}$$ $$\displaystyle=$$ $$\displaystyle\frac{2^{\frac{n(n-1)}{2}}}{\beta-1}\int_{w_{2}\leq\cdots\leq w_{n},\forall i\in\{2,\cdots,n\},w_{i}\in[\frac{1}{2^{n-i+1}},1]}\left(1-w_{2}\right)^{1-\beta}dw_{2}\cdots dw_{n}+K_{n,\gamma}$$ where $K_{n,\gamma}$ is a finite constant. Iterating the same calculation, we get $$I_{n}\leq C_{n,\gamma}\int_{w_{n}\in[\frac{1}{2},1]}\left(1-w_{n}\right)^{n-1-\beta}dw_{n}+\bar{C}_{n,\gamma}$$ where $C_{n,\gamma},\bar{C}_{n,\gamma}<\infty$. Taking $n>\beta$ makes the integral $I_{n}$ finite. This completes the proof of the lemma. ∎ Proof of Theorem 4.1. The existence of a proper MK-V solution is provided by Proposition 4.4. We proved the uniqueness of the solution in the stable case in Section 4.3. The scaling and measurability properties follow directly from Proposition 4.4. The integrability property follows from Lemma 4.12. ∎ 4.4. 
Asymptotic behavior of finite population models In this section, we use Theorem 4.1 to deduce some asymptotic results on the MK-V equation (1.3). Theorem 4.13. Again, we assume that $\psi(x)=cx^{\gamma}$ with $c>0$ and $\gamma>1$. Let $\nu\in M_{P}({\mathbb{R}}^{+})$, with $\nu(\{0\})<1$. For every $\delta>0$, let $(x^{(\delta)}_{t};t\geq 0)$ be the MK-V solution with inverse population $\delta$ and initial measure $\nu$. Finally, let $(x_{t}^{(0)};t>0)$ be the unique proper solution to MK-V. • (Convergence to the $\infty$-pop. solution) For every $t>0$ $$\lim_{\delta\downarrow 0}x^{(\delta)}_{t}\ =\ x_{t}^{(0)}\ \ \mbox{in law.}$$ • (Long time behavior) For every $\delta>0$, $$\ \lim_{t\uparrow\infty}x_{t}^{(\delta)}t^{\beta}\ =\ \Upsilon\ =\ x_{1}^{(0)}\ \ \mbox{in law.}$$ Remark 4.14. Let $\gamma\in(1,2]$ and let $(\mu_{t}^{(0)},t\geq 0)$ be the unique proper weak solution to the Smoluchowski equation (1.3). $({\mathcal{L}}(x^{(0)}_{t});t>0)$ coincides with the measure-valued process $\mu^{(0)}$. (This uses the fact that there is a unique proper Smoluchowski solution, and that $({\mathcal{L}}(x^{(0)}_{t});t>0)$ is a proper weak solution to the Smoluchowski equation.) As a direct corollary of Theorem 4.13, we obtain the following PDE result: $S^{t,\gamma}(\mu_{t}^{(\delta)})\Longrightarrow\mu_{1}^{0}$ as $t\uparrow\infty$, where the convergence is meant in the weak topology. The proof of the previous theorem relies on the following lemma, which is a corollary of the work carried out in the previous section. Lemma 4.15. Let $\nu\in M_{P}({\mathbb{R}}^{+})$ and consider a sequence $\{\nu^{(\delta)}\}_{\delta}$ in $M_{P}({\mathbb{R}}_{+})$ with $\nu^{(\delta)}\geq\nu$, where the domination is meant in the stochastic sense. Let $\left(X_{t}^{(\delta)};t\geq 0\right)$ be a solution to (1.8) with ${\mathcal{L}}(X_{0}^{(\delta)})=\nu^{(\delta)}$. Assume that $\nu(\{0\})<1$. 
Then for every fixed $t>0$, $\{X_{t}^{(\delta)}\}_{\delta}$ converges in law to the proper solution $x_{t}^{(0)}$ as $\delta\to 0$. Proof. Let $\{\delta_{n}\}$ be a sequence of positive numbers decreasing to $0$. According to Theorem 3.4, ${\mathcal{L}}(X^{(\delta_{n})})={\mathcal{L}}(\theta_{\delta_{n}}\circ m^{(\delta_{n})}_{0})$ where $m^{(\delta_{n})}$ is the partial marking above level $\delta_{n}$ with initial condition $\nu^{(\delta_{n})}$. The strategy of the proof will consist in showing that the sequence of partial markings converges (up to a subsequence and in a sense specified below) to a full marking ${\bf m}$ (Step 1). Then, we show that ${\bf m}_{0}$ must be the (unique) proper solution of MK-V due to the condition $\nu^{(\delta)}\geq\nu$ (Step 2). Step 1. For $j<n$, define $a^{n}_{ij}$ to be the $i$th mark of $m^{(\delta_{n})}$ at level $\delta_{j}$ (where marks are ranked from left to right). By the branching structure of the CPP, the marks $\{a_{ij}^{n}\}_{i}$ are i.i.d. We first claim that for every fixed $i,j\in{\mathbb{N}}$, the sequence $\{a_{ij}^{n}\}_{n}$ is tight. To see this, we first consider the case where $\nu^{(\delta_{n})}(dx)=\delta_{\infty}(dx)$ for every $n$. This exactly corresponds to the marking $m^{(\delta_{n}),+}$ introduced in Proposition 4.4, for which we showed that $\{a_{ij}^{n}\}_{n}$ converges. Since in general $\{a_{ij}^{n}\}_{n}$ is dominated by the previous case, it follows that $\{a_{ij}^{n}\}_{n}$ is tight. Now, $\{(a^{n}_{ij};i,j\in{\mathbb{N}})\}_{n}$, seen as a random infinite array (equipped with the product topology), is also tight. Going to a subsequence if necessary, there exists a limiting array $(a_{ij}^{\infty};i,j\in{\mathbb{N}})$ such that $$\{\left({\mathcal{T}},(a_{ij}^{n};i,j\in{\mathbb{N}})\right)\}_{n}\Longrightarrow\left({\mathcal{T}},(a_{ij}^{\infty};i,j\in{\mathbb{N}})\right)\ \ \mbox{as $n\to\infty$},$$ where the convergence is meant in the product topology. 
Note that since for every fixed $j,n$, $\{a_{ij}^{n}\}_{i}$ is a sequence of i.i.d. random variables, the same holds for the sequence $\{a_{ij}^{\infty}\}_{i}$ for every $j$ (the marks at level $\delta_{j}$ are independent). Using the Skorohod representation theorem, one can assume w.l.o.g. that the convergence holds a.s. Let us now fix $j$ and let us consider ${\bf m}^{j}$, the marking above level $\delta_{j}$ with initial marks $(a_{ij}^{\infty};i\in{\mathbb{N}})$ (where the initial marks are assigned from left to right). By continuity of the flow, for almost every realization of the CPP and the markings, for every $t>\delta_{j}$ and $(l,t)\in{\mathcal{T}}$, $\{m^{(\delta_{n})}_{l}(t)\}_{n>j}$ converges a.s. to ${\bf m}^{j}_{l}(t)$ under our coupling. (In other words, the convergence of the initial conditions induces the convergence of the partial markings to the marking with the limiting initial marks.) Let us now take $j>j^{\prime}$, so that $\delta_{j}<\delta_{j^{\prime}}$. The previous result at time $t=\delta_{j^{\prime}}$, together with the fact that $\{(a_{ij^{\prime}}^{n};i\in{\mathbb{N}})\}_{n}$ converges to $(a_{ij^{\prime}}^{\infty};i\in{\mathbb{N}})$, readily implies that the marks of ${\bf m}^{j}$ at level $\delta_{j^{\prime}}$ coincide with $(a_{ij^{\prime}}^{\infty};i\in{\mathbb{N}})$ – the initial marking of ${\bf m}^{j^{\prime}}$ at level $\delta_{j^{\prime}}$. Equivalently, this guarantees that the sequence of markings $\{{\bf m}^{j}\}_{j}$ is consistent, in the sense that for $j>j^{\prime}$, the marking ${\bf m}^{j}$ restricted to $\{t\geq\delta_{j^{\prime}}\}$ coincides with ${\bf m}^{j^{\prime}}$. Thus, there exists a full marking ${\bf m}$ of the CPP such that ${\bf m}$ coincides with ${\bf m}^{j}$ on $\{t\geq\delta_{j}\}$. Gathering the previous results, we have shown that (i) for every $s>0$, $\{{\bf m}_{l}(s):\ (l,s)\in{\mathcal{T}}\}$ is a sequence of i.i.d. 
rv’s; and (ii) for every realization of the CPP and the markings, (4.45) $$\forall t>0,\ \ \{m^{(\delta_{n})}_{0}(t)\}_{n}\to{\bf m}_{0}(t)\ \ \ \mbox{a.s.}$$ As a consequence of (i), $({\bf m}_{0}(t);t>0)$ is an $\infty$-pop. solution of MK-V by virtue of Theorem 4.3. Step 2. Next, by reasoning along the same lines as Proposition 4.5 (see Step 2 therein), the condition $\nu^{(\delta)}\geq\nu$ implies that this solution satisfies the growth condition (4.25), so that $({\bf m}_{0}(t);t>0)$ must be proper. By the uniqueness result of Theorem 4.1, we get that ${\mathcal{L}}\left({\bf m}_{0}(t)\right)\ =\ {\mathcal{L}}(x_{t}^{(0)})$. Finally, (4.45) together with the fact that ${\mathcal{L}}(X^{(\delta_{n})})={\mathcal{L}}(\theta_{\delta_{n}}\circ m^{(\delta_{n})}_{0})$ (see Theorem 3.4) completes the proof of the lemma. ∎ Proof of Theorem 4.13. The first item follows directly from Lemma 4.15 after taking $\nu^{(\delta)}=\nu$. For the second item, as a direct consequence of Proposition 3.6, (4.46) $$t^{\beta}x_{t}^{(\delta)}\ =\ \bar{x}_{1}^{(\delta/t)}\ \mbox{in law}$$ where $x^{(\delta)}$ is the solution of (1.8) with initial measure $\nu$ and inverse population size $\delta$, and where $\bar{x}^{(\delta/t)}$ is the solution of (1.8) with initial measure $\mathcal{S}^{1/t,\gamma}(\nu)$ and inverse population size $\delta/t$. Since for $t\geq 1$, $\mathcal{S}^{1/t,\gamma}(\nu)\geq\nu$ (in the stochastic sense), the second item follows again by a direct application of Lemma 4.15. ∎ 4.5. Dust solutions Proof of Theorem 4.2. The proof is very similar to that of Lemma 4.15. Let $X$ be a positive rv with finite mean and let $\{\delta_{k}\}$ be a positive sequence going to $0$. Let $m^{(\delta_{k})}$ be the partial marking above level $\delta_{k}$ with initial law $\delta_{k}{\mathcal{L}}(X)$. Step 1. 
Up to a subsequence, one can construct a full marking ${\bf m}$ of the CPP and a coupling between the $({\mathcal{T}},m^{(\delta_{k})})$’s and $({\mathcal{T}},{\bf m})$ such that for every $(l,t)\in{\mathcal{T}}$ $$m^{(\delta_{k})}_{l}(t)\to{\bf m}_{l}(t)\ \ \ \mbox{a.s.},$$ and such that ${\bf m}_{0}$ is an $\infty$-pop solution to MK-V. The proof goes along the exact same lines as Lemma 4.15 (Step 1). Step 2. To prove that ${\bf m}_{0}$ is a dust solution, we show that ${\bf m}_{0}(t)\Longrightarrow 0$ as $t\to 0$. Recall the definition of $|{\mathcal{D}}^{(\delta_{k})}(0,t)|$, the number of descendants of $(0,t)$ at time $\delta_{k}$ in the CPP. From (4.28) we get the following stochastic bounds $$\left(c(\gamma-1)t+\left(\sum_{i=1}^{|{\mathcal{D}}^{(\delta_{k})}(0,t)|}\delta_{k}X_{i}\right)^{1-\gamma}\right)^{-\beta}\leq\ m^{(\delta_{k})}_{0}(t)\ \leq\ \sum_{i=1}^{|{\mathcal{D}}^{(\delta_{k})}(0,t)|}\left(c(\gamma-1)t+\left(\delta_{k}X_{i}\right)^{1-\gamma}\right)^{-\beta}$$ where $\{X_{i}\}$ is an infinite sequence of rv’s distributed as $X$ and independent of the CPP. From the definition of the CPP, one can show that $$\delta_{k}|{\mathcal{D}}^{(\delta_{k})}(0,t)|\ \Longrightarrow\ {\mathcal{E}}(t)$$ where ${\mathcal{E}}(t)$ is an exponential rv with mean $t$. From here, a direct application of the law of large numbers shows that the LHS (resp., RHS) of the latter inequality converges (in law) to $$\left(c(\gamma-1)t+({\mathbb{E}}(X){\mathcal{E}}(t))^{1-\gamma}\right)^{-\beta}\ \ (\mbox{resp., }{\mathbb{E}}(X){\mathcal{E}}(t)).$$ As a consequence, in the stochastic sense, $$\left(c(\gamma-1)t+({\mathbb{E}}(X){\mathcal{E}}(t))^{1-\gamma}\right)^{-\beta}\ \leq\ {\bf m}_{0}(t)\ \leq\ {\mathbb{E}}(X){\mathcal{E}}(t).$$ The RHS shows that ${\bf m}_{0}(t)\Longrightarrow 0$ in law as $t\to 0$. Finally, the LHS of the inequality ensures that ${\mathbb{P}}({\bf m}_{0}(t)>0)=1$ for $t>0$. This shows that ${\bf m}_{0}$ is a non-trivial dust solution of the $\infty$-pop. MK-V equation. 
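The sandwich just used reflects a general property of the flow map $\phi_{t}(y_{0})=\left(c(\gamma-1)t+y_{0}^{1-\gamma}\right)^{-\beta}$ of $\dot{y}=-\psi(y)$: it is increasing and concave in the initial mass with $\phi_{t}(0)=0$, hence subadditive, so the flow started from the merged mass is dominated by the sum of the individual flows, as in (4.28). A quick numerical spot-check (a sketch with arbitrary illustrative parameters, not part of the proof):

```python
import random

def flow(y0, t, c=1.0, gamma=2.5):
    """Time-t value of dy/dt = -c*y**gamma started from y0 > 0.
    Separating variables: y_t = (c*(gamma-1)*t + y0**(1-gamma))**(-1/(gamma-1))."""
    beta = 1.0 / (gamma - 1.0)
    return (c * (gamma - 1.0) * t + y0 ** (1.0 - gamma)) ** (-beta)

random.seed(0)
for _ in range(1000):
    a, b, t = random.uniform(0.01, 10), random.uniform(0.01, 10), random.uniform(0.01, 5)
    # the lower bound starts the flow from the merged mass; the upper bound sums flows
    assert flow(a + b, t) <= flow(a, t) + flow(b, t) + 1e-12
```

As $y_{0}\to\infty$, `flow(y0, t)` approaches $(c(\gamma-1)t)^{-\beta}$, the solution started from $+\infty$ used in Lemma 4.12.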
Following the same approach, we can construct $m^{1},m^{2},\cdots,m^{k}$, $k$ dust MK-V solutions, using distinct positive rv’s $X^{1},X^{2},\cdots,X^{k}$ to initialize the underlying marking. Next, we choose these random variables in such a way that for every $j<k$, $${\mathbb{E}}\left(m^{j}_{0}(t)\right)\ \leq\ {\mathbb{E}}(X^{j})t\ <\ {\mathbb{E}}\left(\left(c(\gamma-1)t+({\mathbb{E}}(X^{j+1}){\mathcal{E}}(t))^{1-\gamma}\right)^{-\beta}\right)\leq{\mathbb{E}}(m^{j+1}_{0}(t))$$ so that the laws of the $m^{j}$’s must be pairwise distinct at time $t$. This shows that one can construct infinitely many dust solutions to MK-V. ∎ 5. Main convergence results 5.1. Notation We will typically consider a sequence of nested coalescents indexed by $n$, in such a way that the initial number of species increases linearly with $n$ (see Theorem 5.1). Let $s_{t}^{n}$ be the number of blocks in the species coalescent (for the model indexed by $n$); further, $\tilde{s}_{t}^{n}\ =\ s^{n}_{t/n}$. We order the blocks of the species coalescent at any given time by their least element. $B_{t}^{n}(i)$ will denote the $i^{th}$ block at time $t$, and $V_{t}^{n}(i)$ the number of elements in the block; $\tilde{B}_{t}^{n}\ =\ B_{t/n}^{n}$; $\tilde{V}_{t}^{n}\ =\ V_{t/n}^{n}$. Define $R^{n}$ to be the scaling operator acting on measure-valued processes such that for every $(\nu_{t};t\geq 0)$ valued in $M_{F}({\mathbb{R}}^{+})$, the process $(R^{n}(\nu)_{t};t\geq 0)$ is the unique measure-valued process such that for every bounded and continuous function $f$ $$\forall t>0,\ \ \int_{{\mathbb{R}}^{+}}f(x)R^{n}(\nu)_{t}(dx)\ =\ \int_{{\mathbb{R}}^{+}}f(x/n)\nu_{t/n}(dx).$$ In words, space and time are both rescaled by $1/n$. For $i\leq s_{t}^{n}$, let us denote by $\Pi_{t}^{n}(i)$ the number of gene lineages in the species block with index $i$; further, $\tilde{\Pi}_{t}^{n}\ =\ \frac{1}{n}\Pi^{n}_{t/n}$. 
We define $$g_{t}^{n}\ =\ \frac{1}{s_{t}^{n}}\sum_{i=1}^{s_{t}^{n}}\delta_{\Pi^{n}_{t}(i)},\ \mbox{and }\tilde{g}_{t}^{n}=R^{n}\circ g_{t}^{n}\ =\ \frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\delta_{\tilde{\Pi}^{n}_{t}(i)}$$ ${\mathcal{C}}_{0}({\mathbb{R}}^{+})$ will denote the set of continuous functions vanishing at $+\infty$; ${\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$ will denote the set of infinitely differentiable functions with bounded derivatives. The Stone-Weierstrass Theorem ensures that ${\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$ is dense in ${\mathcal{C}}_{0}({\mathbb{R}}^{+})$, and is also dense in the set of test functions (i.e., the ${\mathcal{C}}^{1}$ functions $f$ such that $f$ and $f^{\prime}\psi$ remain bounded). $(M_{F}({\mathbb{R}}^{+}),v)$ will refer to the set of Radon measures on ${\mathbb{R}}^{+}$ endowed with the vague topology (i.e., the smallest topology making the map $g\to\left<g,f\right>$ continuous for every function $f\in{\mathcal{C}}_{0}({\mathbb{R}}^{+})$); whereas $(M_{F}({\mathbb{R}}^{+}),w)$ will refer to $M_{F}({\mathbb{R}}^{+})$ equipped with the weak topology (i.e., the smallest topology making the map $g\to\left<g,f\right>$ continuous for every function $f\in{\mathcal{C}}_{b}({\mathbb{R}}^{+})$ – the set of continuous bounded functions). 5.2. Statement of the main results Theorem 5.1. Consider a sequence of nested Kingman coalescents $\{\Pi^{n},s^{n}\}_{n}$. Let $\{X_{i}^{n}\}_{i,n}$ be an infinite array of independent rvs such that $$\forall i,j,n,\ \ {\mathcal{L}}(X_{i}^{n})={\mathcal{L}}(X_{j}^{n}),\ \ \ \limsup_{n}{\mathbb{E}}\left((X_{1}^{n})^{2}\right)\ <\ \infty,$$ and independent of the species coalescent $\left(\tilde{s}_{t}^{n};t\geq 0\right)$. Assume that $$\tilde{\Pi}^{n}_{0}\ =\ \left(X_{1}^{n},\cdots,X_{\tilde{s}_{0}^{n}}^{n}\right)$$ Assume further that (1) $s_{0}^{n}/n$ converges to $r\in(0,\infty)$ in $L^{2+\epsilon}$ for some $\epsilon>0$. 
(2) There exists $\nu\in M_{P}({\mathbb{R}}^{+})$ such that $\tilde{g}^{n}_{0}\ \Longrightarrow\ \nu$ in $(M_{F}({\mathbb{R}}^{+}),w)$. Then (5.47) $$\left((\tilde{g}^{n}_{t},\tilde{s}^{n}_{t});\ t\geq 0\right)\ \Longrightarrow\ \left((\mu_{t},\frac{2}{\frac{2}{r}+t});\ t\geq 0\right)$$ where $\left(\mu_{t};t\geq 0\right)$ is the unique weak solution of (1.3) with initial condition $\nu$ and inverse population $\delta=\frac{2}{r}$. The convergence of $\{\tilde{g}^{n}\}_{n}$ is meant in the Skorohod topology on $D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$ for every finite interval $[0,T]$. In the previous theorem, we considered a sequence of nested Kingman coalescents (indexed by $n$) with finite initial populations going to $\infty$ with $n$. Next, we aim at investigating the case of a (single) nested Kingman coalescent where the size of the population at time $t=0$ is infinite. Definition 5.2. We say that $\left((\Pi_{t},s_{t});t>0\right)$ is an $\infty$-pop. nested coalescent iff (i) For every $t>0$, conditioned on $(\Pi_{t},s_{t})$, the shifted process $(\theta_{t}\circ(\Pi_{u},s_{u});u\geq 0)$ is a nested coalescent with initial condition $(\Pi_{t},s_{t})$. (ii) For every $t>0$, $\Pi_{t}(i)\geq 1$ for every $i\in[s_{t}]$. (iii) $s_{t}\to\infty$ a.s. as $t\downarrow 0$. Note that properties (i) and (iii) immediately imply that $(s_{t};t>0)$ is distributed as the block counting process of a standard Kingman coalescent coming down from $\infty$. In particular, $\frac{t}{2}s_{t}\to 1$ a.s. as $t\to 0$. Lemma 5.3. There exists an $\infty$-pop. nested Kingman coalescent. Proof. The idea is very similar to the existence of a proper solution to the $\infty$-pop. MK-V equation, as described in Section 4.1. We omit the details and only give a brief outline of the construction. Let $\{\delta_{n}\}$ be a sequence of positive numbers decreasing to $0$ and let $(s_{t};t>0)$ be a species coalescent (i.e., a standard coalescent with rate $1$) coming down from infinity. 
For every $n$, define $\left(\Pi^{(\delta_{n}),+}_{t},\theta_{\delta_{n}}\circ s_{t};t>0\right)$ to be the nested coalescent starting with $s_{\delta_{n}}$ species and infinitely many genes per species. Finally, for every $t>\delta_{n}$, define $\Pi^{n,+}_{t}\ =\ \theta_{-\delta_{n}}\circ\Pi^{(\delta_{n}),+}_{t}$. It is easy to find a coupling such that for every $t>0$, the sequence $\{\Pi^{n,+}_{t}\}_{n}$ is decreasing (i.e., each of the coordinates decreases in $n$) and converges to a limit $\Pi^{+}_{t}$ a.s., whereas $\theta_{\delta_{n}}\circ s_{t}$ obviously converges to $s_{t}$. Finally, one can easily check that $(\Pi^{+}_{t},s_{t})$ is an $\infty$-pop. nested Kingman coalescent. ∎ Theorem 5.4. Let $(g_{t};t>0)$ be the empirical measure of an $\infty$-pop. nested Kingman coalescent. Then $$\{\left((R^{n}\circ g_{t},s_{t});t>0\right)\}_{n}\Longrightarrow\left(\left(\mu^{(0)}_{t},\frac{2}{t}\right);t>0\right)$$ where $\mu^{(0)}$ is the unique proper $\infty$-pop. solution of the Smoluchowski equation (as described in Theorem 2.7) for $\psi(x)=\frac{c}{2}x^{2}$, and the convergence is meant in $D([\tau,T],(M_{F}({\mathbb{R}}^{+}),w))$ for any pair such that $0<\tau<T<\infty$. In particular, for every $t>0$, $\mu_{t}^{(0)}$ is the law of $\frac{1}{t}\Upsilon$ where $\Upsilon$ is the r.v. defined in Theorem 2.8. Remark 5.5. In the proof of Lemma 5.3, we outlined the construction of the “maximal” $\infty$-pop. nested coalescent by starting from the $\infty$-initial condition at level $\delta_{n}$ (and letting $\delta_{n}\to 0$). A “minimal” $\infty$-pop. nested Kingman coalescent would consist in setting the initial condition to $1$ for every species. Our next result suggests that those two extremal nested coalescents are actually identical, since their asymptotic empirical measures are indistinguishable. By a simple coupling argument, this should also imply that all the nested coalescents coming down from $\infty$ are identical. 
In other words, we conjecture that there is a single entrance law for the nested Kingman coalescent. As a corollary of the previous result, we will deduce the speed of coming down from infinity in the nested Kingman coalescent. Theorem 5.6 (Speed of coming down from $\infty$). Let $\rho_{t}:=s_{t}\left<g_{t},x\right>$ be the number of gene lineages at time $t$. Then (5.48) $$\frac{1}{n^{2}}\rho_{t/n}\ \Longrightarrow_{n}\ \frac{2}{t}\int_{0}^{\infty}x\mu^{0}_{t}(dx)\ =\ \frac{2}{t^{2}}{\mathbb{E}}\left(\Upsilon\right)<\infty.$$ 6. Some useful estimates In this section, we establish some estimates which will be useful later on. This section can be skipped at first reading. Lemma 6.1 (Large deviations). Let $z^{n}$ be the block counting process associated to a Kingman coalescent with rate $c>0$ such that $\{z^{n}_{0}\}$ is a deterministic sequence in ${\mathbb{R}}^{+}\cup\{+\infty\}$ with $z_{0}^{n}/n\to r\in(0,+\infty]$. There exist two functions $I_{+}$ and $I_{-}$ and two constants $K_{+},\bar{K}_{-}$ such that $$\displaystyle\forall\gamma>0,\ \ {\mathbb{P}}\left(\frac{1}{n}z({t/n})>(1+\gamma)\frac{2r}{2+rct}\right)\leq K_{+}\exp(-nI_{+}(\gamma))$$ $$\displaystyle\forall\gamma\in(0,1),\ \ {\mathbb{P}}\left(\frac{1}{n}z({t/n})<(1-\gamma)\frac{2r}{2+rct}\right)\leq\bar{K}_{-}\exp(-nI_{-}(\gamma))$$ where $I_{\pm}(\gamma)>0$ for $\gamma>0$ and $\liminf_{\gamma\to\infty}I_{+}(\gamma)/\gamma>0$. Proof. The case $z_{0}^{n}=+\infty$ was treated in [18]; see the proof of Lemma 4.6 therein. The general case can be handled by a straightforward extension of our method. ∎ Corollary 6.2. Let $k\in{\mathbb{N}}$ and assume that $\{s_{0}^{n}/n\}$ converges to a deterministic $r\in(0,\infty)$ in $L^{k+2+\epsilon}$ for some $\epsilon>0$. Then $$\limsup_{n}\ {\mathbb{E}}\left((\frac{\tilde{s}_{0}^{n}}{n})^{k}\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\ \tilde{V}_{T}^{n}(i)^{2}\right)\ <\ \infty.$$ Proof. Take $\alpha=\frac{1}{2+rcT}$. 
Then $$\displaystyle{\mathbb{E}}\left((\frac{\tilde{s}_{0}^{n}}{n})^{k}\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\ |\ \tilde{s}_{0}^{n}\right)$$ $$\displaystyle\leq$$ $$\displaystyle\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}}{\mathbb{P}}\left(\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha\ \ |\ \tilde{s}_{0}^{n}\right)\ +\ \frac{({\tilde{s}_{0}^{n}})^{k+1}}{\alpha n^{k}}{\mathbb{E}}\left(\sum_{i=1}^{\tilde{s}_{T}^{n}}\frac{\tilde{V}_{T}^{n}(i)^{2}}{(\tilde{s}_{0}^{n})^{2}}\ |\ \tilde{s}_{0}^{n}\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}}{\mathbb{P}}\left(\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha\ |\ \tilde{s}_{0}^{n}\right)\ +\ \frac{(\tilde{s}_{0}^{n})^{k+1}}{\alpha n^{k}}\left((1-\exp(-\frac{T}{n}))(1-\frac{1}{\tilde{s}_{0}^{n}})+\frac{1}{\tilde{s}_{0}^{n}}\right),$$ where the equality simply comes from the fact that the expectation on the RHS of the inequality is the probability that two elements sampled in $\{1,\cdots,s_{0}^{n}\}$ (with replacement) are in the same block of a standard Kingman coalescent at time $T/n$. (Note that our choice of $\alpha=\frac{1}{2}\frac{2}{2+rcT}$ is motivated by the previous large deviation estimates, so that when $s_{0}^{n}/n\sim r$ the first term on the RHS will be negligible.) First, since $s_{0}^{n}/n$ converges in $L^{k+1}$, the second term of the RHS remains bounded in $L^{1}$. Let us now deal with the first term and show that it remains bounded in $L^{1}$. Let us assume by contradiction that (6.49) $$\limsup_{n}\ {\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}}1_{\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha}\right)\ =\ \infty.$$ Then, up to a subsequence, the latter expectation goes to $\infty$. Our aim is to extract a further subsequence along which the expectation remains bounded in $L^{1}$, thus yielding a contradiction. Let us now consider $p,q>1$ such that $1/p+1/q=1$ and $(2+k)p<2+k+\epsilon$. 
By Lemma 6.1 and our choice of $\alpha$, we can take $\gamma$ small enough such that $${\mathbb{P}}\left(\tilde{s}_{T}^{n}<\alpha r(1+\gamma)n\ |\ s_{0}^{n}\ =\ [r(1-\gamma)n]\right)$$ goes to $0$ exponentially fast in $n$. For this choice of $\gamma\in(0,1)$, one can extract a further subsequence such that (6.50) $$\limsup_{n}\ n^{2q}{\mathbb{P}}(s_{0}^{n}/n\notin[r(1-\gamma),r(1+\gamma)])<\infty.$$ For the rest of the proof, we will work along this subsequence. Next, $$\displaystyle\ {\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}}1_{\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha}\right)$$ $$\displaystyle\leq$$ $$\displaystyle{\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}},\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha\ \mbox{and}\ s_{0}^{n}/n\in[r(1-\gamma),r(1+\gamma)]\right)$$ $$\displaystyle\ +\ {\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}},s_{0}^{n}/n\notin[r(1-\gamma),r(1+\gamma)]\right).$$ We first deal with the first term on the RHS of the inequality, which we call (i). $$\displaystyle(i)$$ $$\displaystyle\leq$$ $$\displaystyle r^{2+k}(1+\gamma)^{2+k}n^{2}{\mathbb{P}}\left(\tilde{s}_{T}^{n}<\alpha r(1+\gamma)n,\ s_{0}^{n}/n\in[r(1-\gamma),r(1+\gamma)]\right)$$ $$\displaystyle\leq$$ $$\displaystyle r^{2+k}(1+\gamma)^{2+k}n^{2}{\mathbb{P}}\left(\tilde{s}_{T}^{n}<\alpha r(1+\gamma)n\ |\ s_{0}^{n}\ =\ [r(1-\gamma)n]\right),$$ where the RHS goes to $0$ by our choice of $\alpha$ and $\gamma$. We now deal with the second term. 
By Hölder’s inequality, we have $${\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}},s_{0}^{n}/n\notin[r(1-\gamma),r(1+\gamma)]\right)\leq{\mathbb{E}}\left((\frac{s_{0}^{n}}{n})^{(k+2)p}\right)^{\frac{1}{p}}\left(n^{2q}{\mathbb{P}}(s_{0}^{n}/n\notin[r(1-\gamma),r(1+\gamma)])\right)^{1/q}$$ Since $\frac{s_{0}^{n}}{n}$ remains bounded in $L^{2+k+\epsilon}$, and because of (6.50), this shows that $\limsup{\mathbb{E}}\left(\frac{(\tilde{s}_{0}^{n})^{k+2}}{n^{k}}1_{\frac{\tilde{s}_{T}^{n}}{\tilde{s}_{0}^{n}}<\alpha}\right)<\infty$, which is the desired contradiction. This completes the proof of the corollary. ∎ Lemma 6.3. Assume that $\{s_{0}^{n}/n\}$ converges to a deterministic $r\in(0,\infty)$ in $L^{3+\epsilon}$ for some $\epsilon>0$. Let $\left(\xi_{i};i\in{\mathbb{N}}\right)$ be i.i.d. block counting processes of Kingman coalescents with rate $c>0$ coming down from infinity and independent of the species coalescent. Then for every $0<\tau<T$: $$\limsup_{n\to\infty}{\mathbb{E}}\left(\sup_{[\tau,T]}\ \beta^{n}_{t}\right)\ <\ \infty\ \ \mbox{where}\ \ \beta_{t}^{n}\ :=\ \frac{1}{\tilde{s}_{t}^{n}}\ \sum_{i=1}^{\tilde{s}_{t}^{n}}\left(\sum_{j\in\tilde{B}_{t}^{n}(i)}\frac{1}{n}\xi_{j}({t/n})\right)^{2},$$ and where $\tilde{B}_{t}^{n}$ was defined in Section 5.1. Proof. For every $k\in{\mathbb{N}}$, we define the event $\tilde{A}_{t}^{k,n}\ =\ \{\frac{2kn}{ct}\leq\max_{i\in[\tilde{s}_{0}^{n}]}\ \xi_{i}(t/n)\leq\frac{2(1+k)n}{ct}\}$. 
Take $k_{0}\in{\mathbb{N}}$ such that $k_{0}\frac{\tau}{T}>1$. Then $$\displaystyle\beta_{t}^{n}$$ $$\displaystyle\leq$$ $$\displaystyle\left((\frac{2k_{0}}{ct})^{2}\ +\ \sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{ct})^{2}1_{\tilde{A}_{t}^{k,n}}\right)\ \frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\tilde{V}_{t}^{n}(i)^{2}$$ $$\displaystyle\leq$$ $$\displaystyle\left((\frac{2k_{0}}{ct})^{2}\ +\ \sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{ct})^{2}1_{\bar{A}_{t,T}^{k,n}}\right)\ \frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\tilde{V}_{t}^{n}(i)^{2},$$ where $\bar{A}_{t,T}^{k,n}\ =\ \{\ \frac{2kn}{cT}\leq\max_{i\in[\tilde{s}^{n}_{0}]}\xi_{i}(t/n)\}$. Next, for every $t\leq T$, let us denote by $\tilde{C}_{t,T}^{n}(i)$ the indices of the blocks at time $t/n$ partitioning the block $i$ at time $T/n$. (In particular, $\tilde{C}_{0,T}^{n}(i)=\tilde{B}_{T}^{n}(i)$.) Then (6.51) $$\displaystyle\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}(\sum_{j\in\tilde{C}^{n}_{t,T}(i)}\tilde{V}_{t}^{n}(j))^{2}$$ $$\displaystyle\geq$$ $$\displaystyle\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\sum_{j\in\tilde{C}^{n}_{t,T}(i)}\tilde{V}_{t}^{n}(j)^{2}\ =\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{k=1}^{\tilde{s}_{t}^{n}}\tilde{V}_{t}^{n}(k)^{2}$$ $$\displaystyle\geq$$ $$\displaystyle\frac{1}{\tilde{s}_{t}^{n}}\sum_{k=1}^{\tilde{s}_{t}^{n}}\tilde{V}_{t}^{n}(k)^{2}$$ which implies that $\sup_{[\tau,T]}\frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\tilde{V}_{t}^{n}(i)^{2}\ =\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}$. 
Further, since $\xi_{i}$ is non-increasing, $$\sup_{[\tau,T]}\beta_{t}^{n}\ \leq\ \left((\frac{2k_{0}}{c\tau})^{2}\ +\ \sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{c\tau})^{2}1_{\bar{A}_{\tau,T}^{k,n}}\right)\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}.$$ Since the $\xi_{i}$’s are independent of the coalescent, this yields $${\mathbb{E}}(\sup_{[\tau,T]}\beta_{t}^{n}\ |\ \tilde{s}_{0}^{n})\ \leq\ \left((\frac{2k_{0}}{c\tau})^{2}\ +\ \sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{c\tau})^{2}{\mathbb{P}}\left(\bar{A}_{\tau,T}^{k,n}\ |\ \tilde{s}_{0}^{n}\right)\right)\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\ |\ \tilde{s}_{0}^{n}\right)\ \leq\ \left((\frac{2k_{0}}{c\tau})^{2}\ +\ \sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{c\tau})^{2}{\tilde{s}_{0}^{n}}\ {\mathbb{P}}\left(\frac{1}{n}\xi(\tau/n)\geq\frac{2k}{cT}\right)\right)\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\ |\ \tilde{s}_{0}^{n}\right).$$ Using Lemma 6.1, $${\mathbb{E}}(\sup_{[\tau,T]}\beta_{t}^{n}\ |\ \tilde{s}_{0}^{n})\ \leq\ \left((\frac{2k_{0}}{c\tau})^{2}\ +\ \frac{\tilde{s}_{0}^{n}}{n}\sum_{k=k_{0}}^{\infty}(\frac{2(k+1)}{c\tau})^{2}nK_{+}\exp(-nI_{+}(k\frac{\tau}{T}-1))\right)\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\ |\ \tilde{s}_{0}^{n}\right).$$ Recall that for every $k\geq k_{0}$, we have $k\frac{\tau}{T}>1$. Since $\liminf_{\gamma\to\infty}I_{+}(\gamma)/\gamma>0$, a straightforward application of the dominated convergence theorem shows that the sum on the RHS of the inequality goes to $0$ as $n\to\infty$.
Thus, there exists a constant $C$ (independent of $n$) such that $${\mathbb{E}}(\sup_{[\tau,T]}\beta_{t}^{n}\ |\ \tilde{s}_{0}^{n})\ \leq\ \left((\frac{2k_{0}}{c\tau})^{2}\ +\ C\frac{\tilde{s}_{0}^{n}}{n}\right)\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\ |\ \tilde{s}_{0}^{n}\right).$$ The fact that the RHS of the inequality is uniformly bounded in $L^{1}$ is handled by Corollary 6.2 (with $k=0,1$). ∎ 7. Convergence of the empirical measure In this section, we consider a sequence of nested coalescents $\{(\Pi^{n}_{t},s_{t}^{n});t\geq 0\}$ indexed by $n$. Assume that there exists $r\in(0,\infty)$ such that (7.52) $$\tilde{s}_{0}^{n}/n\ \to\ r\ \mbox{in $L^{1}$}.$$ We will also assume that (7.53) $$\forall T>0,\ \ \limsup_{n}{\mathbb{E}}\left(\ \max_{[0,T]}\left<\tilde{g}^{n}_{t},x^{2}\right>\right)<\infty.$$ As we shall see later, this condition is satisfied under the initial conditions specified in Theorem 5.1, and it also appears naturally in the infinite population nested Kingman coalescent. 7.1. Generators We start with some definitions. The process of genetic composition $\left(\tilde{\Pi}_{t}^{n};t\geq 0\right)$ defines a Markov process valued in the space $$E\ :=\ \cup_{k\in{\mathbb{N}}_{*}}{\mathbb{R}}^{k}_{+}.$$ We define $E_{n}$ to be the subspace of $E$ consisting of those $\Pi\in E$ whose every coordinate satisfies $n\Pi(i)\in{\mathbb{N}}_{*}$. For every $\Pi\in E$, we define $|\Pi|$ as the number of entries in $\Pi$. In particular, we have $|\tilde{\Pi}_{t}^{n}|\ =\ \tilde{s}_{t}^{n}$. Finally, when $|\Pi|>1$, for every $i<j\leq|\Pi|$, $\theta_{ij}(\Pi)$ is the only element $\Pi^{\prime}\in E$ such that $\Pi^{\prime}$ is obtained by coagulating coordinates $i$ and $j$.
More precisely, $\theta_{ij}(\Pi)$ is a vector of size $|\Pi|-1$ with coordinates $$\forall k<|\Pi|,\ \ \theta_{i,j}(\Pi)(k)\ =\ \left\{\begin{array}[]{cc}\Pi(i)+\Pi(j)&\mbox{if $k=i$}\\ \Pi(k+1)&\mbox{if $k\geq j$}\\ \Pi(k)&\mbox{otherwise}\end{array}\right.$$ For instance, $$\mbox{if}\ \ \Pi\ =\ \left(X^{1},X^{2},X^{3},X^{4}\right),\ \ \mbox{then }\ \theta_{1,3}(\Pi)\ =\ \left(X^{1}+X^{3},X^{2},X^{4}\right).$$ Finally, for every $\Pi\in E$, we define $g_{\Pi}=\frac{1}{|\Pi|}\sum_{i=1}^{|\Pi|}\delta_{\Pi(i)}$, the empirical measure associated with the genetic composition $\Pi$. Let us now describe the generator of $\left(\tilde{\Pi}^{n}_{t};t\geq 0\right)$, which encodes the evolution of the number of gene lineages per species. Define $e_{i}^{k}$ to be the vector of size $k$ filled with zeros except for the $i^{th}$ coordinate, which is equal to $1$. Then for every bounded function $h$ from $E$ to ${\mathbb{R}}$, and every $\Pi\in E_{n}$, we have $$G^{n}h(\Pi)\ :=\ \underbrace{\frac{c}{n}\sum_{i=1}^{|\Pi|}\frac{n\Pi(i)(n\Pi(i)-1)}{2}\left(h(\Pi-\frac{1}{n}e_{i}^{|\Pi|})-h(\Pi)\right)}_{\mbox{term I}}+\underbrace{\frac{1}{n}1_{|\Pi|>1}\sum_{i<j\leq|\Pi|}\left(h\circ\theta_{ij}(\Pi)\ -\ h(\Pi)\right)}_{\mbox{term II}},$$ where the first term corresponds to a coalescence of two gene lineages (belonging to the same species), and the second term corresponds to a coalescence of two species lineages. Finally, (7.55) $$\tilde{g}^{n}_{t}\ =\ \frac{1}{|\tilde{\Pi}^{n}_{t}|}\sum_{i=1}^{|\tilde{\Pi}^{n}_{t}|}\delta_{\tilde{\Pi}^{n}_{t}(i)}$$ corresponds to the empirical measure of the block masses, where the mass of a block is measured in terms of its (renormalized) number of gene lineages. Before turning to the convergence of the empirical measure $\tilde{g}^{n}$, we will need to establish a few technical lemmas related to the generator of the process $(\tilde{\Pi}^{n}_{t};t\geq 0)$.
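As a concrete illustration of the two terms of $G^{n}$, the dynamics can be simulated with a standard Gillespie scheme. The following sketch is our own illustration, not part of the paper (all names and parameter choices are ours): it tracks the integer gene-lineage counts $n\Pi(i)$ per species block, lets a within-block pair of gene lineages coalesce at rate $\frac{c}{n}\frac{n\Pi(i)(n\Pi(i)-1)}{2}$ (term I), and merges a uniformly chosen pair of species blocks at rate $1/n$ per pair (term II).

```python
import random

def simulate_nested_coalescent(gene_counts, c=1.0, n=100, t_max=5.0, seed=0):
    """Gillespie sketch of the dynamics encoded by the generator G^n.

    `gene_counts[i]` is the integer number n*Pi(i) of gene lineages carried by
    species block i (so the renormalized mass is gene_counts[i] / n).
    Term I:  within block i, a pair of gene lineages merges at rate
             (c/n) * g_i * (g_i - 1) / 2, decreasing g_i by one.
    Term II: each pair of species blocks merges at rate 1/n, replacing the two
             blocks by a single block carrying the sum of their gene lineages.
    Returns the renormalized composition vector at time t_max.
    """
    rng = random.Random(seed)
    g = list(gene_counts)
    t = 0.0
    while True:
        gene_rates = [c / n * gi * (gi - 1) / 2 for gi in g]
        k = len(g)
        species_rate = k * (k - 1) / 2 / n
        total = sum(gene_rates) + species_rate
        if total == 0.0:      # one block, one gene lineage: absorbed state
            break
        t += rng.expovariate(total)
        if t > t_max:
            break
        if rng.random() < species_rate / total:
            i, j = sorted(rng.sample(range(k), 2))   # term II: species merger
            g[i] += g.pop(j)
        else:
            # term I: pick a block proportionally to its within-block pair count
            u = rng.uniform(0.0, sum(gene_rates))
            acc = 0.0
            for i, rate in enumerate(gene_rates):
                acc += rate
                if u <= acc:
                    g[i] -= 1
                    break
    return [gi / n for gi in g]
```

Both the number of blocks and the total renormalized mass can only decrease along a trajectory, which mirrors the monotonicity properties used repeatedly in the estimates above.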
For every $f\in C_{b}^{\infty}({\mathbb{R}}^{+})$, we define (7.56) $$X^{n,f}_{t}\ :=\ \left<\tilde{g}^{n}_{t},f\right>.$$ Note that $X^{n,f}_{t}$ can be regarded as a function of $\tilde{\Pi}^{n}_{t}$. We call this function $h^{f}$ ($h^{f}(\Pi)\ =\ \left<g_{\Pi},f\right>$), in such a way that $X^{n,f}_{t}\ =\ h^{f}(\tilde{\Pi}^{n}_{t})$. Lemma 7.1 (Generator approximation). Assume that conditions (7.52) and (7.53) hold. For every $f\in{\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$ and $\Pi\in E_{n}$, define $$\bar{G}h^{f}(\Pi)\ =\ -\left<g_{\Pi},\frac{cx^{2}}{2}f^{\prime}\right>\ +\ \frac{r}{2+rs}\int_{{\mathbb{R}}^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(f(x+y)-f(x)\right).$$ Then for every $t\geq 0$, $${\mathbb{E}}\left(\int_{0}^{t}(G^{n}-\bar{G})h^{f}(\tilde{\Pi}^{n}_{s})ds\right)\to 0\ \ \mbox{as $n\to\infty$.}$$ Proof. We first note that $${\mathbb{E}}\left(\int_{0}^{t}\Big|\frac{|\tilde{\Pi}^{n}_{s}|}{2n}-\frac{r}{2+rs}\Big|\ \int_{{\mathbb{R}}^{2}}g_{\tilde{\Pi}^{n}_{s}}(dx)g_{\tilde{\Pi}^{n}_{s}}(dy)\left(f(x+y)-f(x)\right)ds\right)\ \leq\ 2||f||_{\infty}{\mathbb{E}}\left(\int_{0}^{t}\Big|\frac{|\tilde{\Pi}^{n}_{s}|}{2n}-\frac{r}{2+rs}\Big|ds\right).$$ From Lemma 6.1 and using the fact that $(|\tilde{\Pi}^{n}_{t}|/n;t\geq 0)$ is non-increasing, $$(|\tilde{\Pi}^{n}_{t}|/n;t\geq 0)\to(\frac{2r}{2+rt};t\geq 0)\ \ \mbox{in probability}$$ (where the convergence is meant in the Skorohod topology on every interval $[0,T]$), so that the integrand on the RHS of the latter inequality goes to $0$ in probability. Further, by assumption $\{|\tilde{\Pi}^{n}_{0}|/n\}$ is uniformly integrable, and since $|\tilde{\Pi}^{n}_{0}|\geq|\tilde{\Pi}^{n}_{t}|$ it easily follows (by uniform integrability in $(\Omega\times[0,t],{\mathbb{P}}\times dt)$) that $${\mathbb{E}}\left(\int_{0}^{t}\Big|\frac{|\tilde{\Pi}^{n}_{s}|}{2n}-\frac{r}{2+rs}\Big|ds\right)\to 0,$$ so that the LHS of the latter inequality vanishes.
From our assumptions, it is then sufficient to show the existence of a constant $K$ such that for every $\Pi\in E_{n}$ and $f\in{\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$, $$\left|G^{n}h^{f}(\Pi)\ -\left(\ -\left<g_{\Pi},\frac{cx^{2}}{2}f^{\prime}\right>\ +\ \frac{|\Pi|}{2n}\int_{{\mathbb{R}}^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(f(x+y)-f(x)\right)\right)\right|$$ (7.57) $$\leq\frac{K}{n}\left({||f^{\prime}||_{\infty}}\left<g_{\Pi},x\right>\ +\ {||f^{\prime\prime}||_{\infty}}\left<g_{\Pi},x^{2}\right>\right)\ +\ K\times 1_{|\Pi|>1}\frac{1}{|\Pi|-1}{||f||_{\infty}}.$$ We start by approximating term I of the generator $G^{n}$ (as defined in (7.1)). For every $\Pi\in E_{n}$, we have $$I\ =\ \frac{c}{n}\sum_{i=1}^{|\Pi|}\frac{n\Pi(i)(n\Pi(i)-1)}{2}\left(h^{f}(\Pi-\frac{1}{n}e_{i}^{|\Pi|})-h^{f}(\Pi)\right)\ =\ \frac{c}{|\Pi|n}\sum_{i=1}^{|\Pi|}\frac{n\Pi(i)(n\Pi(i)-1)}{2}\left(f(\Pi(i)-\frac{1}{n})-f(\Pi(i))\right),$$ and by a simple Taylor expansion, it follows that there exists a constant $c_{1}$ such that $$|\ I\ +\ \left<g_{\Pi},\frac{cx^{2}}{2}f^{\prime}\right>\ |\ \leq\ c_{1}\left(\frac{||f^{\prime}||_{\infty}}{n}\left<g_{\Pi},x\right>\ +\ \frac{||f^{\prime\prime}||_{\infty}}{n}\left<g_{\Pi},x^{2}\right>\right).$$ Let us now deal with term II of the generator $G^{n}$ (again as defined in (7.1)). For $\Pi$ such that $|\Pi|>1$, consider the measure (7.58) $$\nu_{\Pi}(dxdy)\ =\ g_{\Pi}(dx)\frac{|\Pi|}{|\Pi|-1}\left(g_{\Pi}(dy)-\frac{1}{|\Pi|}\delta_{x}(dy)\right),$$ i.e., $\nu_{\Pi}$ is the measure that consists in sampling two elements without replacement according to the measure $g_{\Pi}$.
Then $$II\ =\ 1_{|\Pi|>1}\frac{|\Pi|(|\Pi|-1)}{2n}\int_{({\mathbb{R}}^{+})^{2}}\nu_{\Pi}(dxdy)\left(\ \frac{|\Pi|}{|\Pi|-1}\left<g_{\Pi},f\right>+\frac{1}{|\Pi|-1}f(x+y)-\frac{1}{|\Pi|-1}f(x)-\frac{1}{|\Pi|-1}f(y)\ -\ \left<g_{\Pi},f\right>\right)\ =\ 1_{|\Pi|>1}\frac{|\Pi|(|\Pi|-1)}{2n}\int_{({\mathbb{R}}^{+})^{2}}\nu_{\Pi}(dxdy)\left(\ \frac{1}{|\Pi|-1}\left<g_{\Pi},f\right>+\frac{1}{|\Pi|-1}f(x+y)-\frac{1}{|\Pi|-1}f(x)-\frac{1}{|\Pi|-1}f(y)\right)\ =\ 1_{|\Pi|>1}\frac{|\Pi|}{2n}\int_{({\mathbb{R}}^{+})^{2}}\nu_{\Pi}(dxdy)\left(\left<g_{\Pi},f\right>+f(x+y)-f(x)-f(y)\right).$$ In words, we coalesce two species lineages at rate $\frac{|\Pi|(|\Pi|-1)}{2n}$. Conditional on a coalescence event, we pick two species lineages according to the measure $\nu_{\Pi}(dxdy)$. If we pick two lineages with coordinates $x$ and $y$ respectively, then the change in $\left<g_{\Pi},f\right>$ is readily given by the term in parentheses. By using (7.58), one gets the existence of a constant $c_{2}$ such that $$|\ II-\ 1_{|\Pi|>1}\frac{|\Pi|}{2n}\int_{({\mathbb{R}}^{+})^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(\left<g_{\Pi},f\right>+f(x+y)-f(x)-f(y)\right)\ |\ \leq\ 1_{|\Pi|>1}c_{2}\frac{||f||_{\infty}}{|\Pi|-1}.$$ Using the fact that $$\int_{({\mathbb{R}}^{+})^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(\left<g_{\Pi},f\right>+f(x+y)-f(x)-f(y)\right)\ =\ \int_{({\mathbb{R}}^{+})^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(f(x+y)-f(x)\right),$$ this yields $$|\ II\ -\ 1_{|\Pi|>1}\frac{|\Pi|}{2n}\int_{({\mathbb{R}}^{+})^{2}}g_{\Pi}(dx)g_{\Pi}(dy)\left(f(x+y)-f(x)\right)|\ \leq\ c_{2}1_{|\Pi|>1}\frac{||f||_{\infty}}{|\Pi|-1},$$ which is the desired inequality (7.57). ∎ Lemma 7.2.
There exists a constant $C$ such that for every $0\leq u\leq t\leq T$, $$|\left<X^{n,f}\right>_{t}-\left<X^{n,f}\right>_{u}|\ \leq\frac{C}{n}\int_{u}^{t}\left(||f^{\prime}||^{2}_{\infty}\left<\tilde{g}^{n}_{s},x^{2}\right>\ +\ ||f||^{2}_{\infty}\right)ds.$$ Proof. $h^{f}(\tilde{\Pi}^{n})$ is a pure jump process and its bracket term $\left<X^{n,f}\right>_{t}-\left<X^{n,f}\right>_{u}$ can be decomposed into two terms, i.e., $\left<X^{n,f}\right>_{t}-\left<X^{n,f}\right>_{u}\ =\ \int_{u}^{t}I^{\prime}_{s}\ ds\ +\ \int_{u}^{t}II^{\prime}_{s}\ ds$, where $$I_{t}^{\prime}\ =\ \frac{c}{n}\sum_{i=1}^{\tilde{s}_{t}^{n}}\frac{n\tilde{\Pi}^{n}_{t}(i)(n\tilde{\Pi}^{n}_{t}(i)-1)}{2}\left(h^{f}(\tilde{\Pi}^{n}_{t}-\frac{1}{n}e_{i}^{\tilde{s}_{t}^{n}})-h^{f}(\tilde{\Pi}^{n}_{t})\right)^{2}\ =\ \frac{c}{n|\tilde{\Pi}^{n}_{t}|^{2}}\sum_{i=1}^{|\tilde{\Pi}^{n}_{t}|}\frac{n\tilde{\Pi}^{n}_{t}(i)(n\tilde{\Pi}^{n}_{t}(i)-1)}{2}\left(f(\tilde{\Pi}^{n}_{t}(i)-1/n)-f(\tilde{\Pi}^{n}_{t}(i))\right)^{2}$$ and $$II_{t}^{\prime}\ =\ 1_{|\tilde{\Pi}^{n}_{t}|>1}\times\frac{|\tilde{\Pi}^{n}_{t}|(|\tilde{\Pi}^{n}_{t}|-1)}{2n}\int_{({\mathbb{R}}^{+})^{2}}\nu_{\tilde{\Pi}^{n}_{t}}(dxdy)\left(\ \frac{1}{|\tilde{\Pi}^{n}_{t}|-1}\left<g_{\tilde{\Pi}^{n}_{t}},f\right>+\frac{1}{|\tilde{\Pi}^{n}_{t}|-1}f(x+y)-\frac{1}{|\tilde{\Pi}^{n}_{t}|-1}f(x)-\frac{1}{|\tilde{\Pi}^{n}_{t}|-1}f(y)\ \right)^{2},$$ where the sampling measure $\nu_{\Pi}$ is defined as in (7.58), and the expression of $II_{t}^{\prime}$ is obtained by an argument analogous to the one leading to (7.1).
Straightforward estimates yield that $$|\ I_{t}^{\prime}\ |\ \leq\frac{c||f^{\prime}||^{2}_{\infty}}{n}\left<\tilde{g}^{n}_{t},x^{2}\right>$$ and $$|\ II_{t}^{\prime}\ |\ \leq\ 16\times 1_{|\tilde{\Pi}^{n}_{t}|>1}||f||^{2}_{\infty}\frac{|\tilde{\Pi}^{n}_{t}|}{n(|\tilde{\Pi}^{n}_{t}|-1)}\leq 32\frac{||f||^{2}_{\infty}}{n},$$ which is the desired result. ∎ 7.2. Tightness result The aim of this section is to show the following tightness result. This will be the key ingredient in the proof of our convergence results. Proposition 7.3. Assume that conditions (7.52) and (7.53) hold. For every $T>0$, the sequence of processes $\{\tilde{g}^{n}\}_{n\geq 0}$ is tight in $D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$ and any converging subsequence belongs to ${\mathcal{C}}([0,T],(M_{F}({\mathbb{R}}^{+}),w))$, the space of continuous functions from $[0,T]$ to $(M_{F}({\mathbb{R}}^{+}),w)$. Further: (i) Any accumulation point $g^{\infty}$ is a weak solution of the Smoluchowski equation (1.3) with inverse population $2/r$, in the sense that for every $t\geq 0$ and every test function $f$, $$0=\ \left<g^{\infty}_{t},f\right>-\left<g^{\infty}_{0},f\right>+\int_{0}^{t}\left<g^{\infty}_{s},c\frac{x^{2}}{2}f^{\prime}\right>ds\ -\ \int_{0}^{t}\frac{1}{s+\frac{2}{r}}\int_{({\mathbb{R}}^{+})^{2}}g^{\infty}_{s}(dx)g^{\infty}_{s}(dy)\left(f(x+y)-f(x)\right)ds.$$ (ii) For every $t\in[0,T]$, $\left<g_{t}^{\infty},x\right><\infty$ and $$\left<\tilde{g}^{n}_{t},x\right>\to\left<g^{\infty}_{t},x\right>\ \ \mbox{in probability.}$$ In order to prove tightness, we follow a standard line of thought (see, e.g., [13, 24, 25]). The approach is condensed in the statement of Theorem 7.4, which is cited from Tran [25] (Theorem 1.1.8).
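The limiting dynamics in (i) can be made concrete with a mean-field particle scheme. The sketch below is our own illustration, not a construction used in the proofs, and its discretization choices are ours: each particle mass follows the drift $\dot{x}=-cx^{2}/2$, whose flow $x\mapsto x/(1+cxt/2)$ is applied exactly between events; coagulation events occur at total rate $K/(t+2/r)$, with the event time sampled exactly by inverting the time-inhomogeneous rate; after a merger $x_{i}\leftarrow x_{i}+x_{j}$, the vacated slot is refilled by resampling from the current empirical measure, which keeps the particle number $K$ fixed while leaving $\left<g,f\right>$ unchanged on average.

```python
import math
import random

def smoluchowski_particles(masses, c, r, t_end, seed=0):
    """Mean-field particle sketch of the weak equation in Proposition 7.3(i).

    Between events every mass follows dx/dt = -c x^2 / 2, i.e. the exact flow
    x -> x / (1 + c x dt / 2).  Coagulation events occur at total rate
    K / (t + 2/r); the waiting time is sampled exactly by inverting
    int_t^{t+dt} K / (s + 2/r) ds = u,  i.e.  dt = (t + 2/r) (e^{u/K} - 1).
    At an event an ordered pair (i, j) is drawn, x_i <- x_i + x_j, and slot j
    is refilled from the current empirical measure so that K stays constant.
    """
    rng = random.Random(seed)
    xs = list(masses)
    K = len(xs)
    t = 0.0
    while True:
        if K > 1:
            u = rng.expovariate(1.0)
            dt = (t + 2.0 / r) * math.expm1(u / K)   # exact next-event time
        else:
            dt = t_end - t                           # no pair left: drift only
        last = t + dt >= t_end
        if last:
            dt = t_end - t
        xs = [x / (1.0 + c * x * dt / 2.0) for x in xs]  # exact drift flow
        t += dt
        if last:
            return xs
        i, j = rng.sample(range(K), 2)
        xs[i] += xs[j]                 # coagulation: the f(x+y) - f(x) term
        xs[j] = rng.choice(xs)         # resample to keep K particles
```

This is only meant to visualize the interplay between the quadratic gene-coalescence drift and the time-inhomogeneous coagulation rate $1/(s+2/r)$; nothing in the proofs depends on it.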
Theorem 7.4 can be obtained by concatenating the so-called Roelly criterion [22] (which states that tightness in $D([0,T],(M_{F}({\mathbb{R}}^{+}),v))$ boils down to proving that $\{\left<g_{n},f\right>\}_{n\geq 0}$ is tight in $D([0,T],{\mathbb{R}})$ for every $f\in{\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$), and a criterion due to Méléard and Roelly [Méléard, Roelly], which allows one to go from vague to weak convergence by checking that no mass is lost at $\infty$. Theorem 7.4. Let $\{\tilde{g}^{n}\}$ be a sequence in $D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$. Then the three following conditions are sufficient for the tightness of $\{\tilde{g}^{n}\}$ in $D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$: (i) For every $f\in{\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$, the sequence $\{\left<g^{n},f\right>\}_{n}$ is tight in $D([0,T],{\mathbb{R}})$. (ii) $\limsup_{n}{\mathbb{E}}(\sup_{[0,T]}\left<\tilde{g}^{n}_{t},x^{2}\right>)<\infty$. (iii) Any accumulation point $g^{\infty}$ of $\{\tilde{g}^{n}\}$ (in $D([0,T],(M_{F}({\mathbb{R}}^{+}),v))$) belongs to ${\mathcal{C}}([0,T],(M_{F}({\mathbb{R}}^{+}),w))$. Proof of Proposition 7.3. Step 1. We first show that for every $f\in{\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$, the sequence of processes $\{X^{n,f}\}_{n}$ (as defined in (7.56)) is tight. In order to do so, we use the classical Aldous–Rebolledo criterion [1, 15]. We first note that for every $t\geq 0$, $$|X^{n,f}_{t}|\leq||f||_{\infty},$$ so that the first requirement of the Aldous criterion (i.e., for every deterministic $t$, $\{X^{n,f}_{t}\}_{n}$ is tight) is satisfied.
Next, let $\gamma>0$ be an arbitrarily small number and let us consider two stopping times $(\tau,\sigma)$ such that $$0\leq\tau\leq\sigma\leq\tau+\gamma\leq T.$$ First, we decompose the semi-martingale $X^{n,f}$ into its martingale part and its drift part, namely, (7.59) $$X^{n,f}_{t}\ =\ M^{n,f}_{t}\ +\ B_{t}^{n,f},\ \mbox{where $B_{t}^{n,f}\ :=\ \int_{0}^{t}G^{n}h^{f}(\tilde{\Pi}^{n}_{s})ds$ and $M_{t}^{n,f}\ :=\ X^{n,f}_{t}-\int_{0}^{t}G^{n}h^{f}(\tilde{\Pi}^{n}_{s})ds$}.$$ It remains to show that the quantities $${\mathbb{E}}(|B^{n,f}_{\sigma}-B^{n,f}_{\tau}|)\ \ \mbox{and}\ \ {\mathbb{E}}(|M^{n,f}_{\sigma}-M^{n,f}_{\tau}|)$$ are bounded from above by a function of $\gamma$ (uniformly in the choice of the two stopping times $\tau$ and $\sigma$, and in $n$) going to $0$ as $\gamma$ goes to $0$. (This is the second part of the Aldous–Rebolledo criterion.) In order to prove this result, we now make use of some of the technical results established earlier. First, from (7.1), we note that there exists a constant $\bar{K}$ such that for every $\Pi\in E_{n}$ and $f\in C_{b}^{\infty}({\mathbb{R}}^{+})$, $$|G^{n}h^{f}(\Pi)|\ \leq\ \bar{K}\left(||f^{\prime}||_{\infty}\left<g_{\Pi},x^{2}\right>\ +\ \frac{|\Pi|}{n}||f||_{\infty}\right).$$ This implies that $${\mathbb{E}}\left(|B^{n,f}_{\sigma}-B^{n,f}_{\tau}|\right)\ \leq\ {\mathbb{E}}\left(\int_{\tau}^{\sigma}|G^{n}h^{f}(\tilde{\Pi}^{n}_{s})|ds\right)\ \leq\ \bar{K}{\mathbb{E}}\left(\int_{\tau}^{\sigma}\left(||f^{\prime}||_{\infty}\left<g_{\tilde{\Pi}^{n}_{s}},x^{2}\right>\ +\ \frac{s_{0}^{n}}{n}||f||_{\infty}\right)ds\right)\ \leq\ \bar{K}\gamma\left(||f^{\prime}||_{\infty}{\mathbb{E}}\left(\sup_{[0,T]}\left<g_{\tilde{\Pi}^{n}_{s}},x^{2}\right>\right)+\frac{s_{0}^{n}}{n}||f||_{\infty}\right).$$ Further, $${\mathbb{E}}(|M^{n,f}_{\sigma}-M^{n,f}_{\tau}|)^{2}\ \leq$$
$${\mathbb{E}}(|M^{n,f}_{\sigma}-M^{n,f}_{\tau}|^{2})\ =\ {\mathbb{E}}\left(\left<X^{n,f}\right>_{\sigma}-\left<X^{n,f}\right>_{\tau}\right)\ \leq\ \frac{C}{n}{\mathbb{E}}\left(\int_{\tau}^{\sigma}\left(||f^{\prime}||^{2}_{\infty}\left<g_{\tilde{\Pi}^{n}_{s}},x^{2}\right>\ +\ ||f||^{2}_{\infty}\right)ds\right)\ \leq\ \frac{C}{n}\gamma\left(||f^{\prime}||^{2}_{\infty}{\mathbb{E}}\left(\sup_{[0,T]}\left<g_{\tilde{\Pi}^{n}_{s}},x^{2}\right>\right)\ +\ ||f||^{2}_{\infty}\right),$$ where the second inequality follows from Lemma 7.2. Combining the two previous inequalities with (7.52) and (7.53) shows the tightness of $\{X^{n,f}\}_{n\geq 0}$. Step 2. Let $g^{\infty}$ be an accumulation point of the sequence $\{\tilde{g}^{n}\}$ in $D([0,T],(M_{F}({\mathbb{R}}^{+}),v))$. Since $\tilde{g}^{n}_{t}=\frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\delta_{\tilde{\Pi}^{n}_{t}(i)}$ and a transition can only affect two coordinates of $\tilde{\Pi}^{n}$ at a time, it is not hard to show that $$\sup_{t\in[0,T]}\sup_{f\in L^{\infty}({\mathbb{R}}^{+}),||f||_{\infty}\leq 1}|\left<\tilde{g}^{n}_{t},f\right>-\left<\tilde{g}^{n}_{t-},f\right>|\leq\frac{4}{\tilde{s}_{T}^{n}},$$ where $L^{\infty}({\mathbb{R}}^{+})$ is the set of bounded functions from ${\mathbb{R}}^{+}$ to ${\mathbb{R}}$. Since $\tilde{s}_{T}^{n}$ goes to $\infty$ (in probability) as $n\to\infty$, this implies that $g^{\infty}$ belongs to ${\mathcal{C}}([0,T],(M_{F}({\mathbb{R}}^{+}),w))$. The tightness of $\{\tilde{g}^{n}\}$ in $D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$ then follows by a direct application of Theorem 7.4 (using the second moment assumption $\limsup_{n}{\mathbb{E}}\left(\sup_{[0,T]}\left<\tilde{g}^{n}_{t},x^{2}\right>\right)<\infty$). Step 3. Next, let $f$ be an arbitrary test function in ${\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$.
For every $m\in D([0,T],(M_{F}({\mathbb{R}}^{+}),w))$, define $$\varphi_{f,t}(m)\ =\ \left<m_{t},f\right>-\left<m_{0},f\right>+\int_{0}^{t}\left<m_{s},c\frac{x^{2}}{2}f^{\prime}\right>ds\ -\ \int_{0}^{t}\frac{1}{s+\frac{2}{r}}\int_{({\mathbb{R}}^{+})^{2}}m_{s}(dx)m_{s}(dy)\left(f(x+y)-f(x)\right)ds.$$ In this step, we show that $\varphi_{f,t}(g^{\infty})=0$ for every $t\in[0,T]$ and any choice of test function $f$ in ${\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$. We first observe that $${\mathbb{E}}\left(|\varphi_{f,t}(\tilde{g}^{n})|\right)\ \leq\ {\mathbb{E}}(|M^{n,f}_{t}-M^{n,f}_{0}|)\ +\ {\mathbb{E}}\left(\int_{0}^{t}\left|(G^{n}-\bar{G})h^{f}(\tilde{\Pi}^{n}_{s})\right|ds\right),$$ where $M^{n,f}$ is the martingale defined in (7.59) and $\bar{G}$ is the generator approximation defined in Lemma 7.1. When we let $n\to\infty$, the second term vanishes by Lemma 7.1. For the first term, we have $$\left({\mathbb{E}}|M^{n,f}_{t}-M^{n,f}_{0}|\right)^{2}\ \leq\ {\mathbb{E}}\left(M^{n,f}_{t}-M^{n,f}_{0}\right)^{2}\ =\ {\mathbb{E}}\left(\left<X^{n,f}\right>_{t}-\left<X^{n,f}\right>_{0}\right),$$ and the RHS can be handled by Lemma 7.2 and our second moment assumption (7.53).
This implies $$\lim_{n\to\infty}{\mathbb{E}}\left(|\varphi_{f,t}(\tilde{g}^{n})|\right)\ =\ 0.$$ On the other hand, since any accumulation point $g^{\infty}$ must be in ${\mathcal{C}}([0,T],(M_{F}({\mathbb{R}}^{+}),w))$ and since $f$ and its derivative $f^{\prime}$ are continuous, we must have $$\varphi_{f,t}(\tilde{g}^{n})\ \Longrightarrow\ \varphi_{f,t}(g^{\infty}).$$ (Here we use the fact that $f$ is a test function, so that $f$ and $\psi f^{\prime}$ remain bounded; further, if $\{(\tilde{m}^{n}_{t};t\geq 0)\}$ converges to a continuous $(m^{\infty}_{t};t\geq 0)$, then for every continuous and bounded function $u$, the process $\left(\left<m^{n}_{t},u\right>;t\geq 0\right)$ converges to $\left(\left<m^{\infty}_{t},u\right>;t\geq 0\right)$ in the uniform norm on every finite interval.) We then get that ${\mathbb{E}}(|\varphi_{f,t}(g^{\infty})|)=0$ by a direct application of the bounded convergence theorem. Step 4. In the previous step, we showed that $\varphi_{f,t}(g^{\infty})=0$ for any test function in ${\mathcal{C}}_{b}^{\infty}({\mathbb{R}}^{+})$. By a standard density argument, the result also holds for any test function, thus showing that $g^{\infty}$ is a weak solution of the Smoluchowski equation (1.3) with inverse population $\delta$. Step 5. Let us now show the convergence of the mean. The argument is quite standard and goes by approximating the function $x$ by a bounded and continuous function, in order to make use of the weak convergence. Define $$f^{(k)}(x)\ =\ x\ \mbox{if $x\leq k$},\ \ f^{(k)}(x)=k\ \mbox{otherwise},$$ and note that (7.60) $$\left<\tilde{g}^{n}_{t},x\right>\ =\ \left<\tilde{g}^{n}_{t},f^{(k)}(x)\right>\ +\ \left<\tilde{g}^{n}_{t},x-f^{(k)}(x)\right>.$$ We now let $n$ and then $k$ go to $\infty$ sequentially.
By using the Cauchy-Schwarz and Markov inequalities, for any $k\geq 1$, we get (7.61) $$\left<\tilde{g}^{n}_{t},x-f^{(k)}(x)\right>^{2}\ \leq\ \left<\tilde{g}^{n}_{t},1_{x\geq k}\right>\left<\tilde{g}^{n}_{t},(x-k)^{+}\right>\ \leq\ \frac{1}{k^{2}}\left<\tilde{g}^{n}_{t},x^{2}\right>^{3/2},$$ and using (7.53), the RHS of the inequality goes to $0$ (in probability) as $n$ and then $k$ go sequentially to $\infty$. On the other hand, since $\{\tilde{g}^{n}_{t}\}_{n}$ converges to $g^{\infty}_{t}$ as $n\to\infty$ in the weak topology, the first term on the RHS of (7.60) converges to $\left<g^{\infty}_{t},f^{(k)}\right>$. Finally, as $k\to\infty$, $\left<g^{\infty}_{t},f^{(k)}\right>$ goes to $\left<g^{\infty}_{t},x\right>$ by the monotone convergence theorem. This completes the proof of the convergence of the mean. ∎ 7.3. Proof of Theorem 5.1 We start by showing the convergence of $\{\left(\tilde{g}^{n}_{t};\ t\in[0,T]\right)\}_{n}$. By Proposition 7.3 (and the uniqueness of (1.3) with initial condition $\nu$ and $\delta=2/r$), it is enough to show that $\limsup_{n}{\mathbb{E}}\left(\sup_{[0,T]}\left<\tilde{g}^{n}_{t},x^{2}\right>\right)\ <\ \infty$. For every $t\leq T$, denote by $\tilde{C}_{t,T}^{n}(i)$ the indices of the blocks at time $t/n$ partitioning the block $i$ at time $T/n$. (In particular, $\tilde{C}_{0,T}^{n}(i)=\tilde{B}_{T}^{n}(i)$.)
We have (7.62) $$\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\left(\sum_{k\in\tilde{B}_{T}^{n}(i)}\tilde{\Pi}^{n}_{0}(k)\right)^{2}\ =\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\left(\sum_{j\in\tilde{C}_{t,T}^{n}(i)}\sum_{k\in\tilde{B}_{t}^{n}(j)}\tilde{\Pi}^{n}_{0}(k)\right)^{2}\ \geq\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\sum_{j\in\tilde{C}_{t,T}^{n}(i)}\left(\sum_{k\in\tilde{B}_{t}^{n}(j)}\tilde{\Pi}^{n}_{0}(k)\right)^{2}\ =\ \frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\left(\sum_{k\in\tilde{B}_{t}^{n}(i)}\tilde{\Pi}^{n}_{0}(k)\right)^{2}\ \geq\ \frac{1}{\tilde{s}_{t}^{n}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\left(\sum_{k\in\tilde{B}_{t}^{n}(i)}\tilde{\Pi}^{n}_{0}(k)\right)^{2}.$$ Thus, for every $t\leq T$, this yields $$\left<\tilde{g}^{n}_{t},x^{2}\right>\ \leq\ \frac{1}{\tilde{s}^{n}_{t}}\sum_{i=1}^{\tilde{s}_{t}^{n}}\Big(\sum_{j\in\tilde{B}_{t}^{n}(i)}\tilde{\Pi}^{n}_{0}(j)\Big)^{2}\leq\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\left(\sum_{j\in\tilde{B}_{T}^{n}(i)}\tilde{\Pi}^{n}_{0}(j)\right)^{2},$$ where the first inequality is obtained by ignoring the coalescence events between gene lineages (in particular, the first inequality becomes an equality when $c=0$). Let $\{{\mathcal{G}}_{t};t\geq 0\}$ be the natural filtration generated by the species coalescent.
From the previous arguments, we get that $${\mathbb{E}}\left(\sup_{[0,T]}\left<\tilde{g}^{n}_{t},x^{2}\right>\right)\ \leq\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\left(\sum_{j\in\tilde{B}_{T}^{n}(i)}\tilde{\Pi}^{n}_{0}(j)\right)^{2}\right)\ \leq\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)\sum_{j\in\tilde{B}_{T}^{n}(i)}(\tilde{\Pi}_{0}^{n}(j))^{2}\right)\ =\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)\sum_{j\in\tilde{B}_{T}^{n}(i)}{\mathbb{E}}\left((X_{j}^{n})^{2}\ |\ {\mathcal{G}}_{T}\right)\right)\ =\ {\mathbb{E}}\left((X_{1}^{n})^{2}\right)\ {\mathbb{E}}\left(\frac{1}{\tilde{s}_{T}^{n}}\sum_{i=1}^{\tilde{s}_{T}^{n}}\tilde{V}_{T}^{n}(i)^{2}\right),$$ and the RHS remains bounded by assumption and Corollary 6.2. It remains to show the joint convergence statement (5.47). The convergence of $\{\tilde{s}^{n}\}$ was already stated in Lemma 6.1. The joint convergence follows from the fact that the limits of the marginals are both deterministic. 8. Coming down from infinity in the nested Kingman coalescent In the following, $(s_{t},\Pi_{t})$ will denote an $\infty$-pop. nested Kingman coalescent, and $g_{t}$ will denote the associated empirical measure. Proposition 8.1. For every $t>0$, we have $\liminf_{n}\ R^{n}\circ g_{t}\geq\delta_{\frac{2}{ct}}$ in the sense that for every continuous, bounded and non-decreasing function $f$, $${\mathbb{E}}\left(\ \liminf_{n}\left<R^{n}\circ g_{t},f\right>\right)\geq f({2/ct}).$$ Proof. First, note that $g_{t}$ stochastically dominates the case where each species lineage carries a single gene lineage at time $0$. Hence, we can assume w.l.o.g. this particular initial condition.
Secondly, since the species constraint forbids coalescence events between gene lineages belonging to different species, $(g_{t};t\geq 0)$ dominates $(K_{t};t\geq 0)$, where $K$ is the block counting process of a Kingman coalescent with rate $c$. (In other words, in $K$, we allow gene lineages to coalesce even if they belong to different species.) Finally, since $\frac{cs}{2}K_{s}\to 1$ a.s. as $s\downarrow 0$ (so that $\frac{1}{n}K_{t/n}\to\frac{2}{ct}$ a.s.), the result follows. ∎ Our next aim is to show the following result. Proposition 8.2. For every $0<\tau<T$, $$\limsup_{n}\ {\mathbb{E}}\left(\sup_{[\tau,T]}\left<R^{n}\circ g_{t},x^{2}\right>\right)\ <\ \infty.$$ Proof. We have (8.63) $$\sup_{t\in[\tau,T]}\ \left<R^{n}\circ g_{t},x^{2}\right>\ =\ \sup_{t\in[\frac{\tau}{2},T-\frac{\tau}{2}]}\ \left<R^{n}\circ\hat{g}^{n}_{t},x^{2}\right>\ \ \mbox{in law,}$$ where $\hat{g}^{n}$ is the empirical measure associated with the nested coalescent whose initial number of species equals $\hat{s}_{0}^{n}=s_{\tau/2n}$ and whose genetic composition vector is $\Pi_{\tau/2n}$. Further, using the large deviation estimates of Lemma 6.1, we get (8.64) $$\hat{s}_{0}^{n}/n=s_{\tau/2n}/n\ \to\ \frac{4}{\tau}\in(0,\infty)\ \ \mbox{in $L^{p}$,}\ \ \forall p>1.$$ The RHS of (8.63) is always bounded from above by the same quantity if we replace $\hat{g}^{n}_{t}$ by the empirical measure associated with the nested coalescent starting with $\hat{s}_{0}^{n}$ species and infinitely many gene lineages in each species. In turn, the latter model is bounded by the model starting from the infinite initial condition, but where gene lineages can only coalesce if they belong to the same species at time $0$, i.e., even if species 1 and 2 coalesce, their respective gene lineages are forbidden to merge afterwards. The empirical measure associated with this process is identical in law to $$m^{n}_{t}\ :=\ \frac{1}{\hat{s}_{t}^{n}}\sum_{i=1}^{\hat{s}_{t}^{n}}\delta_{\sum_{j\in\hat{B}_{t}^{n}(i)}\xi_{j}(t)},$$ where $\hat{B}_{t}^{n}(i)$ (w.r.t.
the species coalescent $\hat{s}^{n}$) and the $\xi_{j}$’s are defined analogously to Lemma 6.3. This yields $$\forall t\in[\tau/2,T-\tau/2],\ \ \left<R^{n}\circ\hat{g}^{n}_{t},x^{2}\right>\leq\left<R^{n}\circ m^{n}_{t},x^{2}\right>=_{{\mathcal{L}}}\beta^{n}_{t}$$ (where the domination is meant in the stochastic sense), and thus $${\mathbb{E}}\left(\sup_{t\in[\tau,T]}\left<R^{n}\circ g_{t},x^{2}\right>\right)\leq{\mathbb{E}}\left(\sup_{t\in[\tau/2,T-\tau/2]}\beta_{t}^{n}\right).$$ Proposition 8.2 then follows by a direct application of Lemma 6.3 (and (8.64)). ∎ Proof of Theorem 5.4. Step 1. Let us fix $\tau>0$. Define $\hat{g}^{n,(\tau)}=\theta_{\tau}\circ R^{n}\circ g$ and let $\hat{s}^{n,(\tau)}=\theta_{\tau}\circ s^{n}$. Proposition 8.2 implies that $$\limsup_{n}{\mathbb{E}}\left(\sup_{t\in[0,T]}\left<\hat{g}^{n,(\tau)}_{t},x^{2}\right>\right)\ =\ \limsup_{n}{\mathbb{E}}\left(\sup_{t\in[\tau,T+\tau]}\left<R^{n}\circ g_{t},x^{2}\right>\right)<\ \infty.$$ Further, by Lemma 6.1, $$\underbrace{\frac{1}{n}}_{\mbox{time scaling}}\times\underbrace{\hat{s}^{n,(\tau)}_{0}(={s}_{\tau/n})}_{\mbox{number of blocks in the species coalescent at time $0$}}\to r=\frac{2}{\tau}\ \ \mbox{in $L^{p}$ for every $p>1$}.$$ By Proposition 7.3, it follows that the sequence $\{\hat{g}^{n,(\tau)}\}_{n}$ is tight and that any sub-sequential limit $\hat{g}^{\infty,(\tau)}$ is a weak solution of the Smoluchowski equation with inverse population size $\tau/2$.
The continuous mapping theorem implies that $\{R^{n}\circ g_{t}=\theta_{-\tau}\circ\hat{g}^{n,(\tau)};t\geq\tau\}$ converges to the limit $(\theta_{-\tau}\circ\hat{g}^{\infty,(\tau)};t\geq\tau)$, where the latter process satisfies the equation (8.65) $$\forall t\geq\tau,\ \left<\nu_{t},f\right>-\left<\nu_{\tau},f\right>+\int_{\tau}^{t}\left<\nu_{s},c\frac{x^{2}}{2}f^{\prime}\right>ds\ -\ \int_{\tau}^{t}\frac{1}{s}\int_{({\mathbb{R}}^{+})^{2}}\nu_{s}(dx)\nu_{s}(dy)\left(f(x+y)-f(x)\right)ds=0.$$ Note that the coefficients of the IPDE do not depend on the value of $\tau$. Step 2. Let us now take a sequence of positive numbers $\{\tau_{m}\}_{m}$ going to $0$. For every $m$, there exists a subsequence of $\{(R^{n}\circ g_{t};t\geq\tau_{m})\}_{n}$ converging to $\mu^{(0),m}$ satisfying (8.65). By a standard diagonalization argument, this ensures the existence of a subsequence of $\{R^{n}\circ g\}$ converging to a process $\mu^{(0)}$ defined on $(0,\infty)$ (in comparison with Step 1, where the process was defined on $[\tau,\infty)$) and satisfying (8.66) $$\forall t,\tau>0,\ \left<\nu_{t},f\right>-\left<\nu_{\tau},f\right>+\int_{\tau}^{t}\left<\nu_{s},c\frac{x^{2}}{2}f^{\prime}\right>ds\ -\ \int_{\tau}^{t}\frac{1}{s}\int_{({\mathbb{R}}^{+})^{2}}\nu_{s}(dx)\nu_{s}(dy)\left(f(x+y)-f(x)\right)ds\ =\ 0,$$ i.e., $\mu^{(0)}$ is a weak solution of the $\infty$-pop. Smoluchowski equation. In order to prove Theorem 5.4, it remains to show that $\mu^{(0)}$ is the only proper solution. This follows directly from the stochastic domination of Proposition 8.1. Finally, the joint convergence with $(\tilde{g}^{n},\tilde{s})$ follows from the fact that both marginals are deterministic at the limit. ∎ Proof of Theorem 5.6. Let $\rho_{t}$ be the number of gene lineages at time $t$.
We need to show that $$\frac{1}{n^{2}}\rho_{t/n}\ \Longrightarrow\ \frac{2}{t^{2}}\ \int_{0}^{\infty}x\mu^{(0)}_{t}(x)dx$$ where $\mu^{(0)}$ is the proper solution of the Smoluchowski equation. By applying Proposition 7.3 and Theorem 5.4, $\left(\frac{1}{n}s_{t/n},\left<R^{n}\circ g_{t},x\right>\right)$ converges to $\left(\frac{2}{t};\frac{1}{t}\int_{0}^{\infty}x\mu_{t}^{(0)}(x)dx\right)$. The result follows from the observation that $$\frac{1}{n^{2}}\rho_{t/n}\ =\ \frac{1}{n}s_{t/n}\left<R^{n}\circ g_{t},x\right>.$$ ∎ Appendix A Here, we complete the proof of Theorem 2.4 (ii) by showing that $\mu_{T}$ defined as $F({\bf T},(W_{i});1\leq i\leq N_{T}(\bf T))$ is indeed a solution to (1.3). Let $f$ be a test-function as defined before Definition 1.6, i.e., $f\in{\mathcal{C}}^{1}({\mathbb{R}}^{+})$ such that $f$ and $f^{\prime}\psi$ are bounded. Hereafter, we continue to denote by $\mathbb{P}_{T}$ the joint law of the pure-birth tree ${\bf T}$ started with one particle at time 0, with birth rate $a(T-t)$, stopped at time $T$, and of the iid rvs $(W_{i};1\leq i\leq N_{T}(\bf T))$ with law $\nu$. We will abbreviate $F({\bf T},(W_{i});1\leq i\leq N_{T}(\bf T))$ into $F({\bf T})$. In particular, denoting by $\mu_{T}$ the law of $F({\bf T},(W_{i});1\leq i\leq N_{T}(\bf T))$ under $\mathbb{P}_{T}$, we have $$\mu_{T}(f):=\int_{{\mathbb{R}}^{+}}f(x)\,\mu_{T}(dx)=\mathbb{E}_{T}(f\circ F({\bf T})),$$ so that $$\mu_{T+\varepsilon}(f)=\mathbb{E}_{T+\varepsilon}(f\circ F({\bf T}),N_{\varepsilon}=1)+a(T)\varepsilon\,\mathbb{E}_{T}^{\otimes 2}(f\circ F({\bf T}+{\bf T}^{\prime}))+o(\varepsilon),$$ where ${\bf T}^{\prime}$ is an independent copy of ${\bf T}$ and ${\bf T}+{\bf T}^{\prime}$ denotes the tree splitting at time 0 into the two subtrees ${\bf T}$ and ${\bf T}^{\prime}$.
First recall that $$F(\mathbbm{t})=\lim_{x\downarrow 0}x^{-1}\left(1-e^{-xM^{\mathbbm{t}}\left(1-% \exp\left(-\sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}\right)\right)}\right)% =\lim_{x\downarrow 0}x^{-1}\left(1-Q_{x}^{\mathbbm{t}}\left(e^{-\sum_{i=1}^{N_% {T}(\mathbbm{t})}w_{i}Z_{T}^{i}}\right)\right),$$ so that $$\displaystyle F(\mathbbm{t}+\mathbbm{t}^{\prime})$$ $$\displaystyle=$$ $$\displaystyle\lim_{x\downarrow 0}x^{-1}\left(1-Q_{x}^{\mathbbm{t}+\mathbbm{t}^% {\prime}}\left(e^{-(\sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}+\sum_{i=1}^{% N_{T}(\mathbbm{t}^{\prime})}w_{i}^{\prime}Z_{T}^{i^{\prime}})}\right)\right)$$ $$\displaystyle=$$ $$\displaystyle\lim_{x\downarrow 0}x^{-1}\left(1-Q_{x}^{\mathbbm{t}}\left(e^{-% \sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}}\right)Q_{x}^{\mathbbm{t}^{% \prime}}\left(e^{-\sum_{i=1}^{N_{T}(\mathbbm{t}^{\prime})}w_{i}^{\prime}Z_{T}^% {i}}\right)\right)$$ $$\displaystyle=$$ $$\displaystyle\lim_{x\downarrow 0}x^{-1}\left(1-e^{-xM^{\mathbbm{t}}\left(1-% \exp\left(-\sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}\right)\right)}e^{-xM^% {\mathbbm{t}^{\prime}}\left(1-\exp\left(-\sum_{i=1}^{N_{T}(\mathbbm{t}^{\prime% })}w_{i}^{\prime}Z_{T}^{i}\right)\right)}\right)$$ $$\displaystyle=$$ $$\displaystyle F(\mathbbm{t})+F(\mathbbm{t}^{\prime}).$$ Second, if we denote by $\mathbbm{t}+\varepsilon$ the tree obtained from $\mathbbm{t}$ by merely adding a length $\varepsilon$ to its root edge, then by the Markov property of the entrance measure of the CSBP at 0, $$\displaystyle F(\mathbbm{t}+\varepsilon)$$ $$\displaystyle=$$ $$\displaystyle M^{\mathbbm{t}+\varepsilon}\left(1-\exp\left(-\sum_{i=1}^{N_{T}(% \mathbbm{t})}w_{i}Z_{T+\varepsilon}^{i}\right)\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{(0,\infty)}N(Z_{\varepsilon}\in dx)\,Q_{x}^{\mathbbm{t}}% \left(1-\exp\left(-\sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}\right)\right)$$ $$\displaystyle=$$ $$\displaystyle\int_{(0,\infty)}N(Z_{\varepsilon}\in dx)\left(1-\exp\left(-xM^{% 
\mathbbm{t}}\left(1-\exp\left(-\sum_{i=1}^{N_{T}(\mathbbm{t})}w_{i}Z_{T}^{i}\right)\right)\right)\right)$$ $$\displaystyle=$$ $$\displaystyle N\left(1-\exp\left(-Z_{\varepsilon}F(\mathbbm{t})\right)\right).$$ Now as specified at the end of Subsection 2.3, for each fixed $\lambda$, $N\left(1-\exp\left(-\lambda Z_{t}\right)\right)$ is a solution to $\dot{x}=-\psi(x)$ with initial condition $x(0)=\lambda$. As a consequence, $$\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\left(F(\mathbbm{t}+\varepsilon)-F(\mathbbm{t})\right)=\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\left(N\left(1-\exp\left(-Z_{\varepsilon}F(\mathbbm{t})\right)\right)-F(\mathbbm{t})\right)=-\psi(F(\mathbbm{t})).$$ Combining the last two results, we obtain $$\displaystyle\mu_{T+\varepsilon}(f)$$ $$\displaystyle=$$ $$\displaystyle\mathbb{E}_{T+\varepsilon}(f\circ F({\bf T}),N_{\varepsilon}=1)+a(T)\varepsilon\,\mathbb{E}_{T}^{\otimes 2}(f\circ F({\bf T}+{\bf T}^{\prime}))+o(\varepsilon)$$ $$\displaystyle=$$ $$\displaystyle(1-a(T)\varepsilon)\,\mathbb{E}_{T}(f\circ F({\bf T}+\varepsilon))+a(T)\varepsilon\,\mathbb{E}_{T}^{\otimes 2}(f\circ(F({\bf T})+F({\bf T}^{\prime})))+o(\varepsilon)$$ $$\displaystyle=$$ $$\displaystyle\mu_{T}(f)+(1-a(T)\varepsilon)\,\mathbb{E}_{T}(f\circ F({\bf T}+\varepsilon)-f\circ F({\bf T}))+a(T)\varepsilon\,(\mu_{T}^{\star 2}(f)-\mu_{T}(f))+o(\varepsilon).$$ Next, since $\psi f^{\prime}$ is bounded, by dominated convergence, we get $$\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}\mathbb{E}_{T}(f\circ F({\bf T}+\varepsilon)-f\circ F({\bf T}))=-\mathbb{E}_{T}\big{(}\psi(F({\bf T}))\,f^{\prime}(F({\bf T}))\big{)}=-\mu_{T}(\psi f^{\prime}).$$ As a consequence, $$\lim_{\varepsilon\downarrow 0}\varepsilon^{-1}(\mu_{T+\varepsilon}(f)-\mu_{T}(f))=-\mu_{T}(\psi f^{\prime})+a(T)\,(\mu_{T}^{\star 2}(f)-\mu_{T}(f)).$$ So $t\mapsto\mu_{t}(f)$ is right-differentiable with continuous right-derivative equal to $$\partial_{t}\mu_{t}(f)=-\mu_{t}(\psi f^{\prime})+a(t)\,(\mu_{t}^{\star 
2}(f)-\mu_{t}(f))\qquad t\geq 0.$$ Also note that $$F(\varnothing+\varepsilon)=M^{\varnothing+\varepsilon}\left(1-\exp\left(-w_{1}Z_{\varepsilon}\right)\right)=N\left(1-\exp\left(-w_{1}Z_{\varepsilon}\right)\right),$$ so that $F(\varnothing)=w_{1}$ and $\mu_{0}(f)=\mathbb{E}_{0}(f(F({\bf T})))=\mathbb{E}_{0}(f(W))=\nu(f)$. This shows that $\mu_{0}=\nu$, so that $(\mu_{t}(f);t\geq 0)$ satisfies (1.6). ∎
Etch-Tuning and Design of Silicon Nitride Photonic Crystal Reflectors Simon Bernard,${}^{1}$ Christoph Reinhardt,${}^{1}$ Vincent Dumont,${}^{1}$ Yves-Alain Peter,${}^{2}$ Jack C. Sankey${}^{1}$ ${}^{1}$Department of Physics, McGill University, Montréal, Québec, H3A 2T8, Canada ${}^{2}$Department of Engineering Physics, Polytechnique, Montréal, Québec, H3C 3A7, Canada jack.sankey@mcgill.ca Abstract By patterning a freestanding dielectric membrane into a photonic crystal reflector (PCR), it is possible to resonantly enhance its normal-incidence reflectivity, thereby realizing a thin, single-material mirror. In many PCR applications, the operating wavelength (e.g. that of a low-noise laser or emitter) is not tunable, imposing tolerances on crystal geometry that are not reliably achieved with standard nanolithography. Here we present a gentle technique to finely tune the resonant wavelength of a Si${}_{3}$N${}_{4}$ PCR using iterative hydrofluoric acid etches. With little optimization, we achieve a 57-nm-thick photonic crystal having an operating wavelength within 0.15 nm (0.04 resonance linewidths) of our target (1550 nm). Our thin structure exhibits a broader and less pronounced transmission dip than is predicted by plane wave simulations, and we identify two effects leading to these discrepancies, both related to the divergence angle of a collimated laser beam. To overcome this limitation in future devices, we distill a series of simulations into a set of general design considerations for realizing robust, high-reflectivity resonances. Two-dimensional (2D) photonic crystals provide a powerful means of controlling the flow of light Joannopoulos et al. (1995), and can be engineered so that “leaky” guided modes produce strong, resonant interactions with waves propagating out of the crystal plane Zhou et al. (2014). On resonance, these structures can be highly reflective, enabling (among other things) compact etalon filters Suh et al. (2003); Ho et al. 
(2015), lightweight laser mirrors Huang et al. (2007); Yang et al. (2015), nonlinear optical elements Yacomotti et al. (2006) and biochemical sensors Shi et al. (2009); Magnusson et al. (2011). Free-standing photonic crystal reflectors (PCRs) Fan and Joannopoulos (2002); Lousse et al. (2004); Kilic et al. (2004) represent a particularly advantageous technology in the field of optomechanics Aspelmeyer et al. (2014), wherein it is desirable to create a lightweight mass that strongly reflects incident light, thereby maximizing the influence of each photon. With this aim, single-layer reflectors have been fabricated from InP Antoni et al. (2011, 2012); Makles et al. (2015); Yang et al. (2015) and SiN Bui et al. (2012); Kemiktarak et al. (2012a, b); Stambaugh et al. (2014); Norte et al. (2016); Chen et al. (2016). High-stress Si${}_{3}$N${}_{4}$ is of particular interest, since it combines ease of fabrication, low optical loss Wilson et al. (2009); Sankey et al. (2010), and the lowest mechanical dissipation and force noise Reinhardt et al. (2016); Norte et al. (2016). Furthermore, incorporating a PCR does not introduce significant mechanical losses Bui et al. (2012); Norte et al. (2016). An outstanding goal is to fabricate PCRs with the high-reflectivity resonance at a specific design wavelength (e.g., to match that of a low-noise laser, an atomic transition, or another reflector). However, the width of a typical 2D SiN PCR resonance is measured in nanometers, imposing geometrical tolerances that cannot be reliably achieved with standard nanolithography. Inspired by similar work on microdisk resonators and photonic crystal defect cavities in other materials Barclay et al. (2006); White et al. (2005); Henze et al. (2013); Dalacu et al. (2005), we present a fabrication technique capable of achieving the desired precision through iterative tuning of the resonance with hydrofluoric (HF) acid dips.
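The iterative tuning just described is, in essence, a measure-and-dip control loop. The sketch below is a minimal illustration of that loop, with hypothetical `measure_resonance` and `hf_dip` callables standing in for the laboratory steps; the tolerance, initial rate guess, and safety factor are all illustrative assumptions, not reported parameters:

```python
def tune_resonance(measure_resonance, hf_dip, lam_target,
                   tol=0.2, rate_guess=-5.0, safety=0.5):
    """Iteratively dip in HF until the resonance wavelength (nm) sits
    within tol (nm) above lam_target. rate_guess is the assumed
    (negative) resonance shift rate in nm/min; safety < 1 deliberately
    under-etches each step to avoid overshoot, and the rate estimate
    is refreshed from each measured shift."""
    lam = measure_resonance()
    while lam - lam_target > tol:
        minutes = safety * (lam_target - lam) / rate_guess
        hf_dip(minutes)
        new_lam = measure_resonance()
        if new_lam < lam:  # refresh the rate estimate from the data
            rate_guess = (new_lam - lam) / minutes
        lam = new_lam
    return lam

# Toy stand-in for the real measurement (true rate -5 nm/min):
state = {"lam": 1566.0}
final = tune_resonance(lambda: state["lam"],
                       lambda m: state.update(lam=state["lam"] - 5.0 * m),
                       lam_target=1550.0)
print(final)  # lands within tol (0.2 nm) above 1550 nm
```

Because each step deliberately under-etches, the loop approaches the target geometrically from the long-wavelength side, which is the safe side: HF can only shift the resonance down.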
In this proof-of-concept study, we systematically tune the resonance of a 60-nm-thick photonic crystal by 16 nm, such that it resides within 0.15 nm (0.04 linewidths) of our targeted wavelength $\lambda_{0}$=$1550$ nm. This is achieved with a 2D square lattice of holes – chosen to eliminate any polarization-dependent response – but should be readily applicable to 1D gratings Kemiktarak et al. (2012a, b); Stambaugh et al. (2014) and other related structures. A second question is whether there exists a fundamental and/or practical limitation on how thin (lightweight) these structures can be made while still maintaining high reflectivity. For example, despite our comparatively low feature roughness, our thin structure exhibits a resonant dip in the transmission spectrum that is broader and not as deep as predicted by simple plane-wave simulations, reaching a minimal value of only 0.32. This discrepancy has been observed in similar crystals of thickness at or below 100 nm Bui et al. (2012); Norte et al. (2016), and has been attributed to fabrication disorder not included in periodic-crystal simulations Bui et al. (2012). Since then, however, thicker square-lattice PCRs – having comparable disorder to our own – have achieved a transmission dip well below 1% Norte et al. (2016); Chen et al. (2016). Building upon previous investigations of angular- and collimation-induced changes in the transmission spectrum of optically thick structures Crozier et al. (2006), we present simulations illustrating two effects leading to collimation-induced resonance broadening: (i) the previously identified angular dependence of the “primary” resonance wavelength Lousse et al. (2004); Crozier et al. (2006), and (ii) “parasitic” crystal modes that couple only to off-normal plane waves Lousse et al. (2004); Crozier et al. (2006); Bui et al. (2012); Chen et al. (2016), which can strongly interfere with the primary resonance.
Finally, we perform a series of simulations for crystals of varied geometry, and distill the results into a guide for reliably minimizing (or balancing) these effects to realize a maximally robust high-reflectivity resonance. I Etch-Tuning the Resonance Our primary goal is to tune the resonance of a thin Si${}_{3}$N${}_{4}$ PCR to a convenient (telecom) laser wavelength $\lambda_{0}$=$1550$ nm. Since electron beam lithography cannot achieve the required tolerances on its own, we first fabricate a structure intentionally having too much material, then iteratively measure the transmission spectrum $\mathcal{T}$ and etch with HF until the resonant wavelength $\lambda_{r}$ (here defined as the wavelength of the minimal value of $\mathcal{T}$) is close to $\lambda_{0}$. The crystal is initially fabricated from a commercial Si wafer coated on both sides with 100-nm-thick stoichiometric Si${}_{3}$N${}_{4}$ deposited by low-pressure chemical vapor deposition (LPCVD, University Wafers). The unpatterned membrane is defined by opening a square window in the back-side nitride using optical lithography and reactive ion etching (RIE), then etching through the exposed silicon in a $45\%$ KOH solution at $75\,^{\circ}$C ($\sim$ 35 $\upmu\text{m}/$hr etch rate) to release the front-side nitride membrane. The top surface of the nitride is then coated with a 150-nm-thick resist (Zeon Chemicals ZEP520A diluted in anisole with a 1:2 ratio, spun at 3000 rpm and baked for 40 min in an oven at $180\,^{\circ}$C), and exposed in an electron-beam writer (50 $\upmu\text{C}/\text{cm}^{2}$ dose at 10 kV) to define the crystal’s etch mask. The resist is developed for 60 s (Zeon Chemicals ZED-N50), and rinsed for 15 s in isopropyl alcohol. The pattern is then transferred to the membrane via RIE in a mixture of CHF${}_{3}$ (30 sccm), CF${}_{4}$ (70 sccm), and Ar (7 sccm) with a 30 mTorr chamber pressure and 100 W power.
The remaining resist is stripped with Microposit Remover 1165 for 30 min at $60\,^{\circ}$C and a 3:1 (H${}_{2}$SO${}_{4}$:H${}_{2}$O${}_{2}$) piranha solution for 15 min. Finally, the sample is cleaned and partially etched for 10 min in 10:1 HF acid at room temperature (note: we generally do not etch through the full nitride on the RIE step, allowing the final HF dip to finish the job; this reduces crystal breakage during the RIE step. We also use a carrier that holds chips rigidly and vertically in solution, as in Ref. Reinhardt et al. (2016); designs available upon request), then rinsed with water and methanol. This initial structure has nominal thickness $h$=$66\pm 1$ nm (measured by a reflectometer) and holes of diameter $d$=$614$ nm ($\pm$7 nm hole-to-hole with edge roughness $\sim 15$ nm), in a square lattice spaced by $a$=$1500\pm 6$ nm (Fig. 1(a)). We can locate the crystal resonance with the swept-laser transmission measurement sketched in Fig. 1(b). A collimated beam from a fiber-coupled tunable laser (New Focus Velocity 6328) is focused to a Gaussian spot having $1/e^{2}$ intensity diameter $D$=$60\pm 1$ $\upmu\text{m}$ at the crystal (relative spot size indicated by a red circle in Fig. 1(a)) and collected by a photodiode (PD). Fluctuations in the incident power are monitored by a second photodiode (“reference arm”). A rotatable half-wave plate (HWP) and linear polarizer (LP) define the input polarization to avoid any polarization-dependence of the beam splitter (BS) and other optics (the PCR response is found to be polarization-independent, however). Figure 1(c) shows transmission $\mathcal{T}$ versus wavelength $\lambda$ for the crystal (i) as fabricated and after (ii) 130 s, (iii) 165 s, and (iv) 195 s of HF etching (total). Immersing the structure in HF simultaneously decreases its thickness (at a rate $\sim$ 3 nm/min) and increases the hole diameters (rate $\sim$ 1 nm/min), displacing the resonance toward shorter wavelengths.
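Because the per-dip shift of the resonance is only known roughly, a conservative plan brackets the required dip time between the fastest and slowest plausible rates. The sketch below does exactly that; the bracket of −3 to −8 nm/min is an illustrative assumption for this example, not a calibrated value:

```python
def dip_time_bracket(lam_r, lam_target, rates=(-3.0, -8.0)):
    """Return (shortest, longest) HF dip times in minutes that a
    desired resonance shift could require, given a bracket of possible
    (negative) shift rates in nm/min. All numbers illustrative."""
    shift = lam_target - lam_r  # negative: HF moves the resonance blue
    times = sorted(shift / r for r in rates)
    return times[0], times[-1]

# e.g. 16 nm of remaining shift:
fast, slow = dip_time_bracket(1566.0, 1550.0)
print(fast, slow)  # 2 min at the fast rate, ~5.3 min at the slow rate
```

Dipping for the shorter bracket first and re-measuring guards against overshoot when the true rate sits at the fast end.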
Using this technique, the location of the resonance is tuned within 0.15 nm of the target wavelength (red curve). Here the shift in $\lambda_{r}$ per minute in HF is found to fluctuate somewhat ($-4.4$ nm/min (i $\rightarrow$ ii), $-3.2$ nm/min (ii $\rightarrow$ iii), and $-7.9$ nm/min (iii $\rightarrow$ iv)), which can be mitigated by using a slower, buffered solution and introducing gentle fluid circulation. The gray curves in Fig. 1(c) show the simulated response for a normal-incidence infinite plane wave. The hole diameters, which have a comparatively weak effect on the resonant wavelength $\lambda_{r}$, are set to (i) 614 nm and (iv) 618 nm to match the values extracted from SEM images (e.g. Fig. 1(a,inset)), and the thicknesses are set to (i) 62.5 nm and (iv) 54.5 nm to match the observed $\lambda_{r}$. These thickness values are within 6% of those measured independently by a reflectometer, and the simulated thickness change of 8 nm is consistent with the measured change of $9\pm 1$ nm. The linewidth of the measured resonance is also reduced by 20% as the device is thinned, qualitatively consistent with a 30% decrease predicted by the simulations, though the measured resonance is also a factor of $\sim$ 1.6 broader than the simulations suggest. Moreover, the depth of the resonance reaches a minimum value $\mathcal{T}_{\text{min}}$ between 0.32 ($h$=66 nm) and 0.38 ($h$=57 nm), placing an upper bound of 0.68 ($h$=66 nm) and 0.62 ($h$=57 nm) on the reflectivity. As discussed below, the vast majority of the resonance broadening in this case (and other thin crystals) likely arises from $\mathcal{T}$’s sensitivity to incident angle and the superposition of incident plane waves present in a collimated beam. II Collimation-Induced Broadening A practical limitation in the performance of photonic crystals is the sensitivity of the resonance to the radiation’s angle of incidence Lousse et al. (2004); Crozier et al.
(2006): since collimated beams comprise a weighted superposition of plane waves from all angles $\theta$ (e.g. spanning $\theta_{D}\sim 1^{\circ}$ for our $60$-$\upmu\text{m}$ spot; see inset of Fig. 1(c)), this sensitivity can average away the effect of crystal resonances. One can compensate by increasing $D$ to reduce $\theta_{D}$, but this in turn requires a larger-area crystal, which is disadvantageous for many applications, and furthermore difficult to fabricate: the crystal in Fig. 1 uses the full $200\times 200$ $\upmu\text{m}^{2}$ high-resolution write field of our electron-beam writer – using a larger field reduces precision, and using multiple fields introduces detrimental dislocations between adjacent fields. Our beam diameter of 60 $\upmu\text{m}$ was therefore chosen to be large but still safely contained within this high-resolution field. To get a sense of when this broadening is important, Fig. 2(a) shows a set of illustrative simulations for thin Si${}_{3}$N${}_{4}$ crystals of varied geometry and plane wave incidence angles $\theta$ between 0${}^{\circ}$ and 1${}^{\circ}$. The transverse-electric (TE) polarization (inset) is chosen in (a)-(c) to maximize the effect of $\theta$ on $\mathcal{T}$; for the transverse-magnetic (TM) polarization, $\mathcal{T}$ is comparatively unaffected (see (d)). The 200-nm-thick crystal in (a) exhibits a single, broad “primary” resonance near 1550 nm for $\theta$=$0$ (blue curve), and as $\theta$ is increased, a second “parasitic” resonance becomes increasingly pronounced. The existence of these additional resonances is well-known: they do not appear in normal-incidence transmission spectra because symmetry precludes coupling to such modes Fan and Joannopoulos (2002); Crozier et al. (2006); Bui et al. (2012); Chen et al. (2016). For $\theta\neq 0$, however, the parasitic mode’s presence has a modest impact on the spectrum, shifting the minimum in $\mathcal{T}$ to lower $\lambda$.
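The quoted $\theta_{D}\sim 1^{\circ}$ angular spread for a $60$-$\upmu\text{m}$ spot is consistent with standard Gaussian-beam optics, where the far-field $1/e^{2}$ half-angle of a beam with $1/e^{2}$ intensity diameter $D$ at the waist is $\theta_{D}=2\lambda/(\pi D)$. A quick consistency check with the numbers above:

```python
import math

def gaussian_divergence_deg(wavelength_nm, spot_diameter_um):
    """Far-field 1/e^2 half-angle (degrees) of a Gaussian beam with
    1/e^2 intensity diameter D at the waist: theta_D = 2*lam/(pi*D)."""
    lam_um = wavelength_nm * 1e-3
    return math.degrees(2.0 * lam_um / (math.pi * spot_diameter_um))

print(round(gaussian_divergence_deg(1550, 60), 2))  # -> 0.94
```

So even this comparatively large spot carries a nearly one-degree spread of plane-wave components, which is what makes the angular sensitivity of the resonance consequential.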
Figure 2(b) shows the angular dependence for a 100-nm-thick crystal. In this case, the separation $\Delta$ between the primary and parasitic resonance is only a few nanometers, and the effect is profound over the full range of angles. One would not expect high reflectivity for any reasonably collimated beam. Figure 2(c) illustrates another cause of collimation-induced broadening Crozier et al. (2006) that can occur in geometries where $\Delta$ is large: $\lambda_{r}$ in general varies quadratically with $\theta$. The geometry of Fig. 2(c) is close to that of our devices in Fig. 1, implying this is likely the dominant mechanism in our system. Broadening has been observed at normal incidence for similarly thin Si${}_{3}$N${}_{4}$ crystals Bui et al. (2012); Norte et al. (2016), and in Ref. Bui et al. (2012) the resonance was furthermore found to split, broaden, and become shallower when tilting the crystal away from normal incidence, providing evidence for the influence of a second mode. III Design Considerations This motivates two basic PCR design goals: maximize the separation $\Delta$ from the parasitic resonances, and minimize the resonant wavelength’s sensitivity $\partial_{\theta}^{2}\lambda_{r}$ to incidence angle $\theta$. Figure 3 shows a summary of simulations for varied thickness $h$, hole diameter $d$, and lattice constant $a$, chosen so that the resonance wavelength $\lambda_{r}$ remains near a “design” wavelength $\lambda_{0}$. All values are scaled by $\lambda_{0}$ to facilitate application to other systems having a dielectric refractive index $n$ near $2$. For each thickness, the desired resonance can be achieved over a wide range of ($a$,$d$) combinations, as shown in (a). Figure 3(b) shows the detuning $\Delta$ of the nearest parasitic resonance, which can be made largest in the thickest devices (especially near $a$=$0.78\lambda_{0}$).
The angular sensitivity $\partial_{\theta}^{2}\lambda_{r}$ plotted in (c) shows a general tendency toward lower values for thicker devices, but is strongly influenced by parasitic modes whenever $\Delta$ approaches 0, leading to complex behavior. Interestingly, the sign of $\partial_{\theta}^{2}\lambda_{r}$ often flips as a function of $a$, implying the existence of an intermediate geometry in which all of these effects balance, thereby eliminating collimation broadening to lowest order. The three thickest devices exhibit a well-resolved global minimum in $\partial_{\theta}^{2}\lambda_{r}$ near a sign flip, as expected. Presumably the other geometries having sign flips also exhibit such minima, but they are not resolved at this level. This will remain the subject of future study, but we will note three caveats at the outset: (i) $\partial_{\theta}^{2}\lambda_{r}$ is particularly sensitive to the geometry near these points (especially in thin devices), which tightens their fabrication requirements, (ii) it will be challenging to precisely tune both $\Delta$ and $\lambda_{r}$ with existing fabrication techniques (including HF etching), and (iii) we expect higher-order $\theta$-dependencies to play a role for any reasonably collimated beam (even $\theta_{D}\sim 1^{\circ}$), and especially in the thinnest devices where $\partial_{\theta}^{2}\lambda_{r}$ is largest. While a small value of $\partial_{\theta}^{2}\lambda_{r}$ is certainly desirable, this quantity alone is not a useful figure of merit for achieving high reflectivity, since the impact on narrower resonances – characterized by a larger curvature $\partial_{\lambda}^{2}\mathcal{T}$ at $\lambda_{r}$; see Fig. 3(d) – is larger than for wider resonances. 
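This trade-off can be quantified by averaging a quadratic resonance model over the beam's angular spread. The sketch below does so numerically; the curvature values $A$, $B$, and the spread $\theta_{D}$ are purely illustrative assumptions (not fitted device parameters), and the weighted average recovers both a shifted minimum and a nonzero transmission floor:

```python
import numpy as np

# Quadratic model near resonance (illustrative numbers only):
#   T(lam, theta) = 0.5*A*(lam - lam_r(theta))^2, lam_r(theta) = 0.5*B*theta^2
A = 1.0e-3     # curvature d^2T/dlam^2 at resonance [1/nm^2]
B = 4.0        # angular sensitivity d^2(lam_r)/dtheta^2 [nm/deg^2]
theta_D = 1.0  # 1/e^2 angular spread of the beam [deg]

theta = np.linspace(-4 * theta_D, 4 * theta_D, 4001)
w = np.exp(-2 * theta**2 / theta_D**2)
w /= w.sum()                        # normalized Gaussian angular weights

lam = np.linspace(-2.0, 2.0, 8001)  # detuning from lam_r(0) [nm]
lam_r = 0.5 * B * theta**2
T_avg = (0.5 * A * (lam[:, None] - lam_r[None, :])**2 * w).sum(axis=1)

i = np.argmin(T_avg)
print(lam[i])    # shift of the minimum, ~ B*theta_D^2/8 = 0.5 nm
print(T_avg[i])  # transmission floor, ~ A*B^2*theta_D^4/64 = 2.5e-4
```

The extracted shift and floor match the lowest-order analytic expressions $B\theta_{D}^{2}/8$ and $AB^{2}\theta_{D}^{4}/64$, showing how a narrower resonance (larger $A$) or a more angle-sensitive one (larger $B$) raises the attainable transmission minimum.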
With this in mind, we note that, to lowest order, $\mathcal{T}\approx\frac{1}{2}(\partial^{2}_{\lambda}\mathcal{T})(\lambda-\lambda_{r})^{2}$ near resonance, where $\lambda_{r}$ varies with $\theta$ as $\approx\frac{1}{2}(\partial^{2}_{\theta}\lambda_{r})\theta^{2}$. Averaging this approximate $\mathcal{T}$ over all incident angles of the collimated beam (i.e. weighted by the Gaussian distribution $\propto e^{-2\theta^{2}/\theta_{D}^{2}}$) produces a shift in the minimum’s wavelength $\delta\lambda_{r}\approx\frac{1}{8}\left(\partial_{\theta}^{2}\lambda_{r}\right)\theta_{D}^{2}$ and a new minimum value $$\displaystyle\mathcal{T}_{\text{min}}$$ $$\displaystyle\approx$$ $$\displaystyle\frac{1}{64}\left(\partial_{\lambda}^{2}\mathcal{T}\right)\left(\partial_{\theta}^{2}\lambda_{r}\right)^{2}\theta_{D}^{4}$$ (1) $$\displaystyle=$$ $$\displaystyle\frac{1}{192}\left(\partial_{\theta}^{4}\mathcal{T}\right)\theta_{D}^{4}.$$ (2) (Note this treatment, inspired by the numerical approach in Ref. Crozier et al. (2006), is only valid for an infinite crystal and small $\theta_{D}$, such that the transmitted field from each incident plane wave comprises a discrete set of outbound plane waves that are orthogonal to those produced by the other incident waves.) A more reasonable figure of merit for these structures is therefore the prefactor $\frac{1}{192}\partial_{\theta}^{4}\mathcal{T}$ (the approximate value of which is plotted in Fig. 3(e)), which can be used to directly compare the relative performance of different geometries, and to roughly estimate $\mathcal{T}_{\text{min}}$ for sufficiently small $\theta_{D}$ (i.e. provided $\mathcal{T}_{\text{min}}\ll 1$). The thickest crystals simulated here (crystal “A” having $a\sim 0.78\lambda_{0}$) should be the most robust against these effects, with a collimation-limited $\mathcal{T}_{\text{min}}\lesssim 10^{-6}$ for a $\theta_{D}$=$1^{\circ}$ beam. Additionally, the broader resonances (Fig. 
3(d)) and $a$-dependence of $\partial_{\theta}^{4}\mathcal{T}$ (Fig. 3(e)) will further relax fabrication tolerances, while the added thickness reduces breakage during release (these and thicker structures may require low-stress nitride to fabricate, however, due to the stress-induced cracking in thick layers of LPCVD nitride on Si). Note that these plotted values represent approximate lowest-order dependences, and real devices will exhibit other nonidealities not included here: disorder, scattering, and absorption. Finally, we estimate the etch tuning rate $\partial_{h}\lambda_{r}$ near the “optimal” crystal parameters labeled A, B, C, D, and E in (a), finding values $\sim$ 0.6, 1.0, 1.3, 1.5, and 0.9, respectively. This quantity does not vary wildly with thickness, but it does suggest that etch tuning will also be marginally more controlled for the thickest optimal device. Away from these points, $\partial_{h}\lambda_{r}$ similarly varies by $\sim$ 50%. IV Summary and Discussion We have introduced a simple technique for iteratively tuning the resonance wavelength of a photonic crystal reflector, achieving a value within 0.15 nm (0.04 linewidths) of our target. Consistent with literature, we observe higher transmission than is predicted by simple plane wave simulations, and identify two fundamental limitations on reflectivity imposed by collimated light. Table 1 lists the parameters reported in previous work on SiN PCRs, which shows some of the expected trends in $\mathcal{T}(\lambda_{r})$ with $h$ and $\theta_{D}$, despite the varied differences in $\lambda_{r}$, $(a,d)$ combinations, and index (material composition). We observe a $\mathcal{T}(\lambda_{r})$ that is somewhat improved over a similarly thin structure (Ref. Bui et al. 
(2012)), but our hole diameter is also significantly larger, resulting in a wider resonance that is less sensitive to $\theta$, and our operating wavelength is $\sim 50$% longer, relaxing our fabrication tolerances and reducing absorption. A larger deviation from the trend is Ref. Crozier et al. (2006), but in that case the crystal parameters are quite far from the optimal point identified in Fig. 3, and they use a shorter wavelength on non-stoichiometric SiN, both of which lead to higher absorption. Within the context of optomechanics and force sensing, these results illustrate an important trade-off between mechanical mass and attainable reflectivity. In particular, it is the thinnest Si${}_{3}$N${}_{4}$ structures (i.e. well below $100$ nm) Reinhardt et al. (2016); Norte et al. (2016) that exhibit the lowest force noise, leading to an additional trade-off between reflectivity and mechanical performance, or requiring a more complicated structure with regions of different thickness Norte et al. (2016). Acknowledgments We thank Alireza H. Mesgar for initial help, Alex Bourassa for data acquisition help, Don Berry, Paul Blondé, Jun Li, Lino Eugene, and Mattieu Nannini for fabrication help, and Tina Müller, Max Ruf, and Abeer Barasheed for fruitful discussions. The group also gratefully acknowledges computational support from CLUMEQ and financial support from NSERC, FRQNT, the Alfred P. Sloan Foundation, CFI, INTRIQ, RQMP, CMC Microsystems, and the Centre for the Physics of Materials at McGill. References Joannopoulos et al. (1995) J. D. Joannopoulos, R. D. Meade,  and J. N. Winn, Photonic crystals : molding the flow of light (Princeton University Press, 1995) p. 137. Zhou et al. (2014) W. Zhou, D. Zhao, Y.-C. Shuai, H. Yang, S. Chuwongin, A. Chadha, J.-H. Seo, K. X. Wang, V. Liu, Z. Ma,  and S. Fan, “Progress in 2D photonic crystal Fano resonance photonics,” Progress in Quantum Electronics 38, 1–74 (2014). Suh et al. (2003) W. Suh, M. F. Yanik, O. Solgaard,  and S.
Fan, “Displacement-sensitive photonic crystal structures based on guided resonance in photonic crystal slabs,” Applied Physics Letters 82, 1999 (2003). Ho et al. (2015) C. P. Ho, P. Pitchappa, P. Kropelnicki, J. Wang, H. Cai, Y. Gu, and C. Lee, “Two-dimensional photonic-crystal-based Fabry–Perot etalon,” Optics Letters 40, 2743 (2015). Huang et al. (2007) M. C. Huang, Y. Zhou, and C. J. Chang-Hasnain, “A surface-emitting laser incorporating a high-index-contrast subwavelength grating,” Nature Photonics 1, 119–122 (2007). Yang et al. (2015) W. Yang, S. A. Gerke, K. W. Ng, Y. Rao, C. Chase, and C. J. Chang-Hasnain, “Laser optomechanics,” Scientific Reports 5, 13700 (2015). Yacomotti et al. (2006) A. M. Yacomotti, F. Raineri, G. Vecchi, P. Monnier, R. Raj, A. Levenson, B. Ben Bakir, C. Seassal, X. Letartre, P. Viktorovitch, L. Di Cioccio, and J. M. Fedeli, “All-optical bistable band-edge Bloch modes in a two-dimensional photonic crystal,” Applied Physics Letters 88, 231107 (2006). Shi et al. (2009) L. Shi, P. Pottier, M. Skorobogatiy, and Y.-A. Peter, “Tunable structures comprising two photonic crystal slabs – optical study in view of multi-analyte enhanced detection,” Optics Express 17, 10623 (2009). Magnusson et al. (2011) R. Magnusson, D. Wawro, S. Zimmerman, and Y. Ding, “Resonant photonic biosensors with polarization-based multiparametric discrimination in each channel,” Sensors (Basel, Switzerland) 11, 1476–88 (2011). Fan and Joannopoulos (2002) S. Fan and J. Joannopoulos, “Analysis of guided resonances in photonic crystal slabs,” Physical Review B 65, 235112 (2002). Lousse et al. (2004) V. Lousse, W. Suh, O. Kilic, S. Kim, O. Solgaard, and S. Fan, “Angular and polarization properties of a photonic crystal slab mirror,” Optics Express 12, 1575–1582 (2004). Kilic et al. (2004) O. Kilic, S. Kim, W. Suh, Y.-A. Peter, A. S. Sudbø, M. F. Yanik, S. Fan, and O.
Solgaard, “Photonic crystal slabs demonstrating strong broadband suppression of transmission in the presence of disorders,” Optics Letters 29, 2782 (2004). Aspelmeyer et al. (2014) M. Aspelmeyer, T. J. Kippenberg,  and F. Marquardt, “Cavity optomechanics,” Reviews of Modern Physics 86, 1391–1452 (2014). Antoni et al. (2011) T. Antoni, A. G. Kuhn, T. Briant, P.-F. Cohadon, A. Heidmann, R. Braive, A. Beveratos, I. Abram, L. L. Gratiet, I. Sagnes,  and I. Robert-Philip, “Deformable two-dimensional photonic crystal slab for cavity optomechanics,” Optics Letters 36, 3434 (2011). Antoni et al. (2012) T. Antoni, K. Makles, R. Braive, T. Briant, P.-F. Cohadon, I. Sagnes, I. Robert-Philip,  and A. Heidmann, “Nonlinear mechanics with suspended nanomembranes,” EPL (Europhysics Letters) 100, 68005 (2012). Makles et al. (2015) K. Makles, T. Antoni, A. G. Kuhn, S. Deleglise, T. Briant, P.-F. Cohadon, R. Braive, G. Beaudoin, L. Pinard, C. Michel, V. Dolique, R. Flaminio, G. Cagnoli, I. Robert-Philip,  and A. Heidmann, “2D photonic-crystal optomechanical nanoresonator,” arXiv:1410.6303 40 (2015), 10.1364/OL.40.000174. Bui et al. (2012) C. H. Bui, J. Zheng, S. W. Hoch, L. Y. T. Lee, J. G. E. Harris,  and C. Wei Wong, “High-reflectivity, high-Q micromechanical membranes via guided resonances for enhanced optomechanical coupling,” Applied Physics Letters 100, 21110 (2012). Kemiktarak et al. (2012a) U. Kemiktarak, M. Durand, M. Metcalfe,  and J. Lawall, “Cavity optomechanics with sub-wavelength grating mirrors,” New Journal of Physics 14 (2012a), 10.1088/1367-2630/14/12/125010. Kemiktarak et al. (2012b) U. Kemiktarak, M. Metcalfe, M. Durand,  and J. Lawall, “Mechanically compliant grating reflectors for optomechanics,” Applied Physics Letters 100, 061124 (2012b). Stambaugh et al. (2014) C. Stambaugh, H. Xu, U. Kemiktarak, J. Taylor,  and J. 
Lawall, “From membrane-in-the-middle to mirror-in-the-middle with a high-reflectivity sub-wavelength grating,” Annalen der Physik 2014, 201400142 (2014). Norte et al. (2016) R. Norte, J. Moura, and S. Groblacher, “Mechanical Resonators for Quantum Optomechanics Experiments at Room Temperature,” Physical Review Letters 116, 147202 (2016). Chen et al. (2016) X. Chen, C. Chardin, K. Makles, C. Caer, S. Chua, R. Braive, I. Robert-Philip, T. Briant, P.-F. Cohadon, A. Heidmann, T. Jacqmin, and S. Deleglise, “High-finesse Fabry-Perot cavities with bidimensional Si3N4 photonic-crystal slabs,” arXiv:1603.07200 (2016). Wilson et al. (2009) D. J. Wilson, C. A. Regal, S. B. Papp, and H. J. Kimble, “Cavity Optomechanics with Stoichiometric SiN Films,” Physical Review Letters 103, 207204 (2009). Sankey et al. (2010) J. C. Sankey, C. Yang, B. M. Zwickl, A. M. Jayich, and J. G. E. Harris, “Strong and tunable nonlinear optomechanical coupling in a low-loss system,” Nature Physics 6, 707–712 (2010). Reinhardt et al. (2016) C. Reinhardt, T. Muller, A. Bourassa, and J. C. Sankey, “Ultralow-Noise SiN Trampoline Resonators for Sensing and Optomechanics,” Physical Review X 6, 021001 (2016). Barclay et al. (2006) P. E. Barclay, K. Srinivasan, O. Painter, B. Lev, and H. Mabuchi, “Integration of fiber-coupled high-Q SiN${}_{x}$ microdisks with atom chips,” Applied Physics Letters 89 (2006), 10.1063/1.2356892. White et al. (2005) I. M. White, N. M. Hanumegowda, H. Oveys, and X. Fan, “Tuning whispering gallery modes in optical microspheres with chemical etching,” Optics Express 13, 10754–9 (2005). Henze et al. (2013) R. Henze, C. Pyrlik, A. Thies, J. M. Ward, A. Wicht, and O. Benson, “Fine-tuning of whispering gallery modes in on-chip silica microdisk resonators within a full spectral range,” Applied Physics Letters 102 (2013), 10.1063/1.4789755. Dalacu et al. (2005) D. Dalacu, S. Frederick, P. J. Poole, G. C. Aers, and R. L.
Williams, “Postfabrication fine-tuning of photonic crystal microcavities in InAs/InP quantum dot membranes,” Applied Physics Letters 87, 151107 (2005). Crozier et al. (2006) K. B. Crozier, V. Lousse, O. Kilic, S. Kim, S. Fan, and O. Solgaard, “Air-bridged photonic crystal slabs at visible and near-infrared wavelengths,” Physical Review B 73, 115126 (2006). Oskooi et al. (2010) A. F. Oskooi, D. Roundy, M. Ibanescu, P. Bermel, J. D. Joannopoulos, and S. G. Johnson, “Meep: A flexible free-software package for electromagnetic simulations by the FDTD method,” Computer Physics Communications 181, 687–702 (2010).
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data Gorka Abad Radboud University Ikerlan Research Centre    Oğuzhan Ersoy Radboud University    Stjepan Picek Radboud University    Aitor Urbieta Ikerlan Research Centre Abstract Deep neural networks (DNNs) have achieved excellent results in various tasks, including image and speech recognition. However, optimizing the performance of DNNs requires careful tuning of multiple hyperparameters and network parameters via training. High-performance DNNs utilize a large number of parameters, corresponding to high energy consumption during training. To address these limitations, researchers have developed spiking neural networks (SNNs), which are more energy-efficient and can process data in a biologically plausible manner, making them well-suited for tasks involving sensory data processing, i.e., neuromorphic data. Like DNNs, SNNs are vulnerable to various threats, such as adversarial examples and backdoor attacks. Yet, attacks and countermeasures for SNNs remain almost entirely unexplored. This paper investigates the application of backdoor attacks in SNNs using neuromorphic datasets and different triggers. More precisely, backdoor triggers in neuromorphic data can change their position and color, allowing a larger range of possibilities than common triggers in, e.g., the image domain. We propose different attacks achieving up to 100% attack success rate without noticeable clean accuracy degradation. We also evaluate the stealthiness of the attacks via the structural similarity metric, showing that our most powerful attacks are also stealthy. Finally, we adapt the state-of-the-art defenses from the image domain, demonstrating that they are not necessarily effective for neuromorphic data and can yield inaccurate results.
1 Introduction Deep neural networks (DNNs) have achieved top performance in machine learning tasks in different domains, like computer vision [26], speech recognition [19], and text generation [5]. One key aspect of DNNs that has contributed to their success is their ability to learn from large amounts of data and discover complex patterns. This is achieved through multiple layers of interconnected neurons. The connections between these neurons are weighted, and the weights are adjusted during training to minimize error and improve the model’s accuracy. DNNs have many hyperparameters that can be tuned to achieve top performance on a given task, but careful optimization of these hyperparameters is crucial. Training a well-performing DNN can be time- and energy-expensive, as it requires tuning an enormous number of parameters with large training data. For example, training the GPT-3 model consumed about 190 000 kWh of electricity [9]. These models’ increasing complexity and computational requirements have led researchers to explore alternative approaches, such as spiking neural networks (SNNs) [16, 43, 11, 13]. SNNs can significantly reduce the energy consumption of neural networks. For instance, Kundu et al. achieved better compute energy efficiency (up to $12.2\times$) compared to DNNs with a similar number of parameters [27]. In addition to their energy efficiency, SNNs have several other benefits. SNNs can be more robust to noise and perturbations, making them more reliable in real-world situations [29]. More precisely, data captured by a Dynamic Vision Sensor (DVS) [41]—which SNNs can process—encodes per-pixel brightness changes asynchronously, instead of the absolute brightness at a constant rate, as in images. Compared to standard cameras, DVS cameras have low power consumption and capture low-latency data, i.e., neuromorphic data, which also has high temporal resolution [8, 32].
Thus, SNNs can process data in a more biologically plausible manner, such as neuromorphic data (see Figure 1), making them well-suited for tasks involving sensory data processing. For example, computer vision methods have achieved top performance in autonomous driving. A camera (or several) placed in a car captures the surroundings, which are processed by a DNN. The DNN’s decisions allow the car to drive autonomously. Recent investigations suggested using event-based neuromorphic vision to achieve the same goal [8, 46]. Event-based data allows solving challenging scenarios where regular cameras cannot perform well [36, 56], such as high-speed scenes and scenarios requiring low latency and low power consumption. Moreover, SNNs are used in domains like medical diagnosis [15] and computer vision [52, 21]. SNNs are also implemented on neurosynaptic system platforms like TrueNorth [2]. Finally, while DNNs are often considered to perform better than SNNs in terms of accuracy, recent results show this performance gap is shrinking or even disappearing [44]. Neural networks, including DNNs and SNNs, are generally subject to privacy and security attacks, e.g., adversarial examples [42], inference attacks [6], model stealing [24], and backdoor attacks [20]. Backdoor attacks (here, we only investigate data poisoning-based backdoor attacks) are a threat where malicious samples containing a trigger are included in the dataset at training time. After training, a backdoored model correctly performs the main task at test time while achieving misclassification when the input contains the trigger. Backdoor attacks on DNNs are well studied, with several improvements such as stealthy triggers [35, 55] and dynamic triggers unique per data sample [37, 39]. Moreover, multiple works consider backdoor defenses that try to detect and prevent backdoor attacks by inspecting DNN models [47, 33].
The existing backdoor attacks and defenses on DNNs do not directly apply to SNNs because of the different structure of SNNs and their usage of neuromorphic data. Unlike DNNs, SNNs do not have activation functions but spiking neurons, which could weaken or even disable existing attacks and defenses in DNNs that rely on activations, as discussed in Section 4. Additionally, the time-encoded behavior of neuromorphic data allows a broader range of possibilities when generating input perturbations. At the same time, the captured data is encoded in a much smaller pixel space (2-bit pixel space) than regular images, which can take up to 256 values per channel. The challenges regarding the application of backdoor attacks in SNNs are detailed in Section 3. To the best of our knowledge, there is only one work exploring backdoor attacks on SNNs [1]. The triggers used in that attack are static and moving squares, which are not stealthy and are easily visible by human inspection. Furthermore, their attack setup is limited, only considering three different poisoning rates and a single trigger size. Finally, the authors did not consider any backdoor defense mechanisms. This paper thoroughly investigates the viability of backdoor attacks in SNNs, the stealthiness of the triggers, and their robustness against defenses. First, we improve the performance of the static and moving triggers proposed in [1]. Next, we propose two new trigger methods: smart and dynamic triggers. Our novel methods significantly outperform the existing backdoor attacks in SNNs. Our main contributions can be summarized as follows: • We explore different backdoor injecting methods on SNNs, all of which achieve up to 99% (or higher) accuracy in both the main and backdoor tasks. We thoroughly explored static and moving triggers, which led to the development of a smart attack that selects the best location and color for the trigger.
• Motivated by the knowledge gained from the smart trigger, we present, to the best of our knowledge, the first dynamic backdoor attack on neuromorphic datasets, which is invisible to human inspection while reaching top performance. • We analyze the stealthiness of backdoor attacks with the structural similarity index (SSIM) metric and show that our dynamic trigger achieves the best stealthiness, up to 99.9% SSIM, while other methods, like static or moving triggers, achieve 98.5% SSIM. • We adapt state-of-the-art defenses from the image domain to the domain of SNNs and neuromorphic data and compare the effectiveness of each attack method. We observe that existing (adapted) defenses can be ineffective against backdoor attacks in SNNs. Still, we show that retraining the model for a few epochs on clean data helps reduce the backdoor effect while keeping the clean accuracy high. • We evaluate four trigger methods and three datasets. For each method, we evaluate (i) the success ratio of the attack, (ii) the degradation of the clean model accuracy, (iii) the stealthiness of the trigger, and (iv) the robustness of the attack against state-of-the-art backdoor defenses. We share our code to allow the reproducibility of our results (the code will become publicly available after acceptance). Moreover, we show the dynamic motion and the stealthiness of our triggers in a demo in the repository. 2 Background 2.1 Backdoor Attacks Backdoor attacks modify the behavior of a model during training so that, at test time, it behaves abnormally [20]. A backdoored model misclassifies inputs containing a trigger while behaving normally on clean inputs. In the image domain, the trigger can be a pixel pattern in a specific part of the image. When the algorithm is trained on a mixture of clean and backdoor data, the model learns to misclassify only the inputs containing the pixel pattern, i.e., the trigger, to a particular target label.
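As a concrete illustration of this poisoning procedure, the sketch below stamps a square trigger on a fraction $\epsilon$ of the training set and relabels those samples to the target class. All function names, the trigger shape, and the toy data are our own illustrative assumptions, not the implementation evaluated in this paper.

```python
import numpy as np

def stamp_trigger(x, size=3, value=1.0):
    """Place a square trigger in the bottom-right corner (illustrative)."""
    x = x.copy()
    x[-size:, -size:] = value
    return x

def poison_dataset(xs, ys, target_label, eps, rng):
    """Replace a fraction eps of the clean set with triggered, relabeled copies."""
    n = len(xs)
    m = int(eps * n)                      # eps = m / n, with m << n
    idx = rng.choice(n, size=m, replace=False)
    xs, ys = xs.copy(), ys.copy()
    for i in idx:
        xs[i] = stamp_trigger(xs[i])      # x-hat: sample carrying the trigger
        ys[i] = target_label              # y-hat: attacker-chosen label
    return xs, ys

# Toy example: 100 blank 8x8 "images" with labels 0..9, poisoned at eps = 0.1.
rng = np.random.default_rng(0)
xs = np.zeros((100, 8, 8))
ys = np.arange(100) % 10
pxs, pys = poison_dataset(xs, ys, target_label=7, eps=0.1, rng=rng)
```

Training then proceeds on the mixed set, which is what makes the model associate the trigger pattern with the target label.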
Formally, an algorithm $f_{\theta}(\cdot)$ is trained on a mixed dataset containing clean and backdoor data, whose rate is controlled by $\epsilon=\frac{m}{n}$, where $n$ is the size of the clean dataset, $m$ is the size of the backdoor dataset, and $m\ll n$. The backdoor dataset is composed of backdoor samples $\{(\hat{\textbf{x}},\hat{y})\}^{m}\in\mathcal{D}_{bk}$, where $\hat{\textbf{x}}$ is the sample containing the trigger and $\hat{y}$ is the target label. For a clean dataset of size $n$, the training procedure aims to find $\theta$ by minimizing the loss function $\mathcal{L}$: $$\theta^{\prime}=\operatorname*{argmin}_{\theta}\sum_{i=0}^{n}\mathcal{L}(f_{\theta}(\textbf{x}_{i}),y_{i}),$$ (1) where x is the input and $y$ is the label. During training with backdoor data, Equation 1 is modified to include the backdoor behavior: $$\theta^{\prime}=\operatorname*{argmin}_{\theta}\sum_{i=0}^{n}\mathcal{L}(f_{\theta}(\textbf{x}_{i}),y_{i})+\sum_{j=0}^{m}\mathcal{L}(f_{\theta}(\hat{\textbf{x}}_{j}),\hat{y}_{j}).$$ 2.2 Spiking Neural Networks & Neuromorphic Data To explain the differences between DNNs and SNNs, let us consider an example with a multilayer perceptron (MLP), which consists of layers containing neurons. The neurons in a layer are connected to all the neurons in the next layer. Each neuron is defined by its weight and bias. Therefore, an input is transformed by each neuron at different layers. Often the bias is set to 1, so the incoming value always propagates to the next layer, i.e., all neurons send a value to the next layer. The architecture described so far is linear, which limits what it can learn. However, including activation functions between neurons converts the algorithm into a piece-wise linear function. Commonly, the Rectified Linear Unit (ReLU) activation function is used: $$h(x)=\begin{cases}x,&\text{if }x\geq 0\\ 0,&\text{otherwise}\end{cases}$$ In SNNs, instead of neurons, spiking neurons are used.
SNNs have the same structure as DNNs; however, a spiking neuron must be excited beyond a threshold before it fires and lets its inputs go through. Incoming inputs excite the spiking neuron, and this excitation accumulates over time. Once the threshold $(\Theta)$ is reached, the spiking neuron fires, the transformed input is passed to the next spiking neuron(s), and the excitation value is reset; otherwise, the neuron’s activation decays over time. Therefore, SNNs are non-linear by design due to the fire-and-reset process: $$h(x)=\begin{cases}1,&\text{if }x\geq\Theta\\ 0,&\text{otherwise.}\end{cases}$$ SNNs commonly work with neuromorphic data, a time-encoded representation of the illumination changes of an object/subject captured by a DVS camera. A DVS camera captures a flow of spiking events that dynamically represents the changing visual scene. An advantage of DVS cameras is that they provide an almost instantaneous representation of the visual scene in a very compressed form, which facilitates subsequent data processing. Precisely, neuromorphic data is encoded in $T$ frames and $p$ polarities, i.e., ON polarity and OFF polarity, which represent the changes in the visual scene. As an example, we divide the data points into $T=16$ frames, where each polarity generates a color, as seen in Figure 1. 3 Backdoor Attacks to SNNs 3.1 Threat Model We consider a white-box attack where the attacker has access to the entire training procedure and the training dataset. We assume a dirty-label backdoor where the attacker can alter the training samples and their labels. Even though this threat model assumes a stronger attacker than its counterpart, i.e., the clean-label attack [45], it is the most common in the literature [20, 37, 39]. Finally, we target only scratch-trained models. As a use case, we assume that a client wants to train an SNN on an owned dataset but does not have the resources, e.g., GPU cards, to train it.
Therefore, the client outsources the training to a third party that provides on-cloud training services, such as Google Cloud or Amazon Web Services, by sharing the model architecture and the training dataset. We assume that the attacker is the third-party provider, thus having access to the training procedure, the model, and the dataset. The attacker then injects the backdoor during training and shares the model with the client. The client can check the model’s performance using a holdout dataset. 3.2 Challenges in SNNs SNNs have a different network structure than DNNs and utilize neuromorphic datasets. However, the information propagation approach is the key difference between “classical” DNNs and SNNs. SNNs do not work with continuously changing time values (like DNNs) but operate with discrete events that occur at certain points in time. Thus, the training is different, making attacks that happen in the training phase (potentially) challenging to deploy. For instance, the triggers used in the image domain are encoded with 256 possible values per channel, which gives many color combinations to choose from. In neuromorphic data, however, the trigger space is reduced to 4 possibilities encoded by the two different polarities. Furthermore, in the image domain, the trigger is commonly static, i.e., no time-encoded data is used. In neuromorphic data, we encode the trigger using the time window, allowing us to create triggers that change location through time. Next, we list the main challenges: C.1: Designing and optimizing the trigger. Because there are several frames per data point, the triggers can vary their position, shape, polarities, etc., through time. The large set of possibilities leads to the following questions: How can we efficiently select the trigger that maximizes the attack performance while having minimal influence on the clean accuracy? How can we decide on the most important parameters for a neuromorphic backdoor trigger in SNNs?
C.2: Generating stealthy triggers. Since each pixel of a neuromorphic trigger can only take four different values, smoothing the trigger over the clean data is challenging compared to images with, e.g., $3\times 256$ values. This limitation yields the following questions: How can we design a backdoor trigger that exploits the time-encoded data to generate an invisible trigger unique per input sample and frame? How will the chosen parameters affect the trigger stealthiness? C.3: Backdoor defenses. Since SNNs do not have activation functions, which are commonly utilized in backdoor defenses, the state-of-the-art defenses may not apply to SNNs. Moreover, the defenses are commonly designed for images, whereas neuromorphic data consists of several frames per data point. These differences lead to the following questions: How successful are the existing defenses against SNNs? How can we adapt the defenses for datasets with multiple frames? C.4: Assessing stealthiness. It is non-trivial to assess the stealthiness of a backdoor trigger via a subjective human perspective, which leads to the following question: How can we objectively assess the stealthiness of a trigger for neuromorphic data? Knowing the limitations in color and the flexibility of changes through time, we propose different techniques for injecting a backdoor in SNNs. With this information, we improve two attacks: static and moving backdoors, and we propose two novel attacks: smart and dynamic backdoors. 3.3 Static Backdoor Backdoor attacks in SNNs or on neuromorphic data were not explored before the work of Abad et al. [1]. Inspired by backdoor attacks in the image domain [20], the authors replicated the square trigger of the image domain in neuromorphic data. By completely discarding the time-encoded advantages of neuromorphic data, the authors included the same trigger (same position and polarity) in all the frames, thus making a static trigger.
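A static trigger of this kind can be sketched in a few lines: the same patch, at a fixed position and polarity, is written into every frame of a $(T, P, H, W)$ sample. The function and parameter names below are ours, for illustration only.

```python
import numpy as np

def static_trigger(x, polarity, size, loc):
    """Stamp the same square trigger into every frame of a sample.

    x: neuromorphic sample of shape (T, P, H, W); polarity indexes the
    channel; size is the patch side length in pixels; loc is the
    top-left corner (row, col) of the patch.
    """
    x = x.copy()
    r, c = loc
    x[:, :, r:r+size, c:c+size] = 0          # clear both polarity channels
    x[:, polarity, r:r+size, c:c+size] = 1   # identical patch in all T frames
    return x

# Example: a 3x3 trigger in the top-right corner of a 16-frame, 34x34 sample.
x = np.zeros((16, 2, 34, 34))
xb = static_trigger(x, polarity=1, size=3, loc=(0, 31))
```

Because the patch is identical across frames, the trigger appears as a motionless square, which is what makes this baseline easy to spot by human inspection.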
In this section, we also investigate this trigger type as a baseline for subsequent ones, thoroughly investigating the trigger position, polarity, and size in a wide range of cases. We follow the same intuition of a pixel-squared trigger of a given color, which for neuromorphic data is set by the polarity. The data samples contain two polarity values, either ON polarity or OFF polarity, corresponding to black and light blue. However, when pixels of different polarities overlap, another two color polarities are generated, i.e., dark blue and green. The polarity $p$ in neuromorphic datasets is a two-bit discrete value, creating up to four different combinations; we rename the polarities for simplicity to $p_{0}$, $p_{1}$, $p_{2}$, and $p_{3}$. Thus, the trigger gets a different color for different polarity combinations, i.e., black, dark blue, green, or light blue. The trigger can be placed in a specific location of the input: top-right, middle, bottom-left, random, or any other desired location $l$; see Figure 2 as an example. For our attacks, we also consider the trigger size $s$, defined as a percentage of the input size. Still, the input samples are divided into $T$ frames, so the trigger $k$ is replicated for each frame and sample, i.e., the trigger does not change location. Consequently, it is static. We consider all discussed parameters in the backdoor creation function $\mathcal{A}(\textbf{x},p,s,l)$ for creating a set of backdoor samples $\mathcal{D}_{bk}:\hat{\textbf{x}}\in\mathcal{D}_{bk}$ containing the trigger $k$. By controlling the value $\epsilon=\frac{m}{n}$ ($m\ll n$), with $n$ the size of $\mathcal{D}_{clean}$ and $m$ the size of $\mathcal{D}_{bk}$, the attacker controls the amount of backdoored data during training. 3.4 Moving Backdoor As previously seen, static backdoors replicate the trigger from backdoor triggers in the image domain.
However, a unique characteristic of neuromorphic data allows the attacker to develop a better version of the trigger. To this end, moving triggers inject a trigger per frame in different locations, exploiting the time-encoded nature of neuromorphic data. The nature of neuromorphic data is driven by polarity, i.e., movement, which contradicts the static behavior of the naïve static attack. Driven by this discrepancy and the aim of creating a stealthier attack that cannot easily be detected by human inspection (more about attack stealthiness is in Section 5), we consider a more robust approach, named the moving backdoor, improving previous work [1]. The moving backdoor primarily leverages the “motion” nature of neuromorphic datasets for creating triggers that move along the input. Precisely, for a given polarity $p$, a location $l$, and size $s$, the trigger $k$ smoothly changes from frame to frame, creating a moving effect. Formally, the backdoor creation function takes the parameters mentioned above, $\mathcal{A}(\textbf{x},p,s,l,T)$, for creating a backdoor set of inputs $\mathcal{D}_{bk}$, where $T$ is the total number of frames into which the input is divided. The backdoor creation function considers the number of frames $T$ such that, for each frame, $\mathcal{A}(\cdot)$ calculates a location at $t+1\in T$ close to that of the previous frame $t$. This allows the trigger to simulate a smooth movement in the input space. Unlike the static trigger, the moving trigger can be placed on top of the input “activity area” for better stealthiness. Additional information about stealthiness can be found in Section 5. In our investigation, and contrary to [1], we consider a wide experimental setup. Additionally, we analytically measure and compare the stealthiness of static and moving backdoors. Finally, we also compare these attacks against the implemented defenses.
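A minimal sketch of such a moving trigger, in which the patch location takes a small random step between consecutive frames so that it appears to drift smoothly (our own illustrative code, not the paper's implementation):

```python
import numpy as np

def moving_trigger(x, polarity, size, start, rng, max_step=1):
    """Stamp a trigger whose location drifts slightly from frame to frame.

    x: sample of shape (T, P, H, W). Each frame's patch location is at
    most max_step pixels away from the previous frame's, simulating the
    smooth motion described above.
    """
    T, P, H, W = x.shape
    x = x.copy()
    r, c = start
    for t in range(T):
        x[t, polarity, r:r+size, c:c+size] = 1
        # next location stays close to the current one, clipped to the frame
        r = int(np.clip(r + rng.integers(-max_step, max_step + 1), 0, H - size))
        c = int(np.clip(c + rng.integers(-max_step, max_step + 1), 0, W - size))
    return x

rng = np.random.default_rng(0)
xb = moving_trigger(np.zeros((16, 2, 34, 34)), polarity=0, size=3,
                    start=(16, 16), rng=rng)
```

Each frame carries exactly one patch, so the trigger blends into the event-driven motion of the scene rather than standing still against it.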
3.5 Smart Backdoor So far, the proposed techniques, i.e., static and moving backdoors, inject the backdoor in the model correctly, even improving the attack stealthiness in the latter case. With the smart backdoor approach, we aim to optimize the backdoor performance, simplicity, stealthiness, and understanding by removing two hyperparameters, namely the polarity $p$ and the trigger location $l$. For a better understanding of the effects of the trigger location, we split the image by drawing $n$ vertical and horizontal lines (see Figure 1(e)). These divide the image into $(n+1)^{2}$ chunks, which we call masks. Note that a larger $n$ value creates more masks, allowing the attacker to have more control over the “optimal” spot for trigger placement. We discuss its effectiveness in Section 4. The smart backdoor attack leverages the input’s polarity changes to find the most active mask. We define the mask activation as the sum of all the polarity changes happening in a mask for all the frames, excluding the polarity $p_{0}$, which represents no movement, i.e., the black color or the background. This way, the smart backdoor finds the most active mask in the input sample. Formally, given a set of masks $\mathcal{S}_{mask}$, the most active mask is found by $$v^{\prime}=\operatorname*{argmax}_{v\in\mathcal{S}_{mask}}\sum_{i=0}^{(n+1)^{2}}p^{i}_{1}+p^{i}_{2}+p^{i}_{3},$$ where $p_{1},p_{2},$ and $p_{3}$ are the different polarities. For example, in Figure 1(e), $n=2$ lines horizontally and vertically split the image into nine masks. The smart attack decides which mask is the most active, the “2,c” mask in this case. (We also investigate the effect of injecting the trigger in the least active area in Section 4.) Once the location is chosen, the smart backdoor attack also selects the best polarity for the trigger, backdoor performance-wise, i.e., the least used polarity in the mask.
Formally, the polarity $p^{\prime}$ is selected as $$p^{\prime}=\operatorname*{argmin}_{p\in\llbracket 0,3\rrbracket}\sum_{i=0}^{3}v(p_{i}).$$ Therefore, $p^{\prime}$ is used as the polarity of the trigger $k$, which is injected in $v^{\prime}$, randomly and smoothly moving around the mask for all the frames. Note that the trigger polarity $p^{\prime}$ and the mask $v^{\prime}$ are calculated over the poisoned dataset $m$, i.e., all the poisoned samples have the same trigger location and polarity. Formally, the smart backdoor creation function is defined as $\mathcal{A}(\textbf{x},p^{\prime},v^{\prime},T)$, generating a set of moving triggers that are combined with the clean samples to create a set of poisoned samples $\mathcal{D}_{bk}$. Additionally, we investigate the effect of injecting the trigger in the least active masks, i.e., $$v^{\prime}=\operatorname*{argmin}_{v\in\mathcal{S}_{mask}}\sum_{i=0}^{(n+1)^{2}}p^{i}_{1}+p^{i}_{2}+p^{i}_{3},$$ and we consider the usage of the most used polarity in the mask, defined as $$p^{\prime}=\operatorname*{argmax}_{p\in\llbracket 0,3\rrbracket}\sum_{i=0}^{3}v(p_{i}).$$ 3.6 Dynamic Backdoor Having explored how to inject a backdoor in SNNs using static triggers, exploiting the time-encoded nature of neuromorphic data with moving backdoors, and optimizing the trigger polarity and location with the smart trigger, we now propose a stealthy, invisible, and dynamic trigger. More precisely, motivated by dynamic backdoors in the image domain [37, 10], we investigate dynamic moving backdoors where the triggers are invisible and unique for each image and frame, i.e., the trigger changes from frame to frame. To achieve this, we use a spiking autoencoder to generate optimal noise, as large as the image, which maximizes the backdoor performance, maintains clean accuracy, and is invisible. More precisely, one of the weakest points of the previous backdoor triggers is that they are detectable under human inspection.
Therefore, we aim to create an invisible trigger. For example, autoencoders are used for denoising tasks, where we would usually require clean (denoised) and noisy versions of the image to train the autoencoder, i.e., the autoencoder is trained on image pairs. However, we do not have the clean image and trigger pair to train the autoencoder for our attack. If we had the trigger, there would be no need for the autoencoder. Therefore, to fulfill these requirements, we must train the model and the autoencoder simultaneously to make the autoencoder generate a trigger unique to each sample. Contrary to previous work [10], we do not need a fine-tuning phase to achieve a successful backdoor model. Intuitively, the dynamic backdoor is designed as follows (see Figure 3). At first, we generate the perturbation by passing a clean image to the spiking autoencoder $g(\cdot):\delta=g(\textbf{x})$. The perturbation is then added to the clean image to construct a backdoor image $\hat{\textbf{x}}=\textbf{x}+\delta$. However, this naïve approach would saturate x with $\delta$, which makes the trigger visible. Thus, we project the perturbation onto an $l_{\infty}$-ball of a given budget $\gamma:\|g(\textbf{x})\|_{\infty}\leq\gamma$. Then, $g(\cdot)$ is updated aiming to maximize the backdoor accuracy of $f(\cdot)$; thus, during training, $g(\cdot)$ optimizes the parameters $\zeta$ that minimize the given loss function: $$\zeta^{\prime}=\operatorname*{argmin}_{\zeta}\sum_{i=0}^{n}\mathcal{L}(f_{\theta}(g_{\zeta}({\textbf{x}}_{i})+\textbf{x}_{i}),\hat{y}_{i}),\\ s.t.\ \|g_{\zeta}(\textbf{x})\|_{\infty}\leq\gamma\quad\forall\;\textbf{x},$$ (2) where $\hat{y}$ is the target label, $n$ is the length of the dataset, and $\mathcal{L}$ is a loss function, mean squared error in our case.
For training $f(\cdot)$, the parameters $\theta$ are updated by minimizing $$\theta^{\prime}=\operatorname*{argmin}_{\theta}\sum_{i=0}^{n}\alpha\mathcal{L}(f_{\theta}(\textbf{x}_{i}),y_{i})+\\ (1-\alpha)\mathcal{L}(f_{\theta}(g_{\zeta}({\textbf{x}}_{i})+\textbf{x}_{i}),\hat{y}_{i}),\\ s.t.\ \|g_{\zeta}(\textbf{x})\|_{\infty}\leq\gamma\quad\forall\;\textbf{x},$$ (3) where $\alpha$ controls the trade-off between the clean and the backdoor performance (an $\alpha$ of 1 is equivalent to training $f(\cdot)$ only with clean data), and $\gamma$ controls the visibility of the trigger. We discuss the influence of $\alpha$ and $\gamma$ in Section 4. 4 Evaluation Datasets We use three datasets: N-MNIST [38], CIFAR10-DVS [30], and DVS128 Gesture [3]. We use N-MNIST and CIFAR10-DVS because their non-neuromorphic versions are the most common benchmarking datasets in computer vision for security/privacy in ML. The DVS128 Gesture dataset is a “fully neuromorphic” dataset created for SNN tasks. N-MNIST is a spiking version of MNIST [28], containing 60 000 training and 10 000 test samples of size $34\times 34$. An ATIS sensor captured the dataset across the 10 MNIST digits shown on an LCD monitor. The CIFAR10-DVS dataset is a spiking version of the CIFAR10 [25] dataset, containing 10 000 $128\times 128$ samples (1 000 per class) across 10 classes. Lastly, the DVS128 Gesture dataset collects real-time motion captures from 29 subjects making 11 different hand gestures under three illumination conditions, creating a total of 1 176 $128\times 128$ training samples and 288 test samples. 
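Per batch, the objective in Eq. (3) is a weighted sum of the clean-task and backdoor-task losses. The sketch below illustrates this with MSE as $\mathcal{L}$; `joint_loss` and its argument names are our illustrative assumptions.

```python
import numpy as np

def mse(pred, target):
    """Mean squared error, the loss L used in the paper."""
    return float(np.mean((pred - target) ** 2))

def joint_loss(pred_clean, y, pred_backdoor, y_target, alpha):
    """Eq. (3) for one batch (sketch): alpha weights the clean task and
    (1 - alpha) the backdoor task; alpha = 1 is clean-only training."""
    return alpha * mse(pred_clean, y) + (1 - alpha) * mse(pred_backdoor, y_target)
```

Choosing $\alpha$ close to 1 preserves clean accuracy at the cost of a weaker backdoor signal, which is exactly the trade-off studied in Section 4.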
All dataset samples have shape $T\times P\times H\times W$, where $T$ is the number of time steps (we set $T=16$), $P$ is the polarity, $H$ is the height, and $W$ is the width. Network Architectures We consider three network architectures for the victim classifiers, as used in related works [13]. The N-MNIST network comprises a single convolutional layer and a fully connected layer. For the CIFAR10-DVS dataset, the network contains two convolutional layers followed by batch normalization and max pooling layers. Then two fully connected layers with dropout are added, and lastly, a voting layer of size ten—for improving the classification robustness [13]—is incorporated. Finally, for the DVS128 Gesture dataset, five convolutional layers with batch normalization and max pooling, two fully connected layers with dropout, and a voting layer compose the network. The spiking autoencoder for the dynamic attack has four convolutional layers with batch normalization, four deconvolutional layers with batch normalization, and tanh as the activation function for the DVS128 Gesture and CIFAR10-DVS datasets. For N-MNIST, we use two convolutional and two deconvolutional layers with batch normalization and tanh as the activation function, which is the common structure for an autoencoder [18]. Default Training Settings For training, we set a default learning rate (LR) of 0.001, mean squared error (MSE) as the loss function, and Adam as the optimizer, and we split the neuromorphic datasets into $T=16$ frames using the SpikingJelly framework [12]. For the N-MNIST dataset, we achieve a (clean) accuracy of 99% on a holdout test set in 10 epochs. For CIFAR10-DVS, we achieve 68% accuracy after 28 epochs, and for the DVS128 Gesture dataset, 93% accuracy after 64 epochs. The results are aligned with the state-of-the-art results [40]. A summary can be found in Table 1. 
4.1 Experimental Results In this section, we provide the results for the four different attacks, emphasizing their strong and weak points. We use the same training settings as in Section 4. We evaluate the attacks with the commonly used metrics: • Attack success rate (ASR) measures the backdoor performance of the model on a holdout, fully backdoored dataset. • Model utility or clean accuracy is the performance of the model on a holdout clean dataset. • Clean accuracy degradation is the accuracy drop (in percentage) between the clean and backdoored models. It is calculated as $\frac{V_{2}-V_{1}}{V_{1}}\times 100$, where $V_{1}$ is the clean baseline accuracy, and $V_{2}$ is the clean accuracy after the attack. 4.1.1 Static Backdoor To first evaluate the viability of backdoor attacks in SNNs, we explore the basic BadNets [20] approach by placing a static trigger in different locations of the input space, with various $\epsilon$ values, trigger sizes, and polarities. We test the static attack with $\epsilon$ values of 0.001, 0.005, 0.01, 0.05, and 0.1. We set the trigger sizes to 1% and 10% of the input image size. Lastly, we experiment with three trigger locations (bottom-right, middle, and top-left) and four different polarities. Our results show that static backdoors require a trigger size as big as 10% of the input size to inject the backdoor behavior in complex datasets like CIFAR10-DVS or DVS128 Gesture. When the trigger is 1% of the input size, the backdoor is only injected in N-MNIST when the polarity is different from 0. However, when the trigger is in the middle, we observe that $p=0$ gets up to 100% ASR. This is because the data is centered; the black trigger thus sits on top of the image content, creating contrast that allows the model to distinguish the trigger. In subsequent sections, we further investigate the importance of injecting the trigger in the most or least important location. 
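The three metrics above can be sketched directly; the function names are ours, and labels are assumed to be integer class ids.

```python
def attack_success_rate(backdoor_preds, target_label):
    """Fraction of fully backdoored holdout samples classified as the target."""
    return sum(p == target_label for p in backdoor_preds) / len(backdoor_preds)

def clean_accuracy_degradation(v1, v2):
    """Percentage accuracy change from clean baseline v1 to post-attack v2,
    per the formula (V2 - V1) / V1 * 100; negative values mean degradation."""
    return (v2 - v1) / v1 * 100
```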
Increasing the trigger size makes the backdoor achieve an excellent ASR (up to 100%) when the trigger is placed in the corners. See 7(a), 7(b), and 7(c) for the results of bottom-right, middle, and top-left placed triggers. Since the DVS128 Gesture dataset is small, the $\epsilon$ value drastically affects ASR. When $\epsilon=0.01$, only a single sample will contain the trigger, which is insufficient to inject the backdoor when the trigger size is small. We further experiment with a larger trigger size, i.e., 0.3, achieving 99% ASR with $\epsilon=0.01$, in the top-left corner, using the background polarity. CIFAR10-DVS achieves 100% ASR in all the settings when the trigger size and the poisoning rate are 0.1. CIFAR10-DVS is the only dataset that achieves 100% ASR in the bottom right with polarity 0. This is caused by the dataset itself, which is noisy; thus, the black trigger can contrast with the background. Regarding the clean accuracy degradation, we notice a slight degradation in most cases relative to the clean accuracy baseline. See Figure 10, Figure 11, and Figure 12 for the clean accuracy degradation results of bottom-right, middle, and top-left placed triggers. N-MNIST does not show any significant degradation, while DVS128 Gesture and CIFAR10-DVS are more prone to degrade the main task, by up to 10%. Overall, static backdoors in SNNs show excellent performance (due to detailed experimentation, even better than [1]). However, they can be easily detected by human inspection (or by automated tools). Specifically, placing a static trigger in a moving input is unnatural, and it could be detected by checking whether a part of the image is not moving or by inspecting changes in polarity between pixels. We address this limitation in subsequent sections. 4.1.2 Moving Backdoor We investigate the effect of moving triggers to overcome the stationary behavior of static backdoor attacks. 
Moving backdoors change the trigger position per frame, moving horizontally in a constant loop. We experiment with the same settings as for static backdoors. However, the trigger location varies in time, moving horizontally from top-left, middle, and bottom-right initial locations. The trigger shifts by two pixels every frame; thus, the trigger position changes 16 times in our experiments. We observe that moving the trigger overcomes the limitation of static triggers placed on top of the image action. Since the trigger is moving, it is not always on top of the active area, thus allowing the model to capture both clean and backdoor features. Interestingly, as seen in 8(a) and contrary to the static backdoor (see 7(a)), triggers in the bottom-right corner with background polarity do not work with complex datasets because they merge better with the image, making it impossible for the model to recognize them. However, in N-MNIST, we can achieve 100% ASR with $p\neq 0$ and large $\epsilon$. Moreover, triggers with background polarity in the bottom-right position do not inject the backdoor successfully for the DVS128 Gesture dataset, contrary to static backdoors. This effect is intuitively explained by moving backdoors being more difficult to inject than static ones: the model has to find a more complex relation between the trigger, samples, and label, thus requiring larger datasets. We observe the opposite behavior with triggers in the top-left corner (see 8(c)). We investigate the data samples and conclude that images are usually centered; thus, the main action of the image is also in the middle of the image. In the case of DVS128 Gesture or CIFAR10-DVS, the action is also contained in some corners of the image. From here, we can intuitively conclude that injecting the trigger in an active or inactive area of the image could enable or disable the backdoor effect. 
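A moving trigger of this kind can be sketched as stamping a small square at a horizontally shifting position per frame; `apply_moving_trigger`, the wrap-around rule, and the binary event tensor are our illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def apply_moving_trigger(frames, size=2, polarity=1, row=0, col=0, step=2):
    """Stamp a square trigger of the given polarity that shifts `step`
    pixels horizontally per frame, wrapping around the frame width
    (illustrative sketch of the moving backdoor)."""
    out = frames.copy()
    T, P, H, W = out.shape
    for t in range(T):
        c = (col + t * step) % (W - size + 1)       # horizontal position at frame t
        out[t, :, row:row + size, c:c + size] = 0   # clear all polarities there
        out[t, polarity, row:row + size, c:c + size] = 1  # set trigger polarity
    return out
```

With $T=16$ and a two-pixel step, the trigger occupies 16 different positions over one sample, matching the setup described above.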
We investigate the backdoor effect of placing the trigger in the most and least active areas in Section 4.1.3. Lastly, by placing the triggers in the middle (see 8(b)), we observe that the DVS128 Gesture dataset achieves 100% ASR when the polarity is 1 or 2. These results also suggest that the trigger’s polarity strongly affects the backdoor’s performance. Furthermore, depending on where the trigger is placed, a given polarity could have a different effect, as observed with polarity 2 in 8(b) and 8(a) for the DVS128 Gesture dataset. Section 4.1.3 investigates this effect in more detail. Regarding the clean accuracy degradation, we notice a slight degradation in most cases relative to the clean accuracy baseline and an improvement in the clean accuracy in some other settings. See Figure 13, Figure 14, and Figure 15 for the clean accuracy degradation results of bottom-right, middle, and top-left placed triggers. N-MNIST does not show any degradation, while DVS128 Gesture and CIFAR10-DVS are more prone to degrade the main task, by up to 7%, contrary to static triggers, which may degrade the clean accuracy by up to 10%. We believe this happens because those datasets are more complex, and adding backdoors makes the main task more challenging. 4.1.3 Smart Backdoor To explore the effects of the trigger location and the trigger polarity, we design a novel attack that chooses the best combination of both. The smart attack removes manual trigger location and polarity selection by choosing either the most or least active area of the image. Then, it chooses the least or most common polarity in that mask, where the least common polarity would contrast while the most common one would be more stealthy—enabling a more optimized attack. We experiment with different settings, such as poisoning rates, trigger sizes, and most and least active masks. We also investigate the trigger polarity’s effect using the most/least active polarity in the selected mask. 
We split the image using $n=2$; see 1(e). We observe that the backdoor is not successful with the DVS128 Gesture dataset. The few samples in the dataset make the choice of the most/least active mask of the image imprecise. Note that the activity sum is computed over all the images in the training set; the more samples, the more precise the selected mask is. Experimentation using the most active area shows excellent performance when the least common trigger polarity is used; see 3(a). Intuitively, the least common trigger polarity is preferred to increase ASR because of the trigger’s contrast compared to the background image. Using 1% of the image size for the trigger only shows promising results with N-MNIST with $\epsilon=0.1$. A larger trigger size improves the backdoor success even with the fewest poisoned samples. Interestingly, injecting the trigger in the least active area with the least active trigger polarity, see 3(b), shows excellent backdoor performance even with a small trigger size. Finally, experimenting with the most active trigger polarities shows that the trigger merges with the actual image, not allowing the model to capture both the clean image and the trigger, see 3(c) and 3(d). Finding. The triggers are best injected in the most active area with the least common polarity. We also experiment with the clean accuracy degradation with the smart trigger. As shown in Figure 16, Figure 17, Figure 18, and Figure 19, we observe a maximum of 4% degradation when the trigger size is 0.01 and the poisoning rate is 0.1 for the most active area. Results also show that when the trigger size is larger, the clean accuracy drop is negligible in all cases, even improving slightly. Injecting the trigger in the least active area shows similar performance; however, the maximum degradation is lower. This could be caused by the trigger not overlapping the active area, allowing the model to capture both tasks. This addresses Challenge C.1. 
4.1.4 Dynamic Backdoor Lastly, we experiment with the dynamic moving backdoor. We consider different $\alpha$ (clean-backdoor trade-off) and $\gamma$ (trigger intensity) values, ranging from 0.5 to 0.8 and 0.01 to 0.1, respectively. The clean and backdoor accuracies are measured at the best epochs, selected based on Algorithm 1, prioritizing high ASR over maintaining high clean accuracy. Thus, we expect a larger standard error in the results, as we measure the clean accuracies in different epochs. As seen in 4(a), for N-MNIST, our dynamic attack achieves 100% ASR in all the settings tested while leaving the clean accuracy unchanged, i.e., there is no clean accuracy degradation when $\gamma$ is small. The clean accuracy degradation becomes more noticeable as $\gamma$ gets larger. With the more complex DVS128 Gesture dataset (see 4(b)), we achieve at least 99% ASR. However, we see a slight degradation in the clean accuracy either when the $\alpha$ value is close to 0.5, i.e., the clean and backdoor tasks have the same importance, or when the $\gamma$ value is large, i.e., the perturbation is more intense. Lastly, we observe lower performance in CIFAR10-DVS (see 4(c)). Still, we achieve close to 100% ASR in most settings and a minimum of 90% ASR with $\alpha=0.8$ and $\gamma=0.05$. Regarding the clean accuracy, we see a noticeable degradation when (i) $\gamma$ is large and (ii) the dataset is complex. To our knowledge, the state-of-the-art (clean) accuracy on CIFAR10-DVS is 69.2% [40]. Since this accuracy is not high, the model does not thoroughly learn the representation of the dataset, and adding further complexity with the attack lowers its performance even more. We also observe better performance with $\alpha=0.5$ and $\gamma=0.01$, where the trigger is subtle and does not degrade the model much. Overall, we observe that no matter the setting, ASR remains close to 100%. 
However, to achieve a good trade-off between clean accuracy degradation and trigger invisibility, the attacker should carefully choose the $\alpha$ and $\gamma$ values. Specifically, $\gamma$ controls the invisibility of the backdoored image, which becomes completely indistinguishable from the clean image when $\gamma=0.01$. This addresses Challenge C.2. Finding. An invisible trigger that cannot be detected by humans can be generated using $\gamma=0.01$. 4.2 Evaluating State-of-the-Art Defenses In this section, due to the lack of specially crafted countermeasures for SNNs, we discuss state-of-the-art backdoor defenses for DNNs and how we adapt them to SNNs and neuromorphic datasets. As discussed in the following sections, defenses for DNNs have core problems when transferred, since they are based on DL assumptions or consider regular static data. We select two representative defenses based on model inspection: Artificial Brain Stimulation (ABS) [33] and fine-pruning [31]. 4.2.1 ABS ABS, introduced by Liu et al. [33], is a method for identifying backdoors in neural networks using a model-based approach. It works by stimulating neurons in a specific layer and examining the resulting outputs for deviations from expected behavior. ABS is based on the idea that a class can be represented as a subspace within a feature space and that a backdoored class will create a distinct subspace throughout the feature space. Therefore, ABS hypothesizes that a poisoned neuron activated with a target input will tend to produce a larger output than a non-poisoned neuron. We adapt ABS to handle neuromorphic data and SNNs. Specifically, we modify the code to process all frames of an image together rather than treating each frame individually, since neuromorphic data contains time-encoded information. However, ABS does not support dynamic, moving, or smart triggers, i.e., backdoors whose trigger can change position or be unique to each image. 
The changing trigger positions could be interpreted as multi-trigger backdoors—attacks that contain more than one trigger within a single input—which ABS cannot handle. Additionally, dynamic backdoors present a twofold problem for ABS. First, dynamic triggers can also be interpreted as multi-trigger. Second, ABS requires the trigger to be the same every time, i.e., the trigger must not be unique per sample. However, the dynamic attack creates input-specific triggers, which bypass ABS at the level of its core design. We observe several false positives when testing ABS against static backdoors. When applied to a clean model, ABS marked it as compromised, and when applied to a poisoned model, ABS identified it as compromised but with the wrong target class. This behavior was consistent across all datasets. One possible explanation for this issue is the core assumption of ABS: ABS relies on “turn-points” created by the activation functions in the model, such as ReLU. However, the lack of ReLU activation functions in SNNs makes ABS malfunction, providing inaccurate results. 4.2.2 Fine-pruning Fine-pruning [31] is a defense mechanism against backdoor attacks composed of two parts: pruning and fine-tuning. Existing works show that by removing (pruning) some neurons of a DNN, the efficiency of the DNN improves while the prediction capacity remains almost equal [22, 53]. The authors suggested that some neurons may contain the primary task information, others the backdoor behavior, and the rest a combination of main and backdoor behavior. Thus, the backdoor could be completely removed by removing the neurons containing the malicious information. The authors proposed ranking the neurons in the last convolutional layer based on their activation values when querying some data. A pruning rate $\tau$ controls the number of neurons to prune. The second part of the defense is fine-tuning, which consists of retraining the (pruned) model for some (small) number of epochs on clean data. 
By doing this, the model could (i) recover the accuracy it lost during pruning and (ii) altogether remove the backdoor effect. The authors showed that by combining these two steps, the ASR of a poisoned model could drop from 99% to 0%. We implement this defense for SNNs and adapt it to work with neuromorphic data. We investigate the effect of pruning, fine-pruning (pruning + fine-tuning), and solely fine-tuning (when the pruning rate is 0). We also investigate various pruning rates, i.e., $\tau=\{0.0,0.1,0.3,0.5,0.8\}$, and analyze their impact, see Figure 6. Analyzing the results, we observe that pruning alone does not work: the clean accuracy drops drastically while ASR remains high, for example, as seen in 5(a). Depending on the trigger type, the drop in the clean accuracy is not that severe, but ASR still remains high, as seen in 5(g). When combining pruning with a fine-tuning phase, i.e., fine-pruning, we observe that ASR can be drastically reduced while the clean accuracy remains high. However, the effect is similar when focusing solely on fine-tuning, i.e., with no pruned neurons ($\tau=0$). Thus, pruning does not necessarily affect the model’s backdoor performance; instead, we find that simply retraining the model with clean data reduces the backdoor effect. We conclude that fine-tuning can effectively reduce the backdoor performance while keeping clean accuracy high. Still, we note that the effect is more pronounced for backdoors that aim to be more stealthy, making it an interesting trade-off. Since neuromorphic data consists of several frames (16 in our case), we can inject a moving backdoor, which is not possible in a single image. According to our experimental results shown in 5(d), we postulate that fine-pruning may fail to reduce the effect of the moving trigger. Finding. Fine-pruning can be effective against backdoor attacks in SNNs using neuromorphic data. Still, this depends on the dataset characteristics and the trigger type. 
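The pruning half of the defense can be sketched as ranking neurons by their mean activation on clean queries and dropping the least active fraction $\tau$; the function name and the activation vector are illustrative assumptions, not the original implementation.

```python
import numpy as np

def neurons_to_prune(mean_activations, tau):
    """Return the indices of the least active tau-fraction of neurons in the
    last convolutional layer (fine-pruning's pruning step, sketched)."""
    k = int(len(mean_activations) * tau)
    order = np.argsort(mean_activations)  # ascending: least active first
    return set(order[:k].tolist())
```

Fine-tuning then retrains the remaining weights on clean data; as discussed above, in our SNN experiments the fine-tuning step carries most of the defensive effect.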
The performance of ABS in detecting backdoors in neuromorphic data and SNNs is limited by its inability to handle dynamic, moving, and smart triggers and its reliance on activation functions not present in SNNs. Further research and development are necessary to address these limitations and improve the robustness of ABS in these contexts. Specific countermeasures considering the nature of neuromorphic data and SNNs are necessary to detect and defend against these backdoors effectively. This addresses Challenge C.3. 5 Stealthiness Evaluation Quantifying image quality is often needed for applications where the end user is a human, and subjective evaluation is often unsuitable due to time constraints or expensive costs. This section discusses different state-of-the-art metrics for quantifying image quality, which we use for measuring stealthiness, i.e., the variation from the clean input caused by our backdoor attacks. 5.1 Proposed Metrics Mean squared error [48] (MSE) compares two signals, e.g., image or audio, and measures the error or distortion between them. In our case, the signal is a frame sequence of images, where we compare a clean sample x with a distorted (backdoored) sample $\hat{\textbf{x}}$. In MSE, the error signal is given by $e=\textbf{x}-\hat{\textbf{x}}$, i.e., the pixel-wise difference between the two samples. However, MSE has no notion of neighboring-pixel context, which can lead to misleading results [17, 49]. For instance, a blurry image in which 20% of the pixels are modified and an image with a square covering 20% of the sample placed on top of it could yield the same MSE value, yet the blurry image is recognizable while the other is not. That is, two differently distorted images could have the same MSE even though one perturbation is more visible than the other. Therefore, MSE is not the best measurement for backdoor attacks, though it can still provide useful insights for quantifying stealthiness. 
To overcome the locality of MSE, Wang et al. [50] proposed the structural similarity index (SSIM), which compares local patterns of pixel intensities rather than single pixels, as in MSE. Images are highly structured, and their pixels exhibit strong dependencies that carry meaningful information. SSIM computes the changes between two windows instead of pixel-by-pixel calculations and is given by $$SSIM(\textbf{x},\hat{\textbf{x}})=\frac{(2\mu_{\textbf{x}}\mu_{\hat{\textbf{x}}}+c_{1})(2\sigma_{\textbf{x}\hat{\textbf{x}}}+c_{2})}{(\mu_{\textbf{x}}^{2}+\mu_{\hat{\textbf{x}}}^{2}+c_{1})(\sigma_{\textbf{x}}^{2}+\sigma_{\hat{\textbf{x}}}^{2}+c_{2})},$$ where $\mu_{\textbf{x}}$ and $\mu_{\hat{\textbf{x}}}$ are the pixel sample means of x and $\hat{\textbf{x}}$, $\sigma_{\textbf{x}}^{2}$ and $\sigma_{\hat{\textbf{x}}}^{2}$ are their variances, $\sigma_{\textbf{x}\hat{\textbf{x}}}$ is their covariance, and $c_{1}$ and $c_{2}$ are two constants that stabilize the division. 5.2 Evaluating Stealthiness Having analyzed metrics for comparing the variation between the clean and the backdoored images, we select SSIM as the most useful for our case. We evaluate the stealthiness of our different attacks based on the SSIM between the clean and backdoored images. The SSIM values are averaged over 16 (the batch size) randomly selected images from the test set. Precisely, we compare each clean frame with its backdoored counterpart and then average the SSIM over the frames. To the best of our knowledge, this is the first application of SSIM for comparing similarities in neuromorphic data, for backdoor attacks in SNNs, or in the SNN domain overall. This addresses Challenge C.4. 5.2.1 Static and Moving Triggers We first analyze the static and moving triggers in two positions: corner (top-left) and middle, see 6(a). We observe that the simpler the dataset, the larger the stealthiness reduction, i.e., the lower the SSIM. Additionally, the trigger size and the polarity affect the stealthiness. Indeed, the larger the trigger size, the lower the SSIM, which is expected. 
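A single-window version of the SSIM formula above can be sketched as follows. Library implementations such as scikit-image slide a local window over the image instead; the global window, the constant values $c_1$ and $c_2$, and the function name are our simplifying assumptions.

```python
import numpy as np

def ssim_global(x, x_hat, c1=1e-4, c2=9e-4):
    """SSIM over one global window, term by term as in the formula above
    (illustrative sketch; real implementations use a sliding local window)."""
    mu_x, mu_y = x.mean(), x_hat.mean()
    var_x, var_y = x.var(), x_hat.var()
    cov = ((x - mu_x) * (x_hat - mu_y)).mean()  # covariance sigma_{x, x_hat}
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical frames yield an SSIM of exactly 1, and any perturbation that shifts the mean or decorrelates the frames pushes the value below 1, which is what the per-frame averaging above measures.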
However, polarity also plays a crucial role in stealthiness. We observe a noticeable similarity downgrade related to the trigger polarity. The largest similarity degradation is observed for the N-MNIST dataset, with a trigger size of 0.01 and the trigger placed in the top-left corner. The background polarity, i.e., $p=0$, shows high SSIM; however, with $p=3$, the SSIM drops to 94.5. This can be directly linked to the number of pixels of a given polarity in an area: the fewer pixels of a given polarity in an area, the more “contrast” the trigger creates, making it less stealthy. Lastly, comparing the static and moving triggers, we observe a more significant degradation when the trigger is moving, although it is negligible in some datasets or settings. Noisy datasets such as CIFAR10-DVS are more tolerant of input perturbations, whereas modifications in cleaner datasets, such as N-MNIST, significantly change the image’s overall structure, achieving a lower SSIM. 5.2.2 Smart Triggers Smart triggers select the trigger polarity and location by themselves, based on the image’s least or most active area and the least prominent polarity in that area. We observe a larger degradation depending on the trigger size and when the trigger is placed in the least active area (see 6(b)). This is expected, as a trigger in the most active area gets hidden by the high activity, i.e., motion. Thus, triggers in the most active area lose some performance but gain stealthiness, which must be considered a trade-off between stealthiness and performance. 5.2.3 Dynamic Triggers Lastly, dynamic triggers (see 6(c)) show impressive stealthiness as $\gamma$ gets smaller, even to the point of being indistinguishable from the clean image. We also observe that the more complex the dataset, the smaller the reduction in stealthiness, contrary to N-MNIST, where the degradation is notable with $\gamma=0.1$. This effect is related to the number of pixel changes and the noise in the data. 
A dataset with large noise has much activity, making it easier for the trigger to be hidden, as in CIFAR10-DVS. However, with “clean” datasets that contain little noise, such as N-MNIST, even the subtlest perturbation makes a noticeable change. Still, with $\gamma=0.01$, the perturbation is invisible in every tested dataset. 6 Related Work 6.1 SNNs SNNs are artificial neural networks inspired by how the neurons in the brain work. In recent years, numerous efforts have been made to develop supervised learning algorithms for SNNs to make them more practical and widely applicable. One of the first such algorithms was SpikeProp [4], which was based on backpropagation and could be used to train single-layer SNNs. However, it was not until more recent developments that SNNs could be applied to multi-layer setups. Despite such advances, most existing SNN training methods still require manual tuning of the spiking neuron membrane parameters, which can be time-consuming and may limit the performance of the SNN. To overcome this limitation, Fang et al. [13] proposed a method that can learn the weights and the membrane hyperparameters in an automated way, thus eliminating the need for manual tuning. This advancement may make SNNs more practical and easier to use for a broader range of applications. Several other notable developments in the field of SNNs are worth mentioning. One such development is event-driven update rules, allowing SNNs to operate more efficiently by only updating the network when necessary [54]. This contrasts with traditional neural networks, which require continuous updates and can be computationally expensive. Another area of research in SNNs is structural plasticity, which refers to the network’s ability to change its structure during training [54, 51]. This can be accomplished through the addition or removal of connections between neurons or through the creation of new neurons altogether. 
Structural plasticity can improve the learning efficiency and generalization capabilities of SNNs and is effective in various tasks. There are also ongoing efforts to develop unsupervised learning algorithms for SNNs, allowing them to learn from data without needing labeled examples [23]. Unsupervised learning is a critical component of the brain’s learning process and can significantly expand the range of tasks that SNNs can perform. 6.2 Backdoor Attacks Backdoor attacks were first introduced by Gu et al., who presented BadNets [20]. BadNets uses a square-shaped trigger at a fixed location to inject the backdoor task; it was the first work to show backdoor vulnerabilities in machine learning. BadNets requires access to the training data for injecting the backdoor, contrary to the work by Liu et al. [34], which alleviated this assumption. The authors presented a novel approach where access to the training data is not needed: they systematically reconstructed training samples to inject the backdoor by adding the trigger on top of the samples and retraining the model. The abovementioned approaches use static triggers, i.e., the trigger is in the exact same location for all the samples. Nguyen and Tran [37] developed a dynamic backdoor attack in which the trigger varies with the input. Specifically, a generator creates a pixel scatter that is then overlapped with the clean input. A similar approach was investigated by Salem et al. [39], who also constructed a dynamic backdoor attack. However, instead of a pixel-scattered trigger, the trigger is a square, which has the advantage of being applicable to physical objects. Aiming to increase the stealthiness of the backdoor, Liu et al. created ReFool [35], which includes benign-looking triggers, such as reflections, in the clean sample. A similar approach was followed by Zhang et al. [55], who proposed Poison Ink, where the structure of the image is extracted. 
The structure is then injected with a crafted pixel pattern and included in the clean image. The resulting poisoned image is indistinguishable from the clean sample. In the area of SNNs and neuromorphic datasets, only [1] explored backdoor attacks. However, their experimentation is limited to exploring static and moving triggers using simple datasets and models. Moreover, they do not detail the core reasoning behind backdoors in SNNs. We extend the previous work by obtaining better ASR in the same settings, exploring novel backdoor injection methods, and assessing their performance in the presence of defenses. 6.3 Defenses Against Backdoor Attacks Backdoor attacks can be mitigated using either model-based or data-based defenses. Model-based defenses involve examining potentially infected models to identify and reconstruct the backdoor trigger. An example is Neural Cleanse (NC) [47], which aims to reconstruct the smallest trigger capable of causing the model to misclassify a specific input. This approach is based on the premise that an infected model is more likely to misclassify an input with a trigger than one without. Another model-based approach is ABS [33], which involves activating every model neuron and analyzing anomalies in the resulting output. Model-based defenses specifically target infected models, searching for signs of a trigger and attempting to reconstruct it. This approach is effective because the presence of a trigger is often a reliable indicator that the model has been compromised. However, it is also possible for a model to be infected without a reconstructable trigger, in which case these defenses are not applicable. Thus, model-based countermeasures are limited to cases with triggers. Data-based defenses aim to detect the presence of a backdoor by analyzing the dataset without inspecting the model itself. One approach in this category uses clustering techniques to differentiate between clean and poisoned data [7]. 
Another approach, STRIP [14], combines the provided dataset with known infected data and queries the model on the resulting combined dataset. By measuring the entropy of the model’s output on this combined dataset, STRIP can detect backdoored inputs, which tend to have lower entropy than clean inputs. Data-based defenses focus on analyzing the dataset to detect the presence of a backdoor. These approaches are helpful because they do not require knowledge of the specific trigger used to infect the model, making them more robust against variations in the method of infection. However, data-based defenses may not be as effective at detecting more subtle forms of backdoor attacks, which may not leave as clear a signature in the dataset. Currently, no defenses are specific to SNNs or neuromorphic data. As we have shown, some well-performing defenses adapted from the image domain do not work well on SNNs. Developing SNN- and neuromorphic-data-specific countermeasures is a necessary direction for future research. 7 Conclusions & Future Work The wide usage of DNNs in multiple tasks has led to extensive evaluation of their security. However, emerging technologies, such as SNNs, are gaining importance but have received little evaluation against security threats. In this paper, we investigate backdoor attacks in SNNs using neuromorphic data. We propose different attack methods, including an invisible dynamic trigger that changes with time and is undetectable under human inspection. Furthermore, we evaluate these new attacks against state-of-the-art defenses from the image domain, which we adapt to the spiking setting. We show that our attacks can achieve up to 100% ASR without noticeable clean accuracy degradation, even when the defenses are applied. Our investigation shows that SNNs are vulnerable to backdoor attacks and that current defenses must be improved. Developing SNN- and neuromorphic-data-specific countermeasures is an important line of future research.
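For reference, the attack success rate (ASR) quoted above measures the fraction of triggered inputs, excluding those whose true label already equals the attacker's target, that the model classifies as the target label. A minimal sketch of the metric, using a hypothetical stand-in model rather than an actual SNN:

```python
def attack_success_rate(model, triggered_samples, target_label):
    # triggered_samples: list of (input, true_label) pairs, each input
    # already carrying the backdoor trigger. Samples whose true label
    # equals the target are excluded from the count.
    evaluated = [(x, y) for x, y in triggered_samples if y != target_label]
    hits = sum(1 for x, _ in evaluated if model(x) == target_label)
    return hits / len(evaluated) if evaluated else 0.0

# Hypothetical backdoored model: predicts the target label (0) whenever
# the trigger flag is set, otherwise returns its clean prediction.
backdoored = lambda x: 0 if x["trigger"] else x["clean_pred"]

samples = [({"trigger": True, "clean_pred": y}, y) for y in (1, 2, 3, 0)]
```

Here `attack_success_rate(backdoored, samples, 0)` evaluates to 1.0, mirroring the 100% ASR case.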
SNNs are also emerging in other domains, such as the graph domain, where backdoor attacks are becoming more important. Backdoor attacks in spiking graph neural networks pose new challenges, which could require specific backdoor designs. Lastly, other common threats to DNNs could also be adapted to SNNs, e.g., inference attacks and adversarial examples. For example, data reconstruction from a trained model has been well explored, but it is not yet known whether neuromorphic data could be recovered. References [1] Gorka Abad, Oguzhan Ersoy, Stjepan Picek, Víctor Julio Ramírez-Durán, and Aitor Urbieta. Poster: Backdoor attacks on spiking nns and neuromorphic datasets. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 3315–3317, 2022. [2] Filipp Akopyan, Jun Sawada, Andrew Cassidy, Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, Nabil Imam, Yutaka Nakamura, Pallab Datta, Gi-Joon Nam, Brian Taba, Michael Beakes, Bernard Brezzo, Jente B. Kuang, Rajit Manohar, William P. Risk, Bryan Jackson, and Dharmendra S. Modha. Truenorth: Design and tool flow of a 65 mw 1 million neuron programmable neurosynaptic chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34(10):1537–1557, 2015. [3] Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, et al. A low power, fully event-based gesture recognition system. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7243–7252, 2017. [4] Sander M Bohte, Joost N Kok, and Han La Poutre. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4):17–37, 2002. [5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
Advances in neural information processing systems, 33:1877–1901, 2020. [6] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650, 2021. [7] Bryant Chen, Wilka Carvalho, Nathalie Baracaldo, Heiko Ludwig, Benjamin Edwards, Taesung Lee, Ian Molloy, and Biplav Srivastava. Detecting backdoor attacks on deep neural networks by activation clustering. arXiv preprint arXiv:1811.03728, 2018. [8] Guang Chen, Hu Cao, Jorg Conradt, Huajin Tang, Florian Rohrbein, and Alois Knoll. Event-based neuromorphic vision for autonomous driving: A paradigm shift for bio-inspired visual sensing and perception. IEEE Signal Processing Magazine, 37(4):34–49, 2020. [9] Payal Dhar. The carbon impact of artificial intelligence. Nat. Mach. Intell., 2(8):423–425, 2020. [10] Khoa Doan, Yingjie Lao, Weijie Zhao, and Ping Li. Lira: Learnable, imperceptible and robust backdoor attacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11966–11976, 2021. [11] Jason K Eshraghian, Max Ward, Emre Neftci, Xinxin Wang, Gregor Lenz, Girish Dwivedi, Mohammed Bennamoun, Doo Seok Jeong, and Wei D Lu. Training spiking neural networks using lessons from deep learning. arXiv preprint arXiv:2109.12894, 2021. [12] Wei Fang, Yanqi Chen, Jianhao Ding, Ding Chen, Zhaofei Yu, Huihui Zhou, Yonghong Tian, and other contributors. Spikingjelly. https://github.com/fangwei123456/spikingjelly, 2020. Accessed: 2022-10-12. [13] Wei Fang, Zhaofei Yu, Yanqi Chen, Timothée Masquelier, Tiejun Huang, and Yonghong Tian. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2661–2671, 2021. 
[14] Yansong Gao, Change Xu, Derui Wang, Shiping Chen, Damith C Ranasinghe, and Surya Nepal. Strip: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, pages 113–125, 2019. [15] Samanwoy Ghosh-Dastidar and Hojjat Adeli. Improved spiking neural networks for eeg classification and epilepsy and seizure detection. Integr. Comput.-Aided Eng., 14(3):187–212, aug 2007. [16] Samanwoy Ghosh-Dastidar and Hojjat Adeli. Spiking neural networks. International journal of neural systems, 19(04):295–308, 2009. [17] Bernd Girod. What’s wrong with mean-squared error? Digital images and human vision, pages 207–220, 1993. [18] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org. [19] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international conference on acoustics, speech and signal processing, pages 6645–6649. IEEE, 2013. [20] Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244, 2019. [21] Ankur Gupta and Lyle N. Long. Character recognition using spiking neural networks. In 2007 International Joint Conference on Neural Networks, pages 53–58, 2007. [22] S Han, H Mao, and WJ Dally. Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. [23] Hananel Hazan, Daniel Saunders, Darpan T Sanghavi, Hava Siegelmann, and Robert Kozma. Unsupervised learning with self-organizing spiking neural networks. In 2018 International Joint Conference on Neural Networks (IJCNN), pages 1–6. IEEE, 2018. [24] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. High accuracy and high fidelity extraction of neural networks.
In 29th USENIX security symposium (USENIX Security 20), pages 1345–1362, 2020. [25] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. [26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. [27] Souvik Kundu, Gourav Datta, Massoud Pedram, and Peter A Beerel. Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression. In Proceedings of the IEEE/CVF WACV, pages 3953–3962, 2021. [28] Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998. [29] Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training deep spiking neural networks using backpropagation. Frontiers in neuroscience, 10:508, 2016. [30] Hongmin Li, Hanchao Liu, Xiangyang Ji, Guoqi Li, and Luping Shi. Cifar10-dvs: an event-stream dataset for object classification. Frontiers in neuroscience, 11:309, 2017. [31] Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In International Symposium on Research in Attacks, Intrusions, and Defenses, pages 273–294. Springer, 2018. [32] Shih-Chii Liu, Bodo Rueckauer, Enea Ceolini, Adrian Huber, and Tobi Delbruck. Event-driven sensing for efficient perception: Vision and audition algorithms. IEEE Signal Processing Magazine, 36(6):29–37, 2019. [33] Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. Abs: Scanning neural networks for back-doors by artificial brain stimulation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pages 1265–1282, 2019. [34] Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. 2017.
[35] Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu. Reflection backdoor: A natural backdoor attack on deep neural networks. In European Conference on Computer Vision, pages 182–199. Springer, 2020. [36] Ana I Maqueda, Antonio Loquercio, Guillermo Gallego, Narciso García, and Davide Scaramuzza. Event-based vision meets deep learning on steering prediction for self-driving cars. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5419–5427, 2018. [37] Tuan Anh Nguyen and Anh Tran. Input-aware dynamic backdoor attack. Advances in Neural Information Processing Systems, 33:3454–3464, 2020. [38] Garrick Orchard, Ajinkya Jayawant, Gregory K Cohen, and Nitish Thakor. Converting static image datasets to spiking neuromorphic datasets using saccades. Frontiers in neuroscience, 9:437, 2015. [39] Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang. Dynamic backdoor attacks against machine learning models. In 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P), pages 703–718. IEEE, 2022. [40] Ali Samadzadeh, Fatemeh Sadat Tabatabaei Far, Ali Javadi, Ahmad Nickabadi, and Morteza Haghir Chehreghani. Convolutional spiking neural networks for spatio-temporal feature extraction. arXiv preprint arXiv:2003.12346, 2020. [41] Teresa Serrano-Gotarredona and Bernabé Linares-Barranco. A 128 $\times$ 128 1.5% contrast sensitivity 0.9% fpn 3 $\mu$s latency 4 mw asynchronous frame-free dynamic vision sensor using transimpedance preamplifiers. IEEE Journal of Solid-State Circuits, 48(3):827–838, 2013. [42] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. [43] Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, and Anthony Maida. Deep learning in spiking neural networks. Neural networks, 111:47–63, 2019.
[44] Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, and Anthony Maida. Deep learning in spiking neural networks. Neural Networks, 111:47–63, mar 2019. [45] Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Clean-label backdoor attacks. 2018. [46] Alberto Viale, Alberto Marchisio, Maurizio Martina, Guido Masera, and Muhammad Shafique. Carsnn: An efficient spiking neural network for event-based autonomous cars on the loihi neuromorphic research processor. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–10. IEEE, 2021. [47] Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In 2019 IEEE Symposium on Security and Privacy (SP), pages 707–723. IEEE, 2019. [48] Zhou Wang and Alan C Bovik. Mean squared error: Love it or leave it? a new look at signal fidelity measures. IEEE signal processing magazine, 26(1):98–117, 2009. [49] Zhou Wang, Alan C Bovik, and Ligang Lu. Why is image quality assessment so difficult? In 2002 IEEE International conference on acoustics, speech, and signal processing, volume 4, pages IV–3313. IEEE, 2002. [50] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004. [51] Mahima Milinda Alwis Weerasinghe, Josafath I Espinosa-Ramos, Grace Y Wang, and Dave Parry. Incorporating structural plasticity approaches in spiking neural networks for eeg modelling. IEEE Access, 9:117338–117348, 2021. [52] Simei Gomes Wysoski, Lubica Benuskova, and Nikola Kasabov. Evolving spiking neural networks for audiovisual information processing. Neural Networks, 23(7):819–835, 2010. [53] Jiecao Yu, Andrew Lukefahr, David Palframan, Ganesh Dasika, Reetuparna Das, and Scott Mahlke. 
Scalpel: Customizing dnn pruning to the underlying hardware parallelism. ACM SIGARCH Computer Architecture News, 45(2):548–560, 2017. [54] Anguo Zhang, Xiumin Li, Yueming Gao, and Yuzhen Niu. Event-driven intrinsic plasticity for spiking convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems, 33(5):1986–1995, 2021. [55] Jie Zhang, Chen Dongdong, Qidong Huang, Jing Liao, Weiming Zhang, Huamin Feng, Gang Hua, and Nenghai Yu. Poison ink: Robust and invisible backdoor attack. IEEE Transactions on Image Processing, 31:5691–5705, 2022. [56] Alex Zihao Zhu, Dinesh Thakur, Tolga Özaslan, Bernd Pfrommer, Vijay Kumar, and Kostas Daniilidis. The multivehicle stereo event camera dataset: An event camera dataset for 3d perception. IEEE Robotics and Automation Letters, 3(3):2032–2039, 2018. Appendix A Additional Experiments A.1 Results for Static and Moving Backdoors In Figures 8 and 9, we provide results for static backdoors and moving backdoors, respectively. A.2 Clean Accuracy Degradation of Static Triggers The clean accuracy degradation after the static attack in the bottom-right corner is shown in Figure 10; for the middle trigger, see Figure 11. Lastly, the degradation in the top-left corner is shown in Figure 12. A.3 Clean Accuracy Degradation of Moving Triggers The clean accuracy degradation after the moving attack in the bottom-right corner is shown in Figure 13; for the middle trigger, see Figure 14. Lastly, the degradation in the top-left corner is shown in Figure 15. A.4 Clean Accuracy Degradation of Smart Triggers The clean accuracy degradation of the smart attack with the most active area and the least active trigger is shown in Figure 16. For the least active area and the least active trigger, see Figure 17. For the most active triggers in the most and least active areas, see Figure 18 and Figure 19, respectively.
On the unilateral shift as a Hilbert module over the disc algebra Raphaël Clouâtre Department of Mathematics, Indiana University, 831 East 3rd Street, Bloomington, IN 47405 rclouatr@indiana.edu (Date: December 2, 2020) Abstract. We study the unilateral shift (of arbitrary countable multiplicity) as a Hilbert module over the disc algebra and the associated extension groups. In relation with the problem of determining whether this module is projective, we consider a special class of extensions, which we call polynomial. We show that the subgroup of polynomial extensions of a contractive module by the adjoint of the unilateral shift is trivial. The main tool is a function theoretic decomposition of the Sz.-Nagy–Foias model space for completely non-unitary contractions. 1. Introduction In their pioneering work [douglas1989], Douglas and Paulsen reformulated several interesting operator theoretic questions in the language of module theory, and in doing so introduced the notion of Hilbert modules over function algebras. This suggested the use of cohomological methods to further the study of problems such as commutant lifting. Naturally, the question of identifying those Hilbert modules which are projective arose and attracted a lot of interest. The first result in that direction was obtained by Carlson and Clark in [carlson1995], where it was shown that a contractive projective Hilbert module over the disc algebra $A(\operatorname{\mathbb{D}})$ must be similar to an isometric one. Soon thereafter, the same authors along with Foias and Williams proved in [carlson1994] that isometric modules over $A(\operatorname{\mathbb{D}})$ are projective in the category of contractive Hilbert modules. This turns out to be an equivalence, as was later shown by Ferguson in [ferguson1997]. In addition, the authors of [carlson1994] show that unitary modules over $A(\operatorname{\mathbb{D}})$ are projective in the larger category of (not necessarily contractive) Hilbert modules.
Projective Hilbert modules over $A(\operatorname{\mathbb{D}})$ are to this day still quite mysterious. In fact, as things stand currently, unitary modules are the only known instances of such objects. On the other hand, by the results mentioned above a contractive projective module must be similar to an isometric module. In view of the classical Wold-von Neumann decomposition of an isometry, we see that the quest to identify the contractive projective Hilbert modules over the disc algebra is reduced to the following question: are unilateral shifts projective? A consequence of Pisier’s famous counterexample to the Halmos conjecture (see [pisier1997]) is that the answer is negative in the case of infinite multiplicity. Whether or not things are different for finite multiplicities is still an open problem. We study extension groups associated to unilateral shifts viewed as Hilbert modules over the disc algebra. With the notation established in Section 2, our main result (Theorem 5.4) establishes the triviality of the subgroup of elements $[X]\in\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S^{*})$ such that $S^{*N}XT^{N}=0$ for some integer $N\geq 0$ whenever $T$ is similar to a contraction (here $S^{*}$ is the adjoint of the unilateral shift of arbitrary countable multiplicity). In some sense, this supports the idea that the unilateral shift is projective. However, the reader should keep in mind that our result holds regardless of multiplicity and thus does not capture the fact that the shift of infinite multiplicity is not projective. The crucial ingredient for the proof of Theorem 5.4 is a decomposition of the Sz.-Nagy–Foias model space $H(\Theta)$ which we think is of independent interest (see Theorem 5.3). There has been further work on the question of projective Hilbert modules following the appearance of [carlson1995], [carlson1994] and [ferguson1997].
Generalizing the fact that unitary modules are projective over $A(\operatorname{\mathbb{D}})$, it was shown in [chen2000] that whenever the algebra $A$ is a so-called unit modulus algebra and the module action can be extended to an action of $C(\partial A)$ (here $\partial A$ denotes the Shilov boundary of $A$), then the module is projective. An earlier paper of Guo (see [guo1999]) establishes using essentially the same idea that the result holds for the ball algebra $A(\mathbb{B}^{N})$ under an additional continuity assumption on the module action. This assumption was later removed by Didas and Eschmeier in [didas2006], where domains more general than the ball are considered. The case of the polydisc algebra $A(\operatorname{\mathbb{D}}^{N})$ was first considered in [carlson1997], where results exhibiting a sharp contrast with the one-dimensional case were obtained. From the point of view of reproducing kernel Hilbert spaces, Clancy and McCullough showed in [clancy1998] that the Hilbert space $H^{2}(k)$ associated to a Nevanlinna-Pick kernel $k$, considered as a Hilbert module over its multiplier algebra, is projective in an appropriate category. The existence of a projective Hilbert module over very general function algebras was established in [gulinskiy2003]. Note finally that the notion of Hilbert modules and the question of projectivity have also been studied over general operator algebras; see [muhly1995]. The paper is organized as follows. Section 2 introduces the necessary preliminaries about Hilbert modules. In Section 3 we develop some technical tools which are used in Section 5 to obtain the main result. In the meantime, we examine in Section 4 some simple examples and offer some explicit calculations of the objects introduced in Section 3. Finally, in Section 6 we briefly address the issue of non-contractive modules by considering operators of the type constructed by Pisier in [pisier1997]. 2.
Preliminaries Let $\operatorname{\mathcal{H}}$ be a Hilbert space and let $T:\operatorname{\mathcal{H}}\to\operatorname{\mathcal{H}}$ be a bounded linear operator, which we indicate by $T\in B(\operatorname{\mathcal{H}})$. Recall that the operator $T$ is said to be polynomially bounded if there exists a constant $C>0$ such that for every polynomial $\varphi$, we have $$\|\varphi(T)\|\leq C\|\varphi\|_{\infty}$$ where $$\|\varphi\|_{\infty}=\sup_{|z|<1}|\varphi(z)|.$$ This inequality allows one to extend continuously the polynomial functional calculus $\varphi\mapsto\varphi(T)$ to all functions $\varphi$ in the disc algebra $A(\operatorname{\mathbb{D}})$, which consists of the holomorphic functions on $\operatorname{\mathbb{D}}$ that are continuous on $\overline{\operatorname{\mathbb{D}}}$ (throughout the paper $\operatorname{\mathbb{D}}$ denotes the open unit disc and $\operatorname{\mathbb{T}}$ denotes the unit circle). If $T\in B(\operatorname{\mathcal{H}})$ is a polynomially bounded operator, the map $$A(\operatorname{\mathbb{D}})\times\operatorname{\mathcal{H}}\to\operatorname{\mathcal{H}}$$ $$(\varphi,h)\mapsto\varphi(T)h$$ gives rise to a structure of an $A(\operatorname{\mathbb{D}})$-module on $\operatorname{\mathcal{H}}$, and we say that $(\operatorname{\mathcal{H}},T)$ is a Hilbert module (see [douglas1989] for more details). We only deal with $A(\operatorname{\mathbb{D}})$-modules in this paper, so no confusion may arise regarding the underlying function algebra and we usually do not mention it explicitly. Moreover, when the underlying Hilbert space is understood, we slightly abuse terminology and say that $T$ is a Hilbert module. Given two Hilbert modules $(\operatorname{\mathcal{H}}_{1},T_{1})$ and $(\operatorname{\mathcal{H}}_{2},T_{2})$, we can consider the extension group $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{2},T_{1})$.
This group consists of equivalence classes of exact sequences $$0\to\operatorname{\mathcal{H}}_{1}\to\operatorname{\mathcal{K}}\to\operatorname{\mathcal{H}}_{2}\to 0$$ where $\operatorname{\mathcal{K}}$ is another Hilbert module and each map is a module morphism. Rather than formally defining the equivalence relation and the group operation, we simply use the following characterization from [carlson1995]. Theorem 2.1. Let $(\operatorname{\mathcal{H}}_{1},T_{1})$ and $(\operatorname{\mathcal{H}}_{2},T_{2})$ be Hilbert modules. Then, the group $$\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{2},T_{1})$$ is isomorphic to $\operatorname{\mathcal{A}}/\mathcal{J}$, where $\operatorname{\mathcal{A}}$ is the space of operators $X:\operatorname{\mathcal{H}}_{2}\to\operatorname{\mathcal{H}}_{1}$ for which the operator $$\left(\begin{array}[]{cc}T_{1}&X\\ 0&T_{2}\end{array}\right)$$ is polynomially bounded, and $\mathcal{J}$ is the space of operators of the form $T_{1}L-LT_{2}$ for some bounded operator $L:\operatorname{\mathcal{H}}_{2}\to\operatorname{\mathcal{H}}_{1}$. Extension groups are invariant under similarity: if $(\operatorname{\mathcal{H}}_{1}^{\prime},T_{1}^{\prime})$ and $(\operatorname{\mathcal{H}}_{2}^{\prime},T_{2}^{\prime})$ are Hilbert modules which are similar to $(\operatorname{\mathcal{H}}_{1},T_{1})$ and $(\operatorname{\mathcal{H}}_{2},T_{2})$ respectively, then the groups $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{2},T_{1})$ and $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{2}^{\prime},T_{1}^{\prime})$ are isomorphic. In view of Theorem 2.1, the next lemma is useful. Before stating it, we recall a well-known estimate. Let $$D:A(\operatorname{\mathbb{D}})\to A(\operatorname{\mathbb{D}})$$ be defined as $$(Df)(z)=\frac{1}{z}(f(z)-f(0))$$ for every $z\in\operatorname{\mathbb{D}}$ and $f\in A(\operatorname{\mathbb{D}})$.
It is a classical fact that there exists a constant $M>0$ such that $$\|D^{n}\|\leq M(1+\log n)$$ for every $n\geq 1$. Lemma 2.2. Let $(\operatorname{\mathcal{H}}_{1},T_{1})$ and $(\operatorname{\mathcal{H}}_{2},T_{2})$ be Hilbert modules. Let $X:\operatorname{\mathcal{H}}_{2}\to\operatorname{\mathcal{H}}_{1}$ be a bounded operator such that $T_{1}^{N}XT_{2}^{N}=0$ for some integer $N\geq 0$. Then, the operator $R:\operatorname{\mathcal{H}}_{1}\oplus\operatorname{\mathcal{H}}_{2}\to\operatorname{\mathcal{H}}_{1}\oplus\operatorname{\mathcal{H}}_{2}$ defined as $$R=\left(\begin{array}[]{cc}T_{1}&X\\ 0&T_{2}\end{array}\right)$$ is polynomially bounded. Proof. Choose a polynomial $\varphi(z)=\sum_{k=0}^{d}a_{k}z^{k}$. A quick calculation shows that $$\varphi(R)=\left(\begin{array}[]{cc}\varphi(T_{1})&\delta_{X}(\varphi)\\ 0&\varphi(T_{2})\end{array}\right)$$ where $$\delta_{X}(\varphi)=\sum_{k=1}^{d}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}.$$ Since both $T_{1}$ and $T_{2}$ are polynomially bounded, there exist constants $C_{1},C_{2}>0$ independent of $\varphi$ such that $$\|\varphi(T_{1})\|\leq C_{1}\|\varphi\|_{\infty}$$ and $$\|\varphi(T_{2})\|\leq C_{2}\|\varphi\|_{\infty}.$$ Therefore, we simply need to verify that there exists a constant $C>0$ independent of $\varphi$ such that $$\|\delta_{X}(\varphi)\|\leq C\|\varphi\|_{\infty}.$$ We have $$\displaystyle\|\delta_{X}(\varphi)\|$$ $$\displaystyle\leq\sum_{k=1}^{d}|a_{k}|\sum_{j=0}^{k-1}\|T_{1}^{j}XT_{2}^{k-1-j}\|$$ $$\displaystyle\leq C_{1}C_{2}\|X\|\sum_{k=1}^{d}k|a_{k}|$$ $$\displaystyle=C_{1}C_{2}\|X\|\sum_{k=1}^{d}k\left|\frac{\varphi^{(k)}(0)}{k!}\right|$$ and in light of the classical Cauchy estimates, we find (1) $$\|\delta_{X}(\varphi)\|\leq C_{1}C_{2}\frac{d(d+1)}{2}\|X\|\|\varphi\|_{\infty}.$$ In particular, if we set $$C=C_{1}C_{2}N(2N-1)\|X\|,$$ which depends only on $X,T_{1},T_{2}$ and $N$, then $$\|\delta_{X}(\varphi)\|\leq C\|\varphi\|_{\infty}$$ whenever $\varphi$ has degree at most $2N-1$.
We focus therefore on the case where $d\geq 2N$. We have (2) $$\delta_{X}(\varphi)=\sum_{k=1}^{2N-1}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}+\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}.$$ Since $$\left\|\sum_{k=1}^{2N-1}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}\right\|\leq C\|\varphi\|_{\infty}$$ we are left with estimating the second sum in (2), where $k\geq 2N$. By assumption, we know that $T_{1}^{j}XT_{2}^{k-1-j}\neq 0$ only when $j\leq N-1$ or $k-1-j\leq N-1$. This allows us to write $$\displaystyle\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}$$ $$\displaystyle=\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{N-1}T_{1}^{j}XT_{2}^{k-1-j}+\sum_{k=2N}^{d}a_{k}\sum_{j=k-N}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}$$ $$\displaystyle=\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{N-1}T_{1}^{j}XT_{2}^{k-1-j}+\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{N-1}T_{1}^{k-1-j}XT_{2}^{j}$$ $$\displaystyle=\sum_{j=0}^{N-1}(T_{1}^{j}X\Phi_{j}(T_{2})+\Phi_{j}(T_{1})XT_{2}^{j})$$ where $$\Phi_{j}(z)=\sum_{k=2N}^{d}a_{k}z^{k-1-j}$$ whenever $0\leq j\leq N-1$. Notice now that $$\Phi_{j}(z)+\sum_{k=j+1}^{2N-1}a_{k}z^{k-1-j}=(D^{j+1}\varphi)(z)$$ whence $$\displaystyle\|\Phi_{j}\|_{\infty}$$ $$\displaystyle\leq\|D^{j+1}\varphi\|_{\infty}+\sum_{k=0}^{2N-1}|a_{k}|$$ $$\displaystyle=\|D^{j+1}\varphi\|_{\infty}+\sum_{k=0}^{2N-1}\left|\frac{\varphi^{(k)}(0)}{k!}\right|$$ for every $0\leq j\leq N-1$. Another use of the Cauchy estimates along with the remark preceding the statement of the lemma implies the existence of a constant $C^{\prime}>0$ depending only on $N$ such that $$\|\Phi_{j}\|_{\infty}\leq C^{\prime}\|\varphi\|_{\infty}$$ for every $0\leq j\leq N-1$.
Thus, $$\left\|\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}\right\|\leq 2NC^{\prime}C_{1}C_{2}\|X\|\|\varphi\|_{\infty}$$ and $$\displaystyle\|\delta_{X}(\varphi)\|$$ $$\displaystyle\leq\left\|\sum_{k=1}^{2N-1}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}\right\|+\left\|\sum_{k=2N}^{d}a_{k}\sum_{j=0}^{k-1}T_{1}^{j}XT_{2}^{k-1-j}\right\|$$ $$\displaystyle\leq C^{\prime\prime}\|\varphi\|_{\infty}$$ where $C^{\prime\prime}>0$ depends only on $N,X,T_{1},T_{2}$. The proof is complete. ∎ An important question in the study of extension groups is that of determining which Hilbert modules $(\operatorname{\mathcal{H}}_{2},T_{2})$ have the property that $$\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{2},T_{1})=0$$ for every Hilbert module $(\operatorname{\mathcal{H}}_{1},T_{1})$. Such Hilbert modules are said to be projective. It is easy to verify using Theorem 2.1 that the map $[X]\mapsto[X^{*}]$ establishes an isomorphism between the groups $\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(T_{2},T_{1})$ and $\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(T_{1}^{*},T_{2}^{*})$, so $T_{2}$ is projective if and only if $$\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T_{1},T_{2}^{*})=0$$ for every Hilbert module $(\operatorname{\mathcal{H}}_{1},T_{1})$. A characterization of projective Hilbert modules has long been sought. This result from [carlson1994] was mentioned in the introduction. Theorem 2.3. If $T\in B(\operatorname{\mathcal{H}})$ is similar to a unitary operator, then the Hilbert module $(\operatorname{\mathcal{H}},T)$ is projective. If $\operatorname{\mathcal{E}}$ is a separable Hilbert space, we denote by $L^{2}(\operatorname{\mathcal{E}})$ the Hilbert space of weakly measurable square integrable functions $f:\operatorname{\mathbb{T}}\to\operatorname{\mathcal{E}}$.
The Hardy space $H^{2}(\operatorname{\mathcal{E}})$ is the closed subspace of $L^{2}(\operatorname{\mathcal{E}})$ consisting of functions with vanishing negative Fourier coefficients. Elements of $H^{2}(\operatorname{\mathcal{E}})$ can also be viewed as $\operatorname{\mathcal{E}}$-valued functions holomorphic on $\operatorname{\mathbb{D}}$ with square summable Taylor coefficients. We embed $\operatorname{\mathcal{E}}$ in $H^{2}(\operatorname{\mathcal{E}})$ as the subspace consisting of constant functions, and we denote by $P_{\operatorname{\mathcal{E}}}$ the orthogonal projection of $H^{2}(\operatorname{\mathcal{E}})$ onto $\operatorname{\mathcal{E}}$. When $\operatorname{\mathcal{E}}=\operatorname{\mathbb{C}}$, we simply write $H^{2}(\operatorname{\mathcal{E}})=H^{2}$ and $L^{2}(\operatorname{\mathcal{E}})=L^{2}$. The unilateral shift operator $$S_{\operatorname{\mathcal{E}}}:H^{2}(\operatorname{\mathcal{E}})\to H^{2}(\operatorname{\mathcal{E}})$$ is defined as $$(S_{\operatorname{\mathcal{E}}}f)(z)=zf(z)$$ for every $f\in H^{2}(\operatorname{\mathcal{E}})$. Recall that the multiplicity of $S_{\operatorname{\mathcal{E}}}$ is the dimension of $\operatorname{\mathcal{E}}$. Note also that since $S_{\operatorname{\mathcal{E}}}$ is isometric, it gives rise to a Hilbert module structure on $H^{2}(\operatorname{\mathcal{E}})$. We now give a rather precise description of the group $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ where $(\operatorname{\mathcal{H}},T)$ is any Hilbert module. This result was originally proved in [carlson1995] (Proposition 3.1.1 and Theorem 3.2.1) for the shift of multiplicity one. However, a quick glance at the proof of Proposition 3.1.1 shows that it can be adapted to any multiplicity, while the more general version of Theorem 3.2.1 can be found in Lemma 2.1 of [carlson1997]. Theorem 2.4. Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module.
Then, an operator $X:\operatorname{\mathcal{H}}\to\operatorname{\mathcal{E}}$ gives rise to an element $[X]\in\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ if and only if there exists a constant $c>0$ such that $$\sum_{n=0}^{\infty}\|XT^{n}h\|^{2}\leq c\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. Moreover, for every $[X]\in\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ there exists an operator $Y:\operatorname{\mathcal{H}}\to\operatorname{\mathcal{E}}$ with the property that $[X]=[Y]$. We draw the reader’s attention to the fact that the group $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ is really of a “scalar” nature: it consists of elements $[X]$ where the operator $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ has range contained in the constant functions $\operatorname{\mathcal{E}}$. We use Theorem 2.4 throughout as a basis for comparison with our own results about $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})$. Finally, we end this section with a theorem that identifies the projective modules in the smaller category of contractive Hilbert modules (see [ferguson1997]). Theorem 2.5. Let $T\in B(\operatorname{\mathcal{H}})$ be similar to a contraction. The following statements are equivalent: (i) $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})=0$ for some separable Hilbert space $\operatorname{\mathcal{E}}$; (ii) the Hilbert module $(\operatorname{\mathcal{H}},T)$ is projective in the category of Hilbert modules similar to a contractive one; (iii) the operator $T$ is similar to an isometry. 3. A criterion for the projectivity of isometric Hilbert modules Throughout the paper we will assume that $\operatorname{\mathcal{E}}$ is a separable Hilbert space. The first result of this section is elementary. 
We record it here for convenience. Lemma 3.1. Let $X,\Lambda:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ be bounded operators defined as $$Xh=\sum_{n=0}^{\infty}z^{n}X_{n}h$$ and $$\Lambda h=\sum_{n=0}^{\infty}z^{n}L_{n}h$$ for every $h\in\operatorname{\mathcal{H}}$, where $L_{n},X_{n}\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ for every $n\geq 0$. Then, $X=S_{\operatorname{\mathcal{E}}}^{*}\Lambda-\Lambda T$ if and only if $X_{n}=L_{n+1}-L_{n}T$ for every $n\geq 0$. The following observation lies at the base of our investigations. Lemma 3.2. Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. Let $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ be a bounded operator defined as $$Xh=\sum_{n=0}^{\infty}z^{n}X_{n}h$$ for every $h\in\operatorname{\mathcal{H}}$, where $X_{n}\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ for every $n\geq 0$. Let $c>0$ and $L\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$. Then, there exists a bounded operator $\Lambda:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ such that $$X=S_{\operatorname{\mathcal{E}}}^{*}\Lambda-\Lambda T,$$ $$P_{\operatorname{\mathcal{E}}}\Lambda=-L$$ and $$\|\Lambda h\|^{2}\leq\|Lh\|^{2}+c\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$ if and only if $$\sum_{n=1}^{\infty}\left\|\left(\sum_{j=0}^{n-1}X_{n-1-j}T^{j}-LT^{n}\right)h\right\|^{2}\leq c\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. Proof. Assume first that $$\sum_{n=1}^{\infty}\left\|\left(\sum_{j=0}^{n-1}X_{n-1-j}T^{j}-LT^{n}\right)h\right\|^{2}\leq c\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. Set $L_{0}=-L$ and $$L_{n}=\sum_{j=0}^{n-1}X_{n-1-j}T^{j}-LT^{n}$$ for $n\geq 1$. 
Notice now that we have $$L_{0}T=-LT=L_{1}-X_{0}$$ and $$\displaystyle L_{n}T$$ $$\displaystyle=\sum_{j=0}^{n-1}X_{n-1-j}T^{j+1}-LT^{n+1}$$ $$\displaystyle=\sum_{j=1}^{n}X_{n-j}T^{j}-LT^{n+1}$$ $$\displaystyle=\sum_{j=0}^{n}X_{n-j}T^{j}-X_{n}-LT^{n+1}$$ $$\displaystyle=L_{n+1}-X_{n}$$ for $n\geq 1$, which shows that for every $n\geq 0$ we have $$X_{n}=L_{n+1}-L_{n}T.$$ Define $$\Lambda h=\sum_{n=0}^{\infty}z^{n}L_{n}h$$ for every $h\in\operatorname{\mathcal{H}}$. By Lemma 3.1, we see that $$S_{\operatorname{\mathcal{E}}}^{*}\Lambda-\Lambda T=X.$$ Moreover, by assumption we have for every $h\in\operatorname{\mathcal{H}}$ that $$\displaystyle\|\Lambda h\|^{2}$$ $$\displaystyle=\sum_{n=0}^{\infty}\|L_{n}h\|^{2}$$ $$\displaystyle=\|Lh\|^{2}+\sum_{n=1}^{\infty}\left\|\left(\sum_{j=0}^{n-1}X_{n-1-j}T^{j}-LT^{n}\right)h\right\|^{2}$$ $$\displaystyle\leq\|Lh\|^{2}+c\|h\|^{2}.$$ Conversely, assume that there exists a bounded linear operator $\Lambda:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ defined as $$\Lambda h=\sum_{n=0}^{\infty}z^{n}L_{n}h$$ for every $h\in\operatorname{\mathcal{H}}$ with the property that $$X=S_{\operatorname{\mathcal{E}}}^{*}\Lambda-\Lambda T,$$ $$L_{0}=-L$$ and $$\|\Lambda h\|^{2}\leq\|Lh\|^{2}+c\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. Then, by Lemma 3.1 we have that $$X_{n}=L_{n+1}-L_{n}T$$ for every $n\geq 0$, so that we find $$\displaystyle\sum_{j=0}^{n}X_{n-j}T^{j}$$ $$\displaystyle=\sum_{j=0}^{n}(L_{n-j+1}-L_{n-j}T)T^{j}$$ $$\displaystyle=\sum_{j=0}^{n}L_{n-j+1}T^{j}-\sum_{j=1}^{n+1}L_{n-j+1}T^{j}$$ $$\displaystyle=L_{n+1}-L_{0}T^{n+1}$$ $$\displaystyle=L_{n+1}+LT^{n+1}$$ for every $n\geq 0$. Consequently, $$\sum_{n=1}^{\infty}\left\|\left(LT^{n}-\sum_{j=0}^{n-1}X_{n-1-j}T^{j}\right)h\right\|^{2}=\sum_{n=1}^{\infty}\|L_{n}h\|^{2}=\|\Lambda h\|^{2}-\|Lh\|^{2}\leq c\|h\|^{2}$$ and the proof is complete. ∎ As suggested by this result, we make the following definition. Definition 3.3. 
Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module and let $\operatorname{\mathcal{E}}$ be a separable Hilbert space. We denote by $Z_{\operatorname{\mathcal{E}}}(T)$ the subspace of $B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ consisting of the operators $X\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ with the property that there exists a constant $c_{X}>0$ such that $$\sum_{n=0}^{\infty}\|XT^{n}h\|^{2}\leq c_{X}\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. By Theorem 2.4, we see that the set $Z_{\operatorname{\mathcal{E}}}(T)$ consists exactly of those operators $X:\operatorname{\mathcal{H}}\to\operatorname{\mathcal{E}}$ which give rise to an element $[X]\in\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$. We now give a criterion for an element $[X]$ of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})$ to be trivial when $X$ is particularly simple, namely of the type considered in Lemma 2.2. Theorem 3.4. Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. Let $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ be defined as $$Xh=\sum_{n=0}^{\infty}z^{n}X_{n}h$$ for every $h\in\operatorname{\mathcal{H}}$, where $X_{n}\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ for every $n\geq 0$. Assume that $S_{\operatorname{\mathcal{E}}}^{*N}XT^{N}=0$ for some integer $N\geq 0$. Then, the element $[X]$ of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})$ is trivial if and only if $$\sum_{j=0}^{N-1}X_{j}T^{N-1-j}\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T^{N}+Z_{\operatorname{\mathcal{E}}}(T).$$ Proof. 
By Lemma 3.2, we find that $[X]=0$ if and only if for some $L\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ we have $$\sum_{n=1}^{\infty}\left\|\left(LT^{n}+\sum_{j=0}^{n-1}X_{n-1-j}T^{j}\right)h\right\|^{2}\leq c\|h\|^{2}$$ for some constant $c>0$ and every $h\in\operatorname{\mathcal{H}}$. Now, the condition $S_{\operatorname{\mathcal{E}}}^{*N}XT^{N}=0$ implies that $X_{n}T^{m}=0$ if $n\geq N$ and $m\geq N$. Thus, $X_{n-1-j}T^{j}\neq 0$ only if $j\leq N-1$ or $j\geq n-N$. Therefore, for $n\geq 2N$, we can write $$\displaystyle\sum_{j=0}^{n-1}X_{n-1-j}T^{j}$$ $$\displaystyle=\sum_{j=0}^{N-1}X_{n-1-j}T^{j}+\sum_{j=n-N}^{n-1}X_{n-1-j}T^{j}$$ $$\displaystyle=\sum_{j=0}^{N-1}(X_{n-1-j}T^{j}+X_{j}T^{n-1-j}).$$ Notice now that $$\displaystyle\sum_{n=2N}^{\infty}\left\|\left(\sum_{j=0}^{N-1}X_{n-1-j}T^{j}\right)h\right\|^{2}$$ $$\displaystyle\leq N\sum_{n=2N}^{\infty}\sum_{j=0}^{N-1}\left\|X_{n-1-j}T^{j}h\right\|^{2}$$ $$\displaystyle\leq N\sum_{j=0}^{N-1}\|XT^{j}h\|^{2}$$ $$\displaystyle\leq N^{2}\|X\|^{2}C_{T}^{2}\|h\|^{2}$$ where as usual $C_{T}>0$ is a constant satisfying $$\|\varphi(T)\|\leq C_{T}\|\varphi\|_{\infty}$$ for every $\varphi\in A(\operatorname{\mathbb{D}})$. Thus, $[X]=0$ if and only if $$\sum_{n=2N}^{\infty}\left\|\left(LT^{n}+\sum_{j=0}^{N-1}X_{j}T^{n-1-j}\right)h\right\|^{2}\leq c\|h\|^{2}$$ which is in turn equivalent to $$\sum_{n=2N}^{\infty}\left\|\left(LT^{N}+\sum_{j=0}^{N-1}X_{j}T^{N-1-j}\right)T^{n-N}h\right\|^{2}\leq c\|h\|^{2}$$ and thus to $$LT^{N}+\sum_{j=0}^{N-1}X_{j}T^{N-1-j}\in Z_{\operatorname{\mathcal{E}}}(T).$$ ∎ Definition 3.5. Let $\operatorname{\mathcal{E}}$ be a separable Hilbert space. 
Given two Hilbert modules $(H^{2}(\operatorname{\mathcal{E}}),T_{1})$ and $(\operatorname{\mathcal{H}},T_{2})$, we define the polynomial subgroup $\operatorname{Ext}^{1}_{\mathrm{poly}}(T_{2},T_{1})$ of $\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(T_{2},T_{1})$ to be the subgroup of elements $[X]$ admitting a representative $X$ such that $S_{\operatorname{\mathcal{E}}}^{*N}XT_{2}^{N}=0$ for some integer $N\geq 0$. We are primarily interested in the case of $T_{1}=S_{\operatorname{\mathcal{E}}}$ or $T_{1}=S_{\operatorname{\mathcal{E}}}^{*}$. In particular, we obtain the following consequence of Theorem 3.4. Corollary 3.6. Let $S_{\operatorname{\mathcal{E}}}:H^{2}(\operatorname{\mathcal{E}})\to H^{2}(\operatorname{\mathcal{E}})$ be the unilateral shift and let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. Then $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$$ if and only if $$\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})=0.$$ Proof. Note that $Z_{\operatorname{\mathcal{E}}}(T)T\subset Z_{\operatorname{\mathcal{E}}}(T)$. Thus, if $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$$ then by using an iterative argument we find $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T^{N}+Z_{\operatorname{\mathcal{E}}}(T)=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$$ for every $N\geq 0$, and Theorem 3.4 immediately implies that $$\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})=0.$$ Conversely, assume the polynomial subgroup vanishes and fix $X\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$. 
In light of the equality $S^{*}_{\operatorname{\mathcal{E}}}X=0$, Lemma 2.2 implies that the operator $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ gives rise to an element $[X]$ in $\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})$ and by Theorem 3.4 we find $$X\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T).$$ Since $X\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ was arbitrary, we see that $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}}).$$ ∎ Corollary 3.7. Let $S_{\operatorname{\mathcal{E}}}:H^{2}(\operatorname{\mathcal{E}})\to H^{2}(\operatorname{\mathcal{E}})$ be the unilateral shift and let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. If $$\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}})=\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0,$$ then $T$ is bounded below. Conversely, if $T$ is bounded below, then $$\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0.$$ Proof. Assume first that $T$ is bounded below. Then, $T$ is left-invertible so that $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$$ and we obtain $$\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0$$ by Corollary 3.6. Conversely, assume $$\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}})=\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0.$$ By Theorem 2.4 we know that $X\in Z_{\operatorname{\mathcal{E}}}(T)$ if and only if $[X]\in\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}})$. Since this group is assumed to be trivial, we see that $X\in Z_{\operatorname{\mathcal{E}}}(T)$ implies $X=S_{\operatorname{\mathcal{E}}}L-LT$ for some bounded operator $L:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$. 
Now, the range of $X$ lies in $\operatorname{\mathcal{E}}$, so we obtain $X=-P_{\operatorname{\mathcal{E}}}LT$, whence $Z_{\operatorname{\mathcal{E}}}(T)\subset B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T$. Using $\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0$, Corollary 3.6 implies $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)\subset B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T$$ and thus $T$ is bounded below. ∎ It is known that $T$ being bounded below is not sufficient for the group $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ to vanish, so the preceding corollary cannot be improved to an equivalence. In fact, in the case where $T$ is a contraction, the vanishing of this extension group is equivalent to the operator $T$ being similar to an isometry by Theorem 2.5. We obtain another consequence of Corollary 3.6, which applies in particular to self-adjoint contractions with closed range. Corollary 3.8. Let $S_{\operatorname{\mathcal{E}}}:H^{2}(\operatorname{\mathcal{E}})\to H^{2}(\operatorname{\mathcal{E}})$ be the unilateral shift and let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. If $$T\operatorname{\mathcal{H}}\subset(\ker T)^{\perp}\subset T^{*}\operatorname{\mathcal{H}}$$ then $$\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})=0.$$ Proof. Let $X\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$ and set $X_{1}=XP_{(\ker T)^{\perp}}$ and $X_{2}=XP_{\ker T}$ (here $P_{M}$ denotes the orthogonal projection onto the closed subspace $M\subset\operatorname{\mathcal{H}}$). Then, we have that the range of $X_{1}^{*}$ is contained in $(\ker T)^{\perp}\subset T^{*}\operatorname{\mathcal{H}}$, and thus $X_{1}\in B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T$ by Douglas’s lemma. 
Moreover, since $T\operatorname{\mathcal{H}}\subset(\ker T)^{\perp}$, we get $X_{2}T^{k}=0$ for every $k\geq 1$. Therefore, $X_{2}\in Z_{\operatorname{\mathcal{E}}}(T)$. The decomposition $X=X_{1}+X_{2}$ shows that $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})=B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)$$ and an application of Corollary 3.6 finishes the proof. ∎ We close this section with an example. In view of Corollary 3.6, one way of establishing that $(H^{2}(\operatorname{\mathcal{E}}),S_{\operatorname{\mathcal{E}}})$ is not a projective Hilbert module would be to find a polynomially bounded operator $T\in B(\operatorname{\mathcal{H}})$ that satisfies $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T+Z_{\operatorname{\mathcal{E}}}(T)\neq B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}}).$$ It is easy to exhibit a polynomially bounded operator that satisfies the weaker condition $$B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T\cup Z_{\operatorname{\mathcal{E}}}(T)\neq B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}}).$$ Let $\operatorname{\mathcal{E}}=\operatorname{\mathbb{C}}$, $\operatorname{\mathcal{H}}=H^{2}\oplus L^{2}$ and $T=S_{\operatorname{\mathbb{C}}}^{*}\oplus U$ where $U$ is the unitary operator of multiplication by the variable $e^{it}$ on $L^{2}$. Define $X_{1}:H^{2}\to\operatorname{\mathbb{C}}$ as $$X_{1}h=h(0)$$ for every $h\in H^{2}$. Choose $\xi\in L^{2}$ to be a positive function with the property that $\xi^{2}\notin L^{2}$ and define $X_{2}:L^{2}\to\operatorname{\mathbb{C}}$ as $$X_{2}f=\langle f,\xi\rangle$$ for every $f\in L^{2}$. Set $X(h_{1}\oplus h_{2})=X_{1}h_{1}+X_{2}h_{2}$. We have that $X_{1}1\neq 0$, whence $X$ does not vanish on $\ker T$ and $X\notin B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T$. 
Moreover, the sum $\sum_{n=0}^{\infty}\|X_{2}U^{n}\xi\|^{2}$ is infinite since $$\sum_{n=0}^{\infty}\|X_{2}U^{n}\xi\|^{2}=\sum_{n=0}^{\infty}|\langle\xi^{2},e^{int}\rangle|^{2}\geq\frac{1}{2}\sum_{n=-\infty}^{\infty}|\langle\xi^{2},e^{int}\rangle|^{2}$$ and $\xi^{2}\notin L^{2}$. Therefore, $X\notin Z_{\operatorname{\mathcal{E}}}(T)$ and we conclude that $$X\notin B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T\cup Z_{\operatorname{\mathcal{E}}}(T).$$ In particular, $[X]$ defines an “almost” non-trivial element of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathbb{C}}}^{*})$. Indeed, since $S_{\operatorname{\mathbb{C}}}^{*}X=0$ we have that $X$ gives rise to an element $[X]$ of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathbb{C}}}^{*})$ by Lemma 2.2. Assume that $\Lambda:\operatorname{\mathcal{H}}\to H^{2}$ satisfies $\Lambda\operatorname{\mathcal{H}}\subset zH^{2}$. We see via Lemma 3.2 (applied with $L=0$) that $$X=S_{\operatorname{\mathbb{C}}}^{*}\Lambda-\Lambda T$$ is equivalent to $X\in Z_{\operatorname{\mathcal{E}}}(T)$. Therefore, $$X\neq S_{\operatorname{\mathbb{C}}}^{*}\Lambda-\Lambda T$$ whenever $\Lambda\operatorname{\mathcal{H}}\subset zH^{2}$. Of course, this does not imply that $[X]$ is a non-trivial element of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathbb{C}}}^{*})$, but it shows that the equation $$X=S_{\operatorname{\mathbb{C}}}^{*}\Lambda-\Lambda T$$ has no solution if we impose the extra condition that the range of $\Lambda$ should lie entirely in the codimension one subspace $zH^{2}$. 4. Explicit calculations of the subspace Z The goal of this section is to identify the spaces $Z_{\operatorname{\mathcal{E}}}(S^{*}_{\operatorname{\mathcal{F}}})$ and $Z_{\operatorname{\mathcal{E}}}(S_{\operatorname{\mathcal{F}}})$ for separable Hilbert spaces $\operatorname{\mathcal{E}},\operatorname{\mathcal{F}}$ (recall Definition 3.3). First we set up some notation. 
Given a bounded operator $X:H^{2}(\operatorname{\mathcal{F}})\to\operatorname{\mathcal{E}}$, we can write $$X^{*}e=\sum_{n=0}^{\infty}z^{n}X_{n}^{*}e$$ for every $e\in\operatorname{\mathcal{E}}$, where $X_{n}^{*}:\operatorname{\mathcal{E}}\to\operatorname{\mathcal{F}}$ for each $n$. In particular, $X_{n}:\operatorname{\mathcal{F}}\to\operatorname{\mathcal{E}}$ and $$Xh=\sum_{n=0}^{\infty}X_{n}\widehat{h}(n)$$ where $\widehat{h}(n)\in\operatorname{\mathcal{F}}$ denotes the $n$-th Fourier coefficient of the function $h\in H^{2}(\operatorname{\mathcal{F}})$. Associated to $X$, there is the Toeplitz operator $$T_{X}:H^{2}(\operatorname{\mathcal{F}})\to H^{2}(\operatorname{\mathcal{E}})$$ defined as $$T_{X}h=\sum_{n=0}^{\infty}z^{n}\left(\sum_{m=n}^{\infty}X_{m-n}\widehat{h}(m)\right)$$ and the Hankel operator $$H_{X}:H^{2}(\operatorname{\mathcal{F}})\to H^{2}(\operatorname{\mathcal{E}})$$ defined as $$H_{X}h=\sum_{n=0}^{\infty}z^{n}\left(\sum_{m=0}^{\infty}X_{m+n}\widehat{h}(m)\right).$$ Typically, $T_{X}$ and $H_{X}$ are unbounded operators, but they are always defined on the dense subset of polynomials. Proposition 4.1. If $S_{\operatorname{\mathcal{F}}}:H^{2}(\operatorname{\mathcal{F}})\to H^{2}(\operatorname{\mathcal{F}})$ is the unilateral shift, then $Z_{\operatorname{\mathcal{E}}}(S_{\operatorname{\mathcal{F}}}^{*})$ consists of the operators $X\in B(H^{2}(\operatorname{\mathcal{F}}),\operatorname{\mathcal{E}})$ with the property that $T_{X}$ is bounded, while $Z_{\operatorname{\mathcal{E}}}(S_{\operatorname{\mathcal{F}}})$ consists of the operators $X\in B(H^{2}(\operatorname{\mathcal{F}}),\operatorname{\mathcal{E}})$ with the property that $H_{X}$ is bounded. Proof. 
We first observe that $$\displaystyle\sum_{n=0}^{\infty}\|XS_{\operatorname{\mathcal{F}}}^{*n}h\|^{2}$$ $$\displaystyle=\sum_{n=0}^{\infty}\left\|\sum_{m=0}^{\infty}X_{m}\widehat{h}(m+n)\right\|^{2}$$ $$\displaystyle=\sum_{n=0}^{\infty}\left\|\sum_{m=n}^{\infty}X_{m-n}\widehat{h}(m)\right\|^{2}$$ $$\displaystyle=\|T_{X}h\|^{2}$$ and $$\displaystyle\sum_{n=0}^{\infty}\|XS_{\operatorname{\mathcal{F}}}^{n}h\|^{2}$$ $$\displaystyle=\sum_{n=0}^{\infty}\left\|\sum_{m=n}^{\infty}X_{m}\widehat{h}(m-n)\right\|^{2}$$ $$\displaystyle=\sum_{n=0}^{\infty}\left\|\sum_{m=0}^{\infty}X_{m+n}\widehat{h}(m)\right\|^{2}$$ $$\displaystyle=\|H_{X}h\|^{2}.$$ The result now follows directly from the definition of the spaces $Z_{\operatorname{\mathcal{E}}}(S_{\operatorname{\mathcal{F}}}^{*})$ and $Z_{\operatorname{\mathcal{E}}}(S_{\operatorname{\mathcal{F}}})$. ∎ It is well-known (see Chapter 5 of [bercovici1988]) that $T_{X}$ is bounded if and only if the function $$\Phi_{X}(z)=\sum_{n=0}^{\infty}z^{n}X_{n}$$ belongs to $H^{\infty}(B(\operatorname{\mathcal{F}},\operatorname{\mathcal{E}}))$, the space of weakly holomorphic bounded functions on $\operatorname{\mathbb{D}}$ with values in $B(\operatorname{\mathcal{F}},\operatorname{\mathcal{E}})$. Furthermore, $H_{X}$ is bounded if and only if we can find for every integer $n<0$ an operator $X_{n}:\operatorname{\mathcal{F}}\to\operatorname{\mathcal{E}}$ with the property that the function $$\sum_{n=-\infty}^{\infty}z^{n}X_{n}$$ belongs to $L^{\infty}(B(\operatorname{\mathcal{F}},\operatorname{\mathcal{E}}))$, the space of essentially bounded weakly measurable functions from $\operatorname{\mathbb{T}}$ into $B(\operatorname{\mathcal{F}},\operatorname{\mathcal{E}})$ (this is usually referred to as the Nehari-Page theorem, see [page1970]). In light of these remarks, let us examine what Proposition 4.1 says when $\operatorname{\mathcal{E}}=\operatorname{\mathbb{C}}$. 
In this case, any operator $X\in B(H^{2}(\operatorname{\mathcal{F}}),\operatorname{\mathcal{E}})$ acts as $Xh=\langle h,\xi\rangle$ for some fixed $\xi\in H^{2}(\operatorname{\mathcal{F}})$ and thus $$X_{n}\widehat{h}(n)=\langle\widehat{h}(n),\widehat{\xi}(n)\rangle.$$ We find $$\displaystyle T_{X}h$$ $$\displaystyle=\sum_{n=0}^{\infty}\left(\sum_{m=n}^{\infty}\langle\widehat{h}(m),\widehat{\xi}(m-n)\rangle\right)z^{n}$$ and $$\displaystyle H_{X}h$$ $$\displaystyle=\sum_{n=0}^{\infty}\left(\sum_{m=0}^{\infty}\langle\widehat{h}(m),\widehat{\xi}(m+n)\rangle\right)z^{n}$$ for every $h\in H^{2}(\operatorname{\mathcal{F}})$. This last equality shows that Corollary 3.1.6 in [carlson1995] follows from Proposition 4.1 upon taking $\operatorname{\mathcal{E}}=\operatorname{\mathbb{C}}$. Of course, this is to be expected since $X\in Z_{\operatorname{\mathbb{C}}}(S_{\operatorname{\mathcal{F}}})$ is equivalent to the fact that $X$ gives rise to an element $[X]$ of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(S_{\operatorname{\mathcal{F}}},S_{\operatorname{\mathbb{C}}})$, by Theorem 2.4. In addition, we see that $X\in Z_{\operatorname{\mathbb{C}}}(S^{*}_{\operatorname{\mathcal{F}}})$ if and only if $\xi\in H^{\infty}(\operatorname{\mathcal{F}})$, while $X\in Z_{\operatorname{\mathbb{C}}}(S_{\operatorname{\mathcal{F}}})$ if and only if there exists another holomorphic function $\eta$ with the property that $\xi+\overline{\eta}\in L^{\infty}(\operatorname{\mathcal{F}})$. 5. Vanishing of the polynomial subgroup in the case of contractions The goal of this section is to show that $\operatorname{Ext}_{\mathrm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0$ whenever $T$ is a contraction. To achieve this, we make use of the functional model of a completely non-unitary contraction, which we briefly recall (see [nagy2010] or [bercovici1988] for greater detail). 
Let $\operatorname{\mathcal{E}},\operatorname{\mathcal{E}}_{*}$ be separable Hilbert spaces and let $\Theta\in H^{\infty}(B(\operatorname{\mathcal{E}},\operatorname{\mathcal{E}}_{*}))$ be a contractive (weakly) holomorphic function. Define $\Delta\in L^{\infty}(B(\operatorname{\mathcal{E}}))$ by $$\Delta(e^{it})=\sqrt{I-\Theta(e^{it})^{*}\Theta(e^{it})}.$$ If we set $$M(\Theta)=\{\Theta u\oplus\Delta u:u\in H^{2}(\operatorname{\mathcal{E}})\},$$ then the space $H(\Theta)$ is defined as $$H(\Theta)=(H^{2}(\operatorname{\mathcal{E}}_{*})\oplus\overline{\Delta L^{2}(\operatorname{\mathcal{E}})})\ominus M(\Theta)$$ and the model operator $S(\Theta)\in B(H(\Theta))$ is given by $$S(\Theta)=P_{H(\Theta)}(S\oplus U)|H(\Theta)$$ where $S=S_{\operatorname{\mathcal{E}}_{*}}$ is the unilateral shift on $H^{2}(\operatorname{\mathcal{E}}_{*})$ and $U$ is the unitary operator of multiplication by the variable $e^{it}$ on $L^{2}(\operatorname{\mathcal{E}})$. In order to proceed, we require two technical lemmas which are most likely well-known. We provide the calculations for the reader’s convenience. Lemma 5.1. Let $e_{*}\in\operatorname{\mathcal{E}}_{*}$. Then, $$P_{M(\Theta)}(e_{*}\oplus 0)=\Theta\Theta(0)^{*}e_{*}\oplus\Delta\Theta(0)^{*}e_{*}$$ and $$P_{H(\Theta)}(e_{*}\oplus 0)=\left((I-\Theta\Theta(0)^{*})e_{*}\right)\oplus\left(-\Delta\Theta(0)^{*}e_{*}\right).$$ Proof. 
For any $u\in H^{2}(\operatorname{\mathcal{E}})$ we have $$\langle e_{*}\oplus 0,\Theta u\oplus\Delta u\rangle_{H^{2}(\operatorname{\mathcal{E}}_{*})\oplus L^{2}(\operatorname{\mathcal{E}})}=\langle\Theta(0)^{*}e_{*},u(0)\rangle_{\operatorname{\mathcal{E}}}$$ and $$\displaystyle\langle\Theta\Theta(0)^{*}e_{*}\oplus\Delta\Theta(0)^{*}e_{*},\Theta u\oplus\Delta u\rangle_{H^{2}(\operatorname{\mathcal{E}}_{*})\oplus L^{2}(\operatorname{\mathcal{E}})}$$ $$\displaystyle=\langle\Theta^{*}\Theta\Theta(0)^{*}e_{*},u\rangle_{H^{2}(\operatorname{\mathcal{E}})}+\langle\Delta^{2}\Theta(0)^{*}e_{*},u\rangle_{L^{2}(\operatorname{\mathcal{E}})}$$ $$\displaystyle=\langle\Theta^{*}\Theta\Theta(0)^{*}e_{*},u\rangle_{L^{2}(\operatorname{\mathcal{E}})}+\langle(I-\Theta^{*}\Theta)\Theta(0)^{*}e_{*},u\rangle_{L^{2}(\operatorname{\mathcal{E}})}$$ $$\displaystyle=\langle\Theta(0)^{*}e_{*},u\rangle_{L^{2}(\operatorname{\mathcal{E}})}$$ $$\displaystyle=\langle\Theta(0)^{*}e_{*},u(0)\rangle_{\operatorname{\mathcal{E}}}$$ which shows the first equality, and the second follows immediately. ∎ Lemma 5.2. The range of the operator $S(\Theta)$ is $$\{f_{1}\oplus f_{2}\in H(\Theta):f_{1}(0)\in\Theta(0)\operatorname{\mathcal{E}}\}.$$ Proof. Assume that $$f_{1}\oplus f_{2}=S(\Theta)(v_{1}\oplus v_{2})$$ for $v_{1}\oplus v_{2}\in H(\Theta)$. Then, we can write $$f_{1}\oplus f_{2}=zv_{1}\oplus e^{it}v_{2}+\Theta u\oplus\Delta u$$ for some $u\in H^{2}(\operatorname{\mathcal{E}})$, and therefore $$f_{1}(0)=\Theta(0)u(0)$$ lies in the range of $\Theta(0)$. Conversely, pick $f=f_{1}\oplus f_{2}\in H(\Theta)$ such that $f_{1}(0)=\Theta(0)e$ for some $e\in\operatorname{\mathcal{E}}$. 
Then, the function $$f_{1}-\Theta e\in H^{2}(\operatorname{\mathcal{E}}_{*})$$ vanishes at $z=0$, so we can find another function $v_{1}\in H^{2}(\operatorname{\mathcal{E}}_{*})$ with the property that $$f_{1}-\Theta e=zv_{1}.$$ Since $U\Delta=\Delta U$, we find that the function $$v_{2}=U^{*}(f_{2}-\Delta e)$$ lies in $\overline{\Delta L^{2}(\operatorname{\mathcal{E}})}$ and satisfies $$Uv_{2}=f_{2}-\Delta e.$$ We see that $$\displaystyle P_{H(\Theta)}(Sv_{1}\oplus Uv_{2})$$ $$\displaystyle=P_{H(\Theta)}((f_{1}-\Theta e)\oplus(f_{2}-\Delta e))$$ $$\displaystyle=P_{H(\Theta)}(f_{1}\oplus f_{2})$$ $$\displaystyle=f_{1}\oplus f_{2}$$ and therefore $$f_{1}\oplus f_{2}=S(\Theta)P_{H(\Theta)}(v_{1}\oplus v_{2})$$ lies in the range of $S(\Theta)$. ∎ The following is the crucial technical step in the proof of the main result. Theorem 5.3. Let $\operatorname{\mathcal{F}},\operatorname{\mathcal{F}}_{*},\operatorname{\mathcal{E}}$ be separable Hilbert spaces. Let $\Theta\in H^{\infty}(B(\operatorname{\mathcal{F}},\operatorname{\mathcal{F}}_{*}))$ be a contractive holomorphic function. Then, $$B(H(\Theta),\operatorname{\mathcal{E}})=B(H(\Theta),\operatorname{\mathcal{E}})S(\Theta)^{*}+Z_{\operatorname{\mathcal{E}}}(S(\Theta)^{*}).$$ Proof. Let $X\in B(H(\Theta),\operatorname{\mathcal{E}})$. Define $X_{1}:H(\Theta)\to\operatorname{\mathcal{E}}$ as $$X_{1}h=XP_{H(\Theta)}P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{h}(0)$$ where for a function $h\in H(\Theta)$ we define $\widehat{h}(n)$ to be its $n$-th Fourier coefficient, which lies in $\operatorname{\mathcal{F}}_{*}\oplus\operatorname{\mathcal{F}}$. 
Given $e\in\operatorname{\mathcal{E}}$ and $h=h_{1}\oplus h_{2}\in H(\Theta)$, we have $$\displaystyle\left\langle X_{1}h,e\right\rangle_{\operatorname{\mathcal{E}}}$$ $$\displaystyle=\left\langle XP_{H(\Theta)}P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{h}(0),e\right\rangle_{\operatorname{\mathcal{E}}}$$ $$\displaystyle=\left\langle h_{1}(0)\oplus 0,X^{*}e\right\rangle_{H^{2}(\operatorname{\mathcal{F}}_{*})\oplus L^{2}(\operatorname{\mathcal{F}})}$$ $$\displaystyle=\left\langle\widehat{h}(0),P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{X^{*}e}(0)\right\rangle_{\operatorname{\mathcal{F}}_{*}\oplus\operatorname{\mathcal{F}}}$$ $$\displaystyle=\left\langle h,P_{H(\Theta)}P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{X^{*}e}(0)\right\rangle_{H(\Theta)}$$ whence $$X_{1}^{*}e=P_{H(\Theta)}P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{X^{*}e}(0).$$ Set $X_{2}=X-X_{1}$ and $\widehat{X^{*}e}(0)=f_{*}\oplus f\in\operatorname{\mathcal{F}}_{*}\oplus\operatorname{\mathcal{F}}$. Using Lemma 5.1, we find $$X_{1}^{*}e=(I-\Theta\Theta(0)^{*})f_{*}\oplus\left(-\Delta\Theta(0)^{*}f_{*}\right).$$ A straightforward verification using Lemma 5.2 establishes that the range of $X_{2}^{*}$ is contained in the range of $S(\Theta)$. By Douglas’s Lemma, this implies in turn that $$X_{2}\in B(H(\Theta),\operatorname{\mathcal{E}})S(\Theta)^{*}.$$ Since $X=X_{1}+X_{2}$, it remains only to check that $X_{1}\in Z_{\operatorname{\mathcal{E}}}(S(\Theta)^{*})$. First, we note that for $h=h_{1}\oplus h_{2}\in H(\Theta)$ we have $$S(\Theta)^{*n}h=(P_{H^{2}(\operatorname{\mathcal{F}}_{*})}\overline{z}^{n}h_{1})\oplus e^{-int}h_{2}$$ and the Fourier coefficient of order zero of $S(\Theta)^{*n}h$ is therefore equal to $\widehat{h}(n)$. 
Consequently, $$X_{1}S(\Theta)^{*n}h=XP_{H(\Theta)}P_{\operatorname{\mathcal{F}}_{*}\oplus\{0\}}\widehat{h}(n)$$ and $$\sum_{n=0}^{\infty}\|X_{1}S(\Theta)^{*n}h\|^{2}\leq\|X\|^{2}\sum_{n=0}^{\infty}\|\widehat{h}(n)\|^{2}\leq\|X\|^{2}\|h\|^{2}$$ so that $X_{1}\in Z_{\operatorname{\mathcal{E}}}(S(\Theta)^{*})$. ∎ We now come to the main result of the paper (recall Definition 3.5). Theorem 5.4. Let $\operatorname{\mathcal{E}}$ be a separable Hilbert space and let $S_{\operatorname{\mathcal{E}}}:H^{2}(\operatorname{\mathcal{E}})\to H^{2}(\operatorname{\mathcal{E}})$ be the unilateral shift. Then, $\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})=0$ for every operator $T$ which is similar to a contraction. Proof. Since extension groups are invariant under similarity, we may assume that $T\in B(\operatorname{\mathcal{H}})$ is a contraction. Then, it is well-known that there exists a reducing subspace $M\subset\operatorname{\mathcal{H}}$ with the property that $T|M$ is completely non-unitary and $T|M^{\perp}$ is unitary. According to this decomposition, it is easy to verify that any bounded operator $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ giving rise to an element $[X]\in\operatorname{Ext}^{1}_{\mathrm{poly}}(T,S_{\operatorname{\mathcal{E}}}^{*})$ can be written as $X=(X_{1},X_{2})$, where $[X_{1}]\in\operatorname{Ext}^{1}_{\mathrm{poly}}(T|M,S_{\operatorname{\mathcal{E}}}^{*})$ and $[X_{2}]\in\operatorname{Ext}^{1}_{\mathrm{poly}}(T|M^{\perp},S_{\operatorname{\mathcal{E}}}^{*})$. Using Theorem 2.3 we see that $[X]=0$ if and only if $[X_{1}]=0$. Therefore, we may assume that $T$ (and hence $T^{*}$) is completely non-unitary. By Theorem VI.2.3 of [nagy2010], we know that $T^{*}$ is unitarily equivalent to $S(\Theta)$ for some contractive operator-valued holomorphic function $\Theta$, so for our purposes we may as well take $T^{*}$ to be equal to $S(\Theta)$. 
In light of Theorem 5.3, we find $$B(H(\Theta),\operatorname{\mathcal{E}})=B(H(\Theta),\operatorname{\mathcal{E}})S(\Theta)^{*}+Z_{\operatorname{\mathcal{E}}}(S(\Theta)^{*})$$ and thus an application of Corollary 3.6 completes the proof. ∎ Theorem 2.5 and Theorem 5.4 illustrate a clear difference between $S_{\operatorname{\mathcal{E}}}$ and $S_{\operatorname{\mathcal{E}}}^{*}$ on the level of extension groups: $\operatorname{Ext}_{\rm{poly}}^{1}(T,S_{\operatorname{\mathcal{E}}}^{*})=0$ for every contraction $T$, while $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})=0$ only when the contraction $T$ is similar to an isometry. The reader may object that we are considering the polynomial subgroup in one case and the full group in the other. In some sense, however, there is no discrepancy between the two settings. Indeed, by Theorem 2.4, every element in $\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(T,S_{\operatorname{\mathcal{E}}})$ can be represented by an operator $X:\operatorname{\mathcal{H}}\to H^{2}(\operatorname{\mathcal{E}})$ with range contained in $\operatorname{\mathcal{E}}$. In particular, we see that $S_{\operatorname{\mathcal{E}}}^{*}X=0$, and thus $X$ is a polynomial operator. Therefore, the group $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(\cdot,S_{\operatorname{\mathcal{E}}})$ coincides with $\operatorname{Ext}_{\rm{poly}}^{1}(\cdot,S_{\operatorname{\mathcal{E}}})$. This is not the case if $S_{\operatorname{\mathcal{E}}}$ is replaced by $S_{\operatorname{\mathcal{E}}}^{*}$, as is shown in Section 6. 6. The case of non-contractive modules: Pisier’s counterexample Most of the vanishing results for extension groups obtained in the previous sections concern extensions of the unilateral shift (and its adjoint) by contractive modules. It is natural to wonder what happens for extensions by polynomially bounded operators which are not similar to a contraction.
Unfortunately, few examples of such operators are known. In fact, only the family of counterexamples introduced by Pisier in [pisier1997] is available. Let us recall the details of his construction here. Let $S_{\operatorname{\mathcal{F}}}:H^{2}(\operatorname{\mathcal{F}})\to H^{2}(\operatorname{\mathcal{F}})$ be the unilateral shift with infinite multiplicity, where $$\operatorname{\mathcal{F}}=\bigoplus_{n=1}^{\infty}(\operatorname{\mathbb{C}}^{2})^{\otimes n}.$$ Define $$V=\left(\begin{array}[]{cc}1&0\\ 0&-1\end{array}\right)$$ and $$D=\left(\begin{array}[]{cc}0&0\\ 1&0\end{array}\right).$$ For $0\leq k\leq n-1$, define $C_{k,n}:(\operatorname{\mathbb{C}}^{2})^{\otimes n}\to(\operatorname{\mathbb{C}}^{2})^{\otimes n}$ as $$C_{k,n}=V^{\otimes k}\otimes D\otimes I^{\otimes(n-k-1)}$$ and for any $k\geq 0$ set $$W_{k}=\bigoplus_{n=k+1}^{\infty}C_{k,n}$$ which acts on $\operatorname{\mathcal{F}}$. It is well-known (see [davidson1997] or [paulsen2002]) that the sequence of operators $\{W_{k}\}_{k}\subset B(\operatorname{\mathcal{F}})$ satisfies the so-called canonical anticommutation relations. Given a sequence $\alpha=\{\alpha_{n}\}_{n=0}^{\infty}\subset\operatorname{\mathbb{C}}$, we define a Hankel operator $X_{\alpha}$ acting on $H^{2}(\operatorname{\mathcal{F}})$ by $$X_{\alpha}=(\alpha_{i+j}W_{i+j})_{i,j=0}^{\infty}$$ and we set $$R(X_{\alpha})=\left(\begin{array}[]{cc}S_{\operatorname{\mathcal{F}}}^{*}&X_{\alpha}\\ 0&S_{\operatorname{\mathcal{F}}}\end{array}\right).$$ The following result can be found in [davidson1997] and [ricard2002]. Theorem 6.1. The operator $R(X_{\alpha})$ is polynomially bounded if and only if $$\sup_{k\geq 0}(k+1)^{2}\sum_{i=k}^{\infty}|\alpha_{i}|^{2}$$ is finite, and it is similar to a contraction if and only if $$\sum_{k=0}^{\infty}(k+1)^{2}|\alpha_{k}|^{2}$$ is finite.
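As a reminder (an added aside, not part of the original argument), the canonical anticommutation relations satisfied by the family $\{W_{k}\}_{k}$ are understood here in their standard normalized form: for all $j,k\geq 0$,

```latex
W_{j}W_{k} + W_{k}W_{j} = 0,
\qquad
W_{j}W_{k}^{*} + W_{k}^{*}W_{j} = \delta_{jk}\, I .
```

In particular, taking $j=k$ in the first relation gives $W_{k}^{2}=0$, and the second relation shows that each $W_{k}$ is a partial isometry.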
We noted at the end of Section 5 that $$\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(\cdot,S_{\operatorname{\mathcal{E}}})=\operatorname{Ext}^{1}_{\rm{poly}}(\cdot,S_{\operatorname{\mathcal{E}}}).$$ However, things are different for $\operatorname{Ext}^{1}_{A(\operatorname{\mathbb{D}})}(\cdot,S_{\operatorname{\mathcal{E}}}^{*})$. Indeed, if the sequence $\alpha=\{\alpha_{n}\}_{n}$ is chosen such that $R(X_{\alpha})$ is polynomially bounded but not similar to a contraction, then $[X_{\alpha}]$ is a non-trivial element of $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(S_{\operatorname{\mathcal{F}}},S_{\operatorname{\mathcal{F}}}^{*})$. In particular, Theorem 5.4 implies that $$\operatorname{Ext}_{\rm{poly}}^{1}(S_{\operatorname{\mathcal{F}}},S_{\operatorname{\mathcal{F}}}^{*})\neq\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(S_{\operatorname{\mathcal{F}}},S_{\operatorname{\mathcal{F}}}^{*}).$$ We do not know whether equality holds if we require that the shift be of finite multiplicity. The remainder of this section is dedicated to the study of the group $$\operatorname{Ext}_{\rm{poly}}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}})=\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}}).$$ Of particular interest is the case where $R(X_{\alpha})$ is not similar to a contraction, which lies outside the reach of Theorem 2.5 and about which little is known. We start by giving an alternative formulation of Corollary 3.6 adapted to the unilateral shift of multiplicity one. For a Hilbert module $(\operatorname{\mathcal{H}},T)$, define $Z(T)\subset\operatorname{\mathcal{H}}$ to be the set consisting of those vectors $x\in\operatorname{\mathcal{H}}$ with the property that there exists a constant $c_{x}>0$ such that $$\sum_{n=0}^{\infty}|\langle h,T^{*n}x\rangle|^{2}\leq c_{x}\|h\|^{2}$$ for every $h\in\operatorname{\mathcal{H}}$. Lemma 6.2.
Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module and let $S_{\operatorname{\mathbb{C}}}:H^{2}\to H^{2}$ be the unilateral shift with multiplicity one. Then $$T^{*}\operatorname{\mathcal{H}}+Z(T)=\operatorname{\mathcal{H}}$$ if and only if $$\operatorname{Ext}^{1}_{\rm{poly}}(T,S_{\operatorname{\mathbb{C}}}^{*})=0.$$ Proof. Note that any operator $X:\operatorname{\mathcal{H}}\to\operatorname{\mathbb{C}}$ is given by $Xh=\langle h,\xi\rangle$ for some $\xi\in\operatorname{\mathcal{H}}$. It is a routine verification to establish that under this identification, the equality $$B(\operatorname{\mathcal{H}},\operatorname{\mathbb{C}})T+Z_{\operatorname{\mathbb{C}}}(T)=B(\operatorname{\mathcal{H}},\operatorname{\mathbb{C}})$$ corresponds to $$T^{*}\operatorname{\mathcal{H}}+Z(T)=\operatorname{\mathcal{H}}$$ so the result follows from Corollary 3.6. ∎ This lemma offers the advantage over the more complicated general version that the equality we are interested in takes place inside the Hilbert space $\operatorname{\mathcal{H}}$ instead of inside the Banach space $B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})$. Note also that the discussion at the end of Section 4 shows that $Z(S_{\operatorname{\mathcal{F}}}^{*})=H^{\infty}(\operatorname{\mathcal{F}})$. We now state a simple result. Lemma 6.3. Let $(\operatorname{\mathcal{H}},T)$ be a Hilbert module. Any operator $X\in Z_{\operatorname{\mathcal{E}}}(T)$ for which $[X]=0$ in $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(T,S_{\operatorname{\mathcal{E}}})$ belongs to $B(\operatorname{\mathcal{H}},\operatorname{\mathcal{E}})T$. Proof. If $[X]=0$, then $X=S_{\operatorname{\mathcal{E}}}L-LT$ and arguing as in the proof of Corollary 3.7 we find that $X=L^{\prime}T$. ∎ Let us now apply this lemma to the study of $\operatorname{Ext}_{\rm{poly}}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}})$.
Using the fact that $S_{\operatorname{\mathcal{F}}}^{*}X=XS_{\operatorname{\mathcal{F}}}$, it is readily verified that $$R(X_{\alpha})^{*n}=\left(\begin{array}[]{cc}S_{\operatorname{\mathcal{F}}}^{n}&0\\ nX^{*}S_{\operatorname{\mathcal{F}}}^{n-1}&S_{\operatorname{\mathcal{F}}}^{*n}\end{array}\right)$$ for every integer $n\geq 1$. Thus, for $h\in H^{2}(\operatorname{\mathcal{F}})$ we have that $h\oplus 0\in Z(R(X_{\alpha}))$ if and only if $$\sum_{n=1}^{\infty}\left|\left\langle\left(\begin{array}[]{c}z^{n}h\\ nX^{*}z^{n-1}h\end{array}\right),\left(\begin{array}[]{c}g_{1}\\ g_{2}\end{array}\right)\right\rangle\right|^{2}\leq c\|g\|^{2}$$ for some constant $c>0$ and every $g=g_{1}\oplus g_{2}\in H^{2}(\operatorname{\mathcal{F}})\oplus H^{2}(\operatorname{\mathcal{F}})$. Consequently, $h\oplus 0\in Z(R(X_{\alpha}))$ is equivalent to $h\in Z(S_{\operatorname{\mathcal{F}}}^{*})$ and $$\sum_{n=1}^{\infty}|\langle nX^{*}z^{n-1}h,g\rangle|^{2}\leq c\|g\|^{2}$$ for every $g\in H^{2}(\operatorname{\mathcal{F}})$. Notice at this point that for $\omega\in\operatorname{\mathcal{F}}$ we have $$X^{*}z^{n}\omega=\sum_{m=0}^{\infty}z^{m}\overline{\alpha_{m+n}}W_{m+n}^{*}\omega.$$ Let $\omega=e_{1}\oplus 0\oplus 0\oplus\ldots\in\operatorname{\mathcal{F}}$ where $e_{1}=(1,0)\in\operatorname{\mathbb{C}}^{2}$. Then, $W^{*}_{k}\omega=0$ for $k\geq 1$ so that $$nX^{*}z^{n-1}\omega=0$$ for $n\geq 2$ and thus $$\sum_{n=1}^{\infty}|\langle nX^{*}z^{n-1}\omega,g\rangle|^{2}\leq\|X^{*}\omega\|^{2}\|g\|^{2}$$ for every $g\in H^{2}(\operatorname{\mathcal{F}})$. In addition, it is clear that $\omega\in H^{\infty}(\operatorname{\mathcal{F}})=Z(S^{*}_{\operatorname{\mathcal{F}}})$ so in fact $\omega\in Z(R(X_{\alpha}))$.
Define now $\Omega:H^{2}(\operatorname{\mathcal{F}})\oplus H^{2}(\operatorname{\mathcal{F}})\to\operatorname{\mathbb{C}}$ by $$\Omega(f_{1}\oplus f_{2})=\langle f_{1}(0),\omega\rangle_{\operatorname{\mathcal{F}}}.$$ Since $\omega\in Z(R(X_{\alpha}))$, we have that $\Omega\in Z_{\operatorname{\mathbb{C}}}(R(X_{\alpha}))$, whence $$[\Omega]\in\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}})$$ by Theorem 2.4. Moreover, $\Omega(\omega\oplus 0)=1$ and $R(X_{\alpha})(\omega\oplus 0)=0$ so that $$\Omega\ker R(X_{\alpha})\neq 0$$ and thus $[\Omega]\neq 0$ in $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}})$ by Lemma 6.3. In other words, $$\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S_{\operatorname{\mathbb{C}}})\neq 0.$$ It is easy to see that this argument can be adapted to show that $$\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S_{\operatorname{\mathcal{E}}})\neq 0$$ for every separable Hilbert space $\operatorname{\mathcal{E}}$. Note finally that $[\Omega]=0$ in $\operatorname{Ext}_{A(\operatorname{\mathbb{D}})}^{1}(R(X_{\alpha}),S^{*}_{\operatorname{\mathbb{C}}})$ by Theorem 3.4. In conclusion, let us mention that the question of whether or not $$\operatorname{Ext}_{\rm{poly}}^{1}(R(X_{\alpha}),S^{*}_{\operatorname{\mathbb{C}}})$$ vanishes (in the case where $R(X_{\alpha})$ is not similar to a contraction, of course) remains open. Given its direct relation to the projectivity of the unilateral shift of multiplicity one, this problem is obviously meaningful. We hope that Lemma 6.2 may help settle it in the future. References
A crystal embedding into Lusztig data of type $A$ JAE-HOON KWON Department of Mathematical Sciences, Seoul National University, Seoul 08826, Korea jaehoonkw@snu.ac.kr Abstract. Let $\boldsymbol{\rm i}$ be a reduced expression of the longest element in the Weyl group of type $A$, which is adapted to a Dynkin quiver with a single sink. We present a simple description of an embedding of the crystal of Young tableaux of arbitrary shape into the crystal of $\boldsymbol{\rm i}$-Lusztig data, which also gives an algorithm for the transition maps between the Lusztig data associated to reduced expressions adapted to quivers with a single sink. Key words and phrases: quantum groups, crystal graphs 2010 Mathematics Subject Classification: 17B37, 22E46, 05E10 This work was supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1501-01. 1. Introduction Let $U_{q}(\mathfrak{g})$ be the quantized enveloping algebra associated to a symmetrizable Kac-Moody algebra $\mathfrak{g}$. The negative part of $U_{q}(\mathfrak{g})$ has a basis called a canonical basis [15] or lower global crystal basis [6], which has many fundamental properties. With respect to the Kashiwara operators, the canonical basis forms a colored oriented graph $B(\infty)$, called a crystal, which plays an important role in the study of combinatorial aspects of $U_{q}(\mathfrak{g})$-modules, together with its subgraph $B(\lambda)$ associated to any integrable highest weight module $V(\lambda)$ with highest weight $\lambda$. Suppose that $\mathfrak{g}$ is a finite-dimensional semi-simple Lie algebra with the index set $I$ of simple roots. Let $\boldsymbol{\rm i}=(i_{1},\ldots,i_{N})$ be a sequence of indices in $I$ corresponding to a reduced expression of the longest element in the Weyl group of $\mathfrak{g}$.
A Lusztig PBW-type basis associated to $\boldsymbol{\rm i}$ [13] is a basis of the negative part of $U_{q}(\mathfrak{g})$, which is parametrized by the set $\mathcal{B}_{\boldsymbol{\rm i}}$ of $N$-tuples of non-negative integers. One can identify $B(\infty)$ with $\mathcal{B}_{\boldsymbol{\rm i}}$ since the associated PBW-type basis coincides with the canonical basis at $q=0$. We call an element in $\mathcal{B}_{\boldsymbol{\rm i}}$ an $\boldsymbol{\rm i}$-Lusztig datum or a Lusztig parametrization associated to $\boldsymbol{\rm i}$. Consider the map (1.1) $$\begin{split}\displaystyle\xymatrixcolsep{3pc}\xymatrixrowsep{4pc}\xymatrix{\psi_{\lambda}^{\boldsymbol{\rm i}}:\ B(\lambda)\ \ar@{{}^{(}->}[r]&\displaystyle\ \mathcal{B}_{\boldsymbol{\rm i}}},\end{split}$$ given by the $\boldsymbol{\rm i}$-Lusztig datum of $b\in B(\lambda)$ under the embedding of $B(\lambda)$ into $B(\infty)$. In this paper, we give a simple combinatorial description of (1.1) when $\mathfrak{g}=\mathfrak{gl}_{n}$ and $\boldsymbol{\rm i}$ is a reduced expression adapted to a Dynkin quiver of type $A_{n-1}$ with a single sink (Theorem 5.4). It is well-known that when $\boldsymbol{\rm i}$ is adapted to a quiver with one direction, for example $\boldsymbol{\rm i}=\boldsymbol{\rm i}_{0}=(1,2,1,3,2,1,\ldots,n-1,\ldots,1)$, the $\boldsymbol{\rm i}_{0}$-Lusztig datum of a Young tableau is simply given by counting the number of occurrences of each entry in each row. But the $\boldsymbol{\rm i}$-Lusztig datum for arbitrary $\boldsymbol{\rm i}$ is not easy to describe in general, and one may apply a sequence of Lusztig's transformations [16] or the formula for a transition map $R_{\boldsymbol{\rm i}_{0}}^{\boldsymbol{\rm i}}:\mathcal{B}_{\boldsymbol{\rm i}_{0}}\rightarrow\mathcal{B}_{\boldsymbol{\rm i}}$ by Berenstein-Fomin-Zelevinsky [2].
We remark that our algorithm for computing $\psi_{\lambda}^{\boldsymbol{\rm i}}$ is completely different from the known methods, and hence provides an alternative simple description of $R^{\boldsymbol{\rm i}}_{\boldsymbol{\rm i}_{0}}$. The basic ideas in our description of $\psi_{\lambda}^{\boldsymbol{\rm i}}$ are as follows. Suppose that $\Omega$ is a quiver of type $A_{n-1}$ with a single sink, and $\boldsymbol{\rm i}$ is adapted to $\Omega$. Let $J\subset I$ be a maximal subset such that each connected component of the corresponding quiver $\Omega_{J}\subset\Omega$ has only one direction. Let $\mathfrak{g}_{J}$ and $\mathfrak{u}_{J}$ be the maximal Levi subalgebra and the nilradical associated to $J$, respectively. The first step is to prove a tensor product decomposition $\mathcal{B}_{\boldsymbol{\rm i}}\cong B^{J}(\infty)\otimes B_{J}(\infty)$ as crystals, where $B_{J}(\infty)$ is the crystal of the negative part of $U_{q}(\mathfrak{g}_{J})$ and $B^{J}(\infty)$ is the crystal of the quantum nilpotent subalgebra $U_{q}(\mathfrak{u}_{J})$. The isomorphism is simply given by restricting the Lusztig datum to each part, and it is a special case of the bijection introduced in [1, 19] using the crystal reflection. Here we show that it is indeed a morphism of crystals by using Reineke's description of $B(\infty)$ in terms of representations of $\Omega$ [18]. The next step is to construct an embedding of $B(\lambda)$ into $B^{J}(\infty)\otimes B_{J}(\infty)$ using a crystal-theoretic interpretation of Sagan and Stanley's skew RSK algorithm [21], which was observed in the author's previous work [10] (see also [11, 12]), and using the embedding (1.1) in the case of $\boldsymbol{\rm i}_{0}$ and its inverse. Hence we obtain the $\boldsymbol{\rm i}$-Lusztig datum of a Young tableau for any $\boldsymbol{\rm i}$ adapted to $\Omega$.
Since the embedding naturally yields the transition map $R^{\boldsymbol{\rm i}}_{\boldsymbol{\rm i}_{0}}$ and its inverse, we also obtain an algorithm for the transition map $R^{\boldsymbol{\rm i}^{\prime}}_{\boldsymbol{\rm i}}=R^{\boldsymbol{\rm i}^{\prime}}_{\boldsymbol{\rm i}_{0}}\circ R^{\boldsymbol{\rm i}_{0}}_{\boldsymbol{\rm i}}$ for any $\boldsymbol{\rm i}$ and $\boldsymbol{\rm i}^{\prime}$ which are adapted to quivers with a single sink. Roughly speaking, $R^{\boldsymbol{\rm i}^{\prime}}_{\boldsymbol{\rm i}}$ is given by a composition of the skew RSK algorithm and its inverse with respect to various maximal Levi subalgebras depending on $\boldsymbol{\rm i}$ and $\boldsymbol{\rm i}^{\prime}$. The paper is organized as follows. In Sections 2 and 3, we review the necessary background on crystals and related material. In Section 4, we give an explicit description of the crystal $\mathcal{B}_{\boldsymbol{\rm i}}$ when $\boldsymbol{\rm i}$ is adapted to a Dynkin quiver of type $A_{n-1}$ with a single sink, and then prove the decomposition of $\mathcal{B}_{\boldsymbol{\rm i}}$ as a tensor product of two subcrystals. Finally, in Section 5, we construct an embedding of the crystal of Young tableaux of arbitrary shape $\lambda$ into $\mathcal{B}_{\boldsymbol{\rm i}}$. Acknowledgement The author would like to thank Myungho Kim for valuable discussions and kind explanations on representations of quivers. 2. Preliminary 2.1. Let us give a brief review on crystals (see [3, 7] for more details). We denote by $\mathbb{Z}_{+}$ the set of non-negative integers. Fix a positive integer $n$. Throughout the paper, $\mathfrak{g}$ denotes the general linear Lie algebra $\mathfrak{gl}_{n}(\mathbb{C})$, which is spanned by the elementary matrices $E_{ij}$ for $1\leq i,j\leq n$.
Let $P^{\vee}=\bigoplus_{i=1}^{n}\mathbb{Z}e_{ii}$ be the dual weight lattice and $P={\rm Hom}_{\mathbb{Z}}(P^{\vee},\mathbb{Z})=\bigoplus_{i=1}^{n}\mathbb{Z}\epsilon_{i}$ the weight lattice of $\mathfrak{g}$, with $\langle\epsilon_{i},e_{jj}\rangle=\delta_{ij}$ for all $i,j$. Define a symmetric bilinear form $(\,\cdot\,|\,\cdot\,)$ on $P$ such that $(\epsilon_{i}|\epsilon_{j})=\delta_{ij}$ for all $i,j$. Put $I=\{1,\ldots,n-1\}$. Then $\{\,\alpha_{i}:=\epsilon_{i}-\epsilon_{i+1}\,|\,i\in I\,\}$ is the set of simple roots and $\{\,h_{i}:=e_{ii}-e_{i+1\,i+1}\,|\,i\in I\,\}$ is the set of simple coroots of $\mathfrak{g}$, respectively. Let $\Phi^{+}=\{\,\epsilon_{i}-\epsilon_{j}\,|\,1\leq i<j\leq n\,\}$ denote the set of positive roots of $\mathfrak{g}$. Let $W\cong\mathfrak{S}_{n}$ be the Weyl group of $\mathfrak{g}$, which is generated by the simple reflections $s_{i}$ for $i\in I$. Let $w_{0}$ be the longest element in $W$, which is of length $N:=n(n-1)/2$, and let $R(w_{0})=\{\,(i_{1},\ldots,i_{N})\,|\,w_{0}=s_{i_{1}}\ldots s_{i_{N}}\,\}$ be the set of reduced expressions of $w_{0}$. For $J\subset I$, let $\mathfrak{g}_{J}$ be the subalgebra of $\mathfrak{g}$ generated by $e_{ii}$ for $1\leq i\leq n$ and the root vectors associated to $\pm\alpha_{j}$ for $j\in J$. Let $\Phi^{+}_{J}$ be the set of positive roots of $\mathfrak{g}_{J}$ and $\Phi^{+}(J)=\Phi^{+}\setminus\Phi^{+}_{J}$.
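As a quick illustration of these notions (an added example, not part of the original text), take $n=3$:

```latex
\Phi^{+} = \{\alpha_{1},\ \alpha_{2},\ \alpha_{1}+\alpha_{2}\}
         = \{\epsilon_{1}-\epsilon_{2},\ \epsilon_{2}-\epsilon_{3},\ \epsilon_{1}-\epsilon_{3}\},
\qquad
N = \frac{3\cdot 2}{2} = 3,
\qquad
R(w_{0}) = \{(1,2,1),\ (2,1,2)\},
```

the two reduced expressions being related by the braid relation $s_{1}s_{2}s_{1}=s_{2}s_{1}s_{2}=w_{0}$ in $\mathfrak{S}_{3}$.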
A $\mathfrak{g}$-crystal is a set $B$ together with maps ${\rm wt}:B\rightarrow P$, $\varepsilon_{i},\varphi_{i}:B\rightarrow\mathbb{Z}\cup\{-\infty\}$ and $\widetilde{e}_{i},\widetilde{f}_{i}:B\rightarrow B\cup\{{\bf 0}\}$ for $i\in I$ satisfying the following conditions: for $b\in B$ and $i\in I$, (1) $\varphi_{i}(b)=\langle{\rm wt}(b),h_{i}\rangle+\varepsilon_{i}(b)$, (2) $\varepsilon_{i}(\widetilde{e}_{i}b)=\varepsilon_{i}(b)-1,\ \varphi_{i}(\widetilde{e}_{i}b)=\varphi_{i}(b)+1,\ {\rm wt}(\widetilde{e}_{i}b)={\rm wt}(b)+\alpha_{i}$ if $\widetilde{e}_{i}b\in B$, (3) $\varepsilon_{i}(\widetilde{f}_{i}b)=\varepsilon_{i}(b)+1,\ \varphi_{i}(\widetilde{f}_{i}b)=\varphi_{i}(b)-1,\ {\rm wt}({\widetilde{f}_{i}}b)={\rm wt}(b)-\alpha_{i}$ if $\widetilde{f}_{i}b\in B$, (4) $\widetilde{f}_{i}b=b^{\prime}$ if and only if $b=\widetilde{e}_{i}b^{\prime}$ for $b^{\prime}\in B$, (5) $\widetilde{e}_{i}b=\widetilde{f}_{i}b={\bf 0}$ when $\varphi_{i}(b)=-\infty$. Here ${\bf 0}$ is a formal symbol and $-\infty$ is the smallest element in $\mathbb{Z}\cup\{-\infty\}$, subject to $-\infty+n=-\infty$ for all $n\in\mathbb{Z}$. Unless otherwise specified, a crystal means a $\mathfrak{g}$-crystal throughout the paper for simplicity. Let $B_{1}$ and $B_{2}$ be crystals.
A tensor product $B_{1}\otimes B_{2}$ is a crystal, which is defined to be $B_{1}\times B_{2}$ as a set with elements denoted by $b_{1}\otimes b_{2}$, where (2.1) $$\begin{split}\displaystyle{\rm wt}(b_{1}\otimes b_{2})&\displaystyle={\rm wt}(b_{1})+{\rm wt}(b_{2}),\\ \displaystyle\varepsilon_{i}(b_{1}\otimes b_{2})&\displaystyle={\rm max}\{\varepsilon_{i}(b_{1}),\varepsilon_{i}(b_{2})-\langle{\rm wt}(b_{1}),h_{i}\rangle\},\\ \displaystyle\varphi_{i}(b_{1}\otimes b_{2})&\displaystyle={\rm max}\{\varphi_{i}(b_{1})+\langle{\rm wt}(b_{2}),h_{i}\rangle,\varphi_{i}(b_{2})\},\end{split}$$ (2.2) $$\begin{split}\displaystyle{\widetilde{e}}_{i}(b_{1}\otimes b_{2})&\displaystyle=\begin{cases}{\widetilde{e}}_{i}b_{1}\otimes b_{2},&\text{if $\varphi_{i}(b_{1})\geq\varepsilon_{i}(b_{2})$},\\ b_{1}\otimes{\widetilde{e}}_{i}b_{2},&\text{if $\varphi_{i}(b_{1})<\varepsilon_{i}(b_{2})$},\end{cases}\\ \displaystyle{\widetilde{f}}_{i}(b_{1}\otimes b_{2})&\displaystyle=\begin{cases}{\widetilde{f}}_{i}b_{1}\otimes b_{2},&\text{if $\varphi_{i}(b_{1})>\varepsilon_{i}(b_{2})$},\\ b_{1}\otimes{\widetilde{f}}_{i}b_{2},&\text{if $\varphi_{i}(b_{1})\leq\varepsilon_{i}(b_{2})$},\end{cases}\end{split}$$ for $i\in I$. Here we assume that ${\bf 0}\otimes b_{2}=b_{1}\otimes{\bf 0}={\bf 0}$. A morphism $\psi:B_{1}\rightarrow B_{2}$ is a map from $B_{1}\cup\{{\bf 0}\}$ to $B_{2}\cup\{{\bf 0}\}$ such that (1) $\psi({\bf 0})={\bf 0}$, (2) ${\rm wt}(\psi(b))={\rm wt}(b)$, $\varepsilon_{i}(\psi(b))=\varepsilon_{i}(b)$, and $\varphi_{i}(\psi(b))=\varphi_{i}(b)$ when $\psi(b)\neq{\bf 0}$, (3) $\psi(\widetilde{e}_{i}b)=\widetilde{e}_{i}\psi(b)$ for $b\in B_{1}$ such that $\psi(b)\neq{\bf 0}$ and $\psi(\widetilde{e}_{i}b)\neq{\bf 0}$, (4) $\psi(\widetilde{f}_{i}b)=\widetilde{f}_{i}\psi(b)$ for $b\in B_{1}$ such that $\psi(b)\neq{\bf 0}$ and $\psi(\widetilde{f}_{i}b)\neq{\bf 0}$. We call $\psi$ an embedding and $B_{1}$ a subcrystal of $B_{2}$ when $\psi$ is injective.
The dual crystal $B^{\vee}$ of a crystal $B$ is defined to be the set $\{\,b^{\vee}\,|\,b\in B\,\}$ with ${\rm wt}(b^{\vee})=-{\rm wt}(b)$, $\varepsilon_{i}(b^{\vee})=\varphi_{i}(b)$, $\varphi_{i}(b^{\vee})=\varepsilon_{i}(b)$, $\widetilde{e}_{i}(b^{\vee})=(\widetilde{f}_{i}b)^{\vee}$ and $\widetilde{f}_{i}(b^{\vee})=(\widetilde{e}_{i}b)^{\vee}$ for $b\in B$ and $i\in I$. We assume that ${\bf 0}^{\vee}={\bf 0}$. For $\mu\in P$, let $T_{\mu}=\{t_{\mu}\}$ be a crystal, where ${\rm wt}(t_{\mu})=\mu$, $\widetilde{e}_{i}t_{\mu}=\widetilde{f}_{i}t_{\mu}={\bf 0}$, and $\varepsilon_{i}(t_{\mu})=\varphi_{i}(t_{\mu})=-\infty$ for all $i\in I$. 2.2. Let $q$ be an indeterminate. Let $U=U_{q}(\mathfrak{g})$ be the quantized enveloping algebra of $\mathfrak{g}$, which is an associative $\mathbb{Q}(q)$-algebra with $1$ generated by $e_{i}$, $f_{i}$, and $q^{h}$ for $i\in I$ and $h\in P^{\vee}$. Let $U^{-}=U^{-}_{q}(\mathfrak{g})$ be the negative part of $U$, that is, the subalgebra generated by $f_{i}$ for $i\in I$. We put $[m]=\frac{q^{m}-q^{-m}}{q-q^{-1}}$ and $[m]!=[1][2]\cdots[m]$ for $m\in\mathbb{N}$. Let $t_{i}=q^{h_{i}}$, $e_{i}^{(m)}=e_{i}^{m}/[m]!$, and $f_{i}^{(m)}=f_{i}^{m}/[m]!$ for $m\in\mathbb{N}$ and $i\in I$. Let $A_{0}$ denote the subring of $\mathbb{Q}(q)$ consisting of rational functions regular at $q=0$. For $i\in I$, let $T_{i}$ be the $\mathbb{Q}(q)$-algebra automorphism of $U$ given by $$\displaystyle T_{i}(t_{j})$$ $$\displaystyle=t_{j}t_{i}^{-a_{ij}},$$ $$\displaystyle T_{i}(e_{j})$$ $$\displaystyle=\begin{cases}-t_{i}^{-1}f_{i},&\text{if $j=i$},\\ \sum_{k+l=-a_{ij}}(-1)^{k}q^{-k}e_{i}^{(k)}e_{j}e_{i}^{(l)},&\text{if $j\neq i$},\end{cases}$$ $$\displaystyle T_{i}(f_{j})$$ $$\displaystyle=\begin{cases}e_{i}t_{i},&\text{if $j=i$},\\ \sum_{k+l=-a_{ij}}(-1)^{k}q^{k}f_{i}^{(k)}f_{j}f_{i}^{(l)},&\text{if $j\neq i$},\end{cases}$$ for $j\in I$, where $a_{ij}=\langle\alpha_{j},h_{i}\rangle$. Note that $T_{i}$ is denoted by $T^{\prime}_{i,-1}$ in [16].
For ${\boldsymbol{\rm i}}=(i_{1},\ldots,i_{N})\in R(w_{0})$ and ${\bf c}=(c_{1},\ldots,c_{N})\in\mathbb{Z}_{+}^{N}$, consider the vectors of the following form: (2.3) $$b_{\boldsymbol{\rm i}}({\bf c})=f^{(c_{1})}_{i_{1}}T_{i_{1}}(f^{(c_{2})}_{i_{2}})\cdots T_{i_{1}}T_{i_{2}}\cdots T_{i_{N-1}}(f^{(c_{N})}_{i_{N}}).$$ The set $B_{\boldsymbol{\rm i}}:=\{\,b_{\boldsymbol{\rm i}}({\bf c})\,|\,{\bf c}\in\mathbb{Z}_{+}^{N}\,\}$ is a $\mathbb{Q}(q)$-basis of $U^{-}$, often referred to as a Lusztig PBW-type basis. The $A_{0}$-lattice of $U^{-}$ generated by $B_{\boldsymbol{\rm i}}$ is independent of the choice of ${\boldsymbol{\rm i}}$, and we denote it by $L(\infty)$. If $\pi:L(\infty)\rightarrow L(\infty)/qL(\infty)$ is the canonical projection, then $\pi(B_{\boldsymbol{\rm i}})$ is a $\mathbb{Q}$-basis of $L(\infty)/qL(\infty)$, which is also independent of the choice of ${\boldsymbol{\rm i}}$ and which we denote by $B(\infty)$ [13]. It is proved in [14, 19] that the pair $(L(\infty),B(\infty))$ coincides with Kashiwara's crystal base of $U^{-}$ [6]. That is, $L(\infty)$ is invariant under $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$, and $\widetilde{e}_{i}B(\infty)\subset B(\infty)\cup\{0\}$, $\widetilde{f}_{i}B(\infty)\subset B(\infty)\cup\{0\}$ for $i\in I$, where $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ denote the modified Kashiwara operators on $U^{-}$ given by $$\widetilde{e}_{i}x=\sum_{k\geq 1}f_{i}^{(k-1)}x_{k},\quad\quad\widetilde{f}_{i}x=\sum_{k\geq 0}f_{i}^{(k+1)}x_{k},$$ for $x=\sum_{k\geq 0}f_{i}^{(k)}x_{k}$ with $x_{k}\in T_{i}(U^{-})\cap U^{-}$ for $k\geq 0$. The set $B(\infty)$ equipped with the induced operators $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ becomes a crystal, where $\varepsilon_{i}(b)=\max\{\,k\,|\,\widetilde{e}_{i}^{k}b\neq 0\,\}$ for $i\in I$ and $b\in B(\infty)$. Let $P^{+}=\{\,\lambda\in P\,|\,\langle\lambda,h_{i}\rangle\geq 0\ \text{for $i\in I$}\,\}$ be the set of dominant integral weights.
For $\lambda\in P^{+}$, let $V(\lambda)$ be the irreducible highest weight $U$-module with highest weight $\lambda$, which is given by $U^{-}/\sum_{i\in I}U^{-}f_{i}^{\langle\lambda,h_{i}\rangle+1}\cdot 1$ as a left $U^{-}$-module. If $\pi_{\lambda}:U^{-}\rightarrow V(\lambda)$ is the canonical projection, then $L(\lambda):=\pi_{\lambda}(L(\infty))$ is an $A_{0}$-lattice of $V(\lambda)$ and $B(\lambda):=\pi_{\lambda}(B(\infty))\setminus\{0\}$ is a $\mathbb{Q}$-basis of $L(\lambda)/qL(\lambda)$. The pair $(L(\lambda),B(\lambda))$ is called the crystal base of $V(\lambda)$. The set $B(\lambda)$ becomes a crystal with respect to the operators $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ induced from those on $B(\infty)$, where $\varepsilon_{i}(b)=\max\{\,k\,|\,\widetilde{e}_{i}^{k}b\neq 0\,\}$ and $\varphi_{i}(b)=\max\{\,k\,|\,\widetilde{f}_{i}^{k}b\neq 0\,\}$ for $i\in I$ and $b\in B(\lambda)$ [6]. 3. Crystal of Young tableaux 3.1. Let us recall some necessary background on semistandard tableaux and related combinatorics following [4]. Let $\mathscr{P}$ be the set of partitions. We identify $\lambda=(\lambda_{i})_{i\geq 1}\in\mathscr{P}$ with a Young diagram, and denote by $\lambda^{\pi}$ the skew Young diagram obtained by $180^{\circ}$-rotation of $\lambda$. Let $\mathbb{A}$ be a linearly ordered set. For a skew Young diagram $\lambda/\mu$, let $SST_{\mathbb{A}}(\lambda/\mu)$ be the set of all semistandard tableaux of shape $\lambda/\mu$ with entries in $\mathbb{A}$. Let $\mathcal{W}_{\mathbb{A}}$ be the set of finite words in $\mathbb{A}$. For $T\in SST_{\mathbb{A}}(\lambda/\mu)$, let ${\rm sh}(T)$ denote the shape of $T$, and let $w(T)$ be the word in $\mathcal{W}_{\mathbb{A}}$ obtained by reading the entries of $T$ row by row from top to bottom, and from right to left in each row. Let $T\in SST_{\mathbb{A}}(\lambda^{\pi})$ be given for $\lambda\in\mathscr{P}$.
For $a\in\mathbb{A}$, we define $T\leftarrow a$ to be the tableau obtained by applying Schensted's column insertion of $a$ into $T$ in a reverse way, starting from the rightmost column of $T$, so that ${\rm sh}(T\leftarrow a)=\mu^{\pi}$ for some $\mu\supset\lambda$ obtained by adding a box in a corner of $\lambda$. We also denote by $T^{{}^{\nwarrow}}$ the unique tableau in $SST_{\mathbb{A}}(\lambda)$ which is Knuth equivalent to $T$. Let $\mathbb{B}$ be another linearly ordered set, and let (3.1) $${\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}=\left\{\,M=(m_{ab})_{a\in\mathbb{A},b\in\mathbb{B}}\,\,\Bigg{|}\,\,m_{ab}\in\mathbb{Z}_{\geq 0},\ \ \sum_{a,b}m_{ab}<\infty\,\right\}.$$ Let $\mathcal{I}_{\mathbb{A}\times\mathbb{B}}$ be the set of biwords $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{W}_{\mathbb{A}}\times\mathcal{W}_{\mathbb{B}}$ such that (1) $\boldsymbol{\rm a}=a_{1}\cdots a_{r}$ and $\boldsymbol{\rm b}=b_{1}\cdots b_{r}$ for some $r\geq 0$, (2) $(a_{1},b_{1})\leq\cdots\leq(a_{r},b_{r})$, where for $(a,b)$ and $(c,d)\in\mathbb{A}\times\mathbb{B}$, $(a,b)<(c,d)$ if and only if $(a<c)$ or ($a=c$ and $b>d$). There is a bijection (3.2) $$\begin{split}&\displaystyle\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\mathcal{I}_{\mathbb{A}\times\mathbb{B}}\ \ar@{->}[r]\hfil&\displaystyle\ {\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}\\ \displaystyle(\boldsymbol{\rm a},\boldsymbol{\rm b})\ \ar@{|->}[r]&\displaystyle\ M(\boldsymbol{\rm a},\boldsymbol{\rm b})\end{split}}$$ where $M(\boldsymbol{\rm a},\boldsymbol{\rm b})=(m_{ab})$ with $m_{ab}=\left|\{\,k\,|\,(a_{k},b_{k})=(a,b)\,\}\right|$, and the pair of empty words $(\emptyset,\emptyset)$ corresponds to the zero matrix. Similarly, one can define $\mathcal{I}_{\mathbb{B}\times\mathbb{A}}$ and a bijection $\mathcal{I}_{\mathbb{B}\times\mathbb{A}}\rightarrow{\mathcal{M}}_{\mathbb{B}\times\mathbb{A}}$ as in (3.2).
For $(\boldsymbol{\rm b},\boldsymbol{\rm a})\in\mathcal{I}_{\mathbb{B}\times\mathbb{A}}$, we write $M[\boldsymbol{\rm a},\boldsymbol{\rm b}]=M(\boldsymbol{\rm b},\boldsymbol{\rm a})^{t}\in{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$, where $M^{t}$ denotes the transpose of $M\in{\mathcal{M}}_{\mathbb{B}\times\mathbb{A}}$. For $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb{B}}$, there exist unique $\boldsymbol{\rm a}^{\tau}\in\mathcal{W}_{\mathbb{A}}$ and $\boldsymbol{\rm b}^{\tau}\in\mathcal{W}_{\mathbb{B}}$, which are rearrangements of $\boldsymbol{\rm a}$ and $\boldsymbol{\rm b}$, respectively, satisfying (3.3) $$M[\boldsymbol{\rm a}^{\tau},\boldsymbol{\rm b}^{\tau}]=M(\boldsymbol{\rm a},\boldsymbol{\rm b})\in{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}},$$ or equivalently, $M(\boldsymbol{\rm b}^{\tau},\boldsymbol{\rm a}^{\tau})=M(\boldsymbol{\rm a},\boldsymbol{\rm b})^{t}\in{\mathcal{M}}_{\mathbb{B}\times\mathbb{A}}$. Fix $\lambda\in\mathscr{P}$. Let $T\in SST_{\mathbb{A}}(\lambda^{\pi})$ and $M\in{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$ be given, where $M=M(\boldsymbol{\rm a},\boldsymbol{\rm b})$ for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb{B}}$. Suppose that $\boldsymbol{\rm a}^{\tau}=a^{\tau}_{1}\cdots a^{\tau}_{r}$ and $\boldsymbol{\rm b}^{\tau}=b^{\tau}_{1}\cdots b^{\tau}_{r}$. We define the pair of tableaux ${\bf P}(T\leftarrow M)$ and ${\bf Q}(T\leftarrow M)$ inductively as follows: for $1\leq i\leq r$, put ${\bf P}^{(i)}=({\bf P}^{(i-1)}\leftarrow a^{\tau}_{r-i+1})$ and $\lambda^{(i)}={\rm sh}\left({\bf P}^{(i)}\right)^{\pi}$, with ${\bf P}^{(0)}=T$ and $\lambda^{(0)}=\lambda$.
Define ${\bf P}(T\leftarrow M)={\bf P}^{(r)}$ with $\mu^{\pi}={\rm sh}\left({\bf P}^{(r)}\right)$, and define ${\bf Q}(T\leftarrow M)$ to be the tableau of shape $\left(\mu/\lambda\right)^{\pi}$, where $\left(\lambda^{(i)}/\lambda^{(i-1)}\right)^{\pi}$ is filled with $b^{\tau}_{r-i+1}$ for $1\leq i\leq r$. Then the map (3.4) $$\begin{split}\displaystyle\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\kappa:SST_{\mathbb{A}}(\lambda^{\pi})\times{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}\ \ar@{->}[r]&\displaystyle\ \ \displaystyle\bigsqcup_{\mu\supset\lambda}SST_{\mathbb{A}}(\mu^{\pi})\times SST_{\mathbb{B}}(\left(\mu/\lambda\right)^{\pi})\\ \displaystyle(T,M)\ \ar@{|->}[r]&\displaystyle\ \ ({\bf P}(T\leftarrow M),{\bf Q}(T\leftarrow M))\end{split}}$$ is a bijection, which is a skew analogue of the usual RSK correspondence [21]. 3.2. Let $[n]=\{\,1<\cdots<n\,\}$ and $[\overline{n}]=\{\,\overline{n}<\cdots<\overline{1}\,\}$ be linearly ordered sets. We regard $[n]$ as a crystal of $B(\epsilon_{1})$, where ${\rm wt}(k)=\epsilon_{k}$ for $k\in[n]$, and $[\overline{n}]$ as the dual crystal $[n]^{\vee}$, where $\overline{k}=k^{\vee}$ for $k\in[n]$. Then $\mathcal{W}_{[n]}$ and $\mathcal{W}_{[\overline{n}]}$ are crystals, where we identify $w=w_{1}\ldots w_{r}$ with $w_{1}\otimes\cdots\otimes w_{r}$. The crystal structure on $\mathcal{W}_{[n]}$ is easily described by the so-called signature rule (cf. [8, Section 2.1]). Let $\mathscr{P}_{n}$ be the set of partitions $\lambda=(\lambda_{1},\ldots,\lambda_{n})$ of length less than or equal to $n$. For $\lambda\in\mathscr{P}_{n}$, $SST_{[n]}(\lambda)$ is a crystal under the identification of $T$ with $w(T)\in\mathcal{W}_{[n]}$, and it is isomorphic to $B(\lambda)$, where we regard $\lambda$ as $\sum_{i=1}^{n}\lambda_{i}\epsilon_{i}\in P^{+}$ [8], while $SST_{[\overline{n}]}(\lambda)$ is isomorphic to $B(-w_{0}\lambda)$. 
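The signature rule just mentioned admits a short computational sketch (a rough Python illustration with names of our choosing, for words in $\mathcal{W}_{[n]}$ encoded as lists of integers): each letter $i$ contributes a $+$, each letter $i+1$ a $-$, matched $+\,-$ pairs cancel as brackets, and then $\widetilde{e}_{i}$ changes the rightmost surviving $i+1$ to $i$, while $\widetilde{f}_{i}$ changes the leftmost surviving $i$ to $i+1$; the value None plays the role of ${\bf 0}$.

```python
def crystal_op(word, i, op):
    # Signature rule: letter i acts like '(', letter i+1 like ')';
    # match them as brackets, then act on an unmatched letter.
    unmatched_plus, unmatched_minus = [], []
    for pos, x in enumerate(word):
        if x == i:
            unmatched_plus.append(pos)
        elif x == i + 1:
            if unmatched_plus:
                unmatched_plus.pop()      # cancel a '+-' pair
            else:
                unmatched_minus.append(pos)
    w = list(word)
    if op == 'e':                          # e~_i: rightmost unmatched i+1 -> i
        if not unmatched_minus:
            return None
        w[unmatched_minus[-1]] = i
    else:                                  # f~_i: leftmost unmatched i -> i+1
        if not unmatched_plus:
            return None
        w[unmatched_plus[0]] = i + 1
    return w
```

For example, $\widetilde{e}_{1}(2\otimes 1\otimes 2)=1\otimes 1\otimes 2$, while $\widetilde{e}_{1}(1\otimes 2)={\bf 0}$ since the single $+\,-$ pair cancels.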
One can define a crystal structure on $SST_{[n]}(\mu/\nu)$ for a skew Young diagram $\mu/\nu$ in a similar way. Note that $SST_{[n]}(\lambda)^{\vee}\cong SST_{[\overline{n}]}(\lambda^{\pi})$, where the isomorphism is given by taking the $180^{\circ}$-rotation and replacing the entry $i$ with $\overline{n-i+1}$ for $i\in[n]$. For $0\leq t\leq n$, let $\sigma^{-1}:SST_{[n]}(1^{t})\longrightarrow SST_{[\overline{n}]}(1^{n-t})$ be a bijection, where $\sigma^{-1}(T)$ is the tableau with entries $[\overline{n}]\setminus\{\overline{b_{1}},\ldots,\overline{b_{t}}\}$ for $T$ with entries $b_{1}<\cdots<b_{t}$. For $d\geq\lambda_{1}$, define (3.5) $$\xymatrixcolsep{3pc}\xymatrixrowsep{4pc}\xymatrix{\sigma^{-d}:SST_{[n]}(% \lambda)\ \ar@{->}[r]&\ SST_{[\overline{n}]}(\sigma^{-d}(\lambda))}$$ where $\sigma^{-d}(\lambda)=(d^{n})/\lambda$, and the $i$th column of $\sigma^{-d}(T)$ from the left is obtained by applying $\sigma^{-1}$ to the $i$th column of $T\in SST_{[n]}(\lambda)$ (which is assumed to be empty if $i>\lambda_{1}$). Then $\sigma^{-d}$ commutes with $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in I$, where ${\rm wt}(\sigma^{-d}(T))={\rm wt}(T)-d(\epsilon_{1}+\cdots+\epsilon_{n})$. We have an isomorphism of crystals (3.6) $$\xymatrixcolsep{3pc}\xymatrixrowsep{4pc}\xymatrix{SST_{[n]}(\lambda)\otimes T_% {\xi}\ \ar@{->}[r]&\ SST_{[\overline{n}]}(\sigma^{-d}(\lambda))}$$ where $\xi=-d(\epsilon_{1}+\cdots+\epsilon_{n})$. Also (3.5) and (3.6) hold when $[n]$ and $[\overline{n}]$ are exchanged. 4. Crystal of Lusztig data 4.1. Let ${\bf i}=(i_{1},\ldots,i_{N})\in R(w_{0})$ be given. 
We have $$\Phi^{+}=\{\,\beta_{1}:=\alpha_{i_{1}},\ \beta_{2}:=s_{i_{1}}(\alpha_{i_{2}}),\ \ldots,\ \beta_{N}:=s_{i_{1}}\cdots s_{i_{N-1}}(\alpha_{i_{N}})\,\}.$$ Since $\pi(B_{\bf i})=B(\infty)$ and $b_{\bf i}:\mathbb{Z}_{+}^{N}\rightarrow B(\infty)$ is a bijection by (2.3), one can define a crystal structure on $\mathbb{Z}^{N}_{+}$ by (4.1) $$\begin{split}&\displaystyle\text{ $\widetilde{f}_{i}{\bf c}={\bf c}^{\prime}$ if and only if $\widetilde{f}_{i}b_{\bf i}({\bf c})\equiv b_{\bf i}({\bf c}^{\prime})\!\!\mod qL(\infty)$ for ${\bf c},{\bf c}^{\prime}\in\mathbb{Z}_{+}^{N}$ and $i\in I$},\end{split}$$ with ${\rm wt}({\bf c})=-(c_{1}\beta_{1}+\cdots+c_{N}\beta_{N})$, for ${\bf c}=(c_{k})_{1\leq k\leq N}\in\mathbb{Z}_{+}^{N}$. We call the crystal $\mathbb{Z}_{+}^{N}$ the crystal of ${\bf i}$-Lusztig data, and denote it by $\mathcal{B}_{\bf i}$. Let $\Omega$ be a Dynkin quiver of type $A_{n-1}$. We call a vertex $i\in I$ a sink (resp. source) of $\Omega$ if there is no arrow going out of $i$ (resp. coming into $i$). For $i\in I$, let $s_{i}\Omega$ be the quiver given by reversing the arrows which end or start at $i$. We say that ${\bf i}\in R(w_{0})$ is adapted to $\Omega$ if $i_{1}$ is a sink of $\Omega$, and $i_{k}$ is a sink of $s_{i_{k-1}}\cdots s_{i_{2}}s_{i_{1}}\Omega$ for $2\leq k\leq N$. Let $\mathcal{B}_{\Omega}$ be the crystal $\mathcal{B}_{\bf i}$ for ${\bf i}\in R(w_{0})$ which is adapted to $\Omega$. Note that $\mathcal{B}_{\Omega}$ is independent of the choice of ${\bf i}$ [13]. We write $c_{ij}=c_{k}$ for ${\bf c}=(c_{k})\in\mathcal{B}_{\bf i}$, if $\beta_{k}=\epsilon_{i}-\epsilon_{j}$ for $1\leq i<j\leq n$. In the next subsections, we consider some special cases of $\Omega$, which give simple descriptions of the crystal $\mathcal{B}_{\Omega}$. 4.2. We first consider the quiver $\Omega$ where all the arrows are of the same direction. 
Suppose first that $\Omega=\Omega^{+}$, where (4.2) $$\Omega^{+}\quad:\quad\xymatrixcolsep{2pc}\xymatrixrowsep{0pc}\xymatrix{\bullet% &\ar@{->}[l]\bullet&\ar@{->}[l]\cdots&\ar@{->}[l]\bullet\\ {}_{1}&{}_{2}&&{}_{n-1}}$$ For example, ${\bf i}=(1,2,1,3,2,1,\ldots,n-1,n-2,\ldots,2,1)$ is adapted to $\Omega^{+}$. We assume that $\mathbb{A}=\mathbb{B}=[n]$ and define an injective map (4.3) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\mathcal{B}_{\Omega^{+}}\ % \ar@{{}^{(}->}[r]&\ {\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}\\ {\bf c}\ \ar@{|->}[r]&M^{+}({\bf c})}$$ where $M^{+}({\bf c})=(m^{+}_{ij})$ is a strictly upper triangular matrix given by $m^{+}_{ij}=c_{ij}$ when $1\leq i<j\leq n$ and $0$ otherwise, for ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega^{+}}$. Also, for $M\in{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$, let $M^{+}=(m^{+}_{ij})$ be the projection of $M=(m_{ij})$ onto the image of $\mathcal{B}_{\Omega^{+}}$ under (4.3), that is, $m^{+}_{ij}=m_{ij}$ for $1\leq i<j\leq n$, and $0$ otherwise. Let us define $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in I$ on the image of $\mathcal{B}_{\Omega^{+}}$ in ${\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$ under (4.3). Given ${\bf c}\in\mathcal{B}_{\Omega^{+}}$, suppose that $M^{+}({\bf c})=M(\boldsymbol{\rm a},\boldsymbol{\rm b})$ for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb% {B}}$ under (3.2). Recall that ${\boldsymbol{\rm b}}$ is an element in a crystal $\mathcal{W}_{[n]}$. 
For $i\in I$, we define (4.4) $$\begin{split}\displaystyle\widetilde{e}_{i}M^{+}({\bf c})&\displaystyle=\begin% {cases}M(\boldsymbol{\rm a},\widetilde{e}_{i}\boldsymbol{\rm b})^{+},&\text{if% $\widetilde{e}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ {\bf 0},&\text{if $\widetilde{e}_{i}\boldsymbol{\rm b}={\bf 0}$},\end{cases}\\ \displaystyle\widetilde{f}_{i}M^{+}({\bf c})&\displaystyle=\begin{cases}M\left% (\boldsymbol{\rm a},\widetilde{f}_{i}\boldsymbol{\rm b}\right),&\text{if $% \widetilde{f}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ M(\boldsymbol{\rm a},\boldsymbol{\rm b})+E_{i\,i+1},&\text{if $\widetilde{f}_{% i}\boldsymbol{\rm b}={\bf 0}$},\end{cases}\end{split}$$ where $E_{i\,i+1}$ is an elementary matrix in ${\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$. Note that ${\bf 0}$ is a formal symbol, not the zero matrix. Next suppose that $\Omega=\Omega^{-}$, where (4.5) $$\Omega^{-}\quad:\quad\xymatrixcolsep{2pc}\xymatrixrowsep{0pc}\xymatrix{\bullet% \ar@{->}[r]&\bullet\ar@{->}[r]&\cdots\ar@{->}[r]&\bullet\\ {}_{1}&{}_{2}&&{}_{n-1}}$$ In this case, we assume that $\mathbb{A}=[n]$ and $\mathbb{B}=[\overline{n}]$, and define an injective map (4.6) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\mathcal{B}_{\Omega^{-}}\ % \ar@{{}^{(}->}[r]&\ {\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}\\ {\bf c}\ \ar@{|->}[r]&M^{-}({\bf c})}$$ where $M^{-}({\bf c})=(m^{-}_{ab})$ is a strictly upper triangular matrix given by $m^{-}_{n-j+1\,\overline{i}}=c_{i\,j}$ when $1\leq i<j\leq n$, and $0$ otherwise, for ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega^{-}}$. For $M=(m_{i\overline{j}})\in{\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$, let $M^{-}=(m^{-}_{i\overline{j}})$ be the projection of $M$ onto the image of $\mathcal{B}_{\Omega^{-}}$ under (4.6), that is, $m^{-}_{i\overline{j}}=m_{i\overline{j}}$ for $i+j\leq n$, and $0$ otherwise. 
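For illustration, the action of $\widetilde{f}_{i}$ in (4.4) can be sketched concretely (a Python toy with hypothetical names, not part of the paper): unfold a strictly upper triangular matrix into its sorted biword as in (3.2), apply the signature rule of [8, Section 2.1] to the $\boldsymbol{\rm b}$-word, and fold back, adding $E_{i\,i+1}$ when $\widetilde{f}_{i}\boldsymbol{\rm b}={\bf 0}$.

```python
def ftilde_plus(M, i):
    # M is strictly upper triangular; the entry m_{ab} is stored at M[a-1][b-1].
    n = len(M)
    a_word, b_word = [], []
    for a in range(1, n + 1):              # a increasing, b decreasing within a:
        for b in range(n, a, -1):          # the sorted biword of (3.2)
            a_word += [a] * M[a - 1][b - 1]
            b_word += [b] * M[a - 1][b - 1]
    # signature rule on the b-word: letter i is '+', letter i+1 is '-'
    unmatched = []
    for pos, x in enumerate(b_word):
        if x == i:
            unmatched.append(pos)
        elif x == i + 1 and unmatched:
            unmatched.pop()
    out = [row[:] for row in M]
    if unmatched:                          # f~_i b != 0: leftmost unmatched i -> i+1
        a = a_word[unmatched[0]]
        out[a - 1][i - 1] -= 1
        out[a - 1][i] += 1
    else:                                  # f~_i b = 0: add E_{i,i+1}
        out[i - 1][i] += 1
    return out
```

For $n=3$ one checks, for instance, that $\widetilde{f}_{2}E_{12}=E_{13}$ and $\widetilde{f}_{1}E_{12}=2E_{12}$, in agreement with the closed formulas (4.9) below.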
Let us define $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in I$ on the image of $\mathcal{B}_{\Omega^{-}}$ in ${\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$ under (4.6). Given ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega^{-}}$, suppose that $M^{-}({\bf c})=M(\boldsymbol{\rm a},\boldsymbol{\rm b})$ for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb% {B}}$. For $i\in I$, we define (4.7) $$\begin{split}\displaystyle\widetilde{e}_{i}M^{-}({\bf c})&\displaystyle=\begin% {cases}M(\boldsymbol{\rm a},\widetilde{e}_{i}\boldsymbol{\rm b})^{-},&\text{if% $\widetilde{e}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ {\bf 0},&\text{if $\widetilde{e}_{i}\boldsymbol{\rm b}={\bf 0}$},\end{cases}\\ \displaystyle\widetilde{f}_{i}M^{-}({\bf c})&\displaystyle=\begin{cases}M\left% (\boldsymbol{\rm a},\widetilde{f}_{i}\boldsymbol{\rm b}\right),&\text{if $% \widetilde{f}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ M(\boldsymbol{\rm a},\boldsymbol{\rm b})+E_{n-i\,\overline{i}},&\text{if $% \widetilde{f}_{i}\boldsymbol{\rm b}={\bf 0}$},\end{cases}\end{split}$$ where $E_{n-i\,\overline{i}}$ is an elementary matrix in ${\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}$. Proposition 4.1. Suppose that $\Omega$ is either $\Omega^{+}$ or $\Omega^{-}$. The operators $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in I$ on $\mathcal{B}_{\Omega}$ in (4.4) and (4.7) coincide with those in (4.1), that is, for $i\in I$ and ${\bf c}\in\mathcal{B}_{\Omega}$ $$\widetilde{e}_{i}M^{\pm}({\bf c})=M^{\pm}\left(\widetilde{e}_{i}{\bf c}\right)% \,\quad\quad\widetilde{f}_{i}M^{\pm}\left({\bf c}\right)=M^{\pm}\left(% \widetilde{f}_{i}{\bf c}\right).$$ Proof. We will consider only the case when $\Omega=\Omega^{+}$, since the proof for the case when $\Omega=\Omega^{-}$ is similar. Let us first recall the description of (4.1) on $\mathcal{B}_{\Omega}$ in [18, Theorem 7.1] (see also [20, Section 4.1]). Let ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega}$ be given. 
For $i\in I$, put (4.8) $$\begin{split}\displaystyle c_{k}^{(i)}&\displaystyle=\sum_{s=1}^{k}(c_{s\,i+1}% -c_{s-1\,i})\quad\quad(1\leq k\leq i),\\ \displaystyle c^{(i)}&\displaystyle=\max\{\,c_{k}^{(i)}\,|\,1\leq k\leq i\,\},% \\ \displaystyle k_{0}&\displaystyle=\min\{\,1\leq k\leq i\,|\,c^{(i)}_{k}=c^{(i)% }\,\},\\ \displaystyle k_{1}&\displaystyle=\max\{\,1\leq k\leq i\,|\,c^{(i)}_{k}=c^{(i)% }\,\},\end{split}$$ where we assume that $c_{0i}=0$. Let $M=M^{+}({\bf c})=(m^{+}_{ij})$ with $m^{+}_{ij}$ as in (4.3). Then one can compute from [18, Theorem 7.1] (4.9) $$\begin{split}\displaystyle\widetilde{e}_{i}M&\displaystyle=\begin{cases}M+E_{k% _{0}\,i}-E_{k_{0}\,i+1},&\text{if $c^{(i)}>0$ and $k_{0}<i$},\\ M-E_{i\,i+1},&\text{if $c^{(i)}>0$ and $k_{0}=i$},\\ {\bf 0},&\text{if $c^{(i)}=0$},\end{cases}\\ \displaystyle\widetilde{f}_{i}M&\displaystyle=\begin{cases}M-E_{k_{1}\,i}+E_{k% _{1}\,i+1},&\text{if $k_{1}<i$},\\ M+E_{i\,i+1},&\text{if $k_{1}=i$}.\end{cases}\end{split}$$ On the other hand, suppose that $M=M(\boldsymbol{\rm a},\boldsymbol{\rm b})$ for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb% {B}}$ under (3.2), where ${\boldsymbol{\rm b}}=b_{1}\ldots b_{r}$. By definition of $(\boldsymbol{\rm a},\boldsymbol{\rm b})$, we have $${\bf b}=\underbrace{i+1\cdots i+1}_{m^{+}_{1\,i+1}}\ \underbrace{i\cdots i}_{m% ^{+}_{1\,i}}\ \underbrace{i+1\cdots i+1}_{m^{+}_{2\,i+1}}\ \underbrace{i\cdots i% }_{m^{+}_{2\,i}}\ \cdots\ \underbrace{i+1\cdots i+1}_{m^{+}_{i\,i+1}}.$$ By the tensor product rule of crystals (2.2) (cf. [8, Proposition 2.1.1]), it is straightforward to see that $c^{(i)}>0$ if and only if $\varepsilon_{i}({\boldsymbol{\rm b}})>0$ and $\widetilde{e}_{i}{\boldsymbol{\rm b}}=b_{1}\cdots(\widetilde{e}_{i}b_{s})% \cdots b_{r}$ for some $1\leq s\leq r$ with $b_{s}=i+1$. This implies that $\widetilde{e}_{i}M$ in (4.9) and (4.4) coincides. 
Similarly, we see that (1) $k_{1}<i$ if and only if $\varphi_{i}({\boldsymbol{\rm b}})>0$ and $\widetilde{f}_{i}{\boldsymbol{\rm b}}=b_{1}\cdots(\widetilde{f}_{i}b_{s})% \cdots b_{r}$ with $b_{s}=i$ for some $1\leq s\leq r$, (2) $k_{1}=i$ if and only if $\varphi_{i}({\boldsymbol{\rm b}})=0$ or $\widetilde{f}_{i}{\boldsymbol{\rm b}}={\bf 0}$, which implies that $\widetilde{f}_{i}M$ in (4.9) and (4.4) coincides. ∎ 4.3. Now we suppose that $\Omega$ is a quiver with a single sink, that is, (4.10) $$\xymatrixcolsep{2pc}\xymatrixrowsep{0pc}\xymatrix{\bullet\ar@{->}[r]&\cdots\ar% @{->}[r]&\bullet&\ar@{->}[l]\cdots&\ar@{->}[l]\bullet\\ {}_{1}&&{}_{r}&&{}_{n-1}}$$ for some $r\in I$. Note that we have $\Omega=\Omega^{+}$ if $r=1$, and $\Omega=\Omega^{-}$ when $r=n-1$. So we assume that $r\in I\setminus\{1,n-1\}$. Put (4.11) $$\begin{split}&\displaystyle J=I\setminus\{r\},\quad J_{1}=\{\,j\in J\,|\,j<r\,% \},\quad J_{2}=\{\,j\in J\,|\,j>r\,\},\end{split}$$ where $J=J_{1}\sqcup J_{2}$. Then we have $\Phi^{+}=\Phi^{+}(J)\sqcup\Phi^{+}_{J_{1}}\sqcup\Phi^{+}_{J_{2}}$ where (4.12) $$\begin{split}&\displaystyle\Phi^{+}_{J_{1}}=\{\,\epsilon_{i}-\epsilon_{j}\,|\,% 1\leq i<j\leq r\,\},\\ &\displaystyle\Phi^{+}_{J_{2}}=\{\,\epsilon_{i}-\epsilon_{j}\,|\,r\leq i<j\leq n% \,\},\\ &\displaystyle\Phi^{+}(J)=\{\,\epsilon_{i}-\epsilon_{j}\,|\,1\leq i<r<j\leq n% \,\}.\end{split}$$ We put $$\begin{split}\displaystyle\mathcal{B}^{J}_{\Omega}&\displaystyle=\left\{\,{\bf c% }=(c_{ij})\in\mathcal{B}_{\Omega}\,\big{|}\,c_{ij}=0\text{ unless $\epsilon_{i% }-\epsilon_{j}\in\Phi^{+}(J)$}\,\right\},\end{split}$$ which we regard as a subcrystal of $\mathcal{B}_{\Omega}$. Let $\Omega_{J_{k}}$ be the quiver corresponding to the vertices in $J_{k}$ ($k=1,2$) in $\Omega$. Then $\mathcal{B}_{\Omega_{J_{k}}}$ is the crystal of the negative part of the quantum group $U_{q}(\mathfrak{g}_{J_{k}})$, whose crystal structure is described in (4.4) and (4.7), respectively. 
We identify $\mathcal{B}_{\Omega_{J_{k}}}$ with the subset of $\mathcal{B}_{\Omega}$ consisting of ${\bf c}=(c_{ij})$ with $c_{ij}=0$ for $\epsilon_{i}-\epsilon_{j}\not\in\Phi^{+}_{J_{k}}$, and then regard it as a subcrystal of $\mathcal{B}_{\Omega}$ where $\widetilde{e}_{i}{\bf c}=\widetilde{f}_{i}{\bf c}={\bf 0}$ with $\varepsilon_{i}({\bf c})=\varphi_{i}({\bf c})=-\infty$ for $i\in J\setminus J_{k}$. We assume that $\mathbb{A}=[\overline{r}]$ and $\mathbb{B}=[n]\setminus[r]$, and define a bijection (4.13) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\mathcal{B}^{J}_{\Omega}\ % \ar@{->}[r]&\ {\mathcal{M}}_{\mathbb{A}\times\mathbb{B}}\ ,\\ {\bf c}\ar@{|->}[r]&M({\bf c})}$$ where $M({\bf c})=(m_{ab})$ is given by $m_{\overline{i}\,j}=c_{ij}$ $(1\leq i<r<j\leq n)$ for ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega}^{J}$. Given ${\bf c}\in\mathcal{B}_{\Omega}^{J}$, suppose that $M({\bf c})=M(\boldsymbol{\rm a},\boldsymbol{\rm b})=M\left[\boldsymbol{\rm a}^% {\tau},\boldsymbol{\rm b}^{\tau}\right]=(m_{ab})$, for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{\mathbb{A}\times\mathbb% {B}}$ (see (3.3)). 
For $i\in I$, we define (4.14) $$\begin{split}\displaystyle\widetilde{e}_{i}M({\bf c})&\displaystyle=\begin{% cases}M[\widetilde{e}_{i}\boldsymbol{\rm a}^{\tau},\boldsymbol{\rm b}^{\tau}],% &\text{if $i\in J_{1}$ and $\widetilde{e}_{i}\boldsymbol{\rm a}^{\tau}\neq{\bf 0% }$},\\ M(\boldsymbol{\rm a},\widetilde{e}_{i}\boldsymbol{\rm b}),&\text{if $i\in J_{2% }$ and $\widetilde{e}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ M(\boldsymbol{\rm a},\boldsymbol{\rm b})-E_{\overline{r}\,r+1},&\text{if $i=r$% and $m_{\overline{r}\,r+1}>0$},\\ {\bf 0},&\text{otherwise},\end{cases}\\ \displaystyle\widetilde{f}_{i}M({\bf c})&\displaystyle=\begin{cases}M\left[% \widetilde{f}_{i}\boldsymbol{\rm a}^{\tau},\boldsymbol{\rm b}^{\tau}\right],&% \text{if $i\in J_{1}$ and $\widetilde{f}_{i}\boldsymbol{\rm a}^{\tau}\neq{\bf 0% }$},\\ M\left(\boldsymbol{\rm a},\widetilde{f}_{i}\boldsymbol{\rm b}\right),&\text{if% $i\in J_{2}$ and $\widetilde{f}_{i}\boldsymbol{\rm b}\neq{\bf 0}$},\\ M(\boldsymbol{\rm a},\boldsymbol{\rm b})+E_{\overline{r}\,r+1},&\text{if $i=r$% },\\ {\bf 0},&\text{otherwise}.\end{cases}\end{split}$$ For ${\bf c}\in\mathcal{B}_{\Omega}$, let ${\bf c}^{J}$ and ${\bf c}_{J_{k}}$ be the restrictions of ${\bf c}$ to $\mathcal{B}^{J}_{\Omega}$ and $\mathcal{B}_{\Omega_{J_{k}}}$ $(k=1,2)$, respectively. Then we have the following decomposition of $\mathcal{B}_{\Omega}$ as a tensor product of its subcrystals. Theorem 4.2. The map $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{\mathcal{B}_{\Omega}\ar@{->}% [r]&\ \mathcal{B}_{\Omega}^{J}\,\otimes\mathcal{B}_{\Omega_{J_{1}}}\!\otimes% \mathcal{B}_{\Omega_{J_{2}}}\\ {\bf c}\ar@{|->}[r]&{\bf c}^{J}\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}}}$$ is an isomorphism of crystals. 
Moreover, the operators $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ on $\mathcal{B}^{J}_{\Omega}$ in (4.14) coincide with those in (4.1) for $i\in I$, that is, $$\widetilde{e}_{i}M\left({\bf c}^{J}\right)=M\left(\widetilde{e}_{i}{\bf c}^{J}% \right)\,\quad\quad\widetilde{f}_{i}M\left({\bf c}^{J}\right)=M\left(% \widetilde{f}_{i}{\bf c}^{J}\right).$$ Proof. As in Proposition 4.1, it is done by comparing with the description of (4.1) on $\mathcal{B}_{\Omega}$ using [18, Theorem 7.1]. For ${\bf c}=(c_{ij})$ and ${\bf c}^{\prime}=(c^{\prime}_{ij})\in\mathcal{B}_{\Omega}$, put ${\bf c}\pm{\bf c}^{\prime}=(c_{ij}\pm c_{ij}^{\prime})$. For $1\leq k<l\leq n$, let ${\bf c}_{kl}=(c^{kl}_{ij})\in\mathcal{B}_{\Omega}$ be such that $c^{kl}_{ij}=\delta_{ik}\delta_{jl}$. For ${\bf c}\in\mathcal{B}_{\Omega}$, let $\psi({\bf c})={\bf c}^{J}\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}}$. It is clear that $\psi$ is a bijection. So it remains to show that $\psi$ commutes with $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in I$. Suppose that ${\bf c}=(c_{ij})\in\mathcal{B}_{\Omega}$ is given. First, it is not difficult to see that $$\begin{split}\displaystyle\widetilde{e}_{r}{\bf c}&\displaystyle=\begin{cases}% {\bf c}-{\bf c}_{r\,r+1},&\text{if $c_{r\,r+1}>0$},\\ {\bf 0},&\text{if $c_{r\,r+1}=0$},\end{cases}\quad\quad\widetilde{f}_{r}{\bf c% }={\bf c}+{\bf c}_{r\,r+1}\end{split}$$ which immediately implies $$\widetilde{e}_{r}M\left({\bf c}^{J}\right)=M\left(\widetilde{e}_{r}{\bf c}^{J}% \right),\quad\widetilde{f}_{r}M\left({\bf c}^{J}\right)=M\left(\widetilde{f}_{% r}{\bf c}^{J}\right),$$ and hence $\psi$ commutes with $\widetilde{e}_{r}$ and $\widetilde{f}_{r}$. Next, we fix $i\in J_{1}$. 
Put $$\begin{split}\displaystyle c_{k}^{(i)}&\displaystyle=\begin{cases}c_{i\,r+1},&\text{if $k=1$},\\ c_{1}^{(i)}+\sum_{s=1}^{k}(c_{i\,r+s}-c_{i+1\,r+s-1}),&\text{if $2\leq k\leq n-r$},\\ c^{(i)}_{n-r}+(c_{i\,r}-c_{i+1\,n}),&\text{if $k=n-r+1$},\\ c^{(i)}_{n-r+1}+\sum_{s=1}^{k-n+r-1}(c_{i\,r-s}-c_{i+1\,r-s+1}),&\text{if $n-r+2\leq k\leq n-i$},\end{cases}\end{split}$$ (see, for example, the Auslander-Reiten quiver in Example 5.6) and (4.15) $$\begin{split}\displaystyle c^{(i)}&\displaystyle=\max\{\,c_{k}^{(i)}\,|\,1\leq k\leq n-i\,\},\\ \displaystyle k_{0}&\displaystyle=\min\{\,1\leq k\leq n-i\,|\,c^{(i)}_{k}=c^{(i)}\,\},\\ \displaystyle k_{1}&\displaystyle=\max\{\,1\leq k\leq n-i\,|\,c^{(i)}_{k}=c^{(i)}\,\}.\end{split}$$ Note that if $c^{(i)}>0$, then we have $c_{i\,k_{0}+r}>0$ when $k_{0}\leq n-r$, and $c_{i\,n-k_{0}+1}>0$ when $k_{0}>n-r$. Also, if $k_{0}>n-r$, then we necessarily have $c^{(i)}>0$. By [18, Theorem 7.1], one can compute that (4.16) $$\begin{split}\displaystyle\widetilde{e}_{i}{\bf c}&\displaystyle=\begin{cases}{\bf c}-{\bf c}_{i\,k_{0}+r}+{\bf c}_{i+1\,k_{0}+r},&\text{if $c^{(i)}>0$ and $k_{0}\leq n-r$},\\ {\bf c}-{\bf c}_{i\,n-k_{0}+1}+{\bf c}_{i+1\,n-k_{0}+1},&\text{if $n-r+1\leq k_{0}\leq n-i-1$},\\ {\bf c}-{\bf c}_{i\,i+1},&\text{if $k_{0}=n-i$},\\ {\bf 0},&\text{if $c^{(i)}=0$},\end{cases}\\ \displaystyle\widetilde{f}_{i}{\bf c}&\displaystyle=\begin{cases}{\bf c}+{\bf c}_{i\,k_{1}+r}-{\bf c}_{i+1\,k_{1}+r},&\text{if $k_{1}\leq n-r$},\\ {\bf c}+{\bf c}_{i\,n-k_{1}+1}-{\bf c}_{i+1\,n-k_{1}+1},&\text{if $n-r+1\leq k_{1}\leq n-i-1$},\\ {\bf c}+{\bf c}_{i\,i+1},&\text{if $k_{1}=n-i$.}\end{cases}\\ \end{split}$$ Case 1. Suppose that ${\bf c}={\bf c}^{J}\in\mathcal{B}^{J}_{\Omega}$, that is, $c_{ij}=0$ unless $\epsilon_{i}-\epsilon_{j}\in\Phi^{+}(J)$. We have $$c^{(i)}_{n-r}\geq c^{(i)}_{n-r+1}=\cdots=c^{(i)}_{n-i},$$ which implies that $k_{0}\leq n-r$. 
Note that if $k_{1}>n-r$, then we should have $c_{i+1\,n}=0$ and hence $k_{1}=n-i$. Let $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{[\overline{r}]\times([n]\setminus[r])}$ be such that $M({\bf c})=M[\boldsymbol{\rm a}^{\tau},\boldsymbol{\rm b}^{\tau}]$. Note that $$\begin{split}\displaystyle{\bf a}^{\tau}=&\displaystyle\underbrace{\overline{i}\cdots\overline{i}}_{c_{i,r+1}}\ \underbrace{\overline{i+1}\cdots\overline{i+1}}_{c_{i+1,r+1}}\ \cdots\ \underbrace{\overline{i}\cdots\overline{i}}_{c_{i\,n}}\ \underbrace{\overline{i+1}\cdots\overline{i+1}}_{c_{i+1\,n}}\\ &\displaystyle\underbrace{\overline{i}\cdots\overline{i}}_{c_{i\,r}}\ \underbrace{\overline{i+1}\cdots\overline{i+1}}_{c_{i+1\,r-1}}\ \cdots\ \underbrace{\overline{i}\cdots\overline{i}}_{c_{i\,i+2}}\ \underbrace{\overline{i+1}\cdots\overline{i+1}}_{c_{i+1\,i+2}}\ \underbrace{\overline{i}\cdots\overline{i}}_{c_{i\,i+1}}.\end{split}$$ By the tensor product rule (2.2), we have $\varepsilon_{i}({\bf a}^{\tau})=c^{(i)}$ and $$M(\widetilde{e}_{i}{\bf c})=M\left[\widetilde{e}_{i}\boldsymbol{\rm a}^{\tau},{\boldsymbol{\rm b}}^{\tau}\right]=\widetilde{e}_{i}M({\bf c}).$$ If $k_{1}\leq n-r$, then $\widetilde{f}_{i}{\bf a}^{\tau}\neq{\bf 0}$ and $$M\left(\widetilde{f}_{i}{\bf c}\right)=M\left[\widetilde{f}_{i}\boldsymbol{\rm a}^{\tau},{\boldsymbol{\rm b}}^{\tau}\right]=\widetilde{f}_{i}M({\bf c}).$$ If $k_{1}>n-r$, then $\widetilde{f}_{i}{\bf a}^{\tau}={\bf 0}$ and $\widetilde{f}_{i}{\bf c}\not\in\mathcal{B}^{J}_{\Omega}$, which implies that $\widetilde{f}_{i}M({\bf c})={\bf 0}$ and $\widetilde{f}_{i}{\bf c}={\bf 0}$ in $\mathcal{B}^{J}_{\Omega}$, respectively. Hence we have $\widetilde{f}_{i}M({\bf c})=M\left(\widetilde{f}_{i}{\bf c}\right)={\bf 0}$. Case 2. Suppose that ${\bf c}\in\mathcal{B}_{\Omega}$ is arbitrary. 
We assume that $$M_{1}:=M\left({\bf c}^{J}\right)=M[\boldsymbol{\rm a}^{\tau},\boldsymbol{\rm b% }^{\tau}],\quad M_{2}:=M^{-}({\bf c}_{J_{1}})=M({\bf a}^{\prime},{\bf b}^{% \prime}),$$ for some $(\boldsymbol{\rm a},\boldsymbol{\rm b})\in\mathcal{I}_{[\overline{r}]\times([n% ]\setminus[r])}$ and $(\boldsymbol{\rm a}^{\prime},\boldsymbol{\rm b}^{\prime})\in\mathcal{I}_{[r]% \times[\overline{r}]}$. By Proposition 4.1 and the arguments in Case 1, we see that (4.17) $$\varepsilon_{i}(M_{1})=\varepsilon_{i}(\boldsymbol{\rm a}^{\tau})\quad% \varepsilon_{i}(M_{2})=\varepsilon_{i}(\boldsymbol{\rm b}^{\prime}).$$ Since $\langle{\rm wt}(M_{1}),h_{i}\rangle=\langle{\rm wt}(\boldsymbol{\rm a}^{\tau})% ,h_{i}\rangle$ and $\langle{\rm wt}(M_{2}),h_{i}\rangle=\langle{\rm wt}(\boldsymbol{\rm b}^{\prime% }),h_{i}\rangle$, we also have (4.18) $$\varphi_{i}(M_{1})=\varphi_{i}(\boldsymbol{\rm a}^{\tau}),\quad\varphi_{i}(M_{% 2})=\varphi_{i}(\boldsymbol{\rm b}^{\prime}).$$ By (4.15) and (4.16), we have $$\begin{split}\displaystyle\psi(\widetilde{e}_{i}{\bf c})=\left(\widetilde{e}_{% i}{\bf c}^{J}\right)\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}}&\displaystyle% \Longleftrightarrow\quad k_{0}\leq n-r\\ &\displaystyle\Longleftrightarrow\quad\widetilde{e}_{i}(\boldsymbol{\rm a}^{% \tau}\otimes\boldsymbol{\rm b}^{\prime})=(\widetilde{e}_{i}\boldsymbol{\rm a}^% {\tau})\otimes\boldsymbol{\rm b}^{\prime}\\ &\displaystyle\Longleftrightarrow\quad\varphi_{i}(\boldsymbol{\rm a}^{\tau})% \geq\varepsilon_{i}(\boldsymbol{\rm b}^{\prime}),\\ \end{split}$$ $$\begin{split}\displaystyle\psi(\widetilde{e}_{i}{\bf c})={\bf c}^{J}\otimes(% \widetilde{e}_{i}{\bf c}_{J_{1}})\otimes{\bf c}_{J_{2}}&\displaystyle% \Longleftrightarrow\quad k_{0}>n-r\\ &\displaystyle\Longleftrightarrow\quad\widetilde{e}_{i}(\boldsymbol{\rm a}^{% \tau}\otimes\boldsymbol{\rm b}^{\prime})=\boldsymbol{\rm a}^{\tau}\otimes(% \widetilde{e}_{i}\boldsymbol{\rm b}^{\prime})\\ 
&\displaystyle\Longleftrightarrow\quad\varphi_{i}(\boldsymbol{\rm a}^{\tau})<\varepsilon_{i}(\boldsymbol{\rm b}^{\prime}).\\ \end{split}$$ Therefore, we have by (4.17) and (4.18), $$\psi\left(\widetilde{e}_{i}{\bf c}\right)=\begin{cases}(\widetilde{e}_{i}{\bf c}^{J})\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}},&\text{if $\varphi_{i}(M_{1})\geq\varepsilon_{i}(M_{2})$},\\ {\bf c}^{J}\otimes(\widetilde{e}_{i}{\bf c}_{J_{1}})\otimes{\bf c}_{J_{2}},&\text{if $\varphi_{i}(M_{1})<\varepsilon_{i}(M_{2})$}.\end{cases}$$ Similarly, we have $$\psi\left(\widetilde{f}_{i}{\bf c}\right)=\begin{cases}\left(\widetilde{f}_{i}{\bf c}^{J}\right)\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}},&\text{if $\varphi_{i}(M_{1})>\varepsilon_{i}(M_{2})$},\\ {\bf c}^{J}\otimes\left(\widetilde{f}_{i}{\bf c}_{J_{1}}\right)\otimes{\bf c}_{J_{2}},&\text{if $\varphi_{i}(M_{1})\leq\varepsilon_{i}(M_{2})$}.\end{cases}$$ It follows that $\psi$ commutes with $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in J_{1}$. By the same arguments, we can show that $\psi$ commutes with $\widetilde{e}_{i}$ and $\widetilde{f}_{i}$ for $i\in J_{2}$. This completes the proof. ∎ Let $B_{J}(\infty)$ denote the $\mathfrak{g}_{J}$-crystal of the negative part of $U_{q}(\mathfrak{g}_{J})$, and regard it as a $\mathfrak{g}$-crystal where $\widetilde{e}_{i}b=\widetilde{f}_{i}b={\bf 0}$ and $\varepsilon_{i}(b)=\varphi_{i}(b)=-\infty$ for $i\in I\setminus J$ and $b\in B_{J}(\infty)$. Let $W_{J}$ be the Weyl group of $\mathfrak{g}_{J}$ generated by $s_{j}$ for $j\in J$, and let $w^{J}$ be the longest element in the set of coset representatives of minimal length in $W/W_{J}$. Consider $\boldsymbol{\rm i}\in R(w_{0})$ corresponding to $w_{0}=w^{J}w_{J}$, where $w_{J}$ is the longest element in $W_{J}$. 
Let $U^{-}\left(J\right)$ be the $\mathbb{Q}(q)$-subspace of $U^{-}$ spanned by $b_{\boldsymbol{\rm i}}({\bf c})\in B_{\boldsymbol{\rm i}}$ for ${\bf c}\in\mathbb{Z}_{+}^{N}$ such that $c_{k}=0$ unless $\beta_{k}\in\Phi^{+}(J)$. Then $U^{-}\left(J\right)$ is independent of the choice of $\boldsymbol{\rm i}$, and forms a subalgebra of $U^{-}$ called the quantum nilpotent subalgebra associated to $w^{J}$ [17, Proposition 8.2]. By using a PBW-type basis, we see that the multiplication in $U^{-}$ gives an isomorphism of $\mathbb{Q}(q)$-vector spaces (4.19) $$U^{-}\cong U^{-}\left(J\right)\otimes U^{-}_{J}$$ (see [9, 22] for its generalization to the case of a symmetrizable Kac-Moody algebra). The image of the PBW-type basis of $U^{-}(J)$ under $\pi$ forms a subcrystal of $\pi(B_{\boldsymbol{\rm i}})$ in $L(\infty)/qL(\infty)$, which we denote by $B^{J}(\infty)$. Then we have the following tensor product decomposition of $B(\infty)$, which is a crystal version of (4.19). Corollary 4.3. As a $\mathfrak{g}$-crystal, we have $$B(\infty)\cong B^{J}(\infty)\otimes B_{J}(\infty).$$ Proof. We have $B_{J}(\infty)\cong B_{J_{1}}(\infty)\otimes B_{J_{2}}(\infty)\cong\mathcal{B}_{\Omega_{J_{1}}}\!\otimes\mathcal{B}_{\Omega_{J_{2}}}$ and $B^{J}(\infty)\cong\mathcal{B}_{\Omega}^{J}$. Hence the corollary follows from Theorem 4.2. ∎ Remark 4.4. The isomorphism in Theorem 4.2 is a special case of the bijection $\Omega_{w}$ for $w\in W$ in [1, Proposition 5.25] when $w=w^{J}$ (see also [9, Proposition 3.14]). We should remark that $\Omega_{w}$ is not in general a crystal isomorphism for arbitrary $w\in W$. 5. Crystal embedding 5.1. Let $\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathscr{P}_{n}$ be given. For $S\in SST_{[n]}(\lambda)$, we define (5.1) $${\bf c}^{+}(S)=(c_{ij})\in\mathcal{B}_{\Omega^{+}},$$ where $c_{ij}$ is given by the number of $j$'s appearing in the $i$th row of $S$ for $1\leq i<j\leq n$. Then we have the following, which is well known to experts in this area. 
Proposition 5.1. For $\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathscr{P}_{n}$, the map $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[n]}(\lambda)\otimes T_% {-\lambda}\ \ar@{->}[r]&\ \mathcal{B}_{\Omega^{+}},\\ S\otimes t_{-\lambda}\ar@{|->}[r]&{\bf c}^{+}(S)}$$ is an embedding of crystals. Proof. It follows immediately from comparing the crystal structures on $SST_{[n]}(\lambda)$ and $\mathcal{B}_{\Omega^{+}}$ described in Proposition 4.1. ∎ We also have embeddings into $\mathcal{B}_{\Omega^{-}}$. For $T\in SST_{[\overline{n}]}(\lambda)$, we define (5.2) $${\bf c}^{-}(T)=(c_{ij})\in\mathcal{B}_{\Omega^{-}},$$ where $c_{ij}$ is given by the number of $\overline{i}$’s appearing in the $(n-j+1)$th row of $T$ for $1\leq i<j\leq n$. Similarly, for $S\in SST_{[n]}(\lambda)$, we define (5.3) $${\bf c}_{-}(S)={\bf c}^{-}\left(\sigma^{-d}(S)^{{}^{\nwarrow}}\right)\in% \mathcal{B}_{\Omega^{-}},$$ for some $d\geq\lambda_{1}$. Note that ${\bf c}_{-}(S)$ does not depend on the choice of $d\geq\lambda_{1}$. Proposition 5.2. For $\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathscr{P}_{n}$, the maps $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[\overline{n}]}(\lambda% )\otimes T_{w_{0}\lambda}\ \ar@{->}[r]&\ \mathcal{B}_{\Omega^{-}},\\ T\otimes t_{w_{0}\lambda}\ar@{->}[r]&{\bf c}^{-}(T)}\quad\quad\xymatrixcolsep{% 3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[n]}(\lambda)\otimes T_{-\lambda}\ \ar@% {->}[r]&\ \mathcal{B}_{\Omega^{-}},\\ S\otimes t_{-\lambda}\ar@{->}[r]&{\bf c}_{-}(S)}$$ are embeddings of crystals. Proof. The proof also follows immediately from (3.5) and Proposition 4.1.∎ Example 5.3. 
Suppose that $n=6$ and let $$S\ =\ \begin{array}{cccccc}1&1&1&2&2&3\\ 2&3&3&5&6\\ 4&4&4\\ 5&5&6\\ 6&6\end{array}\ \in SST_{[6]}(6,5,3,3,2)\ .$$ Then we have by (5.1) $${\bf c}^{+}(S)=\begin{bmatrix}c_{12}&c_{13}&c_{14}&c_{15}&c_{16}\\ &c_{23}&c_{24}&c_{25}&c_{26}\\ &&c_{34}&c_{35}&c_{36}\\ &&&c_{45}&c_{46}\\ &&&&c_{56}\end{bmatrix}=\begin{bmatrix}2&1&0&0&0\\ &2&0&1&1\\ &&3&0&0\\ &&&2&1\\ &&&&2\end{bmatrix}\ \in\mathcal{B}_{\Omega^{+}}\ .$$ On the other hand, $$\sigma^{-6}(S)\ =\ \begin{array}{cccccc}&&&&&\overline{6}\\ &&&\overline{6}&\overline{5}&\overline{5}\\ &&&\overline{4}&\overline{4}&\overline{4}\\ &&\overline{5}&\overline{3}&\overline{3}&\overline{2}\\ \overline{3}&\overline{2}&\overline{2}&\overline{1}&\overline{1}&\overline{1}\end{array}\qquad\sigma^{-6}(S)^{{}^{\nwarrow}}\ =\ \begin{array}{cccccc}\overline{6}&\overline{6}&\overline{5}&\overline{4}&\overline{1}&\overline{1}\\ \overline{5}&\overline{5}&\overline{4}&\overline{2}\\ \overline{4}&\overline{3}&\overline{3}\\ \overline{3}&\overline{2}&\overline{1}\\ \overline{2}\end{array}\ .$$ Hence by (5.3), we have $${\bf c}_{-}(S)=\begin{bmatrix}c_{56}&c_{46}&c_{36}&c_{26}&c_{16}\\ &c_{45}&c_{35}&c_{25}&c_{15}\\ &&c_{34}&c_{24}&c_{14}\\ &&&c_{23}&c_{13}\\ &&&&c_{12}\end{bmatrix}=\begin{bmatrix}1&1&0&0&2\\ &1&0&1&0\\ &&2&0&0\\ &&&1&1\\ &&&&0\end{bmatrix}\ \in\mathcal{B}_{\Omega^{-}}\ .$$ 5.2. Let $\Omega$ be an orientation with a single sink as in (4.10): $$\xymatrixcolsep{2pc}\xymatrixrowsep{0pc}\xymatrix{\bullet\ar@{->}[r]&\cdots\ar@{->}[r]&\bullet&\ar@{->}[l]\cdots&\ar@{->}[l]\bullet\\ {}_{1}&&{}_{r}&&{}_{n-1}}$$ for some $r\in I\setminus\{1,n-1\}$. We keep the notation of Section 4.3. Fix $\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathscr{P}_{n}$.
Choose $d\geq\lambda_{1}$ and put (5.4) $$\eta=(d-\lambda_{r},\ldots,d-\lambda_{1}),\quad\zeta=(\lambda_{r+1},\ldots,\lambda_{n}).$$ We define a map (5.5) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[n]}(\lambda)\ \ar@{->}[r]&\ SST_{[n]\setminus[r]}(\zeta)\times SST_{[\overline{r}]}(\eta)\times{\mathcal{M}}_{[\overline{r}]\times([n]\setminus[r])}\\ S\ \ar@{|->}[r]&\ (S^{+},S^{-},M)}$$ where $(S^{+},S^{-},M)$ is determined by the following steps: (i) let $S^{+}\in SST_{[n]\setminus[r]}(\zeta)$ be the tableau obtained by removing the first $r$ rows of $S$, (ii) let $S\setminus S^{+}$ denote the subtableau of $S$ obtained by removing $S^{+}$, and put $$\begin{split}\displaystyle P^{\prime}&\displaystyle=\text{the subtableau of $S\setminus S^{+}$ with entries in $[r]$},\\ \displaystyle Q&\displaystyle=\text{the subtableau of $S\setminus S^{+}$ with entries in $[n]\setminus[r]$},\\ \end{split}$$ (iii) putting $P=\sigma^{-d}(P^{\prime})$ (see (3.5)), we have, for some $\nu\in\mathscr{P}_{r}$ with $\eta\subset\nu$, $$(P,Q)\in SST_{[\overline{r}]}(\nu^{\pi})\times SST_{[n]\setminus[r]}(\left(\nu/\eta\right)^{\pi}),$$ (iv) applying $\kappa^{-1}$ in (3.4), we get $$(T,M)=\kappa^{-1}(P,Q)\in SST_{[\overline{r}]}(\eta^{\pi})\times{\mathcal{M}}_{[\overline{r}]\times([n]\setminus[r])},$$ (v) let $S^{-}=T^{{}^{\nwarrow}}\in SST_{[\overline{r}]}(\eta)$. The procedure can be summarized as follows: (5.6) $$\xymatrixcolsep{3pc}\xymatrixrowsep{2pc}\xymatrix{S\ \ar@{->}[r]^{{\rm(i)}}&(S^{+},S\setminus S^{+})\ \ar@{->}[r]^{{\rm(ii)}}&(S^{+},P^{\prime},Q)\ar@{->}[d]^{{\rm(iii)}}\\ (S^{+},S^{-},M)&\ar@{->}[l]^{{\rm(v)}}(S^{+},T,M)&\ar@{->}[l]^{{\rm(iv)}}(S^{+},P,Q)}$$ We note that $S$ is uniquely determined by the triple $(S^{+},S^{-},M)$ by construction, and hence the map (5.6) is injective.
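As an illustrative aid (not part of the paper), steps (i) and (ii) of the map (5.5), together with the row-counting rule for ${\bf c}^{+}$ as it can be read off from (5.1) and Example 5.3, admit a short Python sketch; the function names are ours, and a tableau is simply encoded as a list of its rows.

```python
def c_plus(rows, n):
    """Counting rule of (5.1), as read off from Example 5.3:
    the entry c_{ij} of c^+(S) is the number of j's in the i-th row of S."""
    return {(i, j): (rows[i - 1].count(j) if i <= len(rows) else 0)
            for i in range(1, n) for j in range(i + 1, n + 1)}

def split_steps_i_ii(rows, r):
    """Steps (i)-(ii) of (5.5): remove the first r rows to get S^+,
    then separate the remaining top part by entries in [r] vs [n] \\ [r]."""
    s_plus = rows[r:]                                   # step (i)
    top = rows[:r]
    p_prime = [[x for x in row if x <= r] for row in top]   # entries in [r]
    q = [[x for x in row if x > r] for row in top]          # entries in [n] \ [r]
    return s_plus, p_prime, q

# The tableau S of Example 5.3 (n = 6), encoded row by row.
S = [[1, 1, 1, 2, 2, 3],
     [2, 3, 3, 5, 6],
     [4, 4, 4],
     [5, 5, 6],
     [6, 6]]

c = c_plus(S, 6)
print([c[(1, j)] for j in range(2, 7)])   # first row of c^+(S): [2, 1, 0, 0, 0]

s_plus, p_prime, q = split_steps_i_ii(S, 3)   # taking r = 3
print(s_plus)                             # [[5, 5, 6], [6, 6]]
```

With $r=3$ the split reproduces the $S^{+}$, $P^{\prime}$ and $Q$ of the worked example below; the later steps (iii)-(v) involve $\sigma^{-d}$ and $\kappa^{-1}$ and are not sketched here.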
Now for $S\in SST_{[n]}(\lambda)$ which is mapped to $(S^{+},S^{-},M)$ under (5.5), we define (5.7) $${\bf c}(S)\in\mathcal{B}_{\Omega}$$ to be the unique ${\bf c}\in\mathcal{B}_{\Omega}$ such that (1) $M\left({\bf c}^{J}\right)=M$ under (4.13), (2) ${\bf c}_{J_{1}}={\bf c}^{-}(S^{-})$ and ${\bf c}_{J_{2}}={\bf c}^{+}(S^{+})$ under (5.1) and (5.2), respectively. Then we have the following, which is the main result of this paper. Theorem 5.4. For $\lambda\in\mathscr{P}_{n}$, the map $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[n]}(\lambda)\otimes T_{-\lambda}\ \ar@{->}[r]&\ \mathcal{B}_{\Omega}\\ S\otimes t_{-\lambda}\ar@{|->}[r]&{\bf c}(S)}$$ is an embedding of crystals, where ${\bf c}(S)$ is given in (5.7). Proof. Put ${\mathcal{M}}={\mathcal{M}}_{[\overline{r}]\times([n]\setminus[r])}$. Since the map ${\bf c}\mapsto M({\bf c})$ is a bijection from $\mathcal{B}^{J}_{\Omega}$ to ${\mathcal{M}}$ by (4.13), ${\mathcal{M}}$ has a crystal structure induced from (4.14). Let $\eta$ and $\zeta$ be as in (5.4). Then we define a $\mathfrak{g}$-crystal $${\mathcal{M}}_{\lambda}={\mathcal{M}}\otimes SST_{[\overline{r}]}(\eta)\otimes SST_{[n]\setminus[r]}(\zeta),$$ where we extend the $\mathfrak{g}_{J_{1}}$-crystal $SST_{[\overline{r}]}(\eta)$ and the $\mathfrak{g}_{J_{2}}$-crystal $SST_{[n]\setminus[r]}(\zeta)$ to $\mathfrak{g}$-crystals in a trivial way. Put $\xi=-d(\epsilon_{1}+\cdots+\epsilon_{r})$. By (4.14) and Theorem 4.2, we see that the crystal structure on $\mathcal{M}_{\lambda}$ coincides with the one given in [10, Section 4.2]. By [10, Proposition 4.6], the map (5.8) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{SST_{[n]}(\lambda)\otimes T_{\xi}\ \ar@{->}[r]&\ {\mathcal{M}}_{\lambda}\ ,\\ S\otimes t_{\xi}\ar@{|->}[r]&M\otimes S^{-}\otimes S^{+}}$$ is an embedding of $\mathfrak{g}$-crystals, where $(S^{+},S^{-},M)$ is the triple associated to $S$ in (5.6).
Next, taking the tensor product with $T_{-\xi-\lambda}$ and then applying (4.13) and Propositions 5.1 and 5.2, we have an embedding (5.9) $$\xymatrixcolsep{3pc}\xymatrixrowsep{0pc}\xymatrix{{\mathcal{M}}_{\lambda}\otimes T_{-\xi-\lambda}\ \ar@{->}[r]&\ \mathcal{B}_{\Omega}^{J}\,\otimes\mathcal{B}_{\Omega_{J_{1}}}\!\otimes\mathcal{B}_{\Omega_{J_{2}}}\ ,\\ M\otimes S^{-}\otimes S^{+}\otimes t_{-\xi-\lambda}\ar@{|->}[r]&{\bf c}^{J}\otimes{\bf c}_{J_{1}}\otimes{\bf c}_{J_{2}}}$$ where ${\bf c}={\bf c}(S)$ is given in (5.7). Finally, composing (5.8), (5.9), and the inverse of the map in Theorem 4.2, we obtain the required embedding. Note that $S^{-}$ depends on $d$, but ${\bf c}^{-}(S^{-})$ does not. ∎ Remark 5.5. Suppose that $\Omega$ is a quiver of type $A_{n-1}$ with a single source. Let $\mathcal{B}_{\Omega}^{\ast}$ be the set $\mathcal{B}_{\Omega}$ with the $\ast$-crystal structure. Then by methods similar to those in Theorem 5.4, we can also construct an embedding of $SST_{[n]}(\lambda)$ into $\mathcal{B}_{\Omega}^{\ast}$ for $\lambda\in\mathscr{P}_{n}$. Example 5.6.
Suppose that $\Omega$ is given by $$\xymatrixcolsep{2pc}\xymatrixrowsep{0pc}\xymatrix{\bullet\ar@{->}[r]&\bullet\ar@{->}[r]&\bullet&\ar@{->}[l]\bullet&\ar@{->}[l]\bullet\\ {}_{1}&{}_{2}&{}_{3}&{}_{4}&{}_{5}}$$ Recall that the Auslander-Reiten quiver of the representations of $\Omega$ is given by $$\xymatrixcolsep{0.5pc}\xymatrixrowsep{1pc}\xymatrix{&&{}_{36}\ar@{->}[rd]&&{}_{23}\ar@{.>}[ll]\ar@{->}[rd]&&{}_{12}\ar@{.>}[ll]\\ &{}_{35}\ar@{->}[ur]\ar@{->}[rd]&&{}_{26}\ar@{.>}[ll]\ar@{->}[ur]\ar@{->}[rd]&&{}_{13}\ar@{.>}[ll]\ar@{->}[ur]\\ {}_{34}\ar@{->}[ur]\ar@{->}[rd]&&{}_{25}\ar@{.>}[ll]\ar@{->}[ur]\ar@{->}[rd]&&{}_{16}\ar@{.>}[ll]\ar@{->}[ur]\ar@{->}[rd]\\ &{}_{24}\ar@{->}[ur]\ar@{->}[rd]&&{}_{15}\ar@{.>}[ll]\ar@{->}[ur]\ar@{->}[rd]&&{}_{46}\ar@{.>}[ll]\ar@{->}[rd]\\ &&{}_{14}\ar@{->}[ur]&&{}_{45}\ar@{.>}[ll]\ar@{->}[ur]&&{}_{56}\ar@{.>}[ll]}$$ where the vertex $ij$ denotes the indecomposable representation of $\Omega$ corresponding to the positive root $\epsilon_{i}-\epsilon_{j}\in\Phi^{+}$ for $1\leq i<j\leq 6$, the solid arrows denote the morphisms between them, and the dashed arrows denote the Auslander-Reiten translation functor, denoted by $\tau$ in [18]. Let $S$ be as in Example 5.3. Let us apply the map (5.5) to $S$, following (5.6).
First, we have $$S^{+}\ =\ \begin{array}{ccc}5&5&6\\ 6&6\end{array}\qquad S\setminus S^{+}\ =\ \begin{array}{cccccc}1&1&1&2&2&3\\ 2&3&3&5&6\\ 4&4&4\end{array}$$ Separating $S\setminus S^{+}$ into subtableaux with entries in $\{1,2,3\}$ and $\{4,5,6\}$, we get $$P^{\prime}\ =\ \begin{array}{cccccc}1&1&1&2&2&3\\ 2&3&3&\cdot&\cdot\\ \cdot&\cdot&\cdot\end{array}\qquad Q\ =\ \begin{array}{cccccc}\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot&5&6\\ 4&4&4\end{array}$$ and $$P\ =\ \sigma^{-6}(P^{\prime})\ =\ \begin{array}{cccccc}&&&\overline{3}&\overline{3}&\overline{2}\\ \overline{3}&\overline{2}&\overline{2}&\overline{1}&\overline{1}&\overline{1}\end{array}$$ Applying $\kappa$ to the pair $$(P,Q)\ =\ \left(\ \begin{array}{cccccc}&&&\overline{3}&\overline{3}&\overline{2}\\ \overline{3}&\overline{2}&\overline{2}&\overline{1}&\overline{1}&\overline{1}\end{array}\ ,\ \begin{array}{cccccc}&&&5&6&\cdot\\ 4&4&4&\cdot&\cdot&\cdot\end{array}\ \right),$$ where ${\rm sh}(P)=(6,3)^{\pi}$ and ${\rm sh}(Q)=\left((6,3)/(3,1)\right)^{\pi}$, we have $(T,M)$ where $$T\ =\ \begin{array}{ccc}&&\overline{3}\\ \overline{2}&\overline{2}&\overline{1}\end{array}\ \in SST_{[\overline{3}]}((3,1)^{\pi}),\qquad M\ =\ \begin{bmatrix}0&1&1\\ 1&0&0\\ 2&0&0\end{bmatrix}\ \in{\mathcal{M}}_{[\overline{3}]\times([6]\setminus[3])},$$ with $$S^{-}=T^{{}^{\nwarrow}}=\begin{array}{ccc}\overline{3}&\overline{2}&\overline{1}\\ \overline{2}&&\end{array}\ .$$ Therefore, we have the triple $(S^{+},S^{-},M)$ associated to $S$: $$(S^{+},S^{-},M)\ =\ \left(\ \begin{array}{ccc}5&5&6\\ 6&6\end{array}\ ,\ \begin{array}{ccc}\overline{3}&\overline{2}&\overline{1}\\ \overline{2}&&\end{array}\ ,\ \begin{bmatrix}0&1&1\\ 1&0&0\\ 2&0&0\end{bmatrix}\ \right).$$ Finally, the corresponding ${\bf c}(S)=({\bf c}^{J},{\bf c}_{J_{1}},{\bf c}_{J_{2}})\in\mathcal{B}_{\Omega}$ in (5.7) is given by $${\bf c}^{J}=\begin{bmatrix}c_{34}&c_{35}&c_{36}\\ c_{24}&c_{25}&c_{26}\\ c_{14}&c_{15}&c_{16}\end{bmatrix}=\begin{bmatrix}0&1&1\\ 1&0&0\\ 2&0&0\end{bmatrix},$$ and $${\bf c}_{J_{1}}=\begin{bmatrix}c_{23}&c_{13}\\ &c_{12}\end{bmatrix}=\begin{bmatrix}1&1\\ &0\end{bmatrix},\qquad{\bf c}_{J_{2}}=\begin{bmatrix}c_{45}&c_{46}\\ &c_{56}\end{bmatrix}=\begin{bmatrix}2&1\\ &2\end{bmatrix}.$$ Remark 5.7. Let $\Omega^{\prime}$ be another quiver of type $A_{n-1}$ with a single sink. Using Theorem 5.4, one can describe the transition map $R_{\Omega}^{\Omega^{\prime}}:\mathcal{B}_{\Omega}\rightarrow\mathcal{B}_{\Omega^{\prime}}$ as follows.
Let ${\bf c}\in\mathcal{B}_{\Omega}$ be given. There exists a pair of Young tableaux $(S^{+},S^{-})$ (not necessarily unique) such that ${\bf c}_{J_{1}}={\bf c}^{-}(S^{-})$ and ${\bf c}_{J_{2}}={\bf c}^{+}(S^{+})$. We can apply the inverse of the algorithm (5.6) to $(S^{+},S^{-},{\bf c}^{J})$ to obtain a Young tableau $S\in SST_{[n]}(\lambda)$ for some $\lambda\in\mathscr{P}_{n}$ such that each $\lambda_{i}-\lambda_{i+1}$ is sufficiently large. Let ${\bf c}^{\prime}$ be the Lusztig datum of $S$ with respect to $\Omega^{\prime}$, which is also obtained by the algorithm (5.6). Then we have ${\bf c}^{\prime}=R_{\Omega}^{\Omega^{\prime}}({\bf c})$. Note that if either one of $\Omega$ and $\Omega^{\prime}$ is $\Omega^{\pm}$, then one may apply only Propositions 5.1 and 5.2 to obtain $R_{\Omega}^{\Omega^{\prime}}$. We also refer to [2, Section 4] for a closed-form formula for $R_{\Omega}^{\Omega^{\prime}}$, which is a tropicalization of a subtraction-free rational function connecting two parametrizations of a totally positive variety. It would be interesting to compare these two algorithms. References [1] P. Baumann, J. Kamnitzer, P. Tingley, Affine Mirković-Vilonen polytopes, Publ. Math. Inst. Hautes Études Sci. 120 (2014) 113–205. [2] A. Berenstein, S. Fomin, A. Zelevinsky, Parametrizations of canonical bases and totally positive matrices, Adv. Math. 122 (1996) 49–149. [3] J. Hong, S.-J. Kang, Introduction to Quantum Groups and Crystal Bases, Graduate Studies in Mathematics 42, Amer. Math. Soc., 2002. [4] W. Fulton, Young Tableaux, with Applications to Representation Theory and Geometry, Cambridge Univ. Press, 1997. [5] V. G. Kac, Infinite-dimensional Lie Algebras, Third edition, Cambridge University Press, Cambridge, 1990. [6] M. Kashiwara, On crystal bases of the $q$-analogue of universal enveloping algebras, Duke Math. J. 63 (1991) 465–516. [7] M. Kashiwara, On crystal bases, Representations of groups, CMS Conf. Proc., 16, Amer. Math.
Soc., Providence, RI, (1995) 155–197. [8] M. Kashiwara, T. Nakashima, Crystal graphs for representations of the $q$-analogue of classical Lie algebras, J. Algebra 165 (1994) 295–345. [9] Y. Kimura, Remarks on quantum unipotent subgroup and dual canonical basis, preprint (2015) arXiv:1506.07912. [10] J.-H. Kwon, Demazure crystals of generalized Verma modules and a flagged RSK correspondence, J. Algebra 322 (2009) 2150–2179. [11] J.-H. Kwon, RSK correspondence and classically irreducible Kirillov-Reshetikhin crystals, J. Combin. Theory Ser. A 120 (2013) 433–452. [12] J.-H. Kwon, Crystal Bases of q-deformed Kac Modules, Int. Math. Res. Not. (2014) 512–550. [13] G. Lusztig, Canonical bases arising from quantized universal enveloping algebras, J. Amer. Math. Soc. 3 (1990) 447–498. [14] G. Lusztig, Canonical bases arising from quantized enveloping algebras II, Progr. Theor. Phys. Suppl., 102 (1990) 175–201. [15] G. Lusztig, Quivers, perverse sheaves, and quantized enveloping algebras, J. Amer. Math. Soc. 4 (1991) 365–421. [16] G. Lusztig, Introduction to quantum groups, Progress in Math. 110, Birkhäuser, 1993. [17] G. Lusztig, Braid group action and canonical bases, Adv. Math. 122 (1996) 237–261. [18] M. Reineke, On the coloured graph structure of Lusztig’s canonical basis, Math. Ann. 307 (1997) 705–723. [19] Y. Saito, PBW basis of quantized universal enveloping algebras, Publ. Res. Inst. Math. Sci. 30 (1994) 209–232. [20] Y. Saito, Mirković-Vilonen polytopes and a quiver construction of crystal basis in type A, Int. Math. Res. Not. IMRN 2012, 3877–3928. [21] B. E. Sagan, R. Stanley, Robinson-Schensted algorithms for skew tableaux, J. Combin. Theory Ser. A 55 (1990) 161–193. [22] T. Tanisaki, Modules over quantized coordinate algebras and PBW-bases, preprint (2014) arXiv:1409.7973.
Equation of state for asymmetric nuclear matter with infinite-order summation of ring diagrams J. Shamanna jaya@vbphysics.net.in Physics Department, Visva Bharati University, Santiniketan 731235, India    T.T.S. Kuo Physics Department, State University of New York at Stony Brook, Stony Brook, NY 11794-3800, USA    I. Bombaci Dipartimento di Fisica, Universita di Pisa, Italy    Subhankar Ray Dept. of Physics, Jadavpur University, Calcutta 700032, India (18 August 2005) Abstract The particle-particle hole-hole ring-diagram summation method is employed to obtain the equation of state of asymmetric nuclear matter over a wide range of asymmetry fraction. Compared with Brueckner Hartree-Fock (BHF) and model-space BHF calculations, this approach gives a softer equation of state, an increased symmetry energy and a lower value for the incompressibility modulus, which agrees quite well with the values used in the hydrodynamical model for supernova explosions. 1 Introduction A primary aim of microscopic nuclear theories is to derive the various nuclear properties, such as the binding energy per nucleon $(BE/A)$ and saturation density $(\rho_{0})$ of nuclear matter, starting from fundamental nucleon-nucleon (NN) interactions. The well-known BHF approach is one such standard nuclear matter theory. In terms of G-matrix diagrams, the BHF theory is, however, only a lowest-order approximation; the ground-state energy shift $\Delta E_{0}$ for nuclear matter, due to the NN interaction, is given merely by the first-order G-matrix diagram, fig. 1(a), namely $$\Delta E_{0}^{BHF}=\sum_{ab}n_{a}n_{b}\langle ab|G^{BHF}(\omega=\epsilon_{a}+\epsilon_{b})|ab\rangle$$ (1) In the above the $n$’s are the unperturbed Fermi-Dirac distribution functions, $n_{k}=1$ if $k\leq k_{F}$ and $n_{k}=0$ if $k>k_{F}$, where $k_{F}$ is the Fermi momentum. The single-particle (s.p.) energies, denoted by $\epsilon$, are determined self-consistently using the BHF theory.
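Structurally, eq. (1) is a double sum over occupied orbits weighted by diagonal G-matrix elements. A toy discretized Python sketch can make this explicit; the momentum grid and the placeholder `G_diag` are hypothetical stand-ins for a real s.p. basis and for $\langle ab|G^{BHF}|ab\rangle$, not part of the paper's calculation.

```python
def delta_E0_bhf(momenta, k_F, G_diag):
    """Toy version of eq. (1): Delta E_0 = sum_{a,b} n_a n_b <ab|G|ab>,
    with sharp Fermi-Dirac factors n_k = 1 for k <= k_F and 0 otherwise."""
    occupied = [k for k in momenta if k <= k_F]      # states with n_k = 1
    return sum(G_diag(ka, kb) for ka in occupied for kb in occupied)

# hypothetical momentum grid (fm^-1) and a constant attractive matrix element
grid = [0.4, 0.8, 1.2, 1.6, 2.0]
energy = delta_E0_bhf(grid, 1.35, lambda ka, kb: -1.0)
print(energy)  # 3 occupied states -> 9 pairs -> -9.0
```

In a real BHF calculation `G_diag` would itself depend on the self-consistent s.p. energies through the starting energy $\omega=\epsilon_{a}+\epsilon_{b}$, which is what makes the scheme iterative.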
As is well known, this approach has not been very successful; it has, in general, not been able to simultaneously reproduce the binding energy per nucleon ($BE/A=-16\pm 1$ MeV) and the saturation Fermi momentum ($k_{F}^{sat}=1.35\pm 0.05$ fm${}^{-1}$). When plotted on an energy-density plane, the results of various BHF calculations for nuclear matter invariably lie, more or less, on a band, the Coester band ccdv70 , which significantly misses the “experimental (empirical) box” for $BE/A$ and $k_{F}^{sat}$. The ground state of nuclear matter in BHF theories is treated as merely a Fermi sea. Particle-hole fluctuations near the Fermi sea, which represent the long-range correlations, are not considered. It should be of interest to allow for such Fermi-sea fluctuations, and they may be important in determining the stiffness of the nuclear-matter equation of state (EOS). Yang, Heyer and Kuo yhk86 proposed an elegant and rigorous method for summing up particle-particle hole-hole (pphh) and particle-hole (ph) ring diagrams to all orders for the calculation of the ground-state energy of a general many-body system. Inclusion of these classes of diagrams to all orders takes into account the Fermi-sea fluctuations and long-range nuclear correlations. With this motivation, several calculations for symmetric nuclear matter have been carried out with the inclusion of certain classes of ring diagrams to all orders syk87 ; jkm88 ; jmk89 . In comparison with conventional BHF b71 ; bbn85 calculations of nuclear matter, the inclusion of the pphh ring diagrams to all orders has the desirable effect of both increasing the nuclear-matter binding energy and lowering its saturation density.
The final expression for the ground-state energy shift in the pphh ring-diagram summation with a model-space reaction G-matrix ($G^{M}$) is given as $$\Delta E^{ring}_{0}=\int^{1}_{0}\frac{d\lambda}{\lambda}{\rm tr}\{Y_{m}(\lambda)Y^{*}_{m}(\lambda)G^{M}(\omega=\Delta^{A-2}_{m}(\lambda))\lambda\}$$ (2) Comparing with the corresponding BHF result of eq. (1), the main difference between the two methods is the replacement of the unperturbed occupation factors $n$, the BHF G-matrix $G^{BHF}$ and the starting energy ($\epsilon_{a}+\epsilon_{b}$) in the BHF expression by the RPA amplitudes $Y$, the model-space G-matrix $G^{M}$ and the RPA energies $\Delta$, respectively. In the above, $\lambda$ is a strength parameter introduced to facilitate the calculation, as will be discussed later. In the present paper, we would like to extend the above ring-diagram scheme to asymmetric nuclear matter, which is in several ways of greater physical importance than symmetric ($N=Z$) nuclear matter. The study of the EOS of asymmetric nuclear matter has become, in the last few years, a subject of renewed interest, particularly in connection with astrophysical problems bck85 ; bgh85 , such as supernova explosions and the evolution of neutron stars. For these physical processes, the nuclear matter involved is predominantly not symmetric, and it is the EOS of asymmetric nuclear matter (with a large neutron excess) that plays the important role. Furthermore, the nuclear matter probed by heavy-ion experiments is also generally asymmetric. Here the proton-neutron ratio is about 2/3 for both the target and the projectile, and thus the resulting nuclear matter is likely to be asymmetric with the same proton-neutron ratio. The densities sampled by astrophysical systems such as supernovae and neutron stars vary over an appreciably wide range, as do their isospin asymmetries.
According to the model of prompt explosion k89 , which has been widely employed to explain the explosion mechanism of a supernova, the nuclear-matter EOS needs to be relatively soft bck85 for the explosion to take place. It should be of interest to investigate the effect of ring diagrams on the stiffness of the EOS of asymmetric nuclear matter, as we shall do later. To our knowledge, such investigations have not yet been performed. Asymmetric nuclear matter calculations have been done using the Skyrme interactions jmz83 ; sylk86 , the Gogny interactions zs90 and the Brueckner-Bethe-Goldstone (BBG) approach bl91 ; bkl94 . Our present calculation is a continuation of the model-space BHF (MBHF) calculations for asymmetric nuclear matter carried out by Song, Wang and Kuo swk92 . In the past, the asymmetric nuclear matter properties were often extracted by interpolating between the two extreme cases of symmetric and pure neutron matter, with an empirical parabolic approximation cdl87 ; wff88 . The validity of this empirical practice seems not to have been investigated, and we would like to do so by carrying out a sequence of ring-diagram nuclear matter calculations covering a wide range of proton-neutron ratios and baryon densities. 2 Formalism Asymmetric nuclear matter is a system consisting of $N$ neutrons and $Z$ protons with $N\neq Z$. For symmetric nuclear matter $N$ and $Z$ are equal, and the neutrons and protons have the same Fermi momentum. For asymmetric matter, however, the neutrons and protons are treated as non-identical particles with different Fermi momenta. We introduce a parameter $\alpha$ as a measure of the asymmetry in nuclear matter, namely $$\alpha=\frac{(\rho_{n}-\rho_{p})}{\rho}=\frac{(N-Z)}{A}$$ (3) where $\rho$, $\rho_{n}$ and $\rho_{p}$ are respectively the nuclear, the neutron and the proton densities.
The neutron and proton Fermi momenta are $$k_{F}^{n}={(3\pi^{2}\rho_{n})}^{1/3}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}k_{F}^{p}={(3\pi^{2}\rho_{p})}^{1/3}$$ (4) An average Fermi momentum is defined as $$k_{F}={(\tfrac{3}{2}\pi^{2}\rho)}^{1/3}$$ (5) 3 Model-space G-matrix and single-particle spectrum As in the symmetric case, we start with a Hamiltonian $H=T+V$. Introducing a single-particle (s.p.) potential $U$ we rewrite it as $H=(T+U)+(V-U)$. $V$ is the NN potential, such as the Paris parispot or the Bonn bonnpot potential. A model space P is defined as a configuration space in which all nucleons are restricted to have momentum $k\leq k_{M}$, $k_{M}$ being the momentum-space boundary of P. A typical value for $k_{M}$ is $2k_{F}$, where $k_{F}$ is the Fermi momentum. As we shall discuss later, most of the calculations in the present work are performed using $k_{M}$=3.0 fm${}^{-1}$. Similar to the case of symmetric nuclear matter, we use a model-space Hartree-Fock method mk83 ; kmw84 ; kmm86 to determine $U$. This leads to the following self-consistent equations for the s.p. spectrum $\epsilon_{i}^{M}$, namely $$\epsilon^{M}_{i}=t_{k_{i}}+\Gamma_{k_{i}}(k_{i},\tau_{i})$$ (6) $$\Gamma_{k_{i}}(\omega,\tau_{i})=2\sum_{\stackrel{\scriptstyle\tau_{j},s_{i},s_{j}}{k_{j}\leq k_{F}^{\tau_{j}}}}\langle k_{i}k_{j}|G^{M}(\omega+\epsilon_{j})|k_{i}k_{j}\rangle\;\;\;\;\;\;\;\;\;k_{i}\leq k_{M}$$ (7) $$\Gamma_{k_{i}}(\omega,\tau_{i})=0\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;k_{i}>k_{M}$$ (8) Note that the subscript $i$ represents both momentum and isospin, namely $i\equiv({\bf k}_{i},\tau_{i})$, with ${\bf k}_{i}$ denoting the momentum and $\tau_{i}$ the $z$ component of the isospin of the $i$th nucleon. The single-particle kinetic energy is $t_{k_{i}}=\hbar^{2}k_{i}^{2}/2m$, and $G^{M}$ is the reaction matrix to be specified later.
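The densities and Fermi momenta of eqs. (3)-(5) are straightforward to evaluate numerically; a small sketch (units fm${}^{-3}$ and fm${}^{-1}$; the function name is ours):

```python
import math

def fermi_momenta(rho, alpha):
    """Eqs. (3)-(5): split the total density rho by the asymmetry alpha,
    then k_F^q = (3 pi^2 rho_q)^(1/3), and the average
    k_F = (3/2 pi^2 rho)^(1/3)."""
    rho_n = 0.5 * rho * (1.0 + alpha)   # neutron density
    rho_p = 0.5 * rho * (1.0 - alpha)   # proton density
    kF_n = (3.0 * math.pi ** 2 * rho_n) ** (1.0 / 3.0)
    kF_p = (3.0 * math.pi ** 2 * rho_p) ** (1.0 / 3.0)
    kF = (1.5 * math.pi ** 2 * rho) ** (1.0 / 3.0)
    return kF_n, kF_p, kF

# symmetric matter (alpha = 0) at rho = 0.16 fm^-3: all three momenta
# coincide, close to the empirical saturation value quoted above
kF_n, kF_p, kF = fermi_momenta(0.16, 0.0)
print(round(kF, 3))  # 1.333
```

For $\alpha=0$ the neutron, proton and average Fermi momenta coincide by construction, while for pure neutron matter ($\alpha=1$) one gets $k_{F}^{p}=0$ and $k_{F}^{n}=2^{1/3}k_{F}$.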
The Fermi momentum is represented by $k_{F}^{\tau_{j}}$, with neutron Fermi momentum $k_{F}^{n}$ for $\tau_{j}=-\frac{1}{2}$ and proton Fermi momentum $k_{F}^{p}$ for $\tau_{j}=\frac{1}{2}$. The model-space momentum boundary $k_{M}$ is taken to be the same for neutrons and protons. The s.p. potential is the one-body vertex function $\Gamma$ evaluated at the self-consistent energy $\omega=\epsilon_{k_{i}}$: $$U(k_{i},\tau_{i})=\Gamma_{k_{i}}(\epsilon_{k_{i}},\tau_{i})~{},~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}i\equiv n,p$$ (9) $U(k_{i},\tau_{i})$ is determined self-consistently for $k_{i}\leq k_{M}$, and for $k_{i}>k_{M}$ we set $U=0$. We also use an effective-mass description for the single-particle spectrum, $$\epsilon^{M}_{k_{i}}=\left\{\begin{array}[]{ll}(\hbar^{2}/2m_{q}^{*})k_{i}^{2}+\bigtriangleup_{q},&\;\;\;\;\;k_{i}\leq k_{M}\\ (\hbar^{2}/2m)k_{i}^{2},&\;\;\;\;\;k_{i}>k_{M}\end{array}\right.$$ (10) with four parameters $m^{*}_{q}$, $\bigtriangleup_{q}$ $(q=n,p)$. The effective mass $m^{*}$ and the zero-point energy $\bigtriangleup$ are determined self-consistently. The model-space reaction matrix $G^{M}$ satisfies the Bethe-Goldstone equation $$\langle ij|G^{M}(\omega)|mn\rangle=\langle ij|V|mn\rangle+\sum_{k,l}\frac{\langle ij|V|kl\rangle Q^{M}(k,l)\langle kl|G^{M}(\omega)|mn\rangle}{\omega-\epsilon_{k}-\epsilon_{l}}$$ (11) In the above equation $i,j,m,n,k$ and $l$ are single-particle states, each with momentum ${\bf k}$ and isospin $\tau$. $V$ is the nucleon-nucleon interaction. The energy variable $\omega$ in the denominator of eq. (11) is given by $$\omega=\epsilon_{k}^{M}+\epsilon_{l}^{M}$$ (12) The two-particle correlations considered are those where at least one particle is out of the model space.
Hence the Pauli operator $Q^{M}$ is defined as $$Q^{M}(k,l)=\left\{\begin{array}[]{ll}1,&\mbox{if max}(k_{k},k_{l})>k_{M}\;\;\mbox{and}\;\;k_{k}>k_{F}^{\tau_{k}},k_{l}>k_{F}^{\tau_{l}}\\ 0,&\mbox{otherwise}\end{array}\right.$$ (13) Note that $Q^{M}$ is different for each of the following three cases of the intermediate two-nucleon state. For the $nn$ ($pp$) case (fig.2(a)), the intermediate nucleons are identical, and only $k_{M}$ and $k_{F}^{q}$, $q=n$ or $p$, enter the calculation. For the $np$ or $pn$ case, however, $k_{M}$, $k_{F}^{n}$ and $k_{F}^{p}$ all play a role in determining $Q^{M}$ (fig.2(b)). It is convenient to carry out the above calculation in the relative and center of mass (RCM) frame. We choose our relative momentum ${\bf k}$ and center of mass momentum ${\bf K}$ as $${\bf k}=\frac{1}{2}{\bf(k_{k}-k_{l})}\;,\;\;\;\;\;\;\;{\bf K}={\bf(k_{k}+k_{l})}$$ (14) First we replace the Pauli exclusion operator $Q^{M}$, which is a function of the laboratory momenta, by its angle-average approximation $\bar{Q}^{M}$. We divide the plane of the two laboratory momenta into three regions A, B and C as shown in fig.3. The values of $Q^{M}$ in the three regions are shown in the figure. Each of the regions is transformed into the RCM frame. Then the angle-averaged value of $Q^{M}$ is $$\bar{Q}^{M}=\bar{Q}^{M}_{A}+\bar{Q}^{M}_{B}+\bar{Q}^{M}_{C}$$ (15) where, for region B, we have $$\bar{Q}^{M}_{B}(k,K)=\left\{\begin{array}[]{ll}0&\mbox{regions}\;\;a,b\\ ((k+\frac{1}{2}K)^{2}-k_{M}^{2})/2kK&\mbox{region}\;\;c\\ (k_{M}^{2}-(k-\frac{1}{2}K)^{2})/2kK&\mbox{region}\;\;d\\ (2k^{2}+\frac{1}{2}K^{2}-k_{M}^{2}-(k_{F}^{n})^{2})/2kK&\mbox{region}\;\;e\\ (k_{M}^{2}-(k_{F}^{n})^{2})/2kK&\mbox{region}\;\;f\end{array}\right.$$ (16) The subdomains a to f are shown in fig.4, and $2(k_{C}^{n})^{2}=[(k_{F}^{n})^{2}+k_{M}^{2}]$.
Angle-average approximations are standard (and indispensable) in treating Pauli exclusion operators in nuclear matter calculations, and are generally considered to be accurate bethe71 ; sp72 . This technique has the advantage of making the model-space reaction matrix diagonal in the RCM variables. Recent studies with an exact Pauli operator have been reported smc99 . It is not yet clear whether such calculations yield an appreciable difference in the binding energy per nucleon or the saturation density relative to previous studies. Angle averages for region C may be obtained from the above by substituting $k_{F}^{p}$ for $k_{F}^{n}$. The calculation above was an illustration of the case where the intermediate state contains one neutron and one proton. The other two possibilities may be readily obtained from the above by suitable substitution of the relevant Fermi momentum. A similar RCM mapping of region A in fig.3 gives $$\bar{Q}^{M}_{A}(k,K)=\left\{\begin{array}[]{ll}0&\mbox{region}\;\;a\\ 1&\mbox{region}\;\;b\\ (k^{2}+\frac{1}{4}K^{2}-k_{M}^{2})/kK&\mbox{region}\;\;c\end{array}\right.$$ (17) For symmetric nuclear matter $k_{F}^{n}=k_{F}^{p}$, so that $\bar{Q}^{M}_{B}=\bar{Q}^{M}_{C}$ and the angle-averaged $Q^{M}$ is the same as that given in eq. (16). Using the angle-averaged Pauli operator, the model-space reaction matrix can be decomposed into separate partial-wave channels as $$\langle kl|G^{M}(\omega,Kw)|k^{\prime}l^{\prime}\rangle=\langle kl|V|k^{\prime}l^{\prime}\rangle+\frac{2}{\pi}\int_{0}^{\infty}dk^{\prime\prime}~{}{k^{\prime\prime}}^{2}\sum_{l^{\prime\prime}}\frac{\langle kl|V|k^{\prime\prime}l^{\prime\prime}\rangle\bar{Q}^{M}(k^{\prime\prime},K)\langle k^{\prime\prime}l^{\prime\prime}|G^{M}(\omega,Kw)|k^{\prime}l^{\prime}\rangle}{\omega-H_{0}(k^{\prime\prime},K)}$$ (18) where $w$ denotes the partial-wave quantum numbers $(ll^{\prime}SJT)$, and $K$ is the center of mass momentum.
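For the region-A case of eq. (17), the angle average amounts to clamping a single ratio to $[0,1]$: the fraction of orientations of ${\bf k}$ relative to ${\bf K}$ for which both momenta $|{\bf K}/2\pm{\bf k}|$ exceed $k_{M}$. A minimal sketch (the function name and zero-momentum guards are ours):

```python
def q_bar_A(k, K, k_M):
    """Angle-averaged Pauli factor of eq. (17) for two intermediate nucleons
    both above the model-space boundary k_M."""
    if K == 0.0:
        return 1.0 if k > k_M else 0.0
    if k == 0.0:
        return 1.0 if 0.5 * K > k_M else 0.0
    x = (k * k + 0.25 * K * K - k_M * k_M) / (k * K)
    return min(1.0, max(0.0, x))  # 0 in region a, 1 in region b, x in region c
```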
The $K$ and $(SJT)$ quantum numbers associated with the bra and ket vectors have been suppressed for simplicity. For example, $\langle kl|$ should in fact be $\langle klSJT,K|$. Our convention for plane waves is $$\langle{\bf r}|klSJ\rangle=j_{l}(kr){\cal Y}_{lSJ}({\bf\hat{r}})$$ (19) where $j_{l}(kr)$ is the spherical Bessel function, and $\cal Y$ is the vector spherical harmonic corresponding to the coupling ${\bf J}={\bf l}+{\bf S}$. The angle-averaged reaction matrix $G^{M}(\omega,Kw)$ is diagonal in $K$ and $w$. This is a consequence of using angle averages for the projection operator $Q^{M}$ and the energy denominator. From eq. (12) the energy variable in the laboratory frame is $\epsilon_{k}^{M}+\epsilon_{l}^{M}$. Using RCM variables, the energy variable $\omega$ for the neutron spectrum is given by $$\omega=\frac{\hbar^{2}}{m_{p}^{*}}k^{2}+\frac{\hbar^{2}}{4m_{p}^{*}}K^{2}+\bigtriangleup_{n}+\bigtriangleup_{p}+\left[\frac{\hbar^{2}}{2m_{n}^{*}}-\frac{\hbar^{2}}{2m_{p}^{*}}\right](k^{n})^{2}$$ (20) A similar expression is obtained for the proton spectrum by replacing the subscripts and superscript $n$ by $p$. The other term in the denominator, $H_{0}=\epsilon_{m}+\epsilon_{n}$, is the unperturbed energy of the intermediate states and is also angle dependent. The momentum variables $k_{m}$ and $k_{n}$ corresponding to $k$ and $K$ may be in any of the three regions A, B or C. In region A, both intermediate particles have momentum larger than $k_{M}$. Therefore we have $$H^{\rm A}_{0}(k,K)=\frac{\hbar^{2}}{m}k^{2}+\frac{\hbar^{2}}{4m}K^{2}$$ (21) In region B, we have a proton with momentum larger than $k_{M}$ and a neutron with momentum between $k_{F}^{n}$ and $k_{M}$.
We use an angle-average approximation for $k_{m}^{2}$, i.e., $\langle k_{m}^{2}\rangle=\frac{1}{2}[(k_{F}^{n})^{2}+k_{M}^{2}]$, and obtain $$H^{\rm B}_{0}(k,K)=\frac{\hbar^{2}}{m}k^{2}+\frac{\hbar^{2}}{4m}K^{2}+\bigtriangleup_{n}+\frac{\hbar^{2}}{4m}[\frac{m}{m^{*}_{n}}-1][(k_{F}^{n})^{2}+k_{M}^{2}]$$ (22) The spectrum of $H_{0}$ for region C is the same as that of region B with the subscripts and superscript $n$ changed to $p$. Finally, the single-particle potential in the RCM frame for $k_{1}\leq k_{F}^{\tau_{q}}$, with $q\equiv n,p$, is given as $$U(k_{1},\tau_{q})=\frac{16}{\pi}\sum_{lSJ\tau_{j}}(2J+1)\int_{0}^{k_{-}}dk~{}k^{2}G^{M}_{lSJ\tau_{q}\tau_{j}}(k,\bar{K}_{1})+\frac{2}{\pi k_{1}}\sum_{lSJ\tau_{j}}(2J+1)\int_{k_{-}}^{k_{+}}dk~{}k[(k_{F}^{\tau_{j}})^{2}-k_{1}^{2}+4k(k_{1}-k)]G^{M}_{lSJ\tau_{q}\tau_{j}}(k,\bar{K}_{2})$$ (23) And for $k_{1}>k_{F}^{\tau_{q}}$ but less than $k_{M}$ we have $$U(k_{1},\tau_{q})=\frac{2}{\pi k_{1}}\sum_{lSJ\tau_{j}}(2J+1)\int_{k_{-}}^{k_{+}}dk~{}k[(k_{F}^{\tau_{j}})^{2}-k_{1}^{2}+4k(k_{1}-k)]G^{M}_{lSJ\tau_{q}\tau_{j}}(k,\bar{K}_{2})$$ (24) where $$k_{-}=\frac{1}{2}|k_{F}^{\tau_{j}}-k_{1}|\;,\;\;\;\;k_{+}=\frac{1}{2}(k_{F}^{\tau_{j}}+k_{1})\;,\;\;\;\;\bar{K}^{2}_{1}=4(k_{1}^{2}+k^{2})\;,\;\;\;\;\bar{K}^{2}_{2}=4(k_{1}^{2}+k^{2})-(2k+k_{1}-k_{F}^{\tau_{j}})(2k+k_{1}+k_{F}^{\tau_{j}})$$ (25) and the partial-wave reaction matrix elements are given by $$G^{M}_{lSJ\tau_{i}\tau_{j}}(k,K)=\langle kl\tau_{i}\tau_{j}|G^{M}(\omega,KlSJ)|kl\tau_{i}\tau_{j}\rangle$$ (26) Eq. (18) is in the isospin representation, with well-defined total isospin $T$ of the two-nucleon state. The reaction matrix elements in the neutron-proton representation, i.e., eq.
(26), are related to those in the isospin representation by Clebsch-Gordan coefficients, which explains the factor of $2$ in the expression for the s.p. potential. The s.p. potentials $U(k_{1},\tau_{n})$, $U(k_{1},\tau_{p})$ and the spectra $\epsilon_{n}$, $\epsilon_{p}$ are calculated self-consistently as described previously. A main purpose here is to convert the strong interaction $V$ to a well-behaved G-matrix interaction. In so doing, one must make sure that there is no double counting. Thus a double-partitioned approach is adopted, treating the nucleon-nucleon correlations with high momentum (i.e., those with $Q^{M}=1$) within the $G^{M}$ matrix, while taking care of the low-momentum correlations by the pphh ring diagrams. We would like to express the energy shift in terms of the model-space G-matrix syk87 . Proceeding as in Ref. (3), we define a model-space G-matrix and formulate an expression for the energy shift in terms of the transition amplitudes as $$\Delta E^{pp}_{0}=\int^{1}_{0}\frac{d\lambda}{\lambda}\sum_{\stackrel{\scriptstyle m}{(A-2)}}\sum_{\stackrel{\scriptstyle i>j,k>l}{\in P}}Y_{m}(ij,\lambda)Y_{m}^{*}(kl,\lambda)G^{M}_{klij}([\omega=\Delta^{A-2}_{m}(\omega)])\lambda$$ (27) with $$\sum_{e>f}\{\delta_{ij,ef}(\varepsilon_{i}+\varepsilon_{j})+(1-n_{i}-n_{j})\lambda L_{ijef}(\omega)\}Y_{m}(ef,\lambda)=\Delta^{A-2}_{m}(\omega)Y_{m}(ij,\lambda)\;,\;\;\;\;\omega=\Delta^{A-2}_{m}(\omega)$$ (28) For the neutron-neutron ($nn$) or proton-proton ($pp$) cases, the inequality signs on the summation indices $i,~{}j,~{}k,~{}l$ restrict the momenta to $k_{i}\geq k_{j}$, and similarly for $k_{k}$ and $k_{l}$, to avoid double counting. The neutron-proton ($np$) or proton-neutron ($pn$) case is more complicated.
Here a free summation over the indices gives four terms with identical contributions (two for each of the $np$ and $pn$ cases), and a factor of 1/2 would be needed for correct counting. Thus in our calculation for this case we have confined the indices $i,~{}k,~{}e$ to neutrons and $j,~{}l,~{}f$ to protons, with their momentum summations unrestricted. 4 Results of the s.p. spectrum calculation Fig. 5 shows the neutron (upper part) and the proton (lower part) s.p. potentials for different $\alpha$. As the proton fraction decreases the s.p. potentials become less deep, and $U_{p}$ vanishes when the proton fraction becomes zero. For the asymmetric case as well, the spectrum is continuous from momentum $0$ to $k_{M}$, as shown in fig.6, unlike the usual BHF spectrum, which has a large discontinuity at $k_{F}$. Beyond $k_{M}$ we use a free s.p. spectrum, since our method is designed to determine the s.p. spectrum within the model space only. Our spectrum has a small discontinuity of around $4-5$ MeV at $k_{M}$. Table 1 lists the $m/m^{*}$ values and zero-point energies at $\alpha=0.2$ and $\alpha=0.8$ for various average Fermi momenta at a model-space boundary of $3.0$ fm${}^{-1}$. With increasing $\alpha$, $\bigtriangleup_{n}$ increases and $\bigtriangleup_{p}$ decreases for small $k_{F}$. For the same asymmetry fraction, both $\bigtriangleup_{n}$ and $\bigtriangleup_{p}$ become deeper with increasing $k_{F}$. The change in the effective mass with the asymmetry fraction is very small, though for the same asymmetry fraction it increases with $k_{F}$. For $\alpha=0.0$, the s.p. potential, the effective masses and the zero-point energies match the ones calculated previously for the symmetric case, which also serves as a test of the reliability of our asymmetric-matter calculation. The calculated binding energy for a given combination of the asymmetry parameter $\alpha$ and the Fermi momentum $k_{F}$ depends on the model-space size.
On minimizing the energy against the model-space momentum, Song, Wang and Kuo swk92 found for their MBHF calculations that for each combination of $\alpha$ and $k_{F}$ there was one value of $k_{M}$ for which the energy was a minimum. For the same $\alpha$, the energy minimum shifted towards smaller values of $k_{M}$. For values of $k_{F}$ ranging from $0.50$ to $1.80$ fm${}^{-1}$, the minimum values of $k_{M}$ were between $2.80$ and $3.4$ fm${}^{-1}$. Based on this we have made our choice of $k_{M}=3.0$ fm${}^{-1}$. 5 The RPA equation Let us recall our RPA-type secular equation $$\sum_{e>f}\{(\epsilon_{i}+\epsilon_{j})\delta_{ij,ef}+(\bar{n}_{i}\bar{n}_{j}-n_{i}n_{j})\lambda L_{ijef}(\omega)\}Y_{m}(ef,\lambda)=\mu_{m}(\omega,\lambda)Y_{m}(ij,\lambda)$$ (29) with the self-consistent condition $$\mu_{m}(\omega,\lambda)=\omega\equiv\omega_{m}^{-}(\lambda)\;,\;\;\;\;L_{ijef}(\omega)\equiv\bar{G}^{M}_{ijef}(\omega)$$ (30) The above RPA-type equation is in laboratory momentum variables. As for the symmetric case, we transform the above equation into its RCM representation. We also perform an angle average for the occupation factor $(\bar{n}_{i}\bar{n}_{j}-n_{i}n_{j})=1-(n_{i}+n_{j})$. We define a function $Q_{R}(k_{i},k_{j})=1-(n_{i}+n_{j})$. $Q_{R}$ is $+1$ in regions A and B ($i,j$ both particles), $-1$ in region C ($i,j$ both holes), and zero in all other regions. The value of $Q_{R}$ depends on the angle between ${\bf k}$ and ${\bf K}$. Let us denote by $\tau_{z}$ the $z$ component of the total isospin $T$ of the two-nucleon state, i.e., $\tau_{z}=\tau_{i}+\tau_{j}$. As the two-nucleon state can be either $nn$, $pp$ or $np~{}(pn)$, $\tau_{z}$ can take the values $-1$, $1$ or $0$.
For the case $\tau_{z}=+1~{}{\rm or}~{}-1$ the two nucleons are identical and the situation is no different from the symmetric case ($\bar{Q}_{R}$ is the same as eq. (4.9) in Ref. (3)). For $\tau_{z}=0$ we obtain the angle-averaged $\bar{Q}_{R}$ as $$\bar{Q}_{R}(k,K)=\left\{\begin{array}[]{lr}-1&\mbox{region}\;\;1\\ -|x_{1}|&2\\ 1&3\\ |x_{1}|&4\\ |x_{2}|&5\\ \mbox{min}(|x_{1}|,|x_{2}|)&6\end{array}\right.$$ (31) where $$k_{a}=\frac{k_{F}^{n}+k_{F}^{p}}{2}\;,\;\;\;\;x_{1}=\frac{k^{2}+\frac{1}{4}K^{2}-k_{a}^{2}}{kK}\;,\;\;\;\;x_{2}=\frac{k_{M}^{2}-k^{2}-\frac{1}{4}K^{2}}{kK}$$ The above angle averages are obtained under the assumption that all values of the angle between ${\bf k}$ and ${\bf K}$ are equally likely. The regions 1 to 6 refer to the regions in the $(k,K)$ plane as shown in the figure. The replacement of the occupation factor $Q_{R}$ by its angle-averaged quantity greatly simplifies the RPA-type secular equation we started with. Now it can be decomposed into separate partial-wave channels $$\sum_{l^{\prime}}\int dk^{\prime}\{\epsilon_{kK}~{}\delta(k-k^{\prime})\delta_{ll^{\prime}}+\lambda\frac{2{k^{\prime}}^{2}}{\pi}\bar{Q}_{R}(k,K)\langle kl|L(\omega,K)|k^{\prime}l^{\prime}\rangle\}Y_{m}(k^{\prime}l^{\prime}K,\lambda)=\mu_{m}(\omega,\lambda)Y_{m}(klK,\lambda)$$ (32) where $\bar{Q}_{R}(k,K)$ is actually $\bar{Q}_{R}(k,K,k_{F}^{\tau_{i}},k_{F}^{\tau_{j}},k_{M})$. $\epsilon_{kK}$ is the unperturbed energy. There is a subtle point. For the $nn$ or $pp$ cases, where the two nucleons are identical, this unperturbed energy may be taken simply as $\frac{\hbar^{2}}{m}k^{2}+\frac{\hbar^{2}}{4m}K^{2}+2U(\bar{k_{1}},\tau_{q})$, where $\bar{k_{1}}^{2}=(k^{2}+\frac{1}{4}K^{2})$ is the angle-averaged value for the momenta of the two nucleons and $q=n$ or $p$.
For the $np~{}(pn)$ case, this averaging is more complicated. One way around this lies in the choice of the relative-momentum mesh points. For each $K$ mesh point we choose the $k$ mesh points such that the RCM values of $k_{n}$ and $k_{p}$ are either both in the hole region or both in the particle region. With such a choice of mesh points, the sum of the squares of the individual neutron and proton momenta in the RCM frame, both being either holes or particles, is $\bar{k}^{2}_{n}+\bar{k}^{2}_{p}=(k^{2}+\frac{1}{4}K^{2})$. Hence the value of the unperturbed energy is $$\frac{\hbar^{2}}{m}k^{2}+\frac{\hbar^{2}}{4m}K^{2}+U(\bar{k_{1}},\tau_{n})+U(\bar{k_{1}},\tau_{p})$$ with ${\bar{k_{1}}}^{2}=(k^{2}+\frac{1}{4}K^{2})/2$. The wave function $(kl)$ denotes the RCM partial wave function $(klSJ\tau_{1}\tau_{2},K)$, and similarly for $(k^{\prime}l^{\prime})$. The above equation is to be solved together with the self-consistent condition $\mu_{m}(\omega,\lambda)=\omega_{m}^{-}(\lambda)$, giving the self-consistent solution $\omega_{m}^{-}(\lambda)$. The vertex function $L(\omega,\tau_{1}\tau_{2},K)$ in the above equation is the irreducible vertex function, which has both one-body and two-body G-matrix diagrams syk87 . These contribute significantly to the depletion of s.p. orbits below $k_{F}$, especially at high density.
As we are working in the RCM frame, an angle-average approximation is employed to obtain $L$ in the RCM frame for the $\tau_{z}=0$ case as $$\langle kl|L(\omega,\tau_{n}\tau_{p},K)|k^{\prime}l^{\prime}\rangle=\langle kl\tau_{n}\tau_{p}|\bar{G}^{M}(\omega,Kw)|k^{\prime}l^{\prime}\tau_{n}\tau_{p}\rangle+\delta_{kk^{\prime}}\delta_{ll^{\prime}}\{\Gamma_{\bar{k}_{n}}(\omega-\epsilon_{\bar{k}_{p}},\tau_{n})-U(\bar{k}_{n},\tau_{n})+\Gamma_{\bar{k}_{p}}(\omega-\epsilon_{\bar{k}_{n}},\tau_{p})-U(\bar{k}_{p},\tau_{p})\}$$ (33) where $${\bar{k}_{n}}^{2}={\bar{k}_{p}}^{2}=(k^{2}+\frac{1}{4}K^{2})/2\;,\;\;\;\;\epsilon_{\bar{k}_{n}}=\frac{\hbar^{2}{\bar{k}_{n}}^{2}}{2m}+U(\bar{k}_{n},\tau_{n})\;,\;\;\;\;\epsilon_{\bar{k}_{p}}=\frac{\hbar^{2}{\bar{k}_{p}}^{2}}{2m}+U(\bar{k}_{p},\tau_{p})$$ (34) The other two cases, $\tau_{z}=-1$ or $1$, can be obtained by suitable substitution of $\tau_{p}$ and $\tau_{n}$.
6 Results and discussions 6.1 Binding energy The energy shift is given as $$\frac{\Delta E_{0}^{pp}}{A}=\frac{6}{\pi^{2}[(k_{F}^{n})^{3}+(k_{F}^{p})^{3}]}\sum_{w}\sum_{\tau_{z}}(2J+1)\int_{0}^{1}d\lambda~{}\int_{0}^{2k_{F}^{\tau_{z}}}dK~{}K^{2}\sum_{mll^{\prime}}\int_{0}^{k_{M}}dk\;k^{2}\int_{0}^{k_{M}}dk^{\prime}~{}{k^{\prime}}^{2}Y^{*}_{m}(klK,\lambda)\langle kl\tau_{1}\tau_{2}|G^{M}(\omega,Kw)|k^{\prime}l^{\prime}\tau_{1}\tau_{2}\rangle Y_{m}(k^{\prime}l^{\prime}K,\lambda)$$ (35) where $\tau_{z}=-1,0,1$ for $T=1$ and $\tau_{z}=0$ for $T=0$, and $$k_{F}^{\tau_{z}}=\left\{\begin{array}[]{ll}k_{F}^{n}&\mbox{for}\;\;\;\tau_{z}=-1\\ (k_{F}^{n}+k_{F}^{p})/2&\mbox{for}\;\;\;\tau_{z}=0\\ k_{F}^{p}&\mbox{for}\;\;\;\tau_{z}=1\end{array}\right.$$ (36) The binding energy per nucleon is given as $$\frac{-BE}{A}=\frac{3{\hbar}^{2}}{20m}[(1+\alpha)(k_{F}^{n})^{2}+(1-\alpha)(k_{F}^{p})^{2}]+\frac{\Delta E_{0}^{pp}}{A}$$ (37) We present the results of our ring calculation in table 2 and in figs.8 and 9 using the Paris NN interaction, for various values of the Fermi momentum and the asymmetry fraction. The binding energy at $\alpha=0.0$ (symmetric nuclear matter) from our ring calculation is -15.93 MeV, which is in good agreement with the empirical value. The saturation Fermi momentum is 1.43 fm${}^{-1}$. As the asymmetry fraction increases (the proton fraction decreases), both the binding energy and the saturation Fermi momentum decrease, until at $\alpha=1.0$ (zero proton fraction) there is no saturation. An interesting result of the ring calculation is the existence of saturation at $\alpha=0.8$, which is not present in the MBHF calculation. 6.2 Symmetry energy The symmetry energy is defined as $$E_{sym}^{Ring}(k_{F})=\frac{1}{2}~{}\frac{\partial^{2}W(\alpha,k_{F})}{\partial\alpha^{2}}|_{\alpha=0}$$ (38) where $W(\alpha,k_{F})$ is the binding energy at a given $\alpha$ and $k_{F}$.
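The even-polynomial fit used to evaluate eq. (38) can be sketched as follows; the sample points are generated from a hypothetical even polynomial rather than the ring-calculation energies of table 2, so only the procedure, not the numbers, reflects the text.

```python
import numpy as np

# Hypothetical W(alpha) samples at fixed k_F (MeV); here c2 = 28, c4 = 3.
alphas = np.linspace(0.0, 1.0, 11)
W = -16.0 + 28.0 * alphas**2 + 3.0 * alphas**4

# Fit W = c0 + c2 a^2 + c4 a^4 + c6 a^6 in the even powers of alpha.
X = np.vander(alphas**2, 4, increasing=True)   # columns: 1, a^2, a^4, a^6
coef, *_ = np.linalg.lstsq(X, W, rcond=None)

# Eq. (38): E_sym = (1/2) d^2W/dalpha^2 at alpha = 0, i.e. the a^2 coefficient.
E_sym = coef[1]
# The alternative estimate W(1, k_F) - W(0, k_F) would instead give c2 + c4 here.
```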
To ascertain the nature of the dependence of the binding energy on the asymmetry parameter, we have fitted our data with a polynomial of leading order $\alpha^{2}$, including higher-order terms up to $\alpha^{6}$ for the residuals. Using eq. (38) and the parameters of this fit we obtain $E_{sym}^{Ring}$ for various values of $k_{F}$, as shown in table 3. The corresponding values obtained by the MBHF calculation swk92 are also shown for comparison. The symmetry energy calculated by our method is consistently higher than the corresponding MBHF values, as shown in fig.10. Thus the ring-diagram summation improves the symmetry-energy values as well. At the saturation density ($\rho_{0}=0.17~{}{\rm fm}^{-3}$), $E_{sym}^{Ring}$ is 28 MeV, which is close to the value of 31 MeV reported in an independent calculation s74 . Alternatively, $E_{sym}$ may be evaluated simply from the two extreme cases of pure neutron matter and symmetric nuclear matter, shown in the fourth column of table 3: $$E_{sym}(k_{F})=W(1,k_{F})-W(0,k_{F})$$ (39) The values calculated with eq. (38) and eq. (39) differ by about 0.5 MeV to 1.8 MeV. 6.3 Incompressibility In describing the iron-core collapse of a presupernova using hydrodynamical models, the basic physical inputs are the initial mass of the core and the nuclear EOS. The EOS of asymmetric nuclear matter exhibits a minimum which disappears before pure neutron matter is reached. Therefore we expect that the incompressibility modulus decreases with respect to the symmetric case and vanishes before the proton fraction does. The incompressibility modulus is defined as $$\kappa_{0}(\alpha)=k_{F_{0}}^{2}(\alpha)~{}\frac{d^{2}W(\alpha,k_{F})}{dk_{F}^{2}}~{}|_{k_{F}=k_{F_{0}}(\alpha)}$$ (40) where $k_{F_{0}}(\alpha)$ is the saturation Fermi momentum at the given $\alpha$. One of the most sophisticated investigations of the $\alpha$ dependence of $\kappa_{0}$ and $\rho_{0}$ so far has been done in Ref.
(14) in the framework of BBG (Brueckner-Bethe-Goldstone) theory. They found that for low $\alpha$ values ($\alpha\leq 0.4$) $\kappa_{0}$ and $\rho_{0}$ show a parabolic dependence on $\alpha$, $$\kappa_{0}(\alpha)=\kappa_{0}(0)(1-a~{}\alpha^{2})$$ (41) $$\rho_{0}(\alpha)=\rho_{0}(0)(1-b~{}\alpha^{2})$$ (42) with $\kappa_{0}(0)=185$ MeV, $a=2.027$ and $\rho_{0}(0)=0.289$ fm${}^{-3}$, $b=1.115$. In table 4 we give the $\kappa_{0}$ values obtained from the ring calculation. Our $\kappa_{0}$ values also seem to obey a parabolic dependence on $\alpha$, but with $a=1.21$ and $\kappa_{0}(0)=112$ MeV. We may note here that the values of $\kappa_{0}(0)$ and $a$ in Ref. (14) were obtained by means of a least-squares polynomial fit to the BHF values of the binding energy, and the values are quite sensitive to the degree of the polynomial used for the fit. Our value of $\kappa_{0}(0)=112$ MeV is in good agreement with the one suggested by Brown and Osnes bo85 for symmetric nuclear matter. At $\alpha=0.33$ the value of $\kappa_{0}$ from our fit is 97 MeV, which is close to the empirical value of 90 MeV used by Baron, Cooperstein and Kahana bck85 to obtain the maximum explosion energy in their hydrodynamical calculations. From their numerical investigation, the authors of Ref. (8) concluded that the softening of the EOS plays a crucial role in generating a prompt explosion for stars in the mass range of $12-15~{}M_{\odot}$ (where $M_{\odot}\sim 2\times 10^{33}~{}$g is the sun’s mass). This conclusion has been questioned by more refined calculations b85 ; mb89 ; bc90 . However, even in models where the direct explosion mechanism fails, a softer EOS is helpful to the shock. The decrease in incompressibility with increasing $\alpha$ is quite intuitive when we consider that, in going from bound symmetric matter to unbound neutron matter, the minimum of the binding energy gradually disappears.
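Evaluating the parabolic form of eq. (41) with the ring-fit parameters quoted above is a one-liner and reproduces the numbers in the text:

```python
def kappa0_ring(alpha, kappa0_sym=112.0, a=1.21):
    """Eq. (41) with the ring-calculation fit parameters quoted in the text:
    kappa_0(0) = 112 MeV, a = 1.21; result in MeV."""
    return kappa0_sym * (1.0 - a * alpha * alpha)

# kappa0_ring(0.33) is about 97 MeV, the value compared in the text with the
# empirical 90 MeV of Baron, Cooperstein and Kahana.
```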
Therefore from eq. (40) the nuclear incompressibility decreases and vanishes for a certain value of $\alpha$. Conclusion In conclusion, a fully microscopic calculation has been done using the ring-diagram summation for the EOS of asymmetric nuclear matter. The numerical computation has been done with both the Paris parispot and the Bonn bonnpot potentials, and the results are in satisfactory agreement with the empirical values. The model-space size is treated as a parameter. The inclusion of ring diagrams increases the role of tensor forces, and both the binding energy and the saturation density are lower and closer to the empirical values than those obtained in previous calculations. The symmetry-energy values also show an improvement and are in better agreement with other independent calculations. The derived incompressibility exhibits a parabolic dependence on the asymmetry fraction, in qualitative agreement with the empirical asymmetry dependence used in the literature. The ring-diagram approach employs an infinite-order summation of particle-particle hole-hole ring diagrams. The infinite-order summation technique of Yang, Heyer and Kuo yhk86 is applicable to particle-hole ring diagrams as well, but these diagrams have not been included in nuclear matter calculations. There is reason to believe that the effect of the particle-hole ring diagrams on binding-energy calculations is not very appreciable, and that they are less important than the particle-particle ring diagrams. The lowest-order ring diagram is the pphh diagram of first order in $G^{M}$. The second-order diagram may be taken as either pphh or ph and is second order in $G^{M}$. Thus the contribution to the ground-state energy shift from the ph diagrams comes from the three-vertex diagram, which is third order in $G^{M}$ (fig. 1(d)). Studies dfm82 ; mbook ; day81 have indicated that the particle-hole diagrams converge rapidly and may not be important in nuclear matter binding-energy calculations.
This view is also supported by some Lipkin model calculations yhk86 . As noted earlier, an interesting result of our ring calculation is the existence of saturation at $\alpha=0.8$, which is not present in the MBHF calculation. This behaviour would be of relevance in astrophysical systems, which are essentially neutron rich. It would be interesting to study neutron star properties with this method. To do that one needs to extend the present calculation to higher densities in a relativistic framework. The authors would like to thank Prof. G. E. Brown for his support and encouragement throughout the course of this work. References (1) F. Coester, S. Cohen, B. Day and C. M. Vincent, Phys. Rev. C 1 (1970) 769. (2) S. D. Yang, J. Heyer and T. T. S. Kuo, Nucl. Phys. A 448 (1986) 420. (3) H. Q. Song, S. D. Yang and T. T. S. Kuo, Nucl. Phys. A 462 (1987) 491. (4) M. F. Jiang, T. T. S. Kuo and H. Müther, Phys. Rev. C 38 (1988) 2408. (5) M. F. Jiang, R. Machleidt and T. T. S. Kuo, Phys. Rev. C 41 (1989) 2346. (6) H. A. Bethe, Ann. Rev. Nucl. Sci. 21 (1971) 93. (7) S. O. Bäckman, G. E. Brown and J. A. Niskanen, Phys. Rep. 124 (1985) 1. (8) E. Baron, J. Cooperstein and S. Kahana, Phys. Rev. Lett. 55 (1985) 126. (9) M. Brack, C. Guet and H.-B. Håkansson, Phys. Rep. 123 (1985) 277. (10) S. H. Kahana, Ann. Rev. Nucl. Part. Sci. 39 (1989) 231. (11) H. Jaqaman, A. Z. Mekjian and L. Zamick, Phys. Rev. C 27 (1983) 2782. (12) R. K. Su, S. D. Yang, G. L. Li and T. T. S. Kuo, Mod. Phys. Lett. A 1 (1986) 71. (13) H. Q. Song, G. D. Zheng and R. K. Su, J. Phys. G 16 (1990) 1861. (14) I. Bombaci and U. Lombardo, Phys. Rev. C 44 (1991) 1892. (15) I. Bombaci, T. T. S. Kuo and U. Lombardo, Phys. Rep. 242 (1994) 165. (16) H. Q. Song, Z. X. Wang and T. T. S. Kuo, Phys. Rev. C 46 (1992) 1788. (17) J. Cugnon, P. Deneye and A. Lejeune, Z. Phys. A 328 (1987) 409. (18) R. B. Wiringa, V. Fiks and A. Fabrocini, Phys. Rev. C 38 (1988) 1010. (19) M. Lacombe, B. Loiseau, J. M. Richard, R.
Vinh Mau, J. Côté, P. Pirès and R. de Tourreil, Phys. Rev. C 21 (1980) 861. (20) R. Machleidt, K. Holinde and Ch. Elster, Phys. Rep. 149 (1987) 1. (21) Z. Y. Ma and T. T. S. Kuo, Phys. Lett. B 127 (1983) 137. (22) T. T. S. Kuo and Z. Y. Ma, Nucleon-Nucleon Interaction and Nuclear Many-body Problems, eds. S. S. Wu and T. T. S. Kuo (World Scientific, Singapore, 1984) p. 178. (23) T. T. S. Kuo, Z. Y. Ma and R. Vinh Mau, Phys. Rev. C 33 (1986) 717. (24) H. A. Bethe, Ann. Rev. Nucl. Sci. 21 (1971) 93. (25) D. W. L. Sprung, Adv. Nucl. Phys. 5 (1972) 225. (26) E. Schiller, H. Müther and P. Czerski, Phys. Rev. C 59 (1999) 2934. (27) O. Sjöberg, Nucl. Phys. A 222 (1974) 161. (28) G. E. Brown and E. Osnes, Phys. Lett. B 159 (1985) 223. (29) S. W. Bruenn, Astrophys. J. Suppl. 58 (1985) 771. (30) E. S. Myra and S. A. Bludman, Astrophys. J. 340 (1989) 384. (31) E. Baron and J. Cooperstein, Astrophys. J. 353 (1990) 597. (32) W. H. Dickhoff, A. Faessler and H. Müther, Nucl. Phys. A 389 (1982) 492. (33) H. Müther, in NN Interactions and the Nuclear Many-body Problem, eds. S. S. Wu and T. T. S. Kuo, p. 490. (34) B. D. Day, Phys. Rev. C 24 (1981) 1203.
Nonequilibrium theory of the conversion-efficiency limit of solar cells including thermalization and extraction of carriers Kenji Kamide kenji.kamide@aist.go.jp Renewable Energy Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Koriyama, Fukushima, 963-0298, Japan    Toshimitsu Mochizuki Renewable Energy Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Koriyama, Fukushima, 963-0298, Japan    Hidefumi Akiyama The Institute for Solid State Physics (ISSP), The University of Tokyo, Kashiwa, Chiba 277-8581, Japan OPERANDO-OIL, Kashiwa, Chiba 277-8581, Japan    Hidetaka Takato Renewable Energy Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Koriyama, Fukushima, 963-0298, Japan (December 2, 2020) Abstract The ideal solar cell conversion efficiency limit known as the Shockley-Queisser (SQ) limit, which is based on a detailed balance between absorption and radiation, has long been a target for solar cell researchers. While the theory for this limit uses several assumptions, the requirements in real devices have not been discussed fully. Given the current situation in which research-level cell efficiencies are approaching the SQ limit, a quantitative argument with regard to these requirements is worthwhile in terms of understanding of the remaining loss mechanisms in current devices and the device characteristics of solar cells that are operating outside the detailed balance conditions. Here we examine two basic assumptions: (1) that the photo-generated carriers lose their kinetic energy via phonon emission in a moment (fast thermalization), and (2) that the photo-generated carriers are extracted into carrier reservoirs in a moment (fast extraction). 
Using a model that accounts for the carrier relaxation and extraction dynamics, we reformulate the nonequilibrium theory for solar cells in a manner that covers both the equilibrium and nonequilibrium regimes. Using a simple planar solar cell as an example, we address the parameter regime in terms of the carrier extraction time and then consider where the conventional SQ theory applies and what could happen outside the applicable range. pacs: 84.60.Jt, 88.40.-j, 85.30.-z I Introduction Shockley and Queisser (SQ) determined a theoretical estimate for the upper limit of the conversion efficiency in an ideal solar cell SQ . The original SQ theory takes radiative recombination into account, in a simple manner, as a main cause of the current loss in solar cells. The energy distributions of the carriers, denoted by $n^{e}_{E_{e}}$ for electrons and $n^{h}_{E_{h}}$ for holes, must be known to evaluate the radiative recombination rate, because the rate is proportional to their product summed over the carrier energies. SQ theory assumes that the carriers in the absorber are in thermal and chemical equilibrium with both the lattice phonons and the carriers in the electrodes, at an ambient temperature $T_{c}$ and with chemical potentials of $\mu_{c}$ for the conduction electrons and $\mu_{v}$ for the valence electrons. The resulting current-voltage relationship, given by $I=I(V)=I_{\rm sun}-I_{\rm rad}^{0}e^{|{\rm e}|V/k_{\rm B}T_{c}}$, where the voltage between the electrodes is equal to the Fermi-level separation within the absorber, $|{\rm e}|V=\mu_{c}-\mu_{v}$, ultimately determines the conversion efficiency during maximum-power operation, where $I_{\rm sun}$ is the photo-generated current produced by absorption of sunlight. The detailed balance, which is the essential aspect of the SQ assumptions, has been used routinely in later analyses that further incorporated various additional factors (including Auger recombination, light trapping, photon recycling, and Coulomb interactions Richter ).
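The current-voltage relation quoted above lends itself to a quick numerical sketch of the maximum-power operating point; the magnitudes of $I_{\rm sun}$ and $I_{\rm rad}^{0}$ below are illustrative placeholders, not values from this paper.

```python
import math

KB_TC = 0.02585  # k_B * T_c in eV, for T_c near 300 K

def current(V, I_sun=400.0, I_rad0=6e-15):
    """SQ-type relation I(V) = I_sun - I_rad^0 * exp(|e|V / k_B T_c);
    I_sun and I_rad0 are illustrative current densities (A m^-2)."""
    return I_sun - I_rad0 * math.exp(V / KB_TC)

# Maximum-power voltage from a coarse scan of P(V) = I(V) * V.
V_mp = max((i * 1e-4 for i in range(12000)), key=lambda V: current(V) * V)
```

With these placeholder values the open-circuit voltage is $k_{\rm B}T_{c}\ln(I_{\rm sun}/I_{\rm rad}^{0})\approx 1.0$ V, and the maximum-power point sits slightly below it, as expected for this exponential diode law.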
The requirements for the assumptions used in the SQ theory to be justified are commonly described as follows: 1) the photo-generated carriers lose their kinetic energy via phonon emission and establish their thermal equilibrium distribution almost instantaneously (fast thermalization); 2) the carriers are extracted into carrier reservoirs immediately after they are produced (fast extraction). The latter assumption 2) is actually given explicitly in the original paper SQ . However, the above requirements are not sufficiently clear and thus some quantitative issues remain. While the two time scales, i.e., the carrier thermalization time, $\tau_{\rm ph}$, and the carrier extraction time, $\tau_{\rm out}$, are assumed to be short, the following questions are not addressed: first, how short should these times be, i.e., which timescales from other processes should be compared with these times, and second, how do $\tau_{\rm ph}$ and $\tau_{\rm out}$ compare? The latter question relates directly to the concept of hot carrier solar cells operating out of equilibrium Nozik ; Wurfel2 ; Takeda1 ; Takeda2 ; Suchet , where fast carrier extraction before the thermalization is complete can reduce the thermalization losses and ensure that device performance is not limited by a detailed balance. Record efficiencies in recent cell research are gradually approaching the SQ limits in nonconcentrator-type single-junction solar cells, e.g., Kaneka’s Si-based cell with 26.7 percent efficiency and Alta Devices’ thin-film GaAs-based cell with 28.8 percent efficiency EffTab50 . It is therefore important to have a more precise understanding of the situation in which detailed balance theory provides a reliable estimate of the attainable upper efficiency limit. Quantitative estimation of the parameters to which the SQ theory applies will help to clarify the remaining energy losses and push current device performance towards the SQ limit.
Additionally, a more precise understanding of the energy conversion mechanisms from the detailed balance will lead to new strategies for future improvements that are intended to go beyond the SQ limit. The nonequilibrium dynamics of many particle systems can be described in general terms using nonequilibrium Green’s functions (NEGFs) Martin-Schwinger ; Kadanof-Baym ; Keldysh . These functions were initially applied to study electron transport in solids and in mesoscopic devices Rammer ; Datta , and later in semiconductor light-emitting devices (e.g. light-emitting diodes or LEDs Steiger , semiconductor lasers Henneberger , quantum cascade lasers Lee , and polariton condensates Szymanska ; Yamaguchi ). More recently, the NEGF formalism was also used to study solar cells with nanostructured absorbers Aeberhard1 ; Aeberhard2 ; Cavassilas , where the device characteristics are likely to be affected strongly by the quantum transport of the carriers. NEGFs were also used to study the conditions required to validate use of luminescence-based characterization of solar cells Aeberhard3 , which is justifiable in terms of photovoltaic reciprocity under the detailed balance principle Rau ; Kirchartz . Despite the sound theoretical basis that is available, the device characteristics have not been explored for a sufficiently wide range of parameters via the NEGF approach, particularly for solar cells. This seems to be related to the complexity of the theory and its high computational cost. Similar issues were found with an ab initio approach Bernardi . In this work, we present a nonequilibrium theory that does not assume any form for the distribution functions used for the carriers in the absorber, in a manner similar to the NEGF formulation.
The carrier distribution functions in the absorber are determined using a set of rate equations that is derived from second-order perturbation theory based on the coupling between the absorber carriers and three baths (the phonon, electron, and hole reservoirs). Spectral broadening of the microscopic states of the carriers is also included in the relevant cases. As a result, the theory describes solar cell operation for a wide range of parameters, including the situations where the photo-generated carriers are either in or out of thermal equilibrium. This paper is organized as follows. In section II, we formulate a nonequilibrium theory for solar cells based on the model shown in Fig. 1, and derive a set of rate equations for the microscopic carrier distribution function in the absorber. The microscopic carrier distribution function is then determined as a steady-state solution to the rate equation. At the end of this section, general expressions are given for the total output current and the total output power to enable simulation of the solar cell device performance. In section III, the basic properties that any solution to the set of rate equations must satisfy are presented before the numerical analysis begins. These properties are useful when verifying the accuracy of the simulation. Classification of the parameter regime, specifically in terms of the carrier extraction time $\tau_{\rm out}$, is also presented in Sec. III. Before the set of equations is solved, the equations themselves can be used to indicate the parameter regime where the assumptions of the SQ theory fail. In section IV, a device performance simulation based on our formulation is presented for a simple planar single-junction solar cell. The numerical simulations show what physically happens in the photovoltaic energy conversion processes in each of the regimes that were classified in Sec. III.
In section V, we summarize these findings and discuss future issues and future applications of the nonequilibrium theory. Finally, in this section, we list definitions for the symbols used in this paper. Parameters for bulk semiconductors can be found in the standard textbook Cardona .
• $c=$ speed of light $=3\times 10^{8}$ m/s
• $\hbar=$ Planck constant$/2\pi=1.0545718\times 10^{-34}$ J s
• $R_{S}=$ Sun’s radius $=0.696\times 10^{6}$ km
• $L_{ES}=$ average distance from the Earth to the Sun $=1.496\times 10^{8}$ km
• $\mathcal{CR}\left(\leq(L_{ES}/R_{S})^{2}=46200\right)=$ concentration ratio
• $w=$ absorber thickness in a planar solar cell
• $\mathcal{A}=$ absorber area in a planar solar cell
• $\mathcal{V}=\mathcal{A}w=$ absorber volume in a planar solar cell
• $T_{S}=$ surface temperature of the Sun $=6000$ K
• $T_{c}=$ ambient temperature = room temperature $=300$ K
• $T_{\rm ph}(=T_{c})=$ absorber lattice temperature
• $k_{B}=$ Boltzmann constant $=8.6\times 10^{-5}$ eV/K
• $\beta_{S}=1/(k_{B}T_{S})$, $\beta_{c}=1/(k_{B}T_{c})$, $\beta_{\rm ph}=1/(k_{B}T_{\rm ph})$
• $m^{\ast}_{e(h)}=$ effective mass of electrons (holes) in the absorber (Si: $m^{\ast}_{e}/m_{e}=(\nu_{\rm valley}^{2}m_{\perp}^{2}m_{\parallel})^{1/3}/m_{e}=1.08$, $m^{\ast}_{h}/m_{e}=(m_{hh}^{3/2}+m_{lh}^{3/2})^{2/3}/m_{e}=0.55$ with $\nu_{\rm valley}=6$, $m_{\perp}/m_{e}=0.19$, $m_{\parallel}/m_{e}=0.98$, $m_{hh}/m_{e}=0.49$, $m_{lh}/m_{e}=0.16$)
• $m_{e}=$ bare electron mass = 9.1 $\times 10^{-31}$ kg
• $\mathcal{D}_{e(h)}(E_{e(h)})=d_{e(h)}\sqrt{E_{e(h)}}=$ density of states per unit volume for electrons (holes) in the absorber with kinetic energy $E_{e}$ ($E_{h}$), where $d_{e(h)}=\frac{(2m^{\ast}_{e(h)})^{3/2}}{2\pi^{2}\hbar^{3}}$
• $E_{g}=$ absorber bandgap (Si: 1.12 eV, GaAs: 1.42 eV)
• $\tau_{\rm out}=$ carrier extraction time
• $\mu_{c}=$ Fermi level of electrons in electron reservoir (Bath 1) $=E_{g}/2+\beta_{c}^{-1}/2\ln\frac{d_{h}}{d_{e}}+|e|V/2$ (charge neutrality condition)
•
$\mu_{v}=$ Fermi level of electrons in hole reservoir (Bath 2) $=E_{g}/2+\beta_{c}^{-1}/2\ln\frac{d_{h}}{d_{e}}-|e|V/2$ (charge neutrality condition)
• $g_{q}^{c(v)}=$ electron-phonon coupling constant for conduction (valence) band carriers ($=a_{{\rm def},c}\sqrt{\frac{\hbar q}{2Vv_{A}\rho_{A}}}$ for longitudinal-acoustic (LA) phonons with $q$=phonon wave number)
• $a_{{\rm def},c}=$ deformation potential for electrons in the bottom conduction band in the absorber (Si: $\sim 10$ eV)
• $a_{{\rm def},v}=$ deformation potential for electrons in the top valence band in the absorber (Si: $\sim a_{{\rm def},c}/10\sim 1$ eV)
• $v_{A}=$ LA phonon velocity (Si: $\sim 10^{4}$ m/s)
• $\rho_{A}=$ absorber mass density (Si: 2.3 g/cm${}^{3}$)
• $f^{F(B)}_{\mu,\beta}(E)=$ Fermi-Dirac (Bose-Einstein) distribution at chemical potential $\mu$ and inverse temperature $\beta$.
II Nonequilibrium theory (Formulation) In this section, we formulate the nonequilibrium theory for solar cells. As shown below, a set of rate equations for the microscopic distribution functions of the electrons ($e$) and holes ($h$) in the absorber, $\{n^{e}_{E_{e}},n^{h}_{E_{h}}\}$, is given in the following form: $$\displaystyle\frac{d}{dt}n^{e}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle J^{e,{\rm sun}}_{E_{e}}-J^{e,{\rm rad}}_{E_{e}}-J^{e,{\rm out}}_{E_{e}}+\left.\frac{d}{dt}n^{e}_{E_{e}}\right|_{\rm phonon},$$ (1) $$\displaystyle\frac{d}{dt}n^{h}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle J^{h,{\rm sun}}_{E_{h}}-J^{h,{\rm rad}}_{E_{h}}-J^{h,{\rm out}}_{E_{h}}+\left.\frac{d}{dt}n^{h}_{E_{h}}\right|_{\rm phonon}.$$ (2) Here, $E_{e}$ and $E_{h}$ are the carrier kinetic energies measured from the bottom of the bands (Fig. 2). The first term, $J^{e(h),{\rm sun}}_{E_{e(h)}}$, on the right-hand side of the rate equations represents the carrier generation rate due to sunlight absorption.
The second term, $J^{e(h),{\rm rad}}_{E_{e(h)}}$, represents the carrier loss rate due to radiative carrier recombination. The third term, $J^{e(h),{\rm out}}_{E_{e(h)}}$, represents the rate of carrier extraction to the electrodes. The last term, $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$, represents the rate of carrier scattering to other microscopic states within the same band due to phonon emission or absorption. For the solar cell characteristics simulation, these equations will be solved under the steady-state condition: $$\displaystyle\frac{d}{dt}n^{e}_{E_{e}}=\frac{d}{dt}n^{h}_{E_{h}}=0.$$ (3) In the following subsections, we will derive explicit expressions for $J^{e(h),{\rm sun}}_{E_{e(h)}}$, $J^{e(h),{\rm rad}}_{E_{e(h)}}$, $J^{e(h),{\rm out}}_{E_{e(h)}}$, and $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$ via microscopic modeling of the carriers in the simple planar solar cell (thickness $w$, surface area $\mathcal{A}$, volume $\mathcal{V}=\mathcal{A}w$) shown in Fig. 3. The assumptions that were made in the original SQ model SQ are also used here to simplify the discussion; this will not alter the main conclusion of this paper. For example, we consider the absorber thickness $w$ to be larger than the absorption length but less than the minority carrier diffusion length. This allows us to consider perfect absorption of sunlight above the absorption edge $(E>E_{g})$ and a homogeneous carrier distribution in the absorber. Perfect anti-reflection behavior at the front surface and perfect passivation with zero surface recombination are also assumed here. An additional simplification is made in this work to the band structure of the carriers in the absorber.
An effective two-band model is used to describe the microscopic carrier states under an effective mass approximation (with infinite bandwidths), in which the effective masses for the electrons ($m_{e}^{\ast}$) and holes ($m_{h}^{\ast}$) were selected to reproduce the densities of states near the band extrema. Therefore, the effects of the band anisotropy, the valley degree of freedom within the degenerate bands (particularly in Si), and the contributions from other bands located away from the extrema were not taken into account correctly. In this sense, our analysis is not intended to be quantitatively accurate in simulations for specific systems, but is rather intended to produce a general picture of the main issue, i.e., the nonequilibrium aspects of solar cells. II.1 Generation rate due to sunlight absorption: $J^{e(h),{\rm sun}}_{E_{e(h)}}$ Under the assumption of perfect absorption (where $w\gg$ absorption length), the number of photons absorbed into the absorber per unit time through surface area $\mathcal{A}$ is given by $$\displaystyle\mathcal{A}\times\int_{E_{g}}^{\infty}j^{\rm sun}(E){\rm d}E.$$ (4) The solar spectrum for the photon number current (per unit area, per unit time, and per unit energy), $j^{\rm sun}(E)$, is simply approximated using blackbody radiation at $T_{S}=6000$ K under the AM0 condition Wurfel text , $$\displaystyle j^{\rm sun}(E)=\mathcal{CR}\times\frac{c}{4}\left(\frac{R_{S}}{L_{ES}}\right)^{2}\mathcal{D}^{0}_{\gamma}(E)\times f^{B}_{0,\beta_{S}}(E),$$ (5) where $\mathcal{CR}$ is the concentration ratio, $\mathcal{D}^{0}_{\gamma}(E)=\frac{1}{3\pi^{2}}(\hbar c)^{-3}\times 3E^{2}$ is the photonic density of states in a vacuum, and $f^{B}_{0,\beta_{S}}(E)=(\exp(\beta_{S}E)-1)^{-1}$ is the Bose-Einstein distribution function at energy $E$ with inverse temperature $\beta_{S}=1/(k_{B}T_{S})$.
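Equation (5) is straightforward to evaluate numerically. The sketch below expresses it in eV units with the constants from the symbol list; integrating $E\,j^{\rm sun}(E)$ over the spectrum recovers the total incident power of the 6000 K blackbody spectrum:

```python
import math

# Numerical sketch of Eq. (5): photon-number flux of 6000 K blackbody sunlight,
#   j_sun(E) = CR * (c/4) * (R_S/L_ES)^2 * D0_gamma(E) * f_B(E),
# with the vacuum photon DOS D0_gamma(E) = E^2 / (pi^2 (hbar c)^3).
# Energies are in eV; constants follow the symbol list above.
C = 2.99792458e8                 # speed of light, m/s
HBAR_C = 1.9732698e-7            # hbar * c, eV m
R_S, L_ES = 0.696e9, 1.496e11    # Sun's radius, Earth-Sun distance, m
KB = 8.617333e-5                 # Boltzmann constant, eV/K

def j_sun(E, T_S=6000.0, CR=1.0):
    """Solar photon flux per unit area, time, and energy, in 1/(m^2 s eV)."""
    d0_gamma = E**2 / (math.pi**2 * HBAR_C**3)     # vacuum photon DOS
    f_bose = 1.0 / math.expm1(E / (KB * T_S))      # Bose-Einstein factor
    return CR * (C / 4.0) * (R_S / L_ES)**2 * d0_gamma * f_bose
```

Summing $E\,j^{\rm sun}(E)\,{\rm d}E$ over the whole spectrum gives approximately 1.6 kW/m${}^{2}$ at 1 sun, consistent with the AM0-like normalization quoted in the text.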
If necessary, for practical device simulations, the solar spectrum may be replaced appropriately, e.g., using the AM1.5 spectrum normalized to a total power of 1 kWm${}^{-2}$; this is not done here (the 6000 K blackbody spectrum in Eq. (4) and Eq. (5) approximates the AM0 spectrum at a total power of 1.6 kW/m${}^{2}$ at 1 sun, with $\mathcal{CR}=1$). In the rate equations in Eq. (1) and Eq. (2), the generation rates of the microscopic carrier distribution function $J^{e(h),{\rm sun}}_{E_{e(h)}}$ should be expressed using the solar spectrum, $j^{\rm sun}(E)$. The expression is dependent on whether the absorber is made from direct or indirect gap semiconductors (Fig. 4). For direct gap semiconductor absorbers (Fig. 4 (a)) — by considering momentum and energy conservation, we can equate the number of carriers that are generated in energy ranges of $E_{e}<E^{\prime}<E_{e}+{\rm d}E_{e}$ for electrons and $E_{h}<E^{\prime}<E_{h}+{\rm d}E_{h}$ for holes with the number of photons absorbed in the energy range $E<E^{\prime}<E+{\rm d}E$ per unit time in the absorber as follows: $$\displaystyle\mathcal{A}j^{\rm sun}(E){\rm d}E$$ $$\displaystyle=$$ $$\displaystyle\mathcal{D}_{e}(E_{e})\mathcal{V}J^{e,{\rm sun}}_{E_{e}}{\rm d}E_{e}$$ (6) $$\displaystyle=$$ $$\displaystyle\mathcal{D}_{h}(E_{h})\mathcal{V}J^{h,{\rm sun}}_{E_{h}}{\rm d}E_{h},$$ where the energy conservation law gives $E=E_{g}+E_{e}+E_{h}$, and momentum conservation under the effective mass approximation gives $E_{e}=\frac{m_{h}^{\ast}}{m_{e}^{\ast}}E_{h}$. Here $\mathcal{D}_{e}(=d_{e}\sqrt{E_{e}})$ and $\mathcal{D}_{h}(=d_{h}\sqrt{E_{h}})$ are the densities of states of electrons and holes per unit volume, respectively.
The equation relates $J^{e(h),{\rm sun}}_{E_{e(h)}}$ and $j^{\rm sun}(E)$ directly, as follows: $$\displaystyle J^{e,{\rm sun}}_{E_{e}}=\frac{j^{\rm sun}\left(E=E_{g}+(1+\frac{m_{e}^{\ast}}{m_{h}^{\ast}})E_{e}\right)}{w\mathcal{D}_{e}(E_{e})}\left(1+\frac{m_{e}^{\ast}}{m_{h}^{\ast}}\right),$$ (7) $$\displaystyle J^{h,{\rm sun}}_{E_{h}}=\frac{j^{\rm sun}\left(E=E_{g}+(1+\frac{m_{h}^{\ast}}{m_{e}^{\ast}})E_{h}\right)}{w\mathcal{D}_{h}(E_{h})}\left(1+\frac{m_{h}^{\ast}}{m_{e}^{\ast}}\right),$$ (8) where we assume that all microscopic states of carriers with the same energy are generated with equal probability, independent of their momentum directions. We therefore assume that the carrier distribution function is solely dependent on the kinetic energy of the carriers and independent of the momentum direction. This assumption is used throughout the paper. For indirect gap semiconductor absorbers (Fig. 4 (b)) — the absorption process is accompanied by phonon emission or absorption. The energies of the electron-hole pairs deviate from the photon energy by the energy of one phonon ($(\Omega_{LA},\Omega_{TA},\Omega_{TO})=(50.9,57.4,18.6)$ meV for Si). We simply neglect this energy shift here because the phonon energy is much smaller than the spectral bandwidth, $\sim k_{B}T_{S}$, of the incoming sunlight. However, because the indirect transition involves a shift in the carrier momentum corresponding to the momentum carried by the phonon, momentum conservation between the photon and the electron-hole pair is not required in the absorption process. As a result, electron-hole pairs with arbitrary combinations of the energies $E_{e}$ and $E_{h}$ can be created by absorption of one photon with energy $E$, as long as $E=E_{g}+E_{e}+E_{h}$ is satisfied (where the phonon energy shift is neglected). The situation in indirect gap semiconductors means that the expressions for $J^{e(h),{\rm sun}}_{E_{e(h)}}$ from Eq. (7) and Eq. (8) for direct gap semiconductors must be altered.
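As a side numerical illustration of the direct-gap result, Eq. (7) can be evaluated as below. The solar flux here is a placeholder Boltzmann tail, and the gap, mass ratio, thickness, and density-of-states prefactor are illustrative values rather than fitted material parameters:

```python
import math

# Sketch of the direct-gap generation rate, Eq. (7):
#   J_e(E_e) = j_sun(Eg + (1 + r) E_e) * (1 + r) / (w * d_e * sqrt(E_e)),
# with r = m_e*/m_h*. All numbers below are illustrative placeholders.
EG = 1.42                       # direct gap, eV (GaAs-like)
KB_TS = 0.517                   # kB * 6000 K, eV
R_MASS = 0.12                   # assumed m_e*/m_h* ratio

def j_sun(E):
    """Placeholder solar flux: Boltzmann tail, photons/(m^2 s eV), toy scale."""
    return 1.0e21 * math.exp(-E / KB_TS)

def J_e_sun(E_e, w=1e-6, d_e=1.0e28):
    """Electron generation rate per microscopic state at kinetic energy E_e (eV)."""
    E_photon = EG + (1.0 + R_MASS) * E_e    # energy + momentum conservation
    return j_sun(E_photon) * (1.0 + R_MASS) / (w * d_e * math.sqrt(E_e))
```

Because momentum conservation ties $E_{h}$ to $E_{e}$, the photon energy is fixed by $E_{e}$ alone, and the rate falls off with $E_{e}$ both through the decaying solar spectrum and through the $\sqrt{E_{e}}$ density of states in the denominator.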
Assuming that all electron-hole pairs with $E_{e}(<E-E_{g})$ and $E_{h}(=E-E_{g}-E_{e})$ are created by absorption of one photon with energy $E$ with equal probability, the probability $p^{e}_{E_{e}}(E){\rm d}E_{e}$ of finding electrons in a small energy window $E_{e}<E^{\prime}<E_{e}+{\rm d}E_{e}$ immediately after absorption is $$\displaystyle p^{e}_{E_{e}}(E){\rm d}E_{e}=\frac{\mathcal{D}_{e}(E_{e})\mathcal{D}_{h}(\Delta E-E_{e}){\rm d}E_{e}}{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}},$$ (9) where $\Delta E\equiv E-E_{g}$. Because the number of photons absorbed in the absorber per unit time and per unit energy is $\mathcal{A}j^{\rm sun}(E)$ for $E>E_{g}$, we find the following expression for the generation rate of electrons per microscopic state for indirect gap semiconductor absorbers: $$\displaystyle J^{e,{\rm sun}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{\int_{E_{g}}^{\infty}\mathcal{A}j^{\rm sun}(E)\times p^{e}_{E_{e}}(E){\rm d}E_{e}{\rm d}E}{\mathcal{V}\mathcal{D}_{e}(E_{e}){\rm d}E_{e}}$$ (10) $$\displaystyle=$$ $$\displaystyle\int_{E_{g}+E_{e}}^{\infty}\frac{j^{\rm sun}(E)\times\mathcal{D}_{h}(\Delta E-E_{e})/w}{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}}{\rm d}E$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}\frac{j^{\rm sun}(E_{g}+E_{e}+E_{h})\times\sqrt{E_{h}}}{\pi wd_{e}(E_{e}+E_{h})^{2}/8}{\rm d}E_{h}.$$ The hole generation rate is given in a similar manner as $$\displaystyle J^{h,{\rm sun}}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}\frac{j^{\rm sun}(E_{g}+E_{e}+E_{h})\times\sqrt{E_{e}}}{\pi wd_{h}(E_{e}+E_{h})^{2}/8}{\rm d}E_{e}.$$ (11) II.2 Recombination loss rate: $J^{e(h),{\rm rad}}_{E_{e(h)}}$ The derivation of the expression for $J^{e(h),{\rm rad}}_{E_{e(h)}}$ presented in this subsection largely follows the derivations in the literature Wurfel1 ; Wurfel text .
Because the absorber thickness $w$ considered here is much smaller than the minority carrier diffusion length, we can safely assume a homogeneous carrier distribution inside the absorber. For a given set of carrier distribution functions, $\{n^{e}_{E_{e}},n^{h}_{E_{h}}\}$, the recombination radiation rate of photons $R^{sp}(E)$ at photon energy $E$ from an arbitrary small volume inside the absorber into the whole solid angle ($4\pi$), per unit volume, per unit energy, and per unit time, is $$\displaystyle R^{sp}(E)$$ $$\displaystyle=$$ $$\displaystyle(\frac{c}{n})|\mathcal{M}|^{2}\mathcal{D}_{\gamma}^{\rm cell}(E)$$ $$\displaystyle\times$$ $$\displaystyle\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})n^{e}_{E^{\prime}}n^{h}_{\Delta E-E^{\prime}}{\rm d}E^{\prime},$$ (12) for an indirect gap semiconductor absorber. Here, $\Delta E\equiv E-E_{g}$, $\frac{c}{n}$ is the speed of light inside the absorber with refractive index $n$, $\mathcal{M}$ is proportional to the phonon-mediated transition matrix element that is approximated using the value at the absorption edge, and $\mathcal{D}_{\gamma}^{\rm cell}(E)=\frac{1}{3\pi^{2}}(\hbar c/n)^{-3}\times 3E^{2}$ is the photonic density of states in the absorber. It is important to note that only part of the radiation, i.e., the radiation into the limited solid angle within the critical angle of total reflection, can escape from the absorber, as shown in Fig. 5, and this results in the photovoltaic current loss.
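The escape-cone geometry that enters the rates below can be checked numerically. For isotropic photons inside a medium of refractive index $n$, the mean $+x$ velocity component restricted to the cone $\theta<\theta_{c}$ with $\sin\theta_{c}=1/n$ is $c/(4n^{3})$; this $1/n^{3}$ cancels the $n^{3}$ in the in-medium photon density of states, which is why the escaping rate ends up independent of $n$:

```python
import math

# Midpoint-rule check of the escape-cone average: the solid-angle average
# (1/4pi) * Int_{theta<theta_c} (c/n) cos(theta) dOmega equals c/(4 n^3),
# with sin(theta_c) = 1/n. We work in units where c = 1.
def cone_average(n, steps=100000):
    """Average +x light-velocity component over the escape cone, c = 1."""
    theta_c = math.asin(1.0 / n)
    dth = theta_c / steps
    total = 0.0
    for i in range(steps):
        th = (i + 0.5) * dth
        # (1/4pi) * (1/n) cos(th) * 2*pi*sin(th) dth
        total += (1.0 / n) * math.cos(th) * 0.5 * math.sin(th) * dth
    return total
```

The closed form follows from $\int_{0}^{\theta_{c}}\cos\theta\sin\theta\,{\rm d}\theta=\sin^{2}\theta_{c}/2=1/(2n^{2})$, so the average is $(c/n)\times 1/(4n^{2})=c/(4n^{3})$.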
The radiation rate $R^{sp(\pm)}(E)$ into the escape cones in the $\pm x$-direction, per unit volume, per unit energy, and per unit time, is then $$\displaystyle R^{sp(\pm)}(E)=\langle|({\vec{c}}/n)\cdot{\vec{e}}_{\pm x}|\rangle_{\theta<\theta_{c}}\times|\mathcal{M}|^{2}\mathcal{D}_{\gamma}^{\rm cell}(E)$$ $$\displaystyle\times\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})n^{e}_{E^{\prime}}n^{h}_{\Delta E-E^{\prime}}{\rm d}E^{\prime},$$ (13) where $\langle|\frac{{\vec{c}}}{n}\cdot{\vec{e}}_{\pm x}|\rangle_{\theta<\theta_{c}}$ represents the velocity of light in the $\pm x$-directions when averaged inside the escape cones. The geometric average yields an expression for the rates that is independent of $n$, i.e., $$\displaystyle R^{sp(\pm)}(E)=\frac{c}{4}|\mathcal{M}|^{2}\mathcal{D}_{\gamma}^{0}(E)$$ $$\displaystyle\times\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})n^{e}_{E^{\prime}}n^{h}_{\Delta E-E^{\prime}}{\rm d}E^{\prime}.$$ (14) Under the same approximation using the constant transition matrix element $\mathcal{M}$, the absorption coefficient $\alpha(E)$ is given microscopically by $$\displaystyle\alpha(E)$$ $$\displaystyle=$$ $$\displaystyle|\mathcal{M}|^{2}\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})$$ (15) $$\displaystyle\times(1-n^{e}_{E^{\prime}}-n^{h}_{\Delta E-E^{\prime}}){\rm d}E^{\prime}.$$ Using the homogeneous radiation rate and absorption coefficient, we consider continuity equations for the photon number current density inside the absorber. Let $n^{\gamma(\pm)}_{E}$ and $j^{(\pm)}_{E}$ be the photon number density (per unit volume and per unit energy) and the photon number current density (per unit area, per unit time, and per unit energy) with energy $E$ propagating in the $\pm x$-directions.
Then, the continuity equations under the steady-state condition are given by $$\displaystyle\partial_{t}n^{\rm\gamma(\pm)}_{E}=-\partial_{x}j^{(\pm)}_{E}(x)+R^{sp(\pm)}(E)-\alpha(E)j^{(\pm)}_{E}(x)=0,$$ (16) for $0<x<w$ (Fig. 5). The continuity equations under the appropriate boundary conditions, $$\displaystyle j^{(+)}_{E}(x\leq 0)$$ $$\displaystyle=$$ $$\displaystyle j^{\rm sun}_{E},$$ (17) $$\displaystyle j^{(-)}_{E}(x\geq w)$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (18) give a solution for the output photon-number current density, $$\displaystyle j^{(+)}_{E}(x=w)$$ $$\displaystyle=$$ $$\displaystyle R^{sp(+)}(E)/\alpha(E),$$ (19) $$\displaystyle j^{(-)}_{E}(x=0)$$ $$\displaystyle=$$ $$\displaystyle R^{sp(-)}(E)/\alpha(E),$$ (20) where perfect absorption ($\alpha(E)w\gg 1$) and zero reflection at the front surface were assumed again. From the results, the number of photons radiated out from the absorber per unit time ($\equiv{\rm d}N_{\gamma}^{Rad}/{\rm d}t$) through the front and back surfaces is $$\displaystyle\frac{{\rm d}N_{\gamma}^{Rad}}{{\rm d}t}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{A}\int_{E_{g}}^{\infty}(j^{(+)}_{E}(x=w)+j^{(-)}_{E}(x=0)){\rm d}E$$ (21) $$\displaystyle=$$ $$\displaystyle\frac{\mathcal{A}c}{2\pi^{2}}\left(\frac{1}{\hbar c}\right)^{3}\int_{E_{g}}^{\infty}\frac{E^{2}\langle n^{e}n^{h}\rangle_{\Delta E}}{\langle 1-n^{e}-n^{h}\rangle_{\Delta E}}{\rm d}E,$$ where Eq. (14) and Eq.
(15) were inserted, and $$\displaystyle\langle n^{e}n^{h}\rangle_{\Delta E}$$ $$\displaystyle\equiv\frac{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})n^{e}_{E^{\prime}}n^{h}_{\Delta E-E^{\prime}}{\rm d}E^{\prime}}{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}},$$ (22) $$\displaystyle\langle 1-n^{e}-n^{h}\rangle_{\Delta E}$$ $$\displaystyle\equiv\frac{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime})(1-n^{e}_{E^{\prime}}-n^{h}_{\Delta E-E^{\prime}}){\rm d}E^{\prime}}{\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}}.$$ (23) Equation (21) is a generalized Planck’s law for indirect gap semiconductors. When the band filling effect can be neglected ($n^{e(h)}_{E_{e(h)}}\ll 1$), it is approximated by $$\displaystyle\frac{{\rm d}N_{\gamma}^{Rad}}{{\rm d}t}$$ $$\displaystyle\approx$$ $$\displaystyle\frac{\mathcal{A}c}{2\pi^{2}}\left(\frac{1}{\hbar c}\right)^{3}\int_{E_{g}}^{\infty}E^{2}\langle n^{e}n^{h}\rangle_{\Delta E}{\rm d}E.$$ (24) Because the radiative loss of one photon is equal to the loss of one electron-hole pair, ${\rm d}N_{\gamma}^{Rad}/{\rm d}t$ can be related to $J^{e(h),{\rm rad}}_{E_{e(h)}}$. For indirect gap semiconductor absorbers — the part of ${\rm d}N_{\gamma}^{Rad}/{\rm d}t$ in Eq. (24) with Eq. (22) that comes from recombination of the electrons (holes) in a small energy window, $E_{e(h)}<E^{\prime}<E_{e(h)}+{\rm d}E_{e(h)}$, divided by the number of corresponding electron (hole) states, $\mathcal{V}\mathcal{D}^{e(h)}_{E_{e(h)}}{\rm d}E_{e(h)}$, gives the radiation loss rates for the microscopic carrier distribution functions.
This produces the following expressions: $$\displaystyle J^{e,{\rm rad}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}$$ $$\displaystyle\times$$ $$\displaystyle\int_{0}^{\infty}\frac{(E_{g}+E_{e}+E_{h})^{2}\sqrt{E_{h}}\times n^{e}_{E_{e}}n^{h}_{E_{h}}}{(\pi/8)d_{e}(E_{e}+E_{h})^{2}}{\rm d}E_{h},$$ (25) $$\displaystyle J^{h,{\rm rad}}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}$$ $$\displaystyle\times$$ $$\displaystyle\int_{0}^{\infty}\frac{(E_{g}+E_{e}+E_{h})^{2}\sqrt{E_{e}}\times n^{e}_{E_{e}}n^{h}_{E_{h}}}{(\pi/8)d_{h}(E_{e}+E_{h})^{2}}{\rm d}E_{e}.$$ (26) For direct gap semiconductor absorbers — taking the momentum conservation discussed in Sec. II.1 into account, the radiation loss rates for the microscopic carrier distribution functions are obtained using a similar analysis. Here we simply show the final results: $$\displaystyle J^{e,{\rm rad}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}\frac{(E_{g}+E_{e}+E_{h})^{2}}{d_{e}\sqrt{E_{e}}}$$ $$\displaystyle\times$$ $$\displaystyle\left.\left(1+\frac{m_{e}^{\ast}}{m_{h}^{\ast}}\right)\frac{n^{e}_{E_{e}}n^{h}_{E_{h}}}{1-n^{e}_{E_{e}}-n^{h}_{E_{h}}}\right|_{E_{h}=\frac{m_{e}^{\ast}}{m_{h}^{\ast}}E_{e}},$$ (27) $$\displaystyle J^{h,{\rm rad}}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}\frac{(E_{g}+E_{e}+E_{h})^{2}}{d_{h}\sqrt{E_{h}}}$$ $$\displaystyle\times$$ $$\displaystyle\left.\left(1+\frac{m_{h}^{\ast}}{m_{e}^{\ast}}\right)\frac{n^{e}_{E_{e}}n^{h}_{E_{h}}}{1-n^{e}_{E_{e}}-n^{h}_{E_{h}}}\right|_{E_{e}=\frac{m_{h}^{\ast}}{m_{e}^{\ast}}E_{h}}.$$ (28) Insertion of $1-n^{e}_{E_{e}}-n^{h}_{E_{h}}\approx 1$ into the denominators in Eq. (27) and Eq. (28) gives approximate expressions for direct gap semiconductors that correspond to Eq. (25) and Eq. (26) for indirect gap semiconductors.
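A useful sanity check on Eq. (22) and Eq. (24) is the nondegenerate limit. The sketch below assumes Boltzmann forms for the distributions (an assumption for illustration, not the paper's general solution): with $n^{e}(E)=e^{-\beta(E-\mu_{e})}$ and $n^{h}(E)=e^{-\beta(E-\mu_{h})}$ measured from the band edges, the product $n^{e}(E^{\prime})\,n^{h}(\Delta E-E^{\prime})=e^{\beta(\mu_{e}+\mu_{h}-\Delta E)}$ is the same for every split of $\Delta E$, so the weighted average of Eq. (22) collapses to that constant and Eq. (24) becomes a Boltzmann-tail Planck law:

```python
import math

# Check: with nondegenerate (Boltzmann) distributions, the DOS-weighted
# average <n^e n^h>_{dE} of Eq. (22) equals exp(beta*(mu_e + mu_h - dE))
# exactly, independent of how dE is split between electrons and holes.
BETA = 1.0 / 0.02585        # 1/(kB * 300 K), in 1/eV
MU_E = MU_H = -0.1          # quasi-Fermi levels from the band edges (eV), illustrative

def avg_nenh(dE, n_pts=400):
    """DOS-weighted average <n^e n^h> over splittings of dE, cf. Eq. (22)."""
    num = den = 0.0
    for i in range(1, n_pts):
        Ee = dE * i / n_pts
        Eh = dE - Ee
        weight = math.sqrt(Ee * Eh)      # D_e(Ee) * D_h(Eh) up to constant prefactors
        num += weight * math.exp(-BETA * (Ee - MU_E)) * math.exp(-BETA * (Eh - MU_H))
        den += weight
    return num / den
```

In this limit the radiated photon number of Eq. (24) depends on the carriers only through the quasi-Fermi-level sum, which is how the familiar chemical-potential-shifted Planck law of the detailed-balance treatments is recovered.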
II.3 Carrier extraction rate: $J^{e(h),{\rm out}}_{E_{e(h)}}$ The relaxation dynamics of a system of interest caused by weak interaction with the environment can be described using a standard approach that is widely used in studies of open quantum systems Carmichael ; Breuer . The carrier extraction rate, $J^{e(h),{\rm out}}_{E_{e(h)}}$, is also obtained using a similar approach. First, as shown in Fig. 1, the complete solar cell system is divided into four parts: the main electron-hole system in the absorber (Carrier System (sys)), the electron and hole reservoirs (Bath 1 and Bath 2), and the phonon reservoir (Bath 3). Accordingly, the noninteracting Hamiltonian for the whole system, $H_{0}$, can be given as the sum of four parts, i.e., $H_{0}=H_{0,{\rm sys}}+H_{0,{\rm Bath1}}+H_{0,{\rm Bath2}}+H_{0,{\rm Bath3}}$, where $$\displaystyle H_{0,{\rm sys}}=\sum_{k}\left((E_{g}+E_{e}(k))e_{k}^{\dagger}e_{k}+E_{h}(k)h_{k}^{\dagger}h_{k}\right),$$ (29) $$\displaystyle H_{0,{\rm Bath1}}=\sum_{k^{\prime}}\epsilon^{c}_{k^{\prime}}c_{k^{\prime}}^{\dagger}c_{k^{\prime}},$$ (30) $$\displaystyle H_{0,{\rm Bath2}}=\sum_{k^{\prime}}\epsilon^{d}_{k^{\prime}}d_{k^{\prime}}^{\dagger}d_{k^{\prime}},$$ (31) $$\displaystyle H_{0,{\rm Bath3}}=\sum_{q}E^{ph}_{q}b_{q}^{\dagger}b_{q}.$$ (32) Here, $e_{k}$ and $h_{k}$ are fermionic annihilation operators that are defined using the anticommutation relations, $[e_{k},e_{k^{\prime}}^{\dagger}]_{+}=[h_{k},h_{k^{\prime}}^{\dagger}]_{+}=\delta_{k,k^{\prime}}$ ($[X,Y]_{+}\equiv XY+YX$), of the electrons and holes in the carrier system (absorber), with momentum $k$ and energies of $E_{g}+E_{e}(k)$ and $E_{h}(k)$, respectively.
$c_{k^{\prime}}$ and $d_{k^{\prime}}$ represent the fermionic annihilation operators, defined using the anticommutation relations $[c_{k},c_{k^{\prime}}^{\dagger}]_{+}=[d_{k},d_{k^{\prime}}^{\dagger}]_{+}=\delta_{k,k^{\prime}}$, for the electrons in Bath 1 and the holes in Bath 2 with momentum $k^{\prime}$, and energies of $\epsilon^{c}_{k^{\prime}}$ and $\epsilon^{d}_{k^{\prime}}$, respectively. $b_{q}$ represents a bosonic annihilation operator, which is defined using the commutation relation $[b_{q},b_{q^{\prime}}^{\dagger}]_{-}=\delta_{q,q^{\prime}}$ ($[X,Y]_{-}\equiv XY-YX$) for the phonons in Bath 3 (the crystal lattice in the absorber) with momentum $q$ and energy $E^{ph}_{q}$. Under the assumption that the carrier system interacts weakly with the environments (Bath 1 + Bath 2 + Bath 3), the density matrix of the whole system, $\rho$, can be approximated by a product of the matrices for the subsystems: $$\displaystyle\rho=\rho_{\rm sys}\otimes\rho_{\rm Bath1}\otimes\rho_{\rm Bath2}\otimes\rho_{\rm Bath3}.$$ (33) With this density matrix, the quantum and statistical average of any physical quantity $O$ is given by $\langle O\rangle={\rm Tr}(O\rho)$. Here, the density matrix for the Carrier System, denoted by $\rho_{\rm sys}$, is the matrix of interest and will be determined using the von Neumann equation Breuer .
Additionally, the density matrices for the environments are assumed to be $$\displaystyle\rho_{\rm Bath1}=\exp\left(-\beta_{c}(H_{0,{\rm Bath1}}-\mu_{c}N_{c})\right)/Z_{\rm Bath1},$$ (34) $$\displaystyle\rho_{\rm Bath2}=\exp\left(-\beta_{c}(H_{0,{\rm Bath2}}-(-\mu_{v})N_{d})\right)/Z_{\rm Bath2},$$ (35) $$\displaystyle\rho_{\rm Bath3}=\exp\left(-\beta_{\rm ph}H_{0,{\rm Bath3}}\right)/Z_{\rm Bath3},$$ (36) which represent the matrices in their thermal equilibrium states, i.e., with temperature $T_{c}$ and electron chemical potential $\mu_{c}$ for Bath 1, $T_{c}$ and hole chemical potential $\mu_{h}(=-\mu_{v})$ for Bath 2, and phonon temperature $T_{\rm ph}$ and a chemical potential of zero for Bath 3. The $\beta$s represent inverse temperatures. $N_{c}=\sum_{k^{\prime}}c_{k^{\prime}}^{\dagger}c_{k^{\prime}}$ and $N_{d}=\sum_{k^{\prime}}d_{k^{\prime}}^{\dagger}d_{k^{\prime}}$ represent the total numbers of carriers in Bath 1 and Bath 2, respectively, and the $Z$s are normalization factors used to ensure that $$\displaystyle{\rm Tr}(\rho_{\rm Bath1})={\rm Tr}(\rho_{\rm Bath2})={\rm Tr}(\rho_{\rm Bath3})=1.$$ (37) Using the density matrix, the distribution functions for the carriers in the absorber are defined as $$\displaystyle n^{e}_{k}=n^{e}_{E_{e}(=E_{e}(k))}\equiv\langle e_{k}^{\dagger}e_{k}\rangle,$$ (38) $$\displaystyle n^{h}_{k}=n^{h}_{E_{h}(=E_{h}(k))}\equiv\langle h_{k}^{\dagger}h_{k}\rangle,$$ (39) while those for the particles in the baths are given by $$\displaystyle\langle c^{\dagger}_{k^{\prime}}c_{k^{\prime}}\rangle$$ $$\displaystyle=$$ $$\displaystyle f^{F}_{\mu_{c},\beta_{c}}(\epsilon^{c}_{k^{\prime}}),$$ (40) $$\displaystyle\langle d^{\dagger}_{k^{\prime}}d_{k^{\prime}}\rangle$$ $$\displaystyle=$$ $$\displaystyle f^{F}_{-\mu_{v},\beta_{c}}(\epsilon^{d}_{k^{\prime}}),$$ (41) $$\displaystyle\langle b^{\dagger}_{q}b_{q}\rangle$$ $$\displaystyle=$$ $$\displaystyle f^{B}_{0,\beta_{\rm ph}}(E^{ph}_{q}),$$ (42) where $f^{F}_{\mu,\beta}(E)(\equiv
1/(e^{\beta(E-\mu)}+1))$ and $f^{B}_{\mu,\beta}(E)(\equiv 1/(e^{\beta(E-\mu)}-1))$ are the Fermi-Dirac and Bose-Einstein distribution functions, respectively, with inverse temperature $\beta$ and chemical potential $\mu$. The electron extraction rate enters the equations for the distribution functions as a second-order perturbation expansion with respect to the weak interaction between the carrier system and Bath 1, $H^{\prime}=H_{{\rm sys}-{\rm Bath1}}$, where the interaction Hamiltonian takes the form $$\displaystyle H_{{\rm sys}-{\rm Bath1}}=\sum_{k,k^{\prime}}(T^{e}_{k,k^{\prime}}e_{k}c_{k^{\prime}}^{\dagger}+(T^{e}_{k,k^{\prime}})^{\ast}c_{k^{\prime}}e_{k}^{\dagger}).$$ (43) The coupling parameter $T^{e}_{k,k^{\prime}}$ represents the tunneling amplitude of an electron passing from the absorber through the tunnel barrier to the electron reservoir. Therefore, $T^{e}_{k,k^{\prime}}$ is a function of the overlap integral of the wavefunctions in the absorber and the reservoir, i.e., it is a function of $k$ and $k^{\prime}$, of the height and width of the tunnel barrier, and of the absorber thickness. Switching to the interaction picture, where $O_{I}(t)\equiv e^{i(H_{0}/\hbar)t}Oe^{-i(H_{0}/\hbar)t}$, the von Neumann equation for the density matrix reads $$\displaystyle\frac{d}{dt}\rho_{I}(t)=\frac{1}{i\hbar}[H_{I}^{\prime}(t),\rho_{I}(t)]_{-},$$ (44) where $$\displaystyle H_{I}^{\prime}(t)=\sum_{k,k^{\prime}}(T^{e}_{k,k^{\prime}}e_{k}c_{k^{\prime}}^{\dagger}e^{-i(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})t/\hbar}+{\rm h.c.})$$ (45) is the interaction Hamiltonian $H^{\prime}(=H_{{\rm sys}-{\rm Bath1}})$ in the interaction picture. Successive iteration and time integration of Eq.
(44) gives $$\displaystyle\frac{d}{dt}\rho_{I}$$ $$\displaystyle=$$ $$\displaystyle\left(\frac{1}{i\hbar}\right)^{2}\int_{0}^{\infty}[H_{I}^{\prime}% (t),[H_{I}^{\prime}(t-\tau),\rho_{I}(t-\tau)]_{-}]_{-}{\rm d}\tau$$ (46) $$\displaystyle\sim$$ $$\displaystyle\left(\frac{1}{i\hbar}\right)^{2}\int_{0}^{\infty}[H_{I}^{\prime}% (t),[H_{I}^{\prime}(t-\tau),\rho_{I}(t)]_{-}]_{-}{\rm d}\tau,$$ where the initial time contribution from $t=-\infty$ is neglected in the first equation, and a Markov approximation, under the assumption that the main system dynamics are sufficiently slow when compared with the memory time in the environments, is used in the second equation Carmichael ; Breuer . Because we are considering steady-state operation of the solar cells, the Markov approximation can be used safely. Using Eq. (46), the extraction rate for an electron with kinetic energy $E_{e}(=E_{e}(k))$ for momentum $k$ is given by $$\displaystyle-J^{e,{\rm out}}_{E_{e}}=\frac{d}{dt}\langle e_{k}^{\dagger}e_{k}% \rangle={\rm Tr}\left(e_{k}^{\dagger}e_{k}\dot{\rho}_{I}(t)\right)$$ (47) $$\displaystyle=\left(\frac{1}{i\hbar}\right)^{2}\int_{0}^{\infty}{\rm Tr}\left(% e_{k}^{\dagger}e_{k}[H_{I}^{\prime}(t),[H_{I}^{\prime}(t-\tau),\rho_{I}(t)]_{-% }]_{-}\right){\rm d}\tau$$ Using the cyclic property of the trace, where ${\rm Tr}(XYZ)={\rm Tr}(ZXY)$, the integrand on the right-hand side of the third equation is given explicitly by $$\displaystyle{\rm Tr}\left(\left(e_{k}^{\dagger}e_{k}H_{I}^{\prime}(t)H_{I}^{% \prime}(t-\tau)-H_{I}^{\prime}(t-\tau)H_{I}^{\prime}(t)e_{k}^{\dagger}e_{k}% \right.\right.$$ (48) $$\displaystyle\left.\left.-H_{I}^{\prime}(t-\tau)e_{k}^{\dagger}e_{k}H_{I}^{% \prime}(t)+H_{I}^{\prime}(t)e_{k}^{\dagger}e_{k}H_{I}^{\prime}(t-\tau)\right)% \rho_{I}(t)\right).$$ Considering the assumption in Eq. (33), the trace for the whole system can be provided by successive partial traces of the subsystems. 
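The bath occupation functions entering Eqs. (38)-(42) are elementary Fermi-Dirac and Bose-Einstein expressions. A minimal numerical sketch (plain Python, energies in eV; the temperature value is illustrative only):

```python
import math

def f_fermi(E, mu, beta):
    # Fermi-Dirac occupation f^F_{mu,beta}(E) = 1/(exp(beta*(E-mu)) + 1)
    return 1.0 / (math.exp(beta * (E - mu)) + 1.0)

def f_bose(E, mu, beta):
    # Bose-Einstein occupation f^B_{mu,beta}(E) = 1/(exp(beta*(E-mu)) - 1)
    return 1.0 / (math.exp(beta * (E - mu)) - 1.0)

beta = 1.0 / 0.0259              # ~300 K, in eV units (illustrative)

# At E = mu the Fermi function equals 1/2 at any temperature:
half = f_fermi(0.5, 0.5, beta)   # -> 0.5

# Bath 3 phonons carry zero chemical potential, Eq. (42):
n_ph = f_bose(0.05, 0.0, beta)   # occupation of a ~50 meV phonon mode
```

The Bose factor obeys the detailed-balance identity $f^{B}/(1+f^{B})=e^{-\beta E}$, which reappears below in the phonon emission and absorption terms.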
By retaining only the terms that do not vanish after the trace is taken, Eq. (48) can be rewritten as $$\displaystyle\sum_{k^{\prime}}|T^{e}_{k,k^{\prime}}|^{2}(e^{i(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})\tau/\hbar}-e^{-i(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})\tau/\hbar})$$ (49) $$\displaystyle\times{\rm Tr}\left((e_{k}^{\dagger}e_{k}e^{\dagger}_{k}e_{k}c_{k^{\prime}}c_{k^{\prime}}^{\dagger}-e_{k}e_{k}^{\dagger}e_{k}e_{k}^{\dagger}c_{k^{\prime}}^{\dagger}c_{k^{\prime}})\rho_{I}(t)\right)$$ $$\displaystyle=$$ $$\displaystyle\sum_{k^{\prime}}|T^{e}_{k,k^{\prime}}|^{2}(e^{i(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})\tau/\hbar}-e^{-i(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})\tau/\hbar})$$ $$\displaystyle\times\left(n_{E_{e}}^{e}(1-f^{F}_{\mu_{c},\beta_{c}}(\epsilon_{k^{\prime}}^{c}))-(1-n_{E_{e}}^{e})f^{F}_{\mu_{c},\beta_{c}}(\epsilon_{k^{\prime}}^{c})\right).$$ By inserting Eq. (49) into Eq. (47) and performing the integration with respect to time, we obtain $$\displaystyle J^{e,{\rm out}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{2\pi}{\hbar}\sum_{k^{\prime}}|T^{e}_{k,k^{\prime}}|^{2}\delta(E_{g}+E_{e}-\epsilon_{k^{\prime}}^{c})$$ (50) $$\displaystyle\times\left(n^{e}_{E_{e}}-f^{F}_{\mu_{c},\beta_{c}}(E_{g}+E_{e})\right).$$ We therefore derive a simple expression for the extraction rate: $$\displaystyle J^{e,{\rm out}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\tau_{\rm out}^{e}}\left(n^{e}_{E_{e}}-f^{F}_{\mu_{c},\beta_{c}}(E_{g}+E_{e})\right),$$ (51) with extraction time $\tau_{\rm out}^{e}$ that is defined as $$\displaystyle(\tau_{\rm out}^{e})^{-1}=\left.\frac{2\pi}{\hbar}|T^{e}(E)|^{2}\mathcal{D}_{c}(E)\right|_{E=E_{g}+E_{e}},$$ (52) where $\mathcal{D}_{c}(E)(=\sum_{k^{\prime}}\delta(E-\epsilon_{k^{\prime}}^{c}))$ is the density of states in Bath 1, and $$\displaystyle|T^{e}(E)|^{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{\sum_{k^{\prime}}|T^{e}_{k,k^{\prime}}|^{2}\delta(E-
\epsilon_{k^{\prime}}^{c})}{\sum_{k^{\prime}}\delta(E-\epsilon_{k^{\prime}}^{c})},$$ (53) represents the strength of the tunneling coupling averaged over the states in Bath 1 at energy $E$. While $\tau_{\rm out}^{e}$ depends on the energy of the carriers, we treat it as a constant parameter in the following analysis for simplicity. In this sense, the carrier extraction time used here is an effective parameter representative of all the electrons tunneling between the absorber and the electrode. The same argument is also applicable to the hole extraction rate when using $H^{\prime}=H_{\rm sys-Bath2}=\sum_{k,k^{\prime}}(T^{h}_{k,k^{\prime}}h_{k}d_{k^{\prime}}^{\dagger}+{\rm h.c.})$. We therefore have a similar expression for holes: $$\displaystyle J^{h,{\rm out}}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\tau^{h}_{\rm out}}\left(n^{h}_{E_{h}}-f^{F}_{-\mu_{v},\beta_{c}}(E_{h})\right),$$ (54) with a hole extraction time of $$\displaystyle(\tau_{\rm out}^{h})^{-1}$$ $$\displaystyle=$$ $$\displaystyle\frac{2\pi}{\hbar}\sum_{k^{\prime}}|T_{k,k^{\prime}}^{h}|^{2}\delta(E_{h}-\epsilon_{k^{\prime}}^{d})$$ (55) $$\displaystyle\equiv$$ $$\displaystyle\left.\frac{2\pi}{\hbar}|T^{h}(E)|^{2}\mathcal{D}_{d}(E)\right|_{E=E_{h}},$$ where $|T^{h}(E)|^{2}$ is an effective tunneling probability for a hole tunneling from the absorber to Bath 2, and $\mathcal{D}_{d}(E)(=\sum_{k^{\prime}}\delta(E-\epsilon_{k^{\prime}}^{d}))$ is the density of states in Bath 2 at energy $E$. In the following, we also assume the simplest case, i.e., $\tau^{e}_{\rm out}=\tau^{h}_{\rm out}\equiv\tau_{\rm out}$, which does not limit the generality of the main conclusion. As mentioned earlier, $\tau_{\rm out}$ should be regarded as the effective time scale for carrier extraction.
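Eq. (51) states that each electron level simply relaxes toward the contact's Fermi occupation at the rate $1/\tau_{\rm out}^{e}$. A minimal sketch with purely illustrative parameter values (not taken from the text):

```python
import math

def f_fermi(E, mu, beta):
    return 1.0 / (math.exp(beta * (E - mu)) + 1.0)

# Illustrative numbers only (energies in eV, times in s):
tau_out = 1e-9                    # effective extraction time
E_g, E_e = 1.12, 0.10             # Si-like gap and electron kinetic energy
mu_c, beta_c = 1.0, 1.0 / 0.0259  # contact chemical potential and 1/kT

# Contact occupation the level relaxes toward, per Eq. (51):
target = f_fermi(E_g + E_e, mu_c, beta_c)

# dn/dt = -J^{e,out} = -(n - target)/tau_out, explicit Euler integration
n, dt = 0.8, 1e-11
for _ in range(2000):             # total time 20 ns >> tau_out
    n -= dt * (n - target) / tau_out
# n has relaxed onto the contact's Fermi occupation
```

After a few multiples of $\tau_{\rm out}$, the occupation is indistinguishable from the contact's Fermi function, which is exactly the stationary point of Eq. (51).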
In this sense, however, it could also be used to parametrize the time between photogeneration and extraction of the carriers, which may be constrained by other mechanisms; e.g., when photogeneration occurs at the center of the absorber, $\tau_{\rm out}$ cannot be less than the time delay given by the distance from the point of generation to the contacts divided by the average carrier velocity. Such time delays would be important in thicker solar cells and appear to be critical in solar cells using nanocrystals, organic solar cells Tang ; Heeger (including dye-sensitised Gratzel ), and perovskite solar cells Miyasaka , which have low carrier mobilities caused by disorder and the Frenkel-like localization Frenkel ; MarkFox of excitons. $\tau_{\rm out}$ could be measured using specific characterization methods suitable for each system, e.g., by optical characterization of the lifetimes of photo-generated carriers under short-circuit conditions and by transient measurement of the photocurrents Ishii ; Koster . The description based on the tunnel Hamiltonian given in Eq. (43) simply neglects the voltage drop at the contact. Ohmic contacts with energy losses at the contacts are outside the scope of this paper. II.4 Phonon scattering (thermalization) rate: $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$ The phonon scattering (thermalization) rate in the rate equation for the microscopic carrier distribution functions, given by $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$, can be obtained using an analysis similar to that presented in Sec. II.3.
In this case, interactions between the carriers and the phonons are considered using the perturbation Hamiltonian $H^{\prime}=H_{\rm sys-Bath3}$, which is given by $$\displaystyle H_{\rm sys-Bath3}=\sum_{q,k}g^{c}_{q}(b_{q}+b_{-q}^{\dagger})(e_{k+q}^{\dagger}e_{k}+{\rm h.c.})$$ $$\displaystyle+\sum_{q,k}g^{v}_{q}(b_{q}+b_{-q}^{\dagger})(h_{k+q}^{\dagger}h_{k}+{\rm h.c.}),$$ (56) where $g^{c}_{q}$ and $g^{v}_{q}$ are the electron-phonon coupling constants for the bottom-conduction-band and top-valence-band electrons in the absorber, respectively. The $q$-dependence of the coupling constants depends on the types of phonons involved. For an order-of-magnitude estimate of the thermalization rate, the longitudinal acoustic (LA) phonons that originate from the deformation potential are considered for Si absorbers: $$\displaystyle g^{c(v)}_{q}=a_{{\rm def},c(v)}\sqrt{\frac{\hbar q}{2\mathcal{V}v_{A}\rho_{A}}},$$ (57) where $a_{{\rm def},c(v)}$, $v_{A}$, and $\rho_{A}$ are the deformation potential for the conduction (valence) electrons, the phonon velocity, and the mass density in the absorber, respectively. A realistic estimate requires inclusion of the scattering caused by the other phonon modes, i.e., the transverse acoustic (TA), transverse optical (TO), and longitudinal optical (LO) modes, and the intra-valley scattering (within the degenerate bands) Suzuki , based on realistic electron and phonon band structures Cardona , which is far beyond the scope of this work.
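The $\sqrt{q}$ scaling of the deformation-potential coupling in Eq. (57) can be made concrete with a short sketch; all numerical values below (deformation potential, normalization volume, sound velocity, density) are assumed Si-like inputs for illustration, not values quoted in the text:

```python
import math

hbar = 1.054571817e-34           # J*s

def g_LA(q, a_def, volume, v_A, rho_A):
    # Eq. (57): g_q = a_def * sqrt(hbar*q / (2*V*v_A*rho_A))
    return a_def * math.sqrt(hbar * q / (2.0 * volume * v_A * rho_A))

# Assumed Si-like inputs (illustrative only):
a_def = 9.0 * 1.602176634e-19    # deformation potential ~9 eV, in J
V     = 1.0e-24                  # normalization volume, m^3
v_A   = 1.0e4                    # LA sound velocity, m/s
rho_A = 2329.0                   # mass density of Si, kg/m^3

g1 = g_LA(1.0e9, a_def, V, v_A, rho_A)
g4 = g_LA(4.0e9, a_def, V, v_A, rho_A)
# sqrt(q) scaling: quadrupling the phonon momentum doubles the coupling
```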
The interaction Hamiltonian in the interaction picture is $$\displaystyle H^{\prime}_{I}(t)=\sum_{q,k}\left(g^{c}_{q}e_{k+q}^{\dagger}e_{k% }b_{q}e^{i(E_{e}(k+q)-E_{e}(k)-E^{ph}_{q})t}+{\rm h.c.}\right)$$ $$\displaystyle+\sum_{q,k}\left(g^{c}_{q}e_{k+q}^{\dagger}e_{k}b_{-q}^{\dagger}e% ^{i(E_{e}(k+q)-E_{e}(k)+E^{ph}_{q})t}+{\rm h.c.}\right)$$ $$\displaystyle+\sum_{q,k}\left(g^{v}_{q}h_{k+q}^{\dagger}h_{k}b_{q}e^{i(E_{h}(k% +q)-E_{h}(k)-E^{ph}_{q})t}+{\rm h.c.}\right)$$ (58) $$\displaystyle\ +\sum_{q,k}\left(g^{v}_{q}h_{k+q}^{\dagger}h_{k}b_{-q}^{\dagger% }e^{i(E_{h}(k+q)-E_{h}(k)+E^{ph}_{q})t}+{\rm h.c.}\right).$$ Insertion of Eq. (58) into ${\rm Tr}(e^{\dagger}_{k}e_{k}\dot{\rho}_{I}(t))$ with the second-order Born-Markov approximation given in Eq. (46) and performing a time integration, we obtain the phonon scattering rates as follows: $$\displaystyle\left.\frac{d}{dt}n^{e}_{E_{e}}\right|_{\rm phonon}=-\frac{2\pi}{% \hbar}\sum_{q}(g^{c}_{q})^{2}$$ $$\displaystyle\times\delta\bigl{(}E_{e}(k)-E_{e}(k-q)-E_{q}^{ph}\bigr{)}\Bigl{(% }n^{e}_{k}(1-n_{k-q}^{e})$$ $$\displaystyle\left.\times(f^{B}_{0,\beta_{\rm ph}}(E^{ph}_{q})+1)-n_{k-q}^{e}(% 1-n_{k}^{e})f^{B}_{0,\beta_{\rm ph}}(E^{ph}_{q})\right)$$ $$\displaystyle+\frac{2\pi}{\hbar}\sum_{q}(g^{c}_{q})^{2}\delta\bigl{(}E_{e}(k+q% )-E_{e}(k)-E_{q}^{ph}\bigr{)}$$ $$\displaystyle\times\Bigl{(}n^{e}_{k+q}(1-n_{k}^{e})(f^{B}_{0,\beta_{\rm ph}}(E% ^{ph}_{q})+1)$$ $$\displaystyle\qquad\qquad\qquad-n_{k}^{e}(1-n_{k+q}^{e})f^{B}_{0,\beta_{\rm ph% }}(E^{ph}_{q})\Bigr{)},$$ (59) $$\displaystyle\left.\frac{d}{dt}n^{h}_{E_{h}}\right|_{\rm phonon}=-\frac{2\pi}{% \hbar}\sum_{q}(g^{v}_{q})^{2}$$ $$\displaystyle\times\delta\bigl{(}E_{h}(k)-E_{h}(k-q)-E_{q}^{ph}\bigr{)}\Bigl{(% }n^{h}_{k}(1-n_{k-q}^{h})$$ $$\displaystyle\left.\times(f^{B}_{0,\beta_{\rm ph}}(E^{ph}_{q})+1)-n_{k-q}^{h}(% 1-n_{k}^{h})f^{B}_{0,\beta_{\rm ph}}(E^{ph}_{q})\right)$$ $$\displaystyle+\frac{2\pi}{\hbar}\sum_{q}(g^{v}_{q})^{2}\delta\bigl{(}E_{h}(k+q% 
)-E_{h}(k)-E_{q}^{ph}\bigr{)}$$ $$\displaystyle\times\Bigl{(}n^{h}_{k+q}(1-n_{k}^{h})(f^{B}_{0,\beta_{\rm ph}}(E% ^{ph}_{q})+1)$$ $$\displaystyle\qquad\qquad\qquad-n_{k}^{h}(1-n_{k+q}^{h})f^{B}_{0,\beta_{\rm ph% }}(E^{ph}_{q})\Bigr{)}.$$ (60) Scattering rates in this form are equivalent to those obtained using Fermi’s golden rule calculation. The expression above includes a momentum representation of the carrier distribution functions that can be further stated using simpler expressions in the energy representation. First, we transform the $q$-summation into an integration over polar coordinates in the form $\sum_{q}=\frac{\mathcal{V}}{(2\pi)^{3}}\int_{0}^{\infty}2\pi|q|^{2}{\rm d}|q|% \int_{-1}^{1}{\rm d}(\cos{\theta})$, where $\theta$ is the angle between $k$ and $q$. The angular integration is then performed using the dispersion relation $E_{q}^{ph}=\hbar v_{A}|q|$, the coupling constants for LA phonons in Eq. (57), and $$\displaystyle E_{e(h)}(k)-E_{e(h)}(k-q)=\frac{\hbar^{2}(2|q||k|\cos{\theta}-|q% |^{2})}{2m^{\ast}_{e(h)}},$$ (61) $$\displaystyle E_{e(h)}(k+q)-E_{e(h)}(k)=\frac{\hbar^{2}(2|q||k|\cos{\theta}+|q% |^{2})}{2m^{\ast}_{e(h)}}.$$ (62) By making a change in the coordinates, where $|q|=\epsilon/(\hbar v_{A})$, we finally obtain $$\displaystyle\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$$ (63) $$\displaystyle=$$ $$\displaystyle-C_{ph}^{e(h)}\int_{0}^{\epsilon_{\rm cut}^{e(h),-}}\frac{% \epsilon^{2}d\epsilon}{\sqrt{E_{e(h)}}}\biggl{(}n^{e(h)}_{E_{e(h)}}(1-n_{E_{e(% h)}-\epsilon}^{e(h)})$$ $$\displaystyle\times$$ $$\displaystyle\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}-n_{E_{e(h)% }-\epsilon}^{e(h)}(1-n^{e(h)}_{E_{e(h)}})f^{B}_{0,\beta_{\rm ph}}({\epsilon})% \biggr{)}$$ $$\displaystyle+$$ $$\displaystyle C_{ph}^{e(h)}\int_{0}^{\epsilon_{\rm cut}^{e(h),+}}\frac{% \epsilon^{2}d\epsilon}{\sqrt{E_{e(h)}}}\biggl{(}n^{e(h)}_{E_{e(h)}+\epsilon}(1% -n^{e(h)}_{E_{e(h)}})$$ $$\displaystyle\times$$ 
$$\displaystyle\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}-n_{E_{e(h)}}^{e(h)}(1-n_{E_{e(h)}+\epsilon}^{e(h)})f^{B}_{0,\beta_{\rm ph}}({\epsilon})\biggr{)}.$$ Terms proportional to $f^{B}_{0,\beta_{\rm ph}}({\epsilon})$ and $1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})$ represent the scattering rates for phonon absorption and emission, respectively. Here, we have defined the coefficient $C_{ph}^{e(h)}\equiv\frac{a_{\rm def,c(v)}^{2}\sqrt{m_{e(h)}^{\ast}/2}}{4\pi\hbar^{4}v_{A}^{4}\rho_{A}}$ and the $E_{e(h)}$-dependent cutoff energies as follows: $$\displaystyle\epsilon_{\rm cut}^{e(h),-}\equiv\min\biggl{(}E_{e(h)},\epsilon_{\rm cut}^{ph},$$ $$\displaystyle\qquad\qquad 2v_{A}\bigl{(}\sqrt{2m_{e(h)}^{\ast}E_{e(h)}}-m_{e(h)}^{\ast}v_{A}\bigr{)}\biggr{)},$$ (64) $$\displaystyle\epsilon_{\rm cut}^{e(h),+}\equiv\min\biggl{(}\epsilon_{\rm cut}^{ph},2v_{A}\bigl{(}\sqrt{2m_{e(h)}^{\ast}E_{e(h)}}+m_{e(h)}^{\ast}v_{A}\bigr{)}\biggr{)},$$ (65) where $\epsilon_{\rm cut}^{ph}$ is the Debye cutoff energy for the LA phonons ($\sim 50$ meV for Si). The $E_{e(h)}$ dependences in Eq. (64) and Eq. (65) stem from the condition that the arguments of the delta functions in Eq. (59) and Eq. (60) are zero, i.e., from the requirements of energy and momentum conservation during the carrier-phonon scattering processes. A situation also occurs in which the cutoff energy $\epsilon_{\rm cut}^{e(h),-}$ becomes negative. This occurs when $\sqrt{2m_{e(h)}^{\ast}E_{e(h)}}-m_{e(h)}^{\ast}v_{A}<0$ for small $E_{e(h)}$, i.e., when $E_{e(h)}<\frac{1}{2}m_{e(h)}^{\ast}v_{A}^{2}\equiv E^{\rm PB}_{e(h)}$. In this case, carriers with $E_{e(h)}<E^{\rm PB}_{e(h)}$ cannot lose energy via phonon emission (i.e., carrier cooling does not occur) even at zero lattice temperature ($T_{\rm ph}=0$), because such emission is forbidden by energy and momentum conservation.
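The bottleneck threshold $E^{\rm PB}_{e(h)}=\frac{1}{2}m^{\ast}_{e(h)}v_{A}^{2}$ is easy to evaluate. A minimal sketch using the Si-model parameters quoted in the text ($v_{A}=10^{4}$ m/s, $m_{e}^{\ast}/m_{e}=1.08$, $m_{h}^{\ast}/m_{e}=0.55$):

```python
m_e = 9.1093837015e-31           # electron rest mass, kg
eV  = 1.602176634e-19            # J per eV

v_A = 1.0e4                      # LA phonon velocity, m/s
m_eff_e = 1.08 * m_e             # effective masses of the Si model
m_eff_h = 0.55 * m_e

def E_pb_meV(m_eff, v_A):
    # Phonon bottleneck energy E^PB = (1/2) m* v_A^2, returned in meV
    return 0.5 * m_eff * v_A ** 2 / eV * 1e3

E_pb_e = E_pb_meV(m_eff_e, v_A)  # ~0.307 meV for electrons
E_pb_h = E_pb_meV(m_eff_h, v_A)  # ~0.156 meV for holes
```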
However, this effect, which is called the phonon bottleneck effect, is normally negligible because the threshold energy $E^{\rm PB}_{e(h)}$, known as the phonon bottleneck energy, is very low (e.g., $v_{A}=10^{4}$ m/sec, $m_{e}^{\ast}/m_{e}=1.08$, and $m_{h}^{\ast}/m_{e}=0.55$ give $E^{\rm PB}_{e}=0.307$ meV and $E^{\rm PB}_{h}=0.156$ meV in the model for Si). We have already implicitly assumed that $E_{e(h)}>E^{\rm PB}_{e(h)}$ in Eq. (63) (which sets $\epsilon_{\rm cut}^{e(h),-}>0$ and the lower boundary of the latter integration domain to zero). The time scale for thermalization of the photogenerated carriers in the absorber can be estimated from Eq. (63). Consider the case where electrons (holes) with energy $E_{e(h)}$ are generated at an initial time $t=0$ under illumination by a narrow-band photon source at the corresponding energy. In this case, $n^{e(h)}_{E_{e(h)}}\neq 0$ and $n^{e(h)}_{E\neq E_{e(h)}}=0$ at $t=0$. When this condition is inserted into Eq. (63), the initial population dynamics are given by $$\displaystyle\frac{d}{dt}n^{e(h)}_{E_{e(h)}}=-\frac{1}{\tau^{e(h)}_{\rm ph}}n^{e(h)}_{E_{e(h)}},$$ (66) where the relaxation time $\tau^{e(h)}_{\rm ph}$ is defined by $$\displaystyle(\tau^{e(h)}_{\rm ph})^{-1}$$ $$\displaystyle=$$ $$\displaystyle C_{ph}^{e(h)}\biggl{(}\int_{0}^{\epsilon_{\rm cut}^{e(h),-}}\frac{\epsilon^{2}\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}}{\sqrt{E_{e(h)}}}d\epsilon$$ (67) $$\displaystyle+\int_{0}^{\epsilon_{\rm cut}^{e(h),+}}\frac{\epsilon^{2}f^{B}_{0,\beta_{\rm ph}}({\epsilon})}{\sqrt{E_{e(h)}}}d\epsilon\biggr{)}.$$ At the zero temperature of the lattice, insertion of $f_{0,\beta_{\rm ph}=+\infty}^{B}(\epsilon)=0$ into Eq.
(67) gives the simple expression $$\displaystyle\frac{1}{\tau^{e(h)}_{\rm ph}}=C_{ph}^{e(h)}\frac{(\epsilon_{\rm cut}^{e(h),-})^{3}}{3\sqrt{E_{e(h)}}}.$$ (68) Of course, the time constant $\tau^{e(h)}_{\rm ph}$ strictly gives the timescale of the initial relaxation dynamics in such an idealized situation. It nevertheless seems reasonable to consider that the carrier thermalization time in solar cells can also be estimated using $\tau^{e(h)}_{\rm ph}$ in Eq. (67). Fig. 6 shows the phonon relaxation rate estimated for Si as a function of the kinetic energies of the carriers. In our Si model, we found that the relaxation time ranges between $10^{-13.5}$ and $10^{-10}$ s in the energy window relevant for solar cells (shown in Fig. 5), which corresponds to the bandwidth of the solar spectrum, $k_{B}T_{S}$. This result is consistent with the measured timescale for the carrier cooling process, which ranges from sub-picosecond to hundreds of picoseconds Goldman ; Sabbah ; Suzuki , and also with the timescales in the literature BookGreen . A major difference (two orders of magnitude) between the results for electrons and holes originates from differences in the deformation potentials for the LA phonons. The situation can therefore change if other phonon modes and the fast electron-hole equilibration discussed in Suzuki are taken into account in the calculations. From this perspective, we stress again that the results in Fig. 6 are intended as an order-of-magnitude estimate. II.5 Effects of spectral broadening of the microscopic states In the previous four subsections (Sec. II.1, Sec. II.2, Sec. II.3, Sec. II.4), we have obtained explicit forms of the rate equations for the microscopic carrier distribution functions in Eq. (1) and Eq. (2). Indeed, as shown in the simulations below, direct use of the equations derived thus far is justified in most cases.
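For orientation, Eq. (68) together with the cutoff of Eq. (64) can be evaluated at $T_{\rm ph}=0$. The sketch below uses assumed Si-like inputs (in particular, the 9 eV deformation potential is a round number chosen for illustration, not a value from the text), so it reproduces only the order of magnitude:

```python
import math

hbar = 1.054571817e-34
eV   = 1.602176634e-19
m_e  = 9.1093837015e-31

# Assumed order-of-magnitude inputs for a Si-like absorber:
a_def   = 9.0 * eV               # deformation potential (assumed round number)
v_A     = 1.0e4                  # LA sound velocity, m/s
rho_A   = 2329.0                 # mass density, kg/m^3
m_eff   = 1.08 * m_e             # electron effective mass
E_debye = 0.050 * eV             # LA Debye cutoff, ~50 meV for Si
E       = 0.10 * eV              # electron kinetic energy

# Coefficient C_ph^e of Eq. (63) and cutoff eps_cut^{e,-} of Eq. (64):
C_ph = a_def ** 2 * math.sqrt(m_eff / 2.0) / (4.0 * math.pi * hbar ** 4 * v_A ** 4 * rho_A)
eps_cut = min(E, E_debye,
              2.0 * v_A * (math.sqrt(2.0 * m_eff * E) - m_eff * v_A))

# Eq. (68): emission-only relaxation rate at T_ph = 0
rate = C_ph * eps_cut ** 3 / (3.0 * math.sqrt(E))
tau = 1.0 / rate                 # sub-ps scale for these inputs
```

With these assumed parameters the result falls inside the $10^{-13.5}$-$10^{-10}$ s window quoted above for the Si model.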
However, in certain situations, the equations should be modified slightly. Modifications are required when the spectral broadening of the microscopic carrier states becomes important. These situations occur when the carriers are far out of equilibrium. The spectral broadening $\Gamma$ is automatically taken into account in the NEGF formalism as an imaginary part of the (retarded) self-energy ($\Gamma=-{\rm Im}\Sigma^{\rm r}$) Martin-Schwinger ; Kadanof-Baym ; Keldysh . Therefore, we also take the broadening effect into account in a satisfactory manner while keeping the calculations as simple as possible. Several factors can broaden the spectra of the single particle states in many-particle systems. One factor comes from the Coulomb interaction between the carriers. However, we consider the Coulomb interaction to have a minor or secondary effect on the properties of solar cells, which normally work at low carrier densities (as shown for Si solar cells in Richter ), and we have accordingly neglected it in our model. Other factors arise from interactions between the carrier system and the baths, or, more explicitly, from the carrier-phonon interaction and carrier extraction processes that occur in our model. Spectral broadening of the electrons (holes) with $E_{e(h)}$, given by $\Gamma^{e(h)}_{E_{e(h)}}(\equiv-{\rm Im}\Sigma^{r}(E_{e(h)}))$, is related to the time constants $\tau_{\rm out}^{e(h)}(=\tau_{\rm out})$ and $\tau_{\rm ph}^{e(h)}$, which we already determined in the preceding subsections: $$\displaystyle\Gamma^{e(h)}_{E_{e(h)}}=\frac{\hbar}{2\tau_{\rm ph}^{e(h)}}+\frac{\hbar}{2\tau_{\rm out}^{e(h)}}.$$ (69) Therefore, the broadening is given by $\hbar/(2\tau_{\rm ph}^{e(h)})$ when $\tau_{\rm ph}^{e(h)}\ll\tau_{\rm out}^{e(h)}$ and $\hbar/(2\tau_{\rm out}^{e(h)})$ when $\tau_{\rm out}^{e(h)}\ll\tau_{\rm ph}^{e(h)}$.
In the former case ($\tau_{\rm ph}^{e(h)}\ll\tau_{\rm out}^{e(h)}$), it is natural to consider that the carriers must be fully relaxed to establish thermal equilibrium with the lattice in the steady state. This assumption can be checked by solving the rate equations in Eq. (1) and Eq. (2) as follows. When the thermalization term $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$ dominates the rate equation, a steady state is achieved when all integrands in Eq. (63) disappear. This condition is fulfilled if $$\displaystyle\left(\frac{n^{e(h)}_{E_{e(h)}+\epsilon}}{1-n^{e(h)}_{E_{e(h)}+% \epsilon}}\right)\Bigg{/}\left(\frac{n^{e(h)}_{E_{e(h)}}}{1-n^{e(h)}_{E_{e(h)}% }}\right)$$ (70) $$\displaystyle=$$ $$\displaystyle\frac{f^{B}_{0,\beta_{\rm ph}}(\epsilon)}{1+f^{B}_{0,\beta_{\rm ph% }}(\epsilon)}=\exp(-\beta_{\rm ph}\epsilon)$$ $$\displaystyle=$$ $$\displaystyle\exp(-\beta_{\rm ph}(E_{e(h)}+\epsilon))/\exp(-\beta_{\rm ph}E_{e% (h)})$$ for every possible choice of $E_{e(h)}$ and $\epsilon$. This means that in the steady state, the distribution function is given by the Fermi-Dirac distribution, with some value for the chemical potential in the absorber $\mu^{\rm cell}_{e(h)}$: $$\displaystyle n^{e(h)}_{E_{e(h)}}=f^{F}_{\mu^{\rm cell}_{e(h)},\beta_{\rm ph}}% (E_{e(h)}).$$ (71) Therefore, these distribution functions are thermally distributed over the energy range $0<E_{e(h)}<k_{B}T_{\rm ph}$. In this case, inclusion of the broadening within the single particle spectra does not modify the distribution function as long as $\Gamma^{e(h)}_{E_{e(h)}}\left(=\hbar/(2\tau^{e(h)}_{\rm ph})\right)<k_{B}T_{% \rm ph}$ (=26 meV for 300 K). The condition appears to be satisfied well when we consider that a phonon relaxation time of 1 ps corresponds to broadening of 0.33 meV. In this way, we confirm that the broadening effect is negligible for $\tau_{\rm ph}^{e(h)}\ll\tau_{\rm out}^{e(h)}$. 
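The detailed-balance argument can also be checked numerically: if the occupations are Fermi-Dirac at the lattice temperature, the net phonon-emission integrand of Eq. (63) vanishes for any $(E,\epsilon)$. A minimal check (energies in eV; the chemical potential is an arbitrary trial value):

```python
import math

def fF(E, mu, beta):
    return 1.0 / (math.exp(beta * (E - mu)) + 1.0)

def fB(E, beta):
    return 1.0 / (math.exp(beta * E) - 1.0)

beta_ph = 1.0 / 0.0259           # lattice temperature ~300 K, eV units
mu_cell = 0.3                    # arbitrary trial chemical potential, eV

def net_emission(E, eps):
    # One of the integrands of Eq. (63), evaluated with
    # n = Fermi-Dirac at the lattice temperature, i.e., Eq. (71)
    n_hi = fF(E + eps, mu_cell, beta_ph)
    n_lo = fF(E, mu_cell, beta_ph)
    N = fB(eps, beta_ph)
    return n_hi * (1.0 - n_lo) * (1.0 + N) - n_lo * (1.0 - n_hi) * N

# vanishes (to rounding) for every (E, eps): the thermal state is stationary
residual = net_emission(0.10, 0.02)
```

The cancellation follows from $[n_{E+\epsilon}/(1-n_{E+\epsilon})]/[n_{E}/(1-n_{E})]=e^{-\beta_{\rm ph}\epsilon}=f^{B}/(1+f^{B})$, i.e., exactly the chain of identities in Eq. (70).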
The above consideration allows us to neglect broadening due to electron-phonon interactions in Eq. (69) over the entire range of $\tau_{\rm out}^{e(h)}$. We can therefore safely use the following approximation: $$\displaystyle\Gamma^{e(h)}_{E_{e(h)}}=\frac{\hbar}{2\tau_{\rm out}^{e(h)}}.$$ (72) This approximation greatly simplifies the calculations because $\tau_{\rm out}^{e(h)}(=\tau_{\rm out})$ is a given parameter that is identical for every carrier state in our model, whereas $\tau^{e(h)}_{\rm ph}$, which should be defined precisely from Eq. (63), is a function of the carrier distribution functions that are to be determined finally. We therefore use an approximation based on use of Eq. (72) for broadening of the single-particle states in this work. We define the lineshape function of the states using $\mathcal{A}_{\Gamma}(x)$, which is a Gaussian function with a half width at half maximum that is equal to $\Gamma^{e(h)}_{E_{e(h)}}$. Using $\Gamma\equiv(\log{2})^{-1/2}\Gamma^{e(h)}_{E_{e(h)}}$, the function is given by $$\displaystyle\mathcal{A}_{\Gamma}(x)=\sqrt{\frac{1}{\pi\Gamma^{2}}}\exp\left(-% (x/\Gamma)^{2}\right).$$ (73) A Lorentzian function is normally used for the lineshape of a single particle state (this also applies in the NEGF formalism Datta ) rather than a Gaussian spectral profile because the Gaussian distribution reflects the statistical fluctuations of the system; it is not derived naturally for ideal models without structural imperfections. We use the Gaussian function for a reason that is demonstrated later in the paper; however, it will not change our main conclusion. Using the line function, expressions for the generation and loss rates in the rate equation that has been derived thus far are modified as follows, where the modification becomes important when $\tau_{\rm out}\ll\tau_{\rm ph}^{e(h)}$. 
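The conventions of Eq. (73) can be verified directly: with $\Gamma\equiv(\log{2})^{-1/2}\Gamma^{e(h)}_{E_{e(h)}}$, the Gaussian is unit-normalized and falls to half its peak value exactly at $x=\Gamma^{e(h)}_{E_{e(h)}}$. A quick sketch (the 5 meV width is illustrative only):

```python
import math

def lineshape(x, hwhm):
    # Eq. (73) with Gamma = (log 2)^(-1/2) * hwhm
    G = hwhm / math.sqrt(math.log(2.0))
    return math.exp(-(x / G) ** 2) / math.sqrt(math.pi * G * G)

hwhm = 0.005                     # illustrative 5 meV broadening, hbar/(2*tau_out)

# half width at half maximum: value at x = hwhm is half the peak
ratio = lineshape(hwhm, hwhm) / lineshape(0.0, hwhm)   # -> 0.5

# unit normalization (rectangle rule; tails beyond +/-0.1 eV are negligible)
dx = 1e-5
norm = sum(lineshape(-0.1 + i * dx, hwhm) for i in range(20001)) * dx
```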
For the generation rate $J^{e(h),{\rm sun}}_{E_{e(h)}}$ — because the Sun’s spectrum is much broader than the broadening of the states, $k_{B}T_{S}\gg\Gamma$, we can neglect the effects of broadening on $J^{e(h),{\rm sun}}_{E_{e(h)}}$. We therefore use Eq. (7) and Eq. (8) for direct gap semiconductors and Eq. (II.2) and Eq. (II.2) for indirect gap semiconductors. For the radiative recombination loss rate $J^{e(h),{\rm rad}}_{E_{e(h)}}$ — inclusion of the spectral broadening modifies the density of states of the carriers. This effect replaces $\mathcal{D}_{e(h)}(E)$ with $\tilde{\mathcal{D}}_{e(h)}(E)$ in the analysis presented in Sec. II.2 (particularly in Eq. (14), Eq. (15), Eq. (22), and Eq. (LABEL:eq:1-Ne-Nh)), where $$\displaystyle\tilde{\mathcal{D}}_{e(h)}(E)$$ $$\displaystyle\equiv$$ $$\displaystyle\left(\mathcal{D}_{e(h)}\ast\mathcal{A}_{\Gamma}\right)(E)$$ (74) $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}{\mathcal{D}}_{e(h)}(E^{\prime})\mathcal{A}_{\Gamma}(E-E^{\prime}){\rm d}E^{\prime},$$ is the density of states of the carriers convolved with the spectral function. For example, the denominator in both Eq. (22) and Eq.
(LABEL:eq:1-Ne-Nh) can be modified as follows: $$\displaystyle\int_{-\infty}^{\infty}\tilde{\mathcal{D}}_{e}(E^{\prime})\tilde{% \mathcal{D}}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}$$ (75) $$\displaystyle=$$ $$\displaystyle\int_{-\infty}^{\infty}{\rm d}E^{\prime}\int_{0}^{\infty}{\rm d}E% _{e}\int_{0}^{\infty}{\rm d}E_{h}\mathcal{D}_{e}(E_{e})\mathcal{A}_{\Gamma}(E^% {\prime}-E_{e})$$ $$\displaystyle\times\mathcal{D}_{h}(E_{h})\mathcal{A}_{\Gamma}(\Delta E-E^{% \prime}-E_{h})$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}{\rm d}E_{e}\int_{0}^{\infty}{\rm d}E_{h}% \mathcal{D}_{e}(E_{e})\mathcal{D}_{h}(E_{h})$$ $$\displaystyle\qquad\qquad\times\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{e}-E_{% h}).$$ In the last equation, we used a general property of the convolution of Gaussian functions: $$\displaystyle\mathcal{A}_{\Gamma_{1}}\ast\mathcal{A}_{\Gamma_{2}}(E)$$ $$\displaystyle\equiv$$ $$\displaystyle\int_{-\infty}^{\infty}{\rm d}E^{\prime}\mathcal{A}_{\Gamma_{1}}(% E^{\prime})\mathcal{A}_{\Gamma_{2}}(E-E^{\prime})$$ (76) $$\displaystyle=$$ $$\displaystyle\mathcal{A}_{\sqrt{\Gamma_{1}^{2}+\Gamma_{2}^{2}}}(E).$$ A similar modification was made to the numerator in Eq. (22) and Eq. (LABEL:eq:1-Ne-Nh). 
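The Gaussian convolution identity of Eq. (76) used in this step is easy to confirm numerically for the lineshape of Eq. (73); the widths and evaluation point below are arbitrary:

```python
import math

def A(x, G):
    # Gaussian lineshape of Eq. (73) with width parameter G
    return math.exp(-(x / G) ** 2) / math.sqrt(math.pi * G * G)

G1, G2 = 0.3, 0.4                # arbitrary widths
E = 0.25                         # arbitrary evaluation point
dx = 1e-3

# numerical convolution (A_{G1} * A_{G2})(E) over a wide enough grid
conv = sum(A(xp, G1) * A(E - xp, G2) * dx
           for xp in (-5.0 + i * dx for i in range(10001)))

# Eq. (76): the result is again a Gaussian with width sqrt(G1^2 + G2^2)
exact = A(E, math.hypot(G1, G2))
```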
As a result, the recombination loss rates for the indirect transition are given by $$\displaystyle J^{e,{\rm rad}}_{E_{e}}=\frac{1}{w}\times\frac{c}{2\pi^{2}}\left% (\frac{1}{\hbar c}\right)^{3}\times\int_{0}^{\infty}{\rm d}E\ E^{2}$$ (77) $$\displaystyle\times\frac{\int_{0}^{\infty}\mathcal{D}_{h}(E^{\prime}_{h})n^{e}% _{E_{e}}n^{h}_{E^{\prime}_{h}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{e}-E^{% \prime}_{h}){\rm d}E^{\prime}_{h}}{\iint\mathcal{D}_{e}(E^{\prime}_{e})% \mathcal{D}_{h}(E^{\prime}_{h})\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E^{\prime% }_{e}-E^{\prime}_{h}){\rm d}E^{\prime}_{e}{\rm d}E^{\prime}_{h}},$$ $$\displaystyle J^{h,{\rm rad}}_{E_{h}}=\frac{1}{w}\times\frac{c}{2\pi^{2}}\left% (\frac{1}{\hbar c}\right)^{3}\times\int_{0}^{\infty}{\rm d}E\ E^{2}$$ (78) $$\displaystyle\times\frac{\int_{0}^{\infty}\mathcal{D}_{e}(E^{\prime}_{e})n^{e}% _{E^{\prime}_{e}}n^{h}_{E_{h}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E^{\prime}% _{e}-E_{h}){\rm d}E^{\prime}_{e}}{\iint\mathcal{D}_{e}(E^{\prime}_{e})\mathcal% {D}_{h}(E^{\prime}_{h})\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E^{\prime}_{e}-E^% {\prime}_{h}){\rm d}E^{\prime}_{e}{\rm d}E^{\prime}_{h}},$$ for the electrons and holes, respectively (where the approximation $1-n^{e}_{E_{e}}-n^{h}_{E_{h}}\approx 1$ was used). 
The above expressions can be simplified further by introducing the dimensionless functions $\Phi(x)$ and $\Theta(x,y)$: $$\displaystyle\Phi(x)$$ $$\displaystyle\equiv$$ $$\displaystyle\int_{0}^{\infty}\frac{{\rm d}s}{\sqrt{\pi}}s^{2}e^{-(s-x)^{2}}$$ (79) $$\displaystyle=$$ $$\displaystyle\frac{xe^{-x^{2}}}{2\sqrt{\pi}}+\frac{(1+2x^{2})(1+{\rm erf}(x))}{4},$$ $$\displaystyle\Theta(x,y)$$ $$\displaystyle\equiv$$ $$\displaystyle\int_{-x}^{\infty}\frac{(s+x)^{2}}{\Phi(s)}\times\frac{e^{-(s-y)^{2}}}{\sqrt{\pi}}{\rm d}s,$$ (80) where ${\rm erf}(x)$ is the Gaussian error function and $\Phi(x)$ has the following asymptotic forms: $$\displaystyle\Phi(x\gg 1)=x^{2},\quad\Phi(x\ll-1)=\frac{e^{-x^{2}}}{4\sqrt{\pi}|x|^{3}}.$$ (81) Using these functions, Eq. (77) and Eq. (78) can be rewritten as: $$\displaystyle J^{e,{\rm rad}}_{E_{e}}=\frac{8}{\pi d_{e}}\left(\frac{c/w}{2\pi^{2}}\right)\left(\frac{1}{\hbar c}\right)^{3}n^{e}_{E_{e}}$$ $$\displaystyle\qquad\times\int_{0}^{\infty}\sqrt{E_{h}}\Theta\left(\frac{E_{g}}{\sqrt{2}\Gamma},\frac{E_{e}+E_{h}}{\sqrt{2}\Gamma}\right)n^{h}_{E_{h}}{\rm d}E_{h},$$ (82) $$\displaystyle J^{h,{\rm rad}}_{E_{h}}=\frac{8}{\pi d_{h}}\left(\frac{c/w}{2\pi^{2}}\right)\left(\frac{1}{\hbar c}\right)^{3}n^{h}_{E_{h}}$$ $$\displaystyle\qquad\times\int_{0}^{\infty}\sqrt{E_{e}}\Theta\left(\frac{E_{g}}{\sqrt{2}\Gamma},\frac{E_{e}+E_{h}}{\sqrt{2}\Gamma}\right)n^{e}_{E_{e}}{\rm d}E_{e}.$$ (83) In the limit $\Gamma\to+0$, we find that $\Theta\to(E_{g}+E_{e}+E_{h})^{2}/(E_{e}+E_{h})^{2}$ in the integrands, which reproduces the original expressions of Eq. (II.2) and Eq. (II.2). Equations (82) and (83), derived in this way, are the explicit forms of the radiation loss rates for indirect gap semiconductor absorbers. A similar analysis is also applied to direct gap semiconductor absorbers. We find that the expressions in Eq. (II.2) and Eq.
(II.2) are modified by the broadening effect as follows: $$\displaystyle J^{e,{\rm rad}}_{E_{e}}=\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}\frac{\left(1+\frac{m_{e}^{\ast}}{m_{h}^{\ast}}\right)}{d_{e}\sqrt{E_{e}}}$$ (84) $$\displaystyle\quad\times\int_{0}^{\infty}E^{2}\frac{\sqrt{E_{eh}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{eh})n^{e}_{E_{e}}n^{h}_{E_{h}}}{\int_{0}^{\infty}\sqrt{E_{eh}^{\prime}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{eh}^{\prime}){\rm d}E_{eh}^{\prime}}{\rm d}E,$$ $$\displaystyle J^{h,{\rm rad}}_{E_{h}}=\frac{c}{2\pi^{2}w}\left(\frac{1}{\hbar c}\right)^{3}\frac{\left(1+\frac{m_{h}^{\ast}}{m_{e}^{\ast}}\right)}{d_{h}\sqrt{E_{h}}}$$ (85) $$\displaystyle\quad\times\int_{0}^{\infty}E^{2}\frac{\sqrt{E_{eh}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{eh})n^{e}_{E_{e}}n^{h}_{E_{h}}}{\int_{0}^{\infty}\sqrt{E_{eh}^{\prime}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{eh}^{\prime}){\rm d}E_{eh}^{\prime}}{\rm d}E,$$ where we used the approximation $1-n^{e}_{E_{e}}-n^{h}_{E_{h}}\approx 1$. Here, $\Delta E\equiv E-E_{g}$ and $m_{e}^{\ast}E_{e}=m_{h}^{\ast}E_{h}$ for direct gap semiconductors, and $E_{eh}\equiv E_{e}+E_{h}$. For the carrier extraction rate $J^{e(h),{\rm out}}_{E_{e(h)}}$ — inclusion of the broadening effect in this term is straightforward. The incoming particle number rates from the electrodes into the absorber are modified because the single particle states have a spectral width.
After modification, we obtain $$\displaystyle J^{e,{\rm out}}_{E_{e}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\tau_{\rm out}}\left(n^{e}_{E_{e}}-\tilde{f}^{F}_{\mu_{c},\beta_{c}}(E_{g}+E_{e})\right),$$ (86) $$\displaystyle J^{h,{\rm out}}_{E_{h}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\tau_{\rm out}}\left(n^{h}_{E_{h}}-\tilde{f}^{F}_{-\mu_{v},\beta_{c}}(E_{h})\right),$$ (87) where we assume that $\tau^{e(h)}_{\rm out}=\tau_{\rm out}$, and the Fermi-Dirac distribution convolved with the lineshape function is given by $$\displaystyle\tilde{f}^{F}_{\mu,\beta}(E)$$ $$\displaystyle=$$ $$\displaystyle(f^{F}_{\mu,\beta}\ast\mathcal{A}_{\Gamma})(E)$$ (88) $$\displaystyle=$$ $$\displaystyle\int_{-\infty}^{\infty}f^{F}_{\mu,\beta}(E^{\prime})\mathcal{A}_{\Gamma}(E-E^{\prime}){\rm d}E^{\prime}.$$ For the phonon scattering rate $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$, inclusion of the broadening effect replaces the $\delta$-functions in Eq. (59) and Eq. (60) with the convolved lineshape functions as follows: $$\displaystyle\delta\bigl{(}E_{e(h)}(k)-E_{e(h)}(k-q)-E_{q}^{ph}\bigr{)}$$ $$\displaystyle\to$$ $$\displaystyle\mathcal{A}_{\sqrt{2}\Gamma}\bigl{(}E_{e(h)}(k)-E_{e(h)}(k-q)-E_{q}^{ph}\bigr{)},$$ (89) $$\displaystyle\Bigl{(}=\int_{-\infty}^{\infty}{\rm d}E_{1}\int_{-\infty}^{\infty}{\rm d}E_{2}\ \delta\bigl{(}E_{1}-E_{2}-E_{q}^{ph}\bigr{)}$$ $$\displaystyle\quad\times\mathcal{A}_{\Gamma}\bigl{(}E_{1}-E_{e(h)}(k)\bigr{)}\mathcal{A}_{\Gamma}\bigl{(}E_{2}-E_{e(h)}(k-q)\bigr{)}\Bigr{)}$$ $$\displaystyle\delta\bigl{(}E_{e(h)}(k+q)-E_{e(h)}(k)-E_{q}^{ph}\bigr{)}$$ $$\displaystyle\to$$ $$\displaystyle\mathcal{A}_{\sqrt{2}\Gamma}\bigl{(}E_{e(h)}(k+q)-E_{e(h)}(k)-E_{q}^{ph}\bigr{)}.$$ (90) After the replacement, we can transform the $q$-summation into an integration over polar coordinates, and the angular integration can finally be performed, as was done in Sec. II.4.
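The width $\sqrt{2}\Gamma$ appearing in the replacements above reflects the fact that the convolution of two Gaussian lineshapes of width $\Gamma$ is again a Gaussian of width $\sqrt{2}\Gamma$. A short Python sketch, assuming the normalization $\mathcal{A}_{\Gamma}(x)=e^{-(x/\Gamma)^{2}}/(\sqrt{\pi}\,\Gamma)$ (an assumption consistent with the $\sqrt{2}\Gamma$ widths here, though the paper's explicit definition is not in this excerpt), checks this numerically:

```python
import math

def gauss(x, g):
    # Gaussian lineshape A_Gamma(x) = exp(-(x/Gamma)^2) / (sqrt(pi) * Gamma)
    # (assumed normalization; gives A_Gamma * A_Gamma = A_{sqrt(2) Gamma})
    return math.exp(-(x / g) ** 2) / (math.sqrt(math.pi) * g)

def conv(f, h, x, lim=10.0, n=20000):
    # Midpoint-rule convolution (f * h)(x) over [-lim, lim]
    dx = 2.0 * lim / n
    total = 0.0
    for i in range(n):
        u = -lim + (i + 0.5) * dx
        total += f(u) * h(x - u)
    return total * dx

g = 0.5
for x in (0.0, 0.8, -1.3):
    lhs = conv(lambda u: gauss(u, g), lambda u: gauss(u, g), x)
    assert abs(lhs - gauss(x, math.sqrt(2.0) * g)) < 1e-6
```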
The final result is $$\displaystyle\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}$$ $$\displaystyle=$$ $$\displaystyle\frac{-C_{ph}^{e(h)}}{\sqrt{E_{e(h)}}}\int_{0}^{\epsilon^{ph}_{\rm cut}}\epsilon^{2}{\rm d}\epsilon\int^{E_{+}}_{E_{-}}{\rm d}E^{\prime}$$ $$\displaystyle\mathcal{A}_{\sqrt{2}\Gamma}(E_{e(h)}-E^{\prime}-\epsilon)\biggl{(}n^{e(h)}_{E_{e(h)}}(1-n_{E^{\prime}}^{e(h)})$$ $$\displaystyle\times\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}-n_{E^{\prime}}^{e(h)}(1-n^{e(h)}_{E_{e(h)}})f^{B}_{0,\beta_{\rm ph}}({\epsilon})\biggr{)}$$ $$\displaystyle+$$ $$\displaystyle\frac{C_{ph}^{e(h)}}{\sqrt{E_{e(h)}}}\int_{0}^{\epsilon^{ph}_{\rm cut}}\epsilon^{2}{\rm d}\epsilon\int^{E_{+}}_{E_{-}}{\rm d}E^{\prime}$$ $$\displaystyle\mathcal{A}_{\sqrt{2}\Gamma}(E_{e(h)}-E^{\prime}+\epsilon)\biggl{(}n^{e(h)}_{E^{\prime}}(1-n_{E_{e(h)}}^{e(h)})$$ $$\displaystyle\times\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}-n_{E_{e(h)}}^{e(h)}(1-n^{e(h)}_{E^{\prime}})f^{B}_{0,\beta_{\rm ph}}({\epsilon})\biggr{)},$$ where we defined $E_{\pm}$ as $$\displaystyle E_{\pm}\equiv\left(\sqrt{E_{e(h)}}\pm\epsilon\sqrt{\frac{1}{2m^{\ast}_{e(h)}v_{A}^{2}}}\right)^{2}\geq 0.$$ (92) In the limit $\Gamma\to+0$ in Eq. (II.5), the Gaussian functions $\mathcal{A}_{\sqrt{2}\Gamma}$ become $\delta$-functions. The terms that survive the integration over $E^{\prime}$ come from the peaks at $E^{\prime}=E_{e(h)}\pm\epsilon$, which lie in the interval $E_{-}<E^{\prime}(=E_{e(h)}\pm\epsilon)<E_{+}$. In this way, Eq. (II.5) safely reproduces Eq. (63) when $\Gamma=+0$. Here, we again assumed that $E_{e(h)}>E^{\rm PB}_{e(h)}$, i.e., the phonon bottleneck effect was neglected, as it was in Eq. (63).
The reason for adopting the Gaussian lineshape: As mentioned earlier in this subsection, we have used a Gaussian lineshape for the microscopic states, despite the fact that it cannot be derived naturally for an ideal model without statistical fluctuations. This lineshape was adopted for the following technical reason. We have derived rate equations for the microscopic distribution functions of the carriers in both direct and indirect gap semiconductor absorbers. In the derivation for the indirect gap semiconductors, we introduced a probability density $p^{e(h)}(E_{e(h)})$ in Eq. (9) and used it to obtain the generation rate $J^{e(h),{\rm sun}}_{E_{e(h)}}$ and the recombination loss rate $J^{e(h),{\rm rad}}_{E_{e(h)}}$. The factor $\int_{0}^{\Delta E}\mathcal{D}_{e}(E^{\prime})\mathcal{D}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}$ that appears in the denominator of each of Eq. (10), Eq. (22), and Eq. (LABEL:eq:1-Ne-Nh) can be traced back to the same normalization factor in the definition of $p^{e(h)}(E_{e(h)})$ in Eq. (9), i.e., the number of possible combinations of all electron-hole pairs with $E_{e}$ and $E_{h}$ that can emit a photon of energy $E$. If the spectral broadening of the microscopic states is included, the normalization factor should then be modified by replacing $\mathcal{D}_{e(h)}(E)$ with $\tilde{\mathcal{D}}_{e(h)}(E)$ (as defined in Eq. (74)), as shown in Eq. (75). No mathematical problems arise here if the Gaussian lineshape function $\mathcal{A}_{\Gamma}(x)$ is used. However, if we use the Lorentzian lineshape function, $$\displaystyle\mathcal{A}^{L}_{\Gamma}(x)\equiv\frac{1}{\pi}\frac{\Gamma}{x^{2}+\Gamma^{2}},$$ (93) the situation changes.
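The Lorentzian lineshape in Eq. (93) is stable under convolution, $\mathcal{A}^{L}_{\Gamma_{1}}\ast\mathcal{A}^{L}_{\Gamma_{2}}=\mathcal{A}^{L}_{\Gamma_{1}+\Gamma_{2}}$, which is the property that produces the width $2\Gamma$ in the analysis that follows. A Python spot-check of this property using a simple midpoint rule (the window and grid sizes are arbitrary; the wide window is needed because of the slow $1/x^{2}$ tails):

```python
import math

def lorentz(x, g):
    # Normalized Lorentzian A^L_Gamma(x) = (1/pi) * Gamma / (x^2 + Gamma^2), Eq. (93)
    return g / (math.pi * (x * x + g * g))

def conv_at(E, g1, g2, lim=400.0, n=80000):
    # Midpoint-rule evaluation of the convolution (A^L_g1 * A^L_g2)(E)
    dE = 2.0 * lim / n
    total = 0.0
    for i in range(n):
        Ep = -lim + (i + 0.5) * dE
        total += lorentz(Ep, g1) * lorentz(E - Ep, g2)
    return total * dE

g1, g2 = 0.3, 0.7
for E in (0.0, 1.0, -2.5):
    assert abs(conv_at(E, g1, g2) - lorentz(E, g1 + g2)) < 1e-3
```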
Using a similar analysis, the normalization factor is calculated to be $$\displaystyle\int_{-\infty}^{\infty}\tilde{\mathcal{D}}_{e}(E^{\prime})\tilde{\mathcal{D}}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}$$ (94) $$\displaystyle=$$ $$\displaystyle\int_{-\infty}^{\infty}{\rm d}E^{\prime}\int_{0}^{\infty}{\rm d}E_{e}\int_{0}^{\infty}{\rm d}E_{h}\mathcal{D}_{e}(E_{e})\mathcal{A}^{L}_{\Gamma}(E^{\prime}-E_{e})$$ $$\displaystyle\times\mathcal{D}_{h}(E_{h})\mathcal{A}^{L}_{\Gamma}(\Delta E-E^{\prime}-E_{h})$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{\infty}{\rm d}E_{e}\int_{0}^{\infty}{\rm d}E_{h}\mathcal{D}_{e}(E_{e})\mathcal{D}_{h}(E_{h})$$ $$\displaystyle\qquad\qquad\times\mathcal{A}^{L}_{2\Gamma}(\Delta E-E_{e}-E_{h}).$$ In the last equation, we used a general property of the convolution of Lorentzian functions: $$\displaystyle\mathcal{A}^{L}_{\Gamma_{1}}\ast\mathcal{A}^{L}_{\Gamma_{2}}(E)$$ $$\displaystyle\equiv$$ $$\displaystyle\int_{-\infty}^{\infty}{\rm d}E^{\prime}\mathcal{A}^{L}_{\Gamma_{1}}(E^{\prime})\mathcal{A}^{L}_{\Gamma_{2}}(E-E^{\prime})$$ (95) $$\displaystyle=$$ $$\displaystyle\mathcal{A}^{L}_{\Gamma_{1}+\Gamma_{2}}(E).$$ The integration can also be performed by changing the integration variables to $E_{eh}\equiv E_{e}+E_{h}$ and $E_{h}$, which results in the following divergence: $$\displaystyle\int_{-\infty}^{\infty}\tilde{\mathcal{D}}_{e}(E^{\prime})\tilde{\mathcal{D}}_{h}(\Delta E-E^{\prime}){\rm d}E^{\prime}$$ (96) $$\displaystyle\propto$$ $$\displaystyle\int_{0}^{\infty}{\rm d}E_{eh}\int_{0}^{E_{eh}}{\rm d}E_{h}\sqrt{E_{h}(E_{eh}-E_{h})}\mathcal{A}^{L}_{2\Gamma}(\Delta E-E_{eh})$$ $$\displaystyle=$$ $$\displaystyle\frac{\pi}{8}\int_{0}^{\infty}E_{eh}^{2}\mathcal{A}^{L}_{2\Gamma}(\Delta E-E_{eh}){\rm d}E_{eh}$$ $$\displaystyle\propto$$ $$\displaystyle\int_{0}^{\infty}\frac{E_{eh}^{2}}{E_{eh}^{2}+(2\Gamma)^{2}}{\rm d}E_{eh}=+\infty.$$ This divergence is a clear artifact that derives from the simplification of our
model of the semiconductor carriers. We adopted an effective two-band model with infinite bandwidths for the carriers. Because of these infinite bandwidths, there is no upper bound on the integration domain over $E_{eh}$ in Eq. (96). This prevents us from defining the probability measure $p^{e(h)}(E_{e(h)})$, and our formulation thus fails when the Lorentzian lineshape function is combined with infinite carrier bandwidths. Of course, this problem can be avoided by introducing an effective bandwidth parameter, which should exist naturally. However, introducing an additional bandwidth parameter would reduce the benefits of our simple two-band model with infinite bandwidths, and the final results might also depend on this parameter, whose definition remains unclear. For these reasons, we have used the Gaussian lineshape for the spectral function, which does not suffer from this artifact, and have thus retained the full benefit of our simplified model. In realistic devices, semiconductor absorbers are not ideally prepared: structural imperfections cause statistical fluctuations. As a result, their spectral lineshapes will be more or less Gaussian-like, at least in the tails (the full lineshapes could also be Voigt functions). Even when the Lorentzian lineshape is used in the infinite bandwidth model, this divergence problem does not occur for direct gap semiconductors. Despite the above discussion on possible changes in the formulation, we still expect that the choice of lineshape function will not strongly affect the results or the main conclusions. II.6 Macroscopic properties: current, conversion efficiency, and energy flow The rate equations used to determine the microscopic carrier distribution functions were derived in the preceding subsections (Sec. II.1 to Sec. II.5).
In this section, we briefly summarize the method used to calculate macroscopic quantities such as the output charge current $I$, the conversion efficiency $\eta$, and the energy flows into the different channels, denoted by $J_{X}$ ($X=$ sun, T, rad, work, $Q_{\rm in}$, and $Q_{\rm out}$; defined below and also shown in Fig. 7), in terms of the distribution functions that are to be obtained. The charge current $I$ is defined as $$\displaystyle I$$ $$\displaystyle=$$ $$\displaystyle|e|\sum_{k}J_{E_{e}(k)}^{e,{\rm out}}=|e|\sum_{k}J_{E_{h}(k)}^{h,{\rm out}},$$ (97) $$\displaystyle=$$ $$\displaystyle|e|\int_{0}^{\infty}\mathcal{V}\mathcal{D}_{e}(E_{e})J_{E_{e}}^{e,{\rm out}}{\rm d}E_{e},$$ which represents the total charge per unit time (where $|e|$ is the elementary charge) output by an absorber of volume $\mathcal{V}(=\mathcal{A}w)$ to an electrode. Here, $J_{E_{e}}^{e,{\rm out}}$ and $J_{E_{h}}^{h,{\rm out}}$ are functions of the distribution functions, given by Eq. (51) and Eq. (54) without the broadening effect and by Eq. (86) and Eq. (87) with the broadening effect, respectively. The second equality in Eq. (97) follows from the steady-state condition on the total number of charges in the absorber (and will appear again as Eq. (117) in Sec. III).
Given that the conduction electrons are extracted to the electrode with chemical potential $\mu_{c}$ and the valence holes are extracted to the electrode with potential $\mu_{v}$ ($-\mu_{v}$ for holes), the total output energy per unit time is given by $$\displaystyle\mathcal{P}_{\rm work}=(\mu_{c}-\mu_{v})I=VI,$$ (98) which defines the conversion efficiency as $$\displaystyle\eta(\%)=\frac{\mathcal{P}_{\rm work}}{\mathcal{A}\times\int_{0}^{\infty}Ej^{\rm sun}(E){\rm d}E}\times 100.$$ (99) This expression indicates that the conversion efficiency depends on the absorber thickness $w$ because $I$ and $\mathcal{P}_{\rm work}$ are both proportional to $\mathcal{V}$ from Eq. (97). The microscopic carrier distribution function offers more detailed information about the energy balance of the solar cells. This information is very helpful in understanding what occurs inside solar cells during the energy conversion processes, which could also be addressed experimentally Chen . The energy current $J_{X}$ (per unit time per unit area) that flows into each channel $X$ can be evaluated separately using the steady-state solution as follows (see also Fig. 7).
The energy current per unit time per unit area that is carried by the photons that illuminate the solar cell is $$\displaystyle J_{\rm sun}=\int_{0}^{\infty}Ej^{\rm sun}(E){\rm d}E.$$ (100) The energy flow from sunlight is shared by different flow channels (five channels are present in this model), where $$\displaystyle J_{\rm sun}=J_{\rm T}+J_{\rm rad}+J_{\rm work}+J_{Q_{\rm out}}+J_{Q_{\rm in}}.$$ (101) The proportion of the solar energy, $J_{\rm T}$, per unit time per unit area, that is transmitted is given by $$\displaystyle J_{\rm T}=\int^{E_{g}}_{0}Ej^{\rm sun}(E){\rm d}E.$$ (102) Here, the absorption spectrum of the absorber is approximated simply using the step function $\alpha(E)=\alpha_{0}\theta(E-E_{g})$ under the assumption of perfect absorption $\alpha_{0}w\gg 1$ (or a large effective light path $l_{\rm eff}\gg 1/\alpha_{0}$ with light trapping textures Yablonovitch ). The energy that is radiated outside the solar cell per unit time per unit area is then given by $$\displaystyle J_{\rm rad}$$ $$\displaystyle=$$ $$\displaystyle\frac{c}{2\pi^{2}}\left(\frac{1}{\hbar c}\right)^{3}\int_{0}^{\infty}E^{3}\langle n^{e}n^{h}\rangle_{\Delta E}{\rm d}E,$$ (103) where $\Delta E\equiv E-E_{g}$, and $1-n^{e}_{E_{e}}-n^{h}_{E_{h}}\approx 1$ is used. When the broadening effect is taken into account, $\langle n^{e}n^{h}\rangle_{\Delta E}$ in the integrand is given by $$\displaystyle\langle n^{e}n^{h}\rangle_{\Delta E}=\int_{0}^{\infty}\int_{0}^{\infty}p_{E_{e},E_{h}}^{\Delta E}n^{e}_{E_{e}}n^{h}_{E_{h}}{\rm d}E_{e}{\rm d}E_{h},$$ (104) where $p_{E_{e},E_{h}}^{\Delta E}$ is a weight function that is defined by $$\displaystyle p_{E_{e},E_{h}}^{\Delta E}$$ $$\displaystyle\equiv$$ $$\displaystyle\frac{\mathcal{D}_{e}(E_{e})\mathcal{D}_{h}(E_{h})\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{e}-E_{h})}{\iint\mathcal{D}_{e}(E_{e})\mathcal{D}_{h}(E_{h})\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{e}-E_{h}){\rm d}E_{e}{\rm d}E_{h}}$$ (105) for indirect gap semiconductors.
For direct gap semiconductors, $\langle n^{e}n^{h}\rangle_{\Delta E}$ is given by $$\displaystyle\langle n^{e}n^{h}\rangle_{\Delta E}=\int_{0}^{\infty}p_{E_{eh}}^{\Delta E}n^{e}_{E_{e}}n^{h}_{E_{h}}{\rm d}E_{eh},$$ (106) with a weight function that is defined by $$\displaystyle p_{E_{eh}}^{\Delta E}\equiv\frac{\sqrt{E_{eh}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E_{eh})}{\int_{0}^{\infty}\sqrt{E^{\prime}_{eh}}\mathcal{A}_{\sqrt{2}\Gamma}(\Delta E-E^{\prime}_{eh}){\rm d}E^{\prime}_{eh}},$$ (107) where $\Delta E\equiv E-E_{g}$, $m_{e}^{\ast}E_{e}=m_{h}^{\ast}E_{h}$, and $E_{eh}\equiv E_{e}+E_{h}$. In a manner similar to the discussion in Sec. II.5, the expressions above can be used in the limit $\Gamma=+0$ by changing the lineshape functions in the integrands into delta functions. $J_{\rm work}$ is the energy that is extracted as work from the solar cell (per unit time per unit area) and transferred via the charge current, which we evaluated earlier, and is given by $$\displaystyle J_{\rm work}$$ $$\displaystyle=$$ $$\displaystyle\mathcal{P}_{\rm work}/\mathcal{A}$$ (108) $$\displaystyle=$$ $$\displaystyle w|e|V\int_{0}^{\infty}\mathcal{D}_{e}(E_{e})J_{E_{e}}^{e,{\rm out}}{\rm d}E_{e}.$$ $J_{Q_{\rm out}}$ is the heat that is carried by the charge current and lost in the electrodes outside the cell per unit time per unit area. It can be calculated as $$\displaystyle J_{Q_{\rm out}}$$ (109) $$\displaystyle=$$ $$\displaystyle\frac{w}{\tau_{\rm out}}\iint\mathcal{D}_{E_{e}}^{e}E\mathcal{A}_{\Gamma}(\Delta E-E_{e})\left(n^{e}_{E_{e}}-f^{F}_{\mu_{c},\beta_{c}}(E)\right){\rm d}E_{e}{\rm d}E$$ $$\displaystyle+$$ $$\displaystyle\frac{w}{\tau_{\rm out}}\iint\mathcal{D}_{E_{h}}^{h}E\mathcal{A}_{\Gamma}(E-E_{h})\left(n^{h}_{E_{h}}-f^{F}_{-\mu_{v},\beta_{c}}(E)\right){\rm d}E_{h}{\rm d}E$$ $$\displaystyle-$$ $$\displaystyle J_{\rm work},$$ where $\Delta E\equiv E-E_{g}$, and the integration runs over the ranges $-\infty<E<\infty$ and $0<E_{e(h)}<\infty$.
The remaining energy loss channel in our model is the absorber thermalization loss, which is deposited in the phonon reservoir (Bath 3 in Fig. 1). $J_{Q_{\rm in}}$ is this heat current, lost to the phonon bath inside the cell, per unit time per unit area. It is convenient to determine $J_{Q_{\rm in}}$ using the energy conservation law given in Eq. (101), $$\displaystyle J_{Q_{\rm in}}=J_{\rm sun}-J_{\rm T}-J_{\rm rad}-J_{\rm work}-J_{Q_{\rm out}},$$ (110) because all other terms on the right-hand side have been given above. It can also be evaluated directly using the phonon scattering rate in the rate equation; however, this complicates the calculation. III Basic properties and classification of the parameter regime In the previous section, the rate equations that determine the microscopic distribution functions were derived based on our nonequilibrium model. In this section, before describing the numerical simulation, we give the basic properties related to the particle number conservation laws that must be preserved in any steady state of the equations. This information can then be used as a first screening to validate the numerical results obtained. In the latter part of this section, we determine the parameter regimes in which the solar cells work under or outside the detailed balance condition. The parameter regime can be classified from the equations themselves without fully solving them. III.1 Conservation of the total number of carriers within the band during phonon scattering processes The rate equations in Eq. (1) and Eq. (2) represent the net rates for the numbers of particles coming into the microscopic states with energies $E_{e}$ and $E_{h}$. Therefore, as shown in the previous section (Sec. II.6), the summation of ${\rm d}n^{e(h)}_{E_{e(h)}}/{\rm d}t$ over all microscopic states gives the total net number of electrons (or holes) coming into the absorber (or the cell) per unit time.
In the steady state, the net total rate vanishes as a result of the balance between the four terms on the right-hand side of the rate equation, which are related to generation by sunlight absorption, the radiative recombination loss, extraction to the electrodes, and phonon scattering. Among these four terms, the last one, which leads to intraband carrier thermalization, should not change the total number of carriers contained within the band. This basic property must be preserved in our model formulation, which can be checked using the general expression given in Eq. (II.5) (with the broadening effect) as shown below. The net change in the total number of electrons (holes) in the absorber due to the phonon scattering processes per unit time is given by $$\displaystyle\mathcal{V}\int_{0}^{\infty}\mathcal{D}_{e(h)}(E_{e(h)})\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}{\rm d}E_{e(h)}.$$ (111) By inserting Eq. (II.5) into Eq. (111), we find that the integrand of the $\epsilon$-integration is proportional to $$\displaystyle\int_{0}^{\infty}{\rm d}E_{e(h)}\int^{E_{+}}_{E_{-}}{\rm d}E^{\prime}\left(I^{\epsilon}_{E_{e(h)},E^{\prime}}-I^{\epsilon}_{E^{\prime},E_{e(h)}}\right),$$ (112) where $$\displaystyle I^{\epsilon}_{E_{e(h)},E^{\prime}}$$ $$\displaystyle\equiv$$ $$\displaystyle\mathcal{A}_{\sqrt{2}\Gamma}(E_{e(h)}-E^{\prime}-\epsilon)\biggl{(}n^{e(h)}_{E_{e(h)}}(1-n_{E^{\prime}}^{e(h)})$$ (113) $$\displaystyle\times\bigl{(}1+f^{B}_{0,\beta_{\rm ph}}({\epsilon})\bigr{)}-n_{E^{\prime}}^{e(h)}(1-n^{e(h)}_{E_{e(h)}})f^{B}_{0,\beta_{\rm ph}}({\epsilon})\biggr{)}.$$ From the definition of $E_{\pm}$ given in Eq.
(92), the integration domain, $E_{-}<E^{\prime}<E_{+}$, is equivalent to $$\displaystyle E_{e(h)}<\left(\sqrt{E^{\prime}}+\epsilon\sqrt{\frac{1}{2m^{\ast}_{e(h)}v_{A}^{2}}}\right)^{2},$$ (114) $$\displaystyle E_{e(h)}>\left(\sqrt{E^{\prime}}-\epsilon\sqrt{\frac{1}{2m^{\ast}_{e(h)}v_{A}^{2}}}\right)^{2},$$ (115) which are both symmetric under the interchange $E^{\prime}\leftrightarrow E_{e(h)}$. In this way, we find that the two contributions in Eq. (112) cancel exactly after integration. This ensures that the total number of carriers within the band is preserved during the carrier thermalization process. III.2 Conservation of total charge and charge neutrality The rate equation for the net total charge in the absorber, $Q_{\rm tot}(\equiv|e|\sum_{k}(n^{h}_{E_{h}(k)}-n^{e}_{E_{e}(k)}))$, can also be obtained from the microscopic rate equations. Summing over all states and taking the difference between the electron and hole contributions gives $$\displaystyle\frac{d}{dt}Q_{\rm tot}$$ $$\displaystyle=$$ $$\displaystyle-|e|\mathcal{V}\int_{0}^{\infty}\mathcal{D}_{e}(E_{e})(J^{e,{\rm sun}}_{E_{e}}-J^{e,{\rm rad}}_{E_{e}}-J^{e,{\rm out}}_{E_{e}}){\rm d}E_{e}$$ $$\displaystyle+|e|\mathcal{V}\int_{0}^{\infty}\mathcal{D}_{h}(E_{h})(J^{h,{\rm sun}}_{E_{h}}-J^{h,{\rm rad}}_{E_{h}}-J^{h,{\rm out}}_{E_{h}}){\rm d}E_{h},$$ where the phonon scattering terms are absent, in line with the discussion in Sec. III.1. Additionally, the terms for photon absorption and recombination radiation on the right-hand side should all cancel. This should be true because each single photon absorption or emission process generates or removes one electron-hole pair without changing the total charge. This statement can also be verified easily in our formulation using the general expressions for the generation rate (Eq. (7) and Eq. (8) for direct gap semiconductors, and Eq. (10) and Eq.
(11) for indirect gap semiconductors) and the radiative recombination rate (Eq. (84) and Eq. (85) for direct gap semiconductors, and Eq. (82) and Eq. (83) for indirect gap semiconductors). As a result, Eq. (III.2) can be rewritten as $$\displaystyle\frac{d}{dt}Q_{\rm tot}$$ $$\displaystyle=$$ $$\displaystyle|e|\mathcal{V}\Bigl{(}\int_{0}^{\infty}\mathcal{D}_{e}(E_{e})J^{e,{\rm out}}_{E_{e}}{\rm d}E_{e}$$ (117) $$\displaystyle-\int_{0}^{\infty}\mathcal{D}_{h}(E_{h})J^{h,{\rm out}}_{E_{h}}{\rm d}E_{h}\Bigr{)}=0,$$ where the steady state has been assumed in the second equality. The above equation, which shows that the net extraction rates for the electrons and holes are balanced in the steady state, was used in Eq. (97) in the previous section. Additionally, using Eq. (86) and Eq. (87) along with the definition of the total charge, the total net charge present in the absorber is determined from Eq. (117) under steady-state conditions and is given by $$\displaystyle\frac{Q_{\rm tot}}{\tau_{\rm out}}$$ $$\displaystyle=$$ $$\displaystyle\frac{|e|\mathcal{V}}{\tau_{\rm out}}\Bigl{(}\int_{0}^{\infty}\mathcal{D}_{h}(E_{h})\tilde{f}^{F}_{-\mu_{v},\beta_{c}}(E_{h}){\rm d}E_{h}$$ (118) $$\displaystyle-\int_{0}^{\infty}\mathcal{D}_{e}(E_{e})\tilde{f}^{F}_{\mu_{c}-E_{g},\beta_{c}}(E_{e}){\rm d}E_{e}\Bigr{)}.$$ From this result, we see that the total charge present in the absorber is determined solely by the cell distribution functions (apart from the broadening effect). In the numerical simulations presented here, we focus only on the special case of charge neutrality, $Q_{\rm tot}=0$.
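The interchange symmetry of the integration domain used in Sec. III.1 (Eqs. (114) and (115)) can also be spot-checked numerically. In this Python sketch, the constant `a` stands for $\epsilon\sqrt{1/(2m^{\ast}_{e(h)}v_{A}^{2})}$ with an illustrative value; energies are in arbitrary units:

```python
import math
import random

# Spot-check: the condition E_- < E' < E_+ from Eq. (92), i.e.
# (sqrt(E) - a)^2 < E' < (sqrt(E) + a)^2, is symmetric under E <-> E'.
a = 0.37  # illustrative stand-in for epsilon * sqrt(1/(2 m* v_A^2))

def in_interval(E, Ep):
    return (math.sqrt(E) - a) ** 2 < Ep < (math.sqrt(E) + a) ** 2

random.seed(0)
for _ in range(10000):
    E, Ep = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    assert in_interval(E, Ep) == in_interval(Ep, E)
```

The symmetry holds because the condition is equivalent to $|\sqrt{E}-\sqrt{E^{\prime}}|<a$ together with $\sqrt{E}+\sqrt{E^{\prime}}>a$, both of which are manifestly symmetric; this is what guarantees the pairwise cancellation in Eq. (112).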
When the condition $n^{e(h)}_{E_{e(h)}}\ll 1$ is satisfied at low carrier densities, the charge neutrality condition relates the chemical potentials in the electrodes, $\mu_{c}$ and $\mu_{v}$, as follows: $$\displaystyle\frac{\mu_{c}+\mu_{v}}{2}=\frac{E_{g}}{2}+\frac{\beta_{c}^{-1}}{2}\ln{\frac{d_{h}}{d_{e}}}.$$ (119) Therefore, using the voltage between the electrodes ($|e|V\equiv\mu_{c}-\mu_{v}$), the chemical potentials can be written as $$\displaystyle\mu_{c}$$ $$\displaystyle=$$ $$\displaystyle\left(E_{g}+\beta_{c}^{-1}\ln(d_{h}/d_{e})+|e|V\right)/2,$$ (120) $$\displaystyle\mu_{v}$$ $$\displaystyle=$$ $$\displaystyle\left(E_{g}+\beta_{c}^{-1}\ln(d_{h}/d_{e})-|e|V\right)/2,$$ (121) for charge neutrality to be realized in the absorber. III.3 Classification of the parameter regime In Sec. II, we saw that the rate equation has two characteristic times: $\tau_{\rm out}$ for carrier extraction (from Eq. (52) and Eq. (55)) and $\tau_{\rm ph}^{e(h)}$ for carrier thermalization (from Eq. (67)), which are decisive for the solar cell properties. As noted in Sec. I, the SQ theory assumes that these two characteristic times are fast enough for the detailed balance analysis to be applicable, whereas the quantitative question of how short these times must be remains open. Here, using the rate equations derived with the nonequilibrium approach, we address this issue. In the following discussion, we consider $\tau_{\rm out}$ to be a free parameter, while $\tau_{\rm ph}^{e(h)}$ is a given material parameter of roughly picosecond order (although it may be modified to a certain degree by introducing phononic nanostructures Yu ; Hopkins ). We therefore focus on the parameter space spanned by $\tau_{\rm out}$ (in a manner similar to the parameterization using the conductance in hot carrier solar cells Takeda2 ). As shown in Fig.
8, we divide the parameter space given by $0<\tau_{\rm out}<+\infty$ into three different regimes. In regime III, i.e., the fast extraction regime where $\tau_{\rm out}<\tau_{\rm ph}^{e(h)}$, solar cell operation is likely to differ from that of conventional solar cells because carrier extraction before thermalization with the lattice becomes possible Nozik ; Wurfel2 . In real devices, fast extraction on the ps scale requires an ultrathin absorber. With an average thermal velocity at room temperature estimated as $v_{e}=\sqrt{3k_{B}T_{c}/m_{e}^{\ast}}\sim 10^{5}$ m/s, the carriers can travel at most 0.1 $\mu$m within 1 ps. Because the extraction time cannot be shorter than the travel time from the center of the absorber (where photogeneration occurs) to its surface (where extraction occurs), a $\tau_{\rm out}$ of less than 1 ps requires absorbers of sub-$\mu$m thickness in planar solar cells. This situation can hardly be realized in Si solar cells with their low absorption coefficients (a thickness of 10 $\mu$m would be required even with the light trapping for perfect absorption that was assumed here). To achieve fast extraction in Si solar cells, a meander-like structure could be used (see Sec. 7.4 in Wurfel text ). However, for direct gap semiconductors with higher absorption coefficients, sub-$\mu$m-scale absorbers could possibly be used to provide efficient solar cells. This is why hot carrier solar cells should use ultrathin nanostructures with strongly absorbing materials, such as GaAs, as described in the next section. While we acknowledge these practical issues, we nevertheless proceed with further discussion and simulation of the fast extraction regime III, given that this scenario could even be realized in Si solar cells, to highlight the general properties of solar cells operating in this regime. The SQ theory assumes fast carrier extraction, but the condition $\tau_{\rm out}<\tau_{\rm ph}^{e(h)}$ is not required.
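For reference, the charge-neutrality relations in Eqs. (119)-(121) can be illustrated with a short Python sketch; the values of $E_{g}$, $V$, $T_{c}$, and the ratio $d_{h}/d_{e}$ below are placeholders, not taken from the paper's simulations:

```python
import math

# Illustrative check of Eqs. (120)-(121): the chemical potentials satisfy
# the neutrality relation Eq. (119) and mu_c - mu_v = |e|V by construction.
kB = 8.617333e-5                  # Boltzmann constant, eV/K
E_g, V, T_c = 1.12, 0.78, 300.0   # band gap (eV), bias (V), cell temperature (K); placeholders
d_ratio = 2.0                     # d_h / d_e, placeholder
beta_inv = kB * T_c               # beta_c^{-1} in eV

mu_c = (E_g + beta_inv * math.log(d_ratio) + V) / 2.0
mu_v = (E_g + beta_inv * math.log(d_ratio) - V) / 2.0

# Eq. (119): (mu_c + mu_v)/2 = E_g/2 + (beta_c^{-1}/2) ln(d_h/d_e)
assert abs((mu_c + mu_v) / 2.0
           - (E_g / 2.0 + 0.5 * beta_inv * math.log(d_ratio))) < 1e-12
# Definition of the bias: mu_c - mu_v = |e|V
assert abs((mu_c - mu_v) - V) < 1e-12
```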
An upper limit on the extraction time, $\tau_{\rm out}^{{\rm ul}\ast}(>\tau_{\rm ph}^{e(h)})$, should therefore exist, above which the SQ theory fails to predict the conversion efficiency limit. This boundary separates the normal extraction regime (II) from the slow extraction regime (I). In regime II, the SQ theory can be used to predict the solar cell properties. The boundary time $\tau_{\rm out}^{\rm ul\ast}$ between regimes I and II, which depends on device parameters such as the material and thickness of the absorber, can be evaluated as shown in the remainder of this section. In regimes I and II, we can safely assume that the thermalization (intraband carrier cooling) is completed within the absorber because the phonon scattering rate dominates the other terms in the rate equations, Eq. (1) and Eq. (2). In this case, the carrier distribution function in the cell is given by the Fermi-Dirac distribution function in Eq. (71), while the chemical potentials in the cell can differ from those in the electrodes, i.e., $\mu^{\rm cell}_{e(h)}\neq\mu_{e(h)}$. Once the function in Eq. (71) has been assumed, we can insert $\left.\frac{d}{dt}n^{e(h)}_{E_{e(h)}}\right|_{\rm phonon}=0$ into the rate equations in Eq. (1) and Eq. (2), as we observed earlier in Sec. II.5. We should also reconsider the meaning of the relation $\mu^{\rm cell}_{e(h)}\neq\mu_{e(h)}$ here: the carrier distribution functions in the cell differ from those in the electrodes, whereas they are assumed to be equal in the SQ theory when calculating the radiative recombination loss.
Using the difference $\Delta n^{e(h)}_{E_{e(h)}}\bigl{(}\equiv n^{e(h)}_{E_{e(h)}}-f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})\bigr{)}$, the microscopic rate equations for the steady states can be rewritten as $$\displaystyle J^{e(h),{\rm out}}_{E_{e(h)}}=\frac{\Delta n^{e(h)}_{E_{e(h)}}}{\tau_{\rm out}}=J^{e(h),{\rm sun}}_{E_{e(h)}}-J^{e(h),{\rm rad}}_{E_{e(h)}}.$$ (122) Therefore, the difference is given by $$\displaystyle\Delta n^{e(h)}_{E_{e(h)}}$$ $$\displaystyle=$$ $$\displaystyle\tau_{\rm out}\left(J^{e(h),{\rm sun}}_{E_{e(h)}}-J^{e(h),{\rm rad}}_{E_{e(h)}}\right)$$ (123) $$\displaystyle\approx$$ $$\displaystyle\tau_{\rm out}J^{e(h),{\rm sun}}_{E_{e(h)}},$$ where, in the second line, we have used the approximation $J^{e(h),{\rm sun}}_{E_{e(h)}}\gg J^{e(h),{\rm rad}}_{E_{e(h)}}$, which is usually fulfilled at the maximum operating power point $V=V_{\rm mp}$ (0.779 V for Si and 1.052 V for GaAs at 1 sun when using our 6000 K blackbody Sun), the operating point on which we focus for the rest of this subsection. Now we determine the condition under which the SQ conversion efficiency limit can be modified. When $\tau_{\rm out}$ increases within regime II and gradually approaches regime I, we can fix the maximum operating power point because the maximum power condition given by the SQ theory remains unchanged at least inside regime II. The SQ calculation using $n^{e(h)}_{E_{e(h)}}=f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})$ is justified as long as $\Delta n^{e(h)}_{E_{e(h)}}\ll f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})$ is satisfied at $V=V_{\rm mp}$, which can be read as $$\displaystyle\tau_{\rm out}\ll\frac{f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})}{J^{e(h),{\rm sun}}_{E_{e(h)}}}\equiv\tau_{\rm out}^{\rm ul}(E_{e(h)}).$$ (124) When $\tau_{\rm out}>\tau_{\rm out}^{\rm ul}(E_{e(h)})$, the conversion efficiency limit can then differ from the limit given by the SQ theory. A sufficient condition for the SQ calculation to be justified is that Eq.
(124) is fulfilled over the energy range in which there is a non-negligible carrier distribution making a relevant contribution to the radiative recombination; this range is given approximately by $0<E_{e(h)}<k_{B}T_{c}$. (Given that $f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})$ decreases exponentially with $E_{e(h)}$, it is almost impossible to satisfy this condition at higher energies.) In this way, the upper boundary of parameter regime II can in principle be evaluated using $\tau_{\rm out}^{\rm ul}(E_{e(h)})$ in Eq. (124), e.g., by selecting $E_{e(h)}=k_{B}T_{c}$ as a typical carrier energy scale. In Fig. 9, we plot the boundary time $\tau_{\rm out}^{{\rm ul}\ast}$, defined as 10 percent of $\tau_{\rm out}^{\rm ul}(E_{e}=k_{B}T_{c})$, as a function of absorber thickness $w$ for Si and GaAs solar cells. Fig. 9(a) clearly shows that $\tau_{\rm out}^{{\rm ul}\ast}$ depends on the material parameters (effective mass and indirect/direct gap) and depends linearly on $w$. For 100-$\mu$m-thick Si solar cells, $\tau_{\rm out}^{{\rm ul}\ast}$ is estimated to be on the sub-ms scale. Additionally, $\tau_{\rm out}^{{\rm ul}\ast}$ decreases in concentrator solar cells and is plotted as a function of the concentration ratio $\mathcal{CR}$ in Fig. 9(b); it is roughly proportional to $1/\sqrt{\mathcal{CR}}$. The physical meaning of this condition is not yet transparent, since $\tau_{\rm out}^{\rm ul}(E_{e(h)})$ in Eq. (124) depends on the energies of the microscopic carrier states. The meaning of the timescale becomes clearer when the condition $\Delta n^{e(h)}_{E_{e(h)}}\ll f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)})$ is summed over all the microscopic states.
This can be read as $$\displaystyle\tau_{\rm out}$$ $$\displaystyle\ll$$ $$\displaystyle\frac{\mathcal{V}\int\mathcal{D}_{e(h)}(E_{e(h)})f^{F}_{\mu_{e(h)},\beta_{c}}(E_{e(h)}){\rm d}E_{e(h)}}{\mathcal{V}\int\mathcal{D}_{e(h)}(E_{e(h)})J^{e(h),{\rm sun}}_{E_{e(h)}}{\rm d}E_{e(h)}}$$ (125) $$\displaystyle\approx$$ $$\displaystyle\frac{N_{c}}{I_{\rm sc}^{\rm max}/|e|}\left(=\frac{n_{c}}{i_{\rm sc}^{\rm max}/|e|}\times w\right),$$ where $N_{c}(=N_{e}=N_{h})$ is the total number of carriers and $n_{c}\equiv N_{c}/\mathcal{V}$ is the carrier density (so that $n_{c}w=N_{c}/\mathcal{A}$ is the areal density); the maximum available short-circuit current $I_{\rm sc}^{\rm max}$ is given by $$\displaystyle I_{\rm sc}^{\rm max}\equiv|e|\times\mathcal{A}\times\int_{E_{g}}^{\infty}j^{\rm sun}(E){\rm d}E,$$ (126) from which the corresponding current density per unit area is defined using $$\displaystyle i_{\rm sc}^{\rm max}=I_{\rm sc}^{\rm max}/\mathcal{A}.$$ (127) The meaning of the condition in Eq. (125), which is equivalent to $N_{c}/\tau_{\rm out}\gg I_{\rm sc}^{\rm max}/|e|$, is now much clearer. The number of carriers output from the absorber per unit time (which differs from the net current given by the outflow minus the inflow) must be greater than the number of photons absorbed per unit time, i.e., the number of carriers generated per unit time in the absorber. This condition may simply have been assumed in the SQ theory, although it is not given explicitly in their original paper SQ . The final equation in Eq. (125) clearly explains the $w$-linear dependence of $\tau_{\rm out}^{{\rm ul}\ast}$ shown in Fig. 9(a). Note here that $i_{\rm sc}^{\rm max}$ and the carrier density $n_{c}$ are independent of $w$ because $i_{\rm sc}^{\rm max}$ is given solely by the Sun illumination conditions and $n_{c}$ is given by the chemical potential that is fixed at the maximum operating power point, $V=V_{\rm mp}$.
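The summed condition in Eq. (125) lends itself to a quick order-of-magnitude estimate of $\tau_{\rm out}^{{\rm ul}\ast}\sim n_{c}w/(i_{\rm sc}^{\rm max}/|e|)$. The sketch below is illustrative only; the carrier density and short-circuit current density are our own assumed round numbers, not the parameters behind Fig. 9.

```python
# Back-of-envelope estimate of tau_out^ul* ~ N_c / (I_sc^max / |e|) from Eq. (125).
# All material numbers below are illustrative assumptions, not the paper's inputs.
E_CHARGE = 1.602e-19  # elementary charge, C

def tau_out_upper_limit(n_c_cm3, w_cm, i_sc_max_A_cm2):
    """tau_out^ul* = (n_c * w) / (i_sc^max / |e|), the w-linear form of Eq. (125)."""
    areal_density = n_c_cm3 * w_cm            # carriers per cm^2
    photon_rate = i_sc_max_A_cm2 / E_CHARGE   # absorbed photons per cm^2 per s
    return areal_density / photon_rate        # seconds

# Assumed values for a 100-um-thick Si absorber near V_mp:
n_c = 1e16      # carrier density, cm^-3 (assumption)
w = 100e-4      # absorber thickness, cm (100 um)
i_sc = 0.044    # max short-circuit current density, A/cm^2 (~44 mA/cm^2, assumption)

tau_ul = tau_out_upper_limit(n_c, w, i_sc)
print(f"tau_out^ul* ~ {tau_ul:.1e} s")
```

With these assumed inputs the estimate comes out on the $10^{-5}$-$10^{-4}$ s scale, consistent with the sub-ms boundary quoted for 100-$\mu$m Si, and doubling $w$ doubles the estimate, reproducing the $w$-linear dependence.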
The $\mathcal{CR}$-dependence of $\tau_{\rm out}^{{\rm ul}\ast}\propto 1/\sqrt{\mathcal{CR}}$ shown by Fig. 9(b) can also be understood based on the following analysis. The radiative loss current $I_{\rm rad}$ at the maximum operating power can be estimated from the SQ theory using the $I$-$V$ relation, where $I=I_{\rm sun}-I_{\rm rad}$ with $I_{\rm rad}=I_{\rm rad}^{0}e^{\beta_{c}|e|V}$ (see Sec. I). Using ${\rm d}(IV)/{\rm d}V=0$, we can easily show that $I_{\rm rad}\approx I_{\rm sun}/(\beta_{c}V_{\rm mp})$ as long as $\beta_{c}V_{\rm mp}\gg 1$, which is normally fulfilled. In general, $V_{\rm mp}$ increases with the concentration ratio $\mathcal{CR}$; at the same time, however, it can never exceed the absorber band gap $E_{g}$ for solar cell operation (below the lasing condition Bernard ), i.e., $V_{\rm mp}(\mathcal{CR}=1)\leq V_{\rm mp}(\mathcal{CR})\leq E_{g}$. In Si, for example, this means that $0.779\ {\rm V}\leq V_{\rm mp}(\mathcal{CR})\leq 1.12\ {\rm V}$, and correspondingly, $I_{\rm rad}\left(\approx I_{\rm sun}/(\beta_{c}V_{\rm mp})\right)$ ranges at most from 2.3 to 3.3 percent of $I_{\rm sun}$ over the whole $\mathcal{CR}$ range $(1\leq\mathcal{CR}\leq 46200\ ({\rm full}))$. Therefore, $I_{\rm rad}$ is almost proportional to $I_{\rm sun}(\propto\mathcal{CR})$. In addition, we notice that the radiative loss current in general is $I_{\rm rad}\propto n_{c}^{2}$, or equivalently, $n_{c}\propto I_{\rm rad}^{1/2}\propto\mathcal{CR}^{1/2}$. Because $i_{\rm sc}^{\rm max}$ is obviously $\propto I_{\rm sun}\propto\mathcal{CR}$, the second expression for $\tau_{\rm out}^{{\rm ul}\ast}$ in Eq. (125) shows that $\tau_{\rm out}^{{\rm ul}\ast}\propto n_{c}/i_{\rm sc}^{\rm max}\propto\mathcal{CR}^{-1/2}$.
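The intermediate step $I_{\rm rad}\approx I_{\rm sun}/(\beta_{c}V_{\rm mp})$ can be checked numerically by maximizing $P=IV$ for a toy exponential $I$-$V$ law. The sketch below is our own illustration: the prefactor is an assumption chosen so that the open-circuit voltage lands near a Si-like band gap, not a value taken from the paper.

```python
import numpy as np

# Toy check of I_rad(V_mp) ~ I_sun / (beta_c * V_mp) for an idealized
# I-V relation I = I_sun - I0*exp(beta*V) (cf. Sec. I). Parameters assumed.
kT = 0.0259                        # k_B*T_c in eV (~300 K)
beta = 1.0 / kT                    # 1/eV
I_sun = 1.0                        # photo-generated current (arbitrary units)
I0 = I_sun * np.exp(-1.12 / kT)    # prefactor set so V_oc ~ 1.12 V (assumption)

V = np.linspace(0.0, 1.1, 200001)          # bias grid, volts
I = I_sun - I0 * np.exp(beta * V)          # I = I_sun - I_rad
P = I * V
V_mp = V[np.argmax(P)]                     # maximum operating power point
I_rad = I0 * np.exp(beta * V_mp)           # radiative loss current at V_mp

print(f"V_mp = {V_mp:.3f} V")
print(f"I_rad/I_sun = {I_rad / I_sun:.4f}, 1/(beta*V_mp) = {1 / (beta * V_mp):.4f}")
```

Analytically, ${\rm d}(IV)/{\rm d}V=0$ gives $I_{\rm rad}=I_{\rm sun}/(1+\beta_{c}V_{\rm mp})$, so the two printed numbers agree to order $1/(\beta_{c}V_{\rm mp})$, i.e. to a few percent for $\beta_{c}V_{\rm mp}\sim 40$.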
IV Numerical simulation of device performance and conversion efficiency limit of a simple planar solar cell In this section, we present numerical simulation results for the device performance and the conversion efficiency limit of a simple planar solar cell. The results obtained here are summarized in Fig. 10, which shows the theoretical conversion efficiency limit as a function of carrier extraction time $\tau_{\rm out}$ for various values of concentration ratio $\mathcal{CR}$. As shown in Fig. 10, two curves (solid and dashed lines) are presented for a given $\mathcal{CR}$. The theoretical limit depends on how effectively the heat generated in the crystal lattices of the absorber and the electrodes can be delivered from one to the other via phonon transport. In the ideal limiting case, where phonon transport between the absorber and the electrodes is very fast in either direct or indirect ways, which we shall refer to as “heat-shared phonon reservoirs” (see Fig. 11(a)), we have higher maximum conversion efficiencies (shown as solid curves in Fig. 10) when the carrier extraction becomes fast. In the opposite limit, where the phonon transport between the absorber and the electrodes is so slow as to be negligible, which we refer to as “heat-isolated phonon reservoirs” (Fig. 11(b)), the maximum conversion efficiency is reduced when the carrier extraction becomes fast (see the dashed curves in Fig. 10). In realistic cases, the maximum conversion efficiency will lie somewhere between these two curves. The precise position will depend on the phonon transport properties between the absorber and the electrodes, which will be sensitive to the formation conditions of the contacts between the absorber and the electrodes and/or the phonon environment surrounding them. The differences between the two ideal cases are incorporated into the stability conditions for solar cell operation.
To enable solar cell operation with $J_{\rm work}>0$ to be self-sustained solely by solar illumination with $J_{\rm sun}>0$, the requirements are that $J_{Q_{\rm in}}+J_{Q_{\rm out}}>0$ for the heat-shared phonon reservoirs shown in Fig. 11(a), and that $J_{Q_{\rm in}}>0$ and $J_{Q_{\rm out}}>0$ for the heat-isolated phonon reservoirs shown in Fig. 11(b). Otherwise, an additional external heat supply to the absorber and/or the electrodes will be required to sustain solar cell operation, and the definition of the solar cell conversion efficiency then becomes less clear. In the following subsections, we discuss the solar cell properties and the underlying physics for these two limiting cases. IV.1 A limiting case: heat-shared phonon reservoirs We now present numerical simulation results for the limiting case with heat-shared phonon reservoirs. The solid lines in Fig. 10 show the maximum conversion efficiencies for various concentration ratios ($\mathcal{CR}=1,10,10^{2},10^{3},46200$) plotted as a function of the carrier extraction time $\tau_{\rm out}$. We find flat regions for $\tau_{\rm out}$ greater than 1 ps and less than $\tau_{\rm out}^{\rm ul\ast}$ (which depends on the concentration ratio $\mathcal{CR}$; Fig. 9(b)), in which the maximum conversion efficiency equals the SQ limit $\eta^{\rm SQ}$. As expected, we can conclude that the SQ theory applies in the normal extraction regime II. Outside this regime, in both the slow and fast extraction regimes denoted by I and III, respectively, we find significant reductions in $\eta_{\rm max}$ for the simple planar solar cell (Fig. 3). Because the solar cell properties are different in each regime, our definition of the parameter classifications in Fig. 8 seems reasonable. We must now consider how the difference can be understood. Fig.
12 shows an increase in the Fermi levels of the electrons and holes in the absorber when compared with those in the electrodes, denoted by $\Delta\mu_{e}\equiv\mu_{c}^{\rm cell}-\mu_{c}$ and $\Delta\mu_{h}\equiv(-\mu_{v}^{\rm cell})-(-\mu_{v})$, respectively, when evaluated using the carrier distribution functions at the maximum operating power point $V=V_{\rm mp}$ under 1 sun illumination ($\mathcal{CR}=1$). The increases in the Fermi levels found in regimes I and III clearly show that the carriers in the absorber form nonequilibrium populations. We checked numerically that similar results were also obtained for higher values of concentration ratio $\mathcal{CR}>1$. The increased Fermi levels in the different regimes originate from different mechanisms. In regime I, the carriers are accumulated in the absorber because of the slow extraction process, resulting in increases in the carrier density and Fermi level. In contrast, the increment in regime III is attributed to broadening of the microscopic states because $\Gamma=\hbar/\left(2\sqrt{\log{2}}\tau_{\rm out}\right)$ is no longer negligible. In this regime, the dominant term in the rate equation is the carrier extraction term $J^{e(h),{\rm out}}_{E_{e(h)}}$, which allows us to have $J^{e(h),{\rm out}}_{E_{e(h)}}\approx 0$. As a result, the carrier distribution functions in the absorber can be approximated using $n^{e}_{E_{e}}\approx\tilde{f}^{F}_{\mu_{c},\beta_{c}}(E_{g}+E_{e})$ and $n^{h}_{E_{h}}\approx\tilde{f}^{F}_{-\mu_{v},\beta_{c}}(E_{h})$ from Eq. (86) and Eq. (87). As long as the band filling effect remains negligible, i.e., when $n^{e}_{E_{e}}\ll 1$ and $n^{h}_{E_{h}}\ll 1$, the distribution functions from Eq. 
(88) can be approximated using $$\displaystyle n^{e}_{E_{e}}$$ $$\displaystyle\approx$$ $$\displaystyle\exp{\left(-\beta_{c}\left(E_{g}+E_{e}-(\mu_{c}+\beta_{c}\Gamma^{2}/4)\right)\right)},$$ (128) $$\displaystyle n^{h}_{E_{h}}$$ $$\displaystyle\approx$$ $$\displaystyle\exp{\left(-\beta_{c}\left(E_{h}-(-\mu_{v}+\beta_{c}\Gamma^{2}/4)\right)\right)}.$$ (129) Therefore, in this regime, the increases in the Fermi levels are fitted well using $$\displaystyle\Delta\mu_{e}$$ $$\displaystyle\equiv$$ $$\displaystyle\mu_{c}^{\rm cell}-\mu_{c}\approx\beta_{c}\Gamma^{2}/4,$$ (130) $$\displaystyle\Delta\mu_{h}$$ $$\displaystyle\equiv$$ $$\displaystyle(-\mu_{v}^{\rm cell})-(-\mu_{v})\approx\beta_{c}\Gamma^{2}/4.$$ (131) To see directly what happens in the nonequilibrium regimes, we simulated the energy balance at the maximum operating power point, as shown in Fig. 13, which provides further information on the energy loss mechanisms. In slow extraction regime I, the energy loss caused by radiative recombination ($J_{\rm rad}$) increases greatly with increasing $\tau_{\rm out}$, while the thermal loss ($J_{Q_{\rm in}}+J_{Q_{\rm out}}$) decreases. This can be understood easily because the slow extraction process increases the carrier density in the absorber, which thus enhances the radiative recombination rate. In contrast, in fast extraction regime III, the thermal loss increases while the radiation loss decreases as $\tau_{\rm out}$ decreases because the excess energy of the photo-generated carriers is transferred quickly and dissipated rapidly in the electrodes before thermalization in the absorber is complete. In this sense, $\Delta\mu_{e}$ in Eq. (130) and $\Delta\mu_{h}$ in Eq. (131) can be regarded as additional excess heat energy conveyed by the fast extraction of one carrier. As shown in Fig.
13, the requirement for stable solar cell operation, given by $J_{Q_{\rm in}}+J_{Q_{\rm out}}>0$ in this model with heat-shared phonon reservoirs, is fulfilled (and is also fulfilled for $0<V<V_{\rm oc}$, which is not shown here). The differences between these loss mechanisms are reflected in their current-voltage ($I$-$V$) characteristics in each regime, as shown in Fig. 14. In slow extraction regime I (Fig. 14(a)), the short-circuit current $I_{\rm sc}$ decreases with increasing $\tau_{\rm out}$ while the open-circuit voltage $V_{\rm oc}$ remains unchanged. In contrast, in fast extraction regime III (Fig. 14(b)), the open-circuit voltage $V_{\rm oc}$ decreases with decreasing $\tau_{\rm out}$ while the short-circuit current $I_{\rm sc}$ remains unchanged. These results agree with the consideration that the enhanced radiation loss in regime I accompanies a loss in the output charge current while the enhanced heat current in regime III does not accompany such a loss. We find no significant changes in the $I$-$V$ characteristics for $\tau_{\rm out}$ between 1 ps and $10^{-3.5}$ s, thus supporting the belief that the SQ theory works in normal extraction regime II. The reduction of $\eta_{\rm max}$ in fast extraction regime III in Fig. 10 may be confusing because the hot carrier solar cells that are targeted in this regime can surpass the SQ limit of $\eta^{\rm SQ}$ theoretically Nozik ; Wurfel2 ; Takeda1 ; Takeda2 ; Suchet . Therefore, we may expect an increase in $\eta_{\rm max}$ as $\tau_{\rm out}$ decreases in regime III. However, this discrepancy is not surprising and can be explained as follows. The differences in the results stem from the differences between the carrier extraction processes. A hot carrier solar cell uses a filter to select the energies of the carriers that pass from the absorber to the electrodes, which can reduce the heat dissipation (thermal losses) in the electrodes. 
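The size of the broadening-induced shift $\Delta\mu\approx\beta_{c}\Gamma^{2}/4$ from Eqs. (130) and (131), with $\Gamma=\hbar/\left(2\sqrt{\log{2}}\,\tau_{\rm out}\right)$, is easy to evaluate. The sketch below is our own illustration assuming a room-temperature $k_{B}T_{c}$; it shows how steeply the shift grows (as $1/\tau_{\rm out}^{2}$) once the extraction enters regime III.

```python
import math

# Evaluate the regime-III Fermi-level shift Delta_mu ~ beta_c * Gamma^2 / 4
# with Gamma = hbar / (2*sqrt(log 2)*tau_out) (Eqs. 130-131).
# Temperature and the sample times below are illustrative assumptions.
HBAR_EVS = 6.582e-16   # hbar in eV*s
kT = 0.0259            # k_B*T_c in eV (~300 K, assumption)

def delta_mu(tau_out_s):
    """Fermi-level increase (eV) from extraction-induced level broadening."""
    gamma = HBAR_EVS / (2.0 * math.sqrt(math.log(2)) * tau_out_s)  # broadening, eV
    return gamma ** 2 / (4.0 * kT)   # beta_c * Gamma^2 / 4

for tau in (1e-12, 1e-13, 1e-14):
    print(f"tau_out = {tau:.0e} s  ->  Delta_mu ~ {delta_mu(tau):.2e} eV")
```

Because $\Delta\mu\propto\tau_{\rm out}^{-2}$, shortening $\tau_{\rm out}$ by a factor of 10 raises the shift by a factor of 100: under these assumptions it is negligible at 1 ps but reaches the 10 meV scale around 10 fs.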
Tailored filtering can increase the output voltage while preventing large output current losses in hot carrier solar cells. However, our simple planar solar cell in Fig. 10 does not use such a filter, and the heat dissipation in the electrodes is therefore not controlled. As already shown in Fig. 13 and as will be shown in the next subsection (Fig. 15), heat losses, and especially the heat loss in the electrodes, increase in regime III. This results in a strong reduction in $\eta_{\rm max}$ in our case. These contrasting results support the supposition that, unless a tailored carrier extraction process such as one using energy selection is employed, it is difficult for fast carrier extraction before thermalization to be beneficial. IV.2 Another limiting case: heat-isolated phonon reservoirs The maximum conversion efficiency $\eta_{\rm max}$ also depends on the phonon environment that surrounds the solar cell. In this subsection, we focus on solar cell performance in the other limiting case, with the heat-isolated phonon reservoirs shown in Fig. 11(b). When exchange of phonons between the absorber and electrode crystals is prevented both directly and indirectly, i.e. when their phonon environments are isolated (e.g., Baths 3 and 4 in Fig. 11(b)), $\eta_{\rm max}$ is significantly reduced from the values obtained for heat-shared reservoirs at small $\tau_{\rm out}$. Similar results are obtained irrespective of the value of $\mathcal{CR}$. Here we explain how these differences occur. For the heat-isolated phonon reservoirs, the stability condition for the solar cell requires $J_{Q_{\rm in}}>0$ and $J_{Q_{\rm out}}>0$. Otherwise, an additional heat supply must be provided to the absorber or the electrodes, which is not suitable for our targeted device. To be more precise, let us consider a situation where $J_{Q_{\rm out}}<0$ (as found below) and ask what happens in the device. In this case, the electrodes require a heat supply from elsewhere for stable operation.
However, supply from the absorber lattice is prohibited in the heat-isolated model. With no heat supply, the lattice temperature of the electrodes, initially at the ambient temperature, will decrease. The cooled electrodes will then start to collect heat from the warmer ambient (e.g. the surrounding air); in other words, the electrodes will cool the ambient during solar cell operation. Semiconductor devices with a similar structure, which cool the ambient during operation, do exist in reality, as found in light-emitting diodes (LEDs) (namely, refrigerating LEDs Xue ; Tauc ; Dousmanis ; Berdahl ; Santhanam ). However, solar cells are devices intended for long-term stand-alone operation, which requires stability on a long time scale. In our model simulation with heat-isolated reservoirs, we consider that a device with $J_{Q_{\rm out}}<0$ inevitably requires an additional heat supply for stable operation. In Fig. 15, the energy balance is shown as a function of bias voltage $V$ for various carrier extraction times: (a) $\tau_{\rm out}=10^{-5}$, (b) $10^{-7}$, and (c) $10^{-11}$ s. To address the stability condition, we plot the contributions from the heat flow into the absorber $J_{Q_{\rm in}}$ and into the electrodes $J_{Q_{\rm out}}$ separately. As shown by the plots, $J_{Q_{\rm out}}$ decreases with $V$ and changes sign from positive to negative at a point $V=V_{\rm st}$, whereas $J_{Q_{\rm in}}$ always remains positive. We call $V_{\rm st}$ the stability boundary here because the stability condition is fulfilled for $V<V_{\rm st}$. When the carrier extraction is slow, e.g. when $\tau_{\rm out}=10^{-5}$ s in Fig. 15(a), $V_{\rm st}$ is almost the same as $V_{\rm oc}$. This means that the heat flow direction between the absorber and the electrodes is the same as the charge flow direction, which we consider to be the normal situation. However, when the extraction becomes faster, as shown in Fig. 15(b) and Fig.
15(c), we find a clear departure of $V_{\rm st}$ from $V_{\rm oc}$. In this case, there is a regime $V_{\rm st}<V<V_{\rm oc}$ in which $J_{\rm work}>0$ but $J_{Q_{\rm out}}<0$, i.e., where the heat flow and charge flow directions differ. In this regime, the solar cell generates electric power ($J_{\rm work}>0$), but an external heat supply of no less than $|J_{Q_{\rm out}}|$ must additionally be provided to the electrodes. Such a device would not be a solar cell operating in a self-sustained manner using solar illumination alone. We have therefore evaluated $\eta_{\rm max}$ as the maximum conversion efficiency under the stability condition $0<V<V_{\rm st}$. For example, in Fig. 15, $\eta_{\rm max}$ is 29.5 percent for $\tau_{\rm out}=10^{-5}$ and $10^{-7}$ s, whereas $\eta_{\rm max}$ is reduced to 25.0 percent for $\tau_{\rm out}=10^{-11}$ s. As already shown in Fig. 13, the sum $J_{Q_{\rm in}}+J_{Q_{\rm out}}$ is positive as long as $0<V<V_{\rm oc}$. Therefore, if the reservoirs are thermally linked, the heat depleted in the electrodes when $V_{\rm st}<V<V_{\rm oc}$ can be compensated by the heat inflow into the absorber, denoted by $J_{Q_{\rm in}}$. The ideal limit in such a case with a strong thermal link corresponds to the heat-shared phonon reservoirs that were discussed in Sec. IV.1. In real situations where the thermal link strength is moderate (i.e., heat depletion in the electrodes is partially compensated by the absorber via phonon transport), $\eta_{\rm max}$ will be located between the two ideal cases (the solid and dashed curves in Fig. 10). Because the difference between these two results is large, we propose that $\eta_{\rm max}$ is sensitive to the phonon transport between the absorber and the electrodes (whether direct or indirect) in the fast carrier extraction regime.
V Conclusion and future prospects We have formulated a nonequilibrium theory for solar cells that includes microscopic descriptions of the carrier thermalization and extraction processes. This theory extends the Shockley-Queisser theory to nonequilibrium parameter regimes where the detailed balance cannot be applied. The theory provides detailed information about the solar cells, including nonequilibrium carrier distribution functions with the chemical potentials that are higher than those in the electrodes, and the energy balance (including output work, radiation losses, transmission losses, and heat dissipation in the absorber and the electrodes), which will provide a precise understanding of the loss mechanisms in various solar cell types for a wide range of parameters. Using the developed theory, we defined three different regimes in terms of their carrier extraction time that were bounded using two time scales: the thermalization time, $\tau_{\rm ph}$, and $\tau_{\rm out}^{{\rm ul}\ast}$, at which the device characteristics should change. The upper boundary $\tau_{\rm out}^{{\rm ul}\ast}$ is dependent on the absorber material parameters, and is more strongly dependent on the system parameters, e.g., the absorber thickness and solar light concentration ratio. Device simulations of simple planar solar cells have shown that the SQ limit is applicable in the normal extraction regime, denoted by regime II ($\tau_{\rm ph}<\tau_{\rm out}<\tau_{\rm out}^{{\rm ul}\ast}$) in Fig. 8. Outside this regime (in regimes I and III), nonequilibrium carrier populations are found in the absorber and the maximum conversion efficiency is significantly reduced from the SQ limit. While the reductions in $\eta_{\rm max}$ were similar, the energy loss mechanisms in the fast and slow extraction regimes are different, which is clearly reflected in their $I$-$V$ characteristics. 
The reduction in $\eta_{\rm max}$ in the fast extraction regime also indicates that, unless a tailored carrier extraction procedure such as one based on energy selection is performed, it is difficult for fast carrier extraction before carrier thermalization to be beneficial. This strong claim is consistent with the fact that hot carrier solar cells require energy selection during their carrier extraction processes in addition to the fast extraction procedure. The nonequilibrium theory presented here covers only a few basic elements of solar cells and has only been tested on simple planar solar cells. The losses of photo-generated carriers in the absorber in this work are solely due to radiative recombination. Inclusion of nonradiative recombination may change the results, as will be discussed elsewhere. In the carrier extraction process, this paper does not consider energy losses at the junction. A case of this type using ohmic contacts will be studied in future work. Application of the proposed theory to other types of solar cells, e.g., organic solar cells, perovskite solar cells, multi-junction solar cells, intermediate-band solar cells, and hot carrier solar cells, will also be interesting. The most important and challenging aspect will be to provide feasible proposals for new solar cells using nonequilibrium features by which the SQ limit can be surpassed. Acknowledgements. The authors acknowledge Tatsuro Yuge, Makoto Yamaguchi, Yasuhiro Yamada, Katsuhiko Shirasawa, Tetsuo Fukuda, Katsuhito Tanahashi, Tomihisa Tachibana, and Yasuhiko Takeda for discussions. This work was supported by JSPS KAKENHI (15K20931) and the New Energy and Industrial Technology Development Organization (NEDO). References (1) W. Shockley, and H. J. Queisser, Detailed balance limit of efficiency of p-n junction solar cells, J. Appl. Phys. 32, 510 (1961). (2) A. Richter, M. Hermle, and S. W. Glunz, Reassessment of the Limiting Efficiency for Crystalline Silicon Solar Cells, IEEE J.
Photovoltaics 3, 1184 (2013). (3) R. T. Ross, and A. J. Nozik, Efficiency of hot-carrier solar energy converters, J. Appl. Phys. 53, 3813 (1982). (4) P. Würfel, Solar energy conversion with hot electrons from impact ionisation, Sol. Energy Mater. Sol. Cells 46, 43 (1997). (5) Y. Takeda, T. Motohiro, D. König, P. Aliberti, Y. Feng, S. Shrestha, and G. Conibeer, Practical Factors Lowering Conversion Efficiency of Hot Carrier Solar Cells, Appl. Phys. Exp. 3, 104301 (2010). (6) Y. Takeda, A. Ichiki, Y. Kusano, N. Sugimoto, and T. Motohiro, Resonant tunneling diodes as energy-selective contacts used in hot-carrier solar cells, J. Appl. Phys. 118, 124510 (2015). (7) D. Suchet, Z. Jehl, Y. Okada, and J.-F. Guillemoles, Influence of Hot-Carrier Extraction from a Photovoltaic Absorber: An Evaporative Approach, Phys. Rev. Applied 8, 034030 (2017). (8) M. A. Green, Y. Hishikawa, W. Warta, E. D. Dunlop, D. H. Levi, J. Hohl-Ebinger, and A. W. Ho-Baillie, Solar Cell Efficiency Tables (Version 50), Prog. Photovoltaics 25, 668 (2017). (9) P. C. Martin, and J. Schwinger, Theory of Many-Particle Systems. I, Phys. Rev. 115, 1342 (1959). (10) G. Baym, and Leo P. Kadanoff, Conservation Laws and Correlation Functions, Phys. Rev. 124, 287 (1961). (11) L. V. Keldysh, Diagram Technique for Nonequilibrium Processes, Sov. Phys. JETP 20, 1018 (1965). (12) J. Rammer, and H. Smith, Quantum field-theoretical methods in transport theory of metals, Rev. Mod. Phys. 58, 323 (1986). (13) S. Datta, Electronic Transport in Mesoscopic Systems (Cambridge University Press, Cambridge, 1995). (14) S. Steiger, R. G. Veprek, and B. Witzigmann, Electroluminescence from a Quantum-Well LED using NEGF, in Proceedings of the 13th International Workshop on Computational Electronics, IWCE 2009, Beijing, China (IEEE, 2009). (15) K. Henneberger, and H. Haug, Nonlinear optics and transport in laser-excited semiconductors, Phys. Rev. B 38, 9759 (1988). (16) S.-C. Lee, and A.
Wacker, Nonequilibrium Green’s function theory for transport and gain properties of quantum cascade structures, Phys. Rev. B 66, 245314 (2002). (17) M. H. Szymańska, J. Keeling, and P. B. Littlewood, Nonequilibrium Quantum Condensation in an Incoherently Pumped Dissipative System, Phys. Rev. Lett. 96, 230602 (2006). (18) M. Yamaguchi, K. Kamide, R. Nii, T. Ogawa, and Y. Yamamoto, Second Thresholds in BEC-BCS-Laser Crossover of Exciton-Polariton Systems, Phys. Rev. Lett. 111, 026404 (2013). (19) U. Aeberhard, Quantum-kinetic theory of photocurrent generation via direct and phonon-mediated optical transitions, Phys. Rev. B 84, 035454 (2011). (20) U. Aeberhard, Theory and simulation of quantum photovoltaic devices based on the nonequilibrium Green’s function formalism, J. Comp. Electronics 10, 394 (2011). (21) N. Cavassilas, F. Michelini, and M. Bescond, Modeling of nanoscale solar cells: The Green’s function formalism, J. Renewable and Sustainable Energy 6, 011203 (2014). (22) U. Aeberhard, and U. Rau, Microscopic Perspective on Photovoltaic Reciprocity in Ultrathin Solar Cells, Phys. Rev. Lett. 118, 247702 (2017). (23) U. Rau, Reciprocity relation between photovoltaic quantum efficiency and electroluminescent emission of solar cells, Phys. Rev. B 76, 085303 (2007). (24) T. Kirchartz, and U. Rau, Electroluminescence analysis of high efficiency Cu(In,Ga)Se2 solar cells, J. Appl. Phys. 102, 104510 (2007). (25) M. Bernardi, D. Vigil-Fowler, J. Lischner, J. B. Neaton, and S. G. Louie, Ab Initio Study of Hot Carriers in the First Picosecond after Sunlight Absorption in Silicon, Phys. Rev. Lett. 112, 257402 (2014). (26) P. Y. Yu, and M. Cardona, Fundamentals of Semiconductors: Physics and Material Properties, 3rd ed. (Springer, New York, 2005). (27) P. Würfel, The chemical potential of radiation, J. Phys. C: Solid State Phys. 15, 3967 (1982). (28) P. Würfel, and U. 
Würfel, Physics of Solar Cells: From Basic Principles to Advanced Concepts, 3rd ed. (Wiley-VCH, Weinheim, 2016). (29) H. J. Carmichael, Statistical Methods in Quantum Optics 1: Master Equations and Fokker-Planck Equations, 2nd ed. (Springer, 2003). (30) H. P. Breuer, and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002). (31) C. W. Tang, Two-layer organic photovoltaic cell, Appl. Phys. Lett. 48, 183 (1986). (32) G. Yu, J. Gao, J. C. Hummelen, F. Wudl, and A. J. Heeger, Polymer Photovoltaic Cells: Enhanced Efficiencies via a Network of Internal Donor-Acceptor Heterojunctions, Science 270, 1789 (1995). (33) B. O’Regan and M. Gratzel, A low-cost, high-efficiency solar cell based on dye-sensitized colloidal TiO${}_{2}$ films, Nature 353, 737 (1991). (34) A. Kojima, K. Teshima, Y. Shirai, and T. Miyasaka, Organometal Halide Perovskites as Visible-Light Sensitizers for Photovoltaic Cells, J. Am. Chem. Soc. 131, 6050 (2009). (35) J. Frenkel, On the Transformation of light into Heat in Solids. I, Phys. Rev. 37, 17 (1931). (36) M. Fox, Optical Properties of Solids, 2nd ed., Chap. 4 (Oxford University Press, 2010). (37) Y. Tanaka, Y. Noguchi, K. Oda, Y. Nakayama, J. Takahashi, H. Tokairin, and H. Ishii, Evaluation of internal potential distribution and carrier extraction properties of organic solar cells through Kelvin probe and time-of-flight measurements, J. Appl. Phys. 116, 114503 (2014). (38) V. M. Le Corre, A. R. Chatri, N. Y. Doumon, and L. J. A. Koster, Charge Carrier Extraction in Organic Solar Cells Governed by Steady-State Mobilities, Adv. Energy Mater. 7, 1701138 (2017). (39) T. Suzuki, and R. Shimano, Cooling dynamics of photoexcited carriers in Si studied using optical pump and terahertz probe spectroscopy, Phys. Rev. B 83, 085207 (2011). (40) J. R. Goldman and J. A. Prybyla, Ultrafast Dynamics of Laser-Excited Electron Distributions in Silicon, Phys. Rev. Lett. 72, 1364 (1994). (41) A. J. Sabbah, and D. M.
Riffe, Femtosecond pump-probe reflectivity study of silicon carrier dynamics, Phys. Rev. B 66, 165217 (2002). (42) M. Green, Third Generation Photovoltaics: Advanced Solar Energy Conversion, (Springer-Verlag: Berlin, Heidelberg, 2003). (43) E. Yablonovitch, Statistical ray optics, J. Opt. Soc. Am. 72, 899 (1982). (44) S. Chen, L. Zhu, M. Yoshita, T. Mochizuki, C. Kim, H. Akiyama, M. Imaizumi, and Y. Kanemitsu, Thorough subcells diagnosis in a multi-junction solar cell via absolute electroluminescence-efficiency measurements, Sci. Rep. 5, 7836 (2014). (45) J.-K. Yu, S. Mitrovic, D. Tham, J. Varghese, and J. R. Heath, Reduction of thermal conductivity in phononic nanomesh structures, Nat. Nanotechnol. 5, 718 (2010). (46) P. E. Hopkins, C. M. Reinke, M. F. Su, R. H. Olsson, E. A. Shaner, Z. C. Leseman, J. R. Serrano, L. M. Phinney, and I. El-Kady, Reduction in the Thermal Conductivity of Single Crystalline Silicon by Phononic Crystal Patterning, Nano Lett. 11, 107 (2011). (47) M. G. A. Bernard, and G. Duraffourg, Laser Conditions in Semiconductors, phys. stat. solidi b 1, 699 (1961). (48) J. Xue, Z. Li, and R. J. Ram, Irreversible Thermodynamic Bound for the Efficiency of Light-Emitting Diodes, Phys. Rev. Applied 8, 014017 (2017). (49) J. Tauc, The share of thermal energy taken from the surroundings in the electro-luminescent energy radiated from a p-n junction, Czech. J. Phys. 7, 275 (1957). (50) G. C. Dousmanis, C.W. Mueller, H. Nelson, and K. G. Petzinger, Evidence of refrigerating action by means of photon emission in semiconductor diodes, Phys. Rev. 133, A316 (1964). (51) P. Berdahl, Radiant refrigeration by semiconductor diodes, J. Appl. Phys. 58, 1369 (1985). (52) P. Santhanam, D. J. Gray Jr., and R. J. Ram, Thermoelectrically Pumped Light-Emitting Diodes Operating above Unity Efficiency, Phys. Rev. Lett. 108, 097403 (2012).
Dark Energy Constraints from the CTIO Lensing Survey Mike Jarvis, Bhuvnesh Jain, Gary Bernstein, Derek Dolney Dept. of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 mjarvis, bjain, garyb, dolney@physics.upenn.edu Abstract We perform a cosmological parameter analysis of the 75 square degree CTIO lensing survey in conjunction with CMB and Type Ia supernovae data. For $\Lambda$CDM cosmologies, we find that the amplitude of the power spectrum at low redshift is given by $\sigma_{8}=0.81^{+0.15}_{-0.10}\ (95\%\ {\rm c.l.})$, where the error bar includes both statistical and systematic errors. The total of all systematic errors is smaller than the statistical errors, but they do make up a significant fraction of the error budget. We find that weak lensing improves the constraints on dark energy as well. The (constant) dark energy equation of state parameter, $w$, is measured to be $-0.89^{+0.16}_{-0.21}\ (95\%\ {\rm c.l.})$. Marginalizing over a constant $w$ slightly changes the estimate of $\sigma_{8}$ to $0.79^{+0.17}_{-0.14}\ (95\%\ {\rm c.l.})$. We also investigate variable $w$ cosmologies, but find that the constraints weaken considerably; next generation surveys are needed to obtain meaningful constraints on the possible time evolution of dark energy. Subject headings: gravitational lensing; cosmology; large-scale structure 1. Introduction Observations of weak gravitational lensing, the coherent distortion in the images of distant galaxies, have advanced rapidly in the past four years. The first detections of weak lensing in blank fields were reported only a few years ago (Wittman et al., 2000; Kaiser et al., 2000; Van Waerbeke et al., 2000; Bacon et al., 2000; Rhodes, Refregier, & Groth, 2000).
More recent lensing measurements (Hoekstra et al., 2002; Refregier et al., 2002; Bacon et al., 2003; Brown et al., 2003; Hamana et al., 2003; Jarvis et al., 2003; Van Waerbeke et al., 2005; Heymans et al., 2005) have used larger and/or deeper surveys to reduce the statistical errors, which scale as $N^{-1/2}$ where $N$ is the number of galaxies measured. Better techniques for reducing systematic errors have also been developed, resulting in interesting cosmological constraints from lensing surveys. Shear correlations measured by lensing surveys determine the projected power spectrum of matter fluctuations in the Universe. These fluctuations are believed to have grown due to gravitational instability from the early universe to the present. The growth of fluctuations from last scattering, $z=1100$, to the present is sensitive to the densities of dark energy and matter via the Hubble expansion rate. Further, the measured lensing signal depends on angular diameter distances to the source galaxies. Thus weak lensing observables probe the dark energy density through both the growth function and the geometric distance factors. Present lensing measurements are sensitive to redshifts $0\lesssim z\lesssim 1$. As the size of weak lensing surveys increases and the statistical errors keep going down, it becomes more important to similarly reduce the systematic errors. There are a few different systematic errors which can contaminate a weak lensing signal at the level of typical cosmic shear measurements. The largest of these has typically been the correction of the anisotropic point-spread function (PSF). We have recently developed a principal component analysis approach to interpolating the PSF between the stars in the image using information from multiple exposures (Jarvis & Jain, 2005). The PSF pattern is found to be a function of only a few underlying variables. Therefore, we are able to improve the fits of this pattern by using stars from many exposures.
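The idea that the PSF pattern is driven by only a few underlying variables can be illustrated with a toy principal component analysis. The sketch below is our own schematic, not the actual Jarvis & Jain (2005) pipeline: the data matrix, the two hypothetical modes (think focus and tilt), and all sizes are invented for illustration.

```python
import numpy as np

# Toy PCA of PSF ellipticity patterns: rows are exposures, columns are star
# positions. A rank-2 "optical" signal plus noise is generated, then an SVD
# shows that a few components capture nearly all of the variance.
rng = np.random.default_rng(0)

n_exp, n_star = 50, 300                      # exposures, stars (assumed sizes)
modes = rng.normal(size=(2, n_star))         # two hypothetical underlying modes
coeffs = rng.normal(size=(n_exp, 2))         # per-exposure mode amplitudes
data = coeffs @ modes + 0.05 * rng.normal(size=(n_exp, n_star))  # + noise

# PCA via SVD of the mean-subtracted data matrix
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
var = s**2 / np.sum(s**2)                    # fractional variance per component

print("variance captured by first 2 components:", var[:2].sum())
```

Because every exposure constrains the same few modes, fitting those modes across all exposures uses far more stars than fitting each exposure's PSF independently, which is the advantage described in the text.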
We have applied it to the CTIO survey data presented in Jarvis et al. (2003) and have shown that the level of systematic error is well below the statistical errors. Indeed the measured $B$-mode in shear correlations is consistent with zero on all scales measured. In this paper we present a new parameter analysis of the CTIO lensing survey. With the reduced systematic error, we are able to extract information from the measured signal over nearly two decades in length scale. Type Ia supernovae observations led to the discovery of the accelerated expansion of the universe (Riess et al. 1998; Perlmutter et al. 1999). By combining information from the CMB at $z=1100$, large-scale structure and SNIa, interesting constraints on dark energy have been obtained (Spergel et al., 2003; Bridle et al., 2003; Weller & Lewis, 2003; Alam et al., 2004; Saini et al., 2004; Tegmark et al., 2004; Wang & Tegmark, 2004; Seljak et al., 2005; Simon, Verde, & Jimenez, 2005; Rapetti et al., 2005; Jassal et al., 2005). Whether the dark energy density is constant or evolving with cosmic time is one of the most interesting observational questions. It is often expressed using the parameterization $$w(a)=w_{0}+w_{a}(1-a),$$ (1) where $w$ is the dark energy equation of state parameter $w=p/\rho$ and $a=1/(1+z)$ is the expansion scale factor (Chevallier & Polarski, 2001; Linder, 2003). The cosmological constant corresponds to $w_{0}=-1,w_{a}=0$. A non-zero value of $w_{a}$ corresponds to a time-dependent equation of state. We investigate three priors for $w$: $\Lambda$CDM ($w_{0}=-1,w_{a}=0$), constant $w$ ($w_{a}=0$) with $-3<w<0$, and variable $w$ with $-8<w<8$ and $-8<w_{a}<8$. In §2 we briefly review our CTIO survey data and shape measurement technique, largely deferring to our previous papers (Jarvis et al., 2003; Jarvis & Jain, 2005) for more details. In §3, we present the results from the new reduction. We use these results to constrain cosmological parameters in §4, and conclude in §5. 2.
Data Our CTIO survey data was originally described in detail in Jarvis et al. (2003), and we refer the reader to that paper for most of the details about the data and the analysis. Here, we present a brief summary of the data set, and point out two significant changes in the analysis: our PSF interpolation and the dilution correction. The data were taken at Cerro Tololo Interamerican Observatory (CTIO) in Chile from December, 1996 to July, 2000. We observed 12 fields, well separated on the sky, in low extinction but otherwise random locations. Each field is approximately 2.5 degrees on a side, giving us a total of 75 square degrees. The 50% completeness level occurs near $R=23.5$ for each field, although it varies somewhat between the fields. We impose limits of $19<R<23$, which gives about 2 million galaxies to use for our lensing statistics. The shape measurements of the galaxies follow the techniques of Bernstein & Jarvis (2002). The galaxy shapes are measured using an elliptical Gaussian weight function which is matched to have the same ellipticity as the galaxy. (That is, we use a circular Gaussian in the sheared coordinate system in which the galaxy is round.) The observed ellipticity is then calculated as: $$\mbox{\bf e}=e_{1}+ie_{2}=\frac{\int I(x,y)W(x,y)(x+iy)^{2}dxdy}{\int I(x,y)W(x,y)(x^{2}+y^{2})dxdy}$$ (2) where $I$ is the intensity, $W$ is our Gaussian weight, $x$ and $y$ are measured from the (weighted) centroid of the image, and the boldface indicates that e is a complex quantity. We correct for the effects of the point spread function (PSF) in two steps. First, we correct for the effect of the shape of the PSF by reconvolving the observed images with a spatially varying convolution kernel which is designed to make the stars round. The galaxies in the convolved images are thus no longer affected by the shape of the point spread function, but are still affected by the size of the PSF.
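As an illustration of the moment measurement in Equation (2), the sketch below applies a single pass of the weighted second-moment formula to a synthetic image. The paper's actual pipeline iterates with an elliptical Gaussian weight matched to the galaxy; the fixed circular weight, image size, and widths here are illustrative assumptions only.

```python
import numpy as np

def ellipticity(I, W):
    """One pass of Equation (2): complex ellipticity e1 + i*e2 from weighted
    second moments about the weighted centroid of the image."""
    ny, nx = I.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    IW = I * W
    xc = (IW * x).sum() / IW.sum()
    yc = (IW * y).sum() / IW.sum()
    dx, dy = x - xc, y - yc
    z = dx + 1j * dy
    return (IW * z**2).sum() / (IW * (dx**2 + dy**2)).sum()

# A synthetic galaxy elongated along the x axis should give e1 > 0, e2 ~ 0.
ny = nx = 65
y, x = np.mgrid[0:ny, 0:nx].astype(float)
dx, dy = x - nx // 2, y - ny // 2
gal = np.exp(-(dx**2 / (2 * 6.0**2) + dy**2 / (2 * 3.0**2)))
wt = np.exp(-(dx**2 + dy**2) / (2 * 8.0**2))
e = ellipticity(gal, wt)
```

For these widths the weighted moments give a positive $e_{1}$ (elongation along $x$) and an $e_{2}$ consistent with zero, as the symmetry of the image requires.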
Since the convolution kernel is only measured where we have an observed star, and the PSF is far from uniform across each image, we need to interpolate between the stars. In our previous analysis, we used a separate interpolation for every image. For this analysis, however, we use our new principal component algorithm which uses the information from all of the images at once. This new method, which gives a better fit, is described in Jarvis & Jain (2005). The effect of the size of the PSF is called dilution. A perfectly round PSF blurs the images of galaxies, which reduces the observed ellipticity. For a purely Gaussian PSF and Gaussian galaxies, the measured ellipticities are reduced by a factor $R$: $$R=1-\frac{\sigma^{2}_{\rm psf}}{\sigma^{2}_{\rm gal}}$$ (3) However, galaxies are certainly not Gaussian, and stars are only approximately Gaussian. In our previous analysis we used a formula intended to account for the first order corrections due to the kurtosis of the galaxies and the PSF. Unfortunately, our formula was incorrect with respect to the PSF kurtosis, as pointed out by Hirata & Seljak (2003). They give a more accurate correction scheme which accounts for both kurtoses correctly to first order – we implement this scheme in the analysis presented below. Their formula is quite complicated to write, but is easy to implement, so we defer to their Appendix B for the relevant equations. Finally, we estimate the shear, $\gamma$, from an ensemble of ellipticities using the formula: $$\hat{\mbox{\boldmath$\gamma$}}=\frac{1}{2\mbox{$\cal R$}}\frac{\sum w_{i}\,\mbox{\bf e}_{i}}{\sum w_{i}}$$ (4) where the responsivity, $\cal R$, describes how our mean ellipticity changes in the presence of an applied distortion. It generalizes the formula given in equation 3 to non-Gaussian shapes. The factor of 2 in the denominator above converts the ellipticity to shear, and $w$ is our weight function.
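A minimal numerical sketch of the estimator in Equation (4) follows. The uniform weights, shape-noise level, and responsivity value are illustrative assumptions, not the survey's actual choices:

```python
import numpy as np

def shear_estimate(e, w, R):
    """Equation (4): shear = weighted mean ellipticity / (2 * responsivity R)."""
    e = np.asarray(e)
    w = np.asarray(w)
    return np.sum(w * e) / np.sum(w) / (2.0 * R)

# Toy ensemble: random intrinsic shapes plus a small coherent distortion.
rng = np.random.default_rng(1)
n = 10_000
e_int = rng.normal(0.0, 0.3, n) + 1j * rng.normal(0.0, 0.3, n)
gamma_true = 0.02 + 0.0j
R_resp = 0.9                              # illustrative responsivity value
e_obs = e_int + 2.0 * R_resp * gamma_true  # linearized response of shapes to shear
w = np.ones(n)                             # uniform weights for the toy case
gamma_hat = shear_estimate(e_obs, w, R_resp)
```

With $10^{4}$ galaxies and shape noise of 0.3 per component, the recovered shear agrees with the input to a few times $10^{-3}$, illustrating the $N^{-1/2}$ scaling of the statistical error mentioned in the introduction.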
We use the “easy” weight function given in Bernstein & Jarvis (2002) (Equation 5.36): $$w=[e^{2}+(1.5\sigma_{\eta})^{2}]^{-1/2}$$ (5) where $\sigma_{\eta}$ is the shape uncertainty in the sheared coordinates where the galaxy is circular. The corresponding responsivity, $\cal R$, is also given in Bernstein & Jarvis (2002) (Equation 5.33). 3. Weak Lensing Statistics To describe the two-point statistics of our shear field, we use the aperture mass statistic (Schneider et al., 1998; Crittenden et al., 2002; Schneider et al., 2002; Pen et al., 2002). The predictions from theory come in the form of the convergence power spectrum: $$P_{\kappa}(\ell)=\frac{9}{4}\Omega_{0}^{2}\int_{0}^{\chi_{H}}d\chi\frac{g^{2}(\chi)}{a^{2}(\chi)}P_{3D}\left(\frac{\ell}{r(\chi)};\chi\right)$$ (6) where $\chi$ is the radial comoving distance, $\chi_{H}$ is the horizon distance, $r(\chi)$ is the comoving angular distance, $a$ is the scale factor of the universe, $P_{3D}$ is the three-dimensional power spectrum of the matter fluctuations, and $$g(\chi)=\int_{\chi}^{\chi_{H}}d\chi^{\prime}p(\chi^{\prime})\frac{r(\chi^{\prime}-\chi)}{r(\chi^{\prime})}$$ (7) where $p$ is the normalized (to give unit integral over $\chi$) redshift distribution of source galaxies. These predictions are not reliable for high values of $k$ ($k>10h{\rm Mpc}^{-1}$) (Smith et al., 2003) due to difficulties in predicting the non-linear growth. The observed second moments are completely described using the two-point correlation functions: $$\xi_{+}(\theta)=\langle\gamma({\bf r})\gamma^{*}({\bf r}+\mbox{\boldmath$\theta$})\rangle\,,\qquad\xi_{-}(\theta)+i\xi_{\times}(\theta)=\langle\gamma({\bf r})\gamma({\bf r}+\mbox{\boldmath$\theta$})e^{-4i\alpha}\rangle$$ (8) where $\mbox{\boldmath$\theta$}=\theta e^{i\alpha}$ is the separation between pairs of galaxies, treating the positions on the sky as complex values.
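The definitions in Equation (8) can be turned into a direct pair estimator. The sketch below is a brute-force $O(N^{2})$ version over binned separations, purely for illustration (a real survey with millions of galaxies needs a tree code); the bin edges and data below are illustrative assumptions:

```python
import numpy as np

def xi_plus_minus(z, e, edges):
    """Brute-force pair estimator of Equation (8).
    z: complex sky positions; e: complex ellipticities used as shear estimates;
    edges: angular bin edges. Returns (xi_+, xi_- + i xi_x) per bin."""
    i, j = np.triu_indices(len(z), k=1)
    sep = z[j] - z[i]
    alpha = np.angle(sep)          # position angle of the separation vector
    r = np.abs(sep)
    b = np.digitize(r, edges) - 1  # bin index for each pair
    nb = len(edges) - 1
    xip = np.zeros(nb)
    xim = np.zeros(nb, dtype=complex)
    for k in range(nb):
        m = b == k
        if m.any():
            # xi_+ is real for a parity-symmetric field; keep the real part
            xip[k] = np.mean((e[i][m] * np.conj(e[j][m])).real)
            xim[k] = np.mean(e[i][m] * e[j][m] * np.exp(-4j * alpha[m]))
    return xip, xim
```

As a sanity check, a field with the same ellipticity everywhere gives $\xi_{+}=|e|^{2}$ in every populated bin, since each pair contributes $e\,e^{*}$ exactly.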
The correlation functions are not measured at all on scales larger than the size of the survey fields (for this survey, at $\theta>200\arcmin$), which correspond to low $k$ values. Using the information from the correlation function to obtain the power spectrum, or vice versa, requires an extrapolation away from the $k$ values which are well measured or well predicted. The aperture mass statistic is in some sense a compromise between these two statistics, since it can be calculated from either the power spectrum or the correlation function using only the range of $k$ values which are either predicted or measured, respectively: $$\langle M_{\rm ap}^{2}\rangle(R)=\frac{1}{2\pi}\int\ell\,d\ell\,P_{\kappa}(\ell)W(\ell R)^{2}=\frac{1}{2}\int\frac{\theta\,d\theta}{R^{2}}\left[\xi_{+}(\theta)T_{+}\left(\frac{\theta}{R}\right)+\xi_{-}(\theta)T_{-}\left(\frac{\theta}{R}\right)\right]$$ (9) where we use the form suggested by Crittenden et al. (2002) for which we have: $$W(\eta)=\frac{\eta^{4}}{4}e^{-\eta^{2}}\,,\qquad T_{+}(x)=\frac{x^{4}-16x^{2}+32}{128}e^{-x^{2}/4}\,,\qquad T_{-}(x)=\frac{x^{4}}{128}e^{-x^{2}/4}$$ (10) The aperture mass therefore has both good predictions from theory and accurate measurements from the data. While the integral in Equation 9 is technically from 0 to infinity, the $T$ functions drop off very quickly, so that the effective upper limit is really around $5R$. Our fields are $200\arcmin$ along the long diagonal, so we can measure the aperture mass up to $R=40\arcmin$. The lower limit, due to difficulties of measuring the correlation function on very small scales, is about $R=1\arcmin$.
The function $W(\ell R)$ is a narrow function around $\ell R=\sqrt{2}$, so this corresponds to a range in $\ell$ of approximately $120<\ell<5000$. The other big advantage to the aperture mass statistic is that it provides a natural check for systematics. Weak lensing should only produce a curl-free $E$-mode component, so any $B$-mode observed in the shear field represents a systematic error of some sort. The aperture mass statistic has a $B$-mode counterpart which can be likewise calculated from the correlation functions as: $$\langle M_{\times}^{2}\rangle(R)=\frac{1}{2}\int\frac{\theta\,d\theta}{R^{2}}\left[\xi_{+}(\theta)T_{+}\left(\frac{\theta}{R}\right)-\xi_{-}(\theta)T_{-}\left(\frac{\theta}{R}\right)\right]$$ (11) Figure 1 shows the results for our reanalysis. The blue points are the $E$-mode signal, and the red points are the $B$-mode contamination. Points separated by at least one other point are very nearly uncorrelated. The black curve is the best fit flat $\Lambda$CDM model found below. The $B$-mode is seen to be consistent with zero at all scales, which was not the case for our previous analysis. Further, in Jarvis & Jain (2004) we show that the measured ellipticity correlation function of stars, which is another measure of systematic errors, is one to two orders of magnitude smaller than the expected lensing signal at all scales. Therefore, we can now confidently use all of the aperture mass values from $1\arcmin$ to $40\arcmin$ for our constraints on cosmology.
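Equations (9)-(11) translate directly into code. The sketch below implements the Crittenden et al. (2002) filter functions and the $E$- and $B$-mode aperture mass integrals over binned correlation functions; the trapezoidal quadrature is an illustrative choice, not the paper's exact numerical scheme:

```python
import numpy as np

def T_plus(x):
    # Filter applied to xi_+ (Equation 10)
    return (x**4 - 16.0 * x**2 + 32.0) / 128.0 * np.exp(-x**2 / 4.0)

def T_minus(x):
    # Filter applied to xi_- (Equation 10)
    return x**4 / 128.0 * np.exp(-x**2 / 4.0)

def _integrate(theta, f):
    # Trapezoidal rule over the binned angular range
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))

def map2_E(theta, xi_p, xi_m, R):
    """<M_ap^2>(R), Equation (9); the T filters cut the integrand off near 5R."""
    x = theta / R
    return 0.5 * _integrate(theta, theta / R**2 * (xi_p * T_plus(x) + xi_m * T_minus(x)))

def map2_B(theta, xi_p, xi_m, R):
    """<M_x^2>(R), Equation (11): the same integral with a relative minus sign."""
    x = theta / R
    return 0.5 * _integrate(theta, theta / R**2 * (xi_p * T_plus(x) - xi_m * T_minus(x)))

def W_filter(eta):
    # Fourier-side filter W(eta) = eta^4/4 exp(-eta^2), peaked at eta = sqrt(2);
    # this is why an aperture of radius R probes ell ~ sqrt(2)/R.
    return eta**4 / 4.0 * np.exp(-eta**2)
```

The peak of $W$ at $\eta=\sqrt{2}$ reproduces the statement above that apertures between $1\arcmin$ and $40\arcmin$ probe roughly $120<\ell<5000$.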
In addition to the aperture mass, we also measure the variance of the mean shear in circular apertures: $$\langle|\gamma|\rangle^{2}(R)=\frac{1}{2\pi}\int\ell\,d\ell\,P_{\kappa}(\ell)\frac{4J_{1}(\ell R)^{2}}{(\ell R)^{2}}=\int_{0}^{2R}\frac{\theta\,d\theta}{R^{2}}\xi_{+}(\theta)S_{+}\left(\frac{\theta}{R}\right)$$ (12) where $$S_{+}(x)=\frac{1}{\pi}\left(4\arccos(x/2)-x\sqrt{4-x^{2}}\right)$$ (13) Figure 1 shows the results for the shear variance statistic. There is no $E/B$ decomposition for this statistic (technically, there is one (Schneider et al., 2002), but it requires extrapolation of the correlation functions past where they are measured). We assume implicitly that there is no $B$-mode contamination in the shear variance measurements, which seems reasonable given the low $B$-mode seen for the aperture mass. The usefulness of this statistic is that it is able to probe the power spectrum at somewhat smaller $\ell$ values than the aperture mass statistic. The upper limit in the integral in Equation 12 is only $2R$, so we can calculate the shear variance up to $R=100\arcmin$ with our data. This probes the power spectrum down to $\ell$ of about 70. This leads to a total dynamic range for both statistics of almost 2 orders of magnitude. The shear variance below $R=50\arcmin$ is degenerate with the aperture mass, so for our constraints, we only use the shear variance at large values of $R$ where it is providing extra information. In Figure 1, we also show the overall best fit $\Lambda$CDM model (see §4). This fit has a $\chi^{2}$ of 7.7, for effectively 6 degrees of freedom, yielding a reduced $\chi^{2}$ of 1.28. 4. Analysis 4.1. Dark Energy Constraints from Weak Lensing Our lensing measurement constrains the shear power spectrum, which is a weighted projection of the mass power spectrum. The constraints on dark energy arise from two sources.
The first is the angular diameter distances to the lens, to the source, and between the lens and the source that enter into the projection. The second is the amplitude of the power spectrum. The dark energy component alters the expansion rate of the universe at redshifts below about 2 (at least if its evolution is not too different from a cosmological constant). This in turn affects the growth of structure. Since the CMB fixes the amplitude of the power spectrum at $z=1100$, the lensing measurement of the amplitude at low redshift measures the growth function. See Hu & Jain (2004) for a detailed discussion. We assume that massive neutrinos make a negligible contribution to the matter density, that the primordial power spectrum index has no running and that the universe is spatially flat. The shape of the mass power spectrum is then specified by the baryon density $\mbox{$\Omega_{\rm b}$}h^{2}$, the matter density $\mbox{$\Omega_{\rm m}$}h^{2}$, and the primordial spectrum. Following the WMAP convention, we use the scalar amplitude $A_{\rm s}$ and spectral index $n$ such that the shape of the primordial power spectrum is $\mbox{$A_{\rm s}$}(k/k_{0})^{(n-1)}$, where $k_{0}=0.05$ Mpc${}^{-1}$ is the normalization scale. The current uncertainties in these parameters are at the 10% level or better. Thus the power spectrum as a function of $k$ in Mpc${}^{-1}$ (not $h$ Mpc${}^{-1}$) in the matter dominated regime can be considered as largely known. The amplitude of the power spectrum at a given redshift depends on the initial normalization $A_{\rm s}$ and the “growth function” $G$ defined by $$P(k,z)=\left[\frac{1}{1+z}\frac{G(z)}{G_{0}}\right]^{2}P(k,0)\,,$$ (14) where $G_{0}\equiv G(z=0)$ and we assume that all relevant scales are sufficiently below the maximal sound horizon of the dark energy.
The normalization of the linear power spectrum today is conventionally given at a scale of $r=8h^{-1}$Mpc and can be approximated as (Hu & Jain, 2004) $$\sigma_{8}~{}\approx~{}\frac{\mbox{$A_{\rm s}$}^{1/2}}{0.97}\left(\frac{\mbox{$\Omega_{\rm b}$}h^{2}}{0.024}\right)^{-0.33}\left(\frac{\mbox{$\Omega_{\rm m}$}h^{2}}{0.14}\right)^{0.563}\times(3.123h)^{(n-1)/2}\left(\frac{h}{0.72}\right)^{0.693}\frac{G_{0}}{0.76}\,,$$ (15) Thus a measurement of $\sigma_{8}$, in conjunction with constraints on the other parameters in the above equation from the CMB, constrains a combination of dark energy parameters that affect $G_{0}$. Note that while the equation above illustrates how the dark energy parameters are linked to others, we do not actually use it in our parameter analysis. Instead we use the projection integral of Equation 6 which includes the full range of redshifts probed by our survey. The peak contribution to the lensing correlations is from $z\simeq 0.3$, though the maximum sensitivity to $w$ is at $z\simeq 0.4$ as discussed below. Combining our measurement with others such as Type Ia supernovae, which are sensitive to different redshifts, enables a probe of the time dependence of dark energy parameters. With photometric redshifts, this could be done with the lensing data alone using the auto and cross-spectra in redshift bins. The dark energy modifies the expansion of the universe according to the equation (for a flat universe; e.g. Linder & Jenkins, 2003): $$H^{2}(a)=H_{0}^{2}\left[\mbox{$\Omega_{\rm m}$}a^{-3}+\mbox{$\Omega_{\rm de}$}e^{-3\int_{1}^{a}{\frac{da}{a}(1+w(a))}}\right]$$ (16) where the dark energy density is $\mbox{$\Omega_{\rm de}$}(a)=8\pi G\rho_{\rm de}/3H(a)^{2}$, its equation of state is $w(a)=p_{\rm de}/\rho_{\rm de}$, and we indicate the present time values, $\mbox{$\Omega_{\rm m}$}(a=1)$ and $\mbox{$\Omega_{\rm de}$}(a=1)$, as simply $\Omega_{\rm m}$ and $\Omega_{\rm de}$.
Distances are then given by $$\chi(a)=\int_{a}^{1}{\frac{da^{\prime}c}{H(a^{\prime}){a^{\prime}}^{2}}}$$ (17) and the growth function $G$ depends only on the dark energy through the equation: $$\frac{d^{2}G}{d\ln a^{2}}+\left[\frac{5}{2}-\frac{3}{2}w(a)\mbox{$\Omega_{\rm de}$}(a)\right]\frac{dG}{d\ln a}+\frac{3}{2}[1-w(a)]\mbox{$\Omega_{\rm de}$}(a)G=0\,.$$ (18) When we use a constant $w$ parameterization, it is equivalent to a measurement of $w$ at the redshift for which the errors in the constant and time-dependent piece (in a Taylor expansion) are uncorrelated. See Hu & Jain (2004) for a discussion of this pivot redshift, which we find (§4.2.3) to be about $0.4$ for our survey (when combined with the CMB and SN priors as described below). 4.2. Joint Constraints We use the results from WMAP (Spergel et al., 2003) as priors for our analysis. Specifically, we use the Monte Carlo Markov Chain (MCMC) calculated by Verde et al. (2003) (available at http://www.physics.upenn.edu/~lverde/MAPCHAINS/mcmc.html). We choose to use the less restrictive prior of $w>-3$ rather than $w>-1$ in order to be as conservative as possible. However, we do have a hard prior that $\mbox{$\Omega_{\rm k}$}=0$. While WMAP has constrained this to be $0.02\pm 0.02$ for pure $\Lambda$CDM, and there is a theoretical bias in favor of it being exactly 0, it is worth emphasising that our dark energy constraints would be weakened if this prior were relaxed.
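Equations (15), (16) and (18) can be made concrete numerically. The sketch below integrates the growth equation from deep in matter domination (where $G\to 1$ and $dG/d\ln a\to 0$) to the present and evaluates the $\sigma_{8}$ approximation; the starting epoch, step count, and fiducial parameter values are illustrative assumptions:

```python
import math

def omega_de_of_a(a, om, w0, wa):
    """Fractional dark energy density at scale factor a for a flat universe
    with the CPL equation of state w(a) = w0 + wa*(1 - a) (Equation 1)."""
    rho_de = (1.0 - om) * a**(-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return rho_de / (om * a**-3 + rho_de)

def growth_G0(om=0.27, w0=-1.0, wa=0.0, a_init=1e-3, n=4000):
    """RK4 integration of Equation (18) in ln a, returning G at a = 1."""
    def deriv(lna, y):
        a = math.exp(lna)
        w = w0 + wa * (1.0 - a)
        ode = omega_de_of_a(a, om, w0, wa)
        G, dG = y
        return (dG, -(2.5 - 1.5 * w * ode) * dG - 1.5 * (1.0 - w) * ode * G)
    lna = math.log(a_init)
    h = -lna / n
    y = (1.0, 0.0)  # matter-domination initial conditions
    for _ in range(n):
        k1 = deriv(lna, y)
        k2 = deriv(lna + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = deriv(lna + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = deriv(lna + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        lna += h
    return y[0]

def sigma8_approx(As, ob_h2=0.024, om_h2=0.14, n_s=1.0, h=0.72, G0=0.76):
    """Equation (15): the Hu & Jain (2004) fitting formula for sigma_8."""
    return (As**0.5 / 0.97
            * (ob_h2 / 0.024)**(-0.33)
            * (om_h2 / 0.14)**0.563
            * (3.123 * h)**((n_s - 1.0) / 2.0)
            * (h / 0.72)**0.693
            * (G0 / 0.76))
```

For $\Omega_{\rm m}=0.27$ and $w=-1$ the integration gives $G_{0}\approx 0.76$, consistent with the fiducial value in Equation (15); a pure matter universe gives $G_{0}=1$ by construction.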
Each step in the Markov chain contains a value for each of the following parameters: $\omega_{\rm b}=\mbox{$\Omega_{\rm b}$}h^{2}$ is the density of baryons; $\omega_{\rm c}=\mbox{$\Omega_{\rm c}$}h^{2}$ is the density of cold dark matter; $\theta_{\rm A}$ is the angular scale of the acoustic peaks; $n$ is the spectral slope of the scalar primordial density power spectrum; $Z=\exp(-2\tau)$ is related to the optical depth ($\tau$) of the last scattering surface; $A_{\rm s}$ is the overall amplitude of the scalar primordial power spectrum; $h=H_{0}/100$ is the Hubble constant; $w$ is the equation of state parameter for the dark energy. The parameters $\theta_{A}$ and $Z$ are not directly relevant to the aperture mass statistic which we have measured, so we marginalize over these two. The others define a cosmology from which we can predict the aperture mass statistic on the scales where we have measured it. We present results for selected parameters below; these are obtained by a full marginalization over all the other parameters listed above. For the linear power spectrum, we use the transfer function of Bardeen et al. (1986). We then estimate the non-linear power spectrum using the halo-based model of Smith et al. (2003). Their fitting formulae provide a means of estimating the quasi-linear and non-linear halo contributions to the power spectrum based on the linear value and the effective spectral index. This model agrees with the results of $N$-body simulations better than the simpler formula of Peacock & Dodds (1996). The nonlinear correction affects the predicted variance in the mass aperture statistic on scales below about 4 arcminutes. We also note that the model of Smith et al. (2003) that we use does not include $w$ directly. We correctly take it into account for the growth factor and the distances, but Mainini et al. (2003) and Klypin et al.
(2003) show that dark energy changes the virial density contrast, $\Delta_{c}$, which results in changes in the power spectrum at high $k$ values ($k>1\ h{\rm Mpc}^{-1}$). However, the effect is smaller than the expected error in the non-linear model even for a $w\simeq-0.5$ cosmology. From the predicted power spectrum, we calculate the aperture mass and shear variance statistics using Equations 6, 9, and 12. Our data then give a likelihood value for each cosmology, which we combine with the CMB likelihood from the MCMC. We also use the recent supernova measurements of Riess et al. (2004) to further constrain the results. For this, we use the $\chi^{2}$ calculation program of Tonry et al. (2003) (available at http://www.ifa.hawaii.edu/~jt/SOFT/snchi.html; the code there was modified slightly to read the data of Riess et al. 2004). 4.2.1 $\Lambda$CDM Models With dark energy priors of $w=-1$ and $w_{a}=0$, the likelihood constraints are non-trivial contours through a five-dimensional parameter space. However, most of the gain in constraining power from the addition of the lensing data comes in the quantities $\Omega_{\rm m}$ and $\sigma_{8}$. We show the error contours projected onto the $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plane in Figure 2. The left plot shows the contours starting with the CMB data set, and sequentially adding the supernova and lensing data. The right plot shows the contours in the $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plane separately for each of the three data sets to indicate the degree of their complementarity, which is why their combination leads to the tight overall constraints. In both plots, the contours correspond to 68% and 95% confidence regions ($\Delta\chi^{2}$ = 2.30 and 6.17). The crosses are at the peak likelihood in the projected plane, which is at: ($\mbox{$\Omega_{\rm m}$}=0.26$, $\sigma_{8}=0.82$).
4.2.2 Constant $w$ Models We show the uncertainty contours for the dark energy priors of $-3<w_{0}<0$ and $w_{a}=0$ in Figure 3. The left plot gives the projection in the $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plane, which indicates that allowing $w$ to be free does not significantly worsen the constraints on $\Omega_{\rm m}$ and $\sigma_{8}$ compared to the pure $\Lambda$CDM model. The $\mbox{$\Omega_{\rm de}$}-w$ plot (right) shows why. While none of the three data sets individually have tight constraints in this plane, the combination of all three leads to a fairly tight contour near (and consistent with) $w=-1$. The peak likelihood models in the two projections are: ($\mbox{$\Omega_{\rm m}$}=0.25$, $\sigma_{8}=0.79$) and ($\mbox{$\Omega_{\rm de}$}=0.75$, $w=-0.90$). 4.2.3 Variable $w$ Models Finally, we consider dark energy priors of $-8<w_{0}<8$ and $-8<w_{a}<8$. It turns out that some of the dark energy models in this range have $\Omega_{\rm DE}(z=1100)\approx 1$. That is, the mass-energy of the universe was essentially all dark energy at the epoch of recombination. This seems to be ruled out by WMAP data (Caldwell et al., 2003; Caldwell & Doran, 2004; Wang & Tegmark, 2004). Therefore, we make the additional prior that $\Omega_{\rm DE}(z=1100)<0.5$. In practice, all the models have $\Omega_{\rm DE}(z=1100)\approx 0$ or $1$, so this constraint is effectively $\Omega_{\rm DE}(z=1100)\approx 0$. Given this constraint, the primary effect of dark energy on the CMB is through the distance to the last-scattering surface, $d_{\rm LSS}$. Therefore, we approximate the CMB likelihoods by using the WMAP constant-$w$ Markov chain mentioned above, modifying the dark energy parameters to maintain a constant $d_{\rm LSS}$.
Specifically, for each line in the Markov chain, we determine $d_{\rm LSS}$ from the values of $\Omega_{m}$ and $w$; we select $w_{a}$ from $-8<w_{a}<8$; then we determine which $w_{0}$ with this $w_{a}$ and the same $\Omega_{m}$ maintains the given value of $d_{\rm LSS}$, and we write these values out as a line in a new pseudo-chain. The main approximation in this process is that we neglect the difference of the integrated Sachs-Wolfe (ISW) effect between the two models. There is some indication (Wayne Hu, private communication) that for some of the models that fall within our contours (Figure 4, right), the ISW may spread the first peak in the power spectrum of the CMB enough to disfavor these models by 1 or 2 sigma. However, we defer a more detailed study of this effect to future work. The error contours projected onto the $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plane and the $w_{0}-w_{a}$ plane are shown in Figure 4. With this much freedom in the models, we do finally lose much of our constraining power. Even with all three data sets, the error contours are still quite large. The $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plot (left) shows that allowing $w$ to vary with cosmological time does significantly worsen the constraints, allowing much lower values of $\sigma_{8}$. Also, while lensing does significantly reduce the allowed parameter space in the $\mbox{$\Omega_{\rm m}$}-\sigma_{8}$ plane, this reduction has only a small effect in the $w_{0}-w_{a}$ plane, so that our constraints there are not much better than those for the CMB + SN data alone. The data are consistent with a cosmological constant ($w_{0}=-1,w_{a}=0$) at the 95% confidence level, but $w_{a}$ may range as high as 2 (it is for some of these high $w_{a}$ models that the ISW effect may be important, requiring a more careful analysis to determine what portion of the nominally allowed region is really ruled out).
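The constant-$d_{\rm LSS}$ construction just described can be sketched numerically: compute the comoving distance to last scattering from Equations (16)-(17) and bisect for the $w_{0}$ that preserves it at the drawn $w_{a}$. The quadrature and bisection settings below are illustrative assumptions:

```python
import math

def E_hub(a, om, w0, wa):
    """H(a)/H0 for a flat universe with CPL dark energy (Equation 16)."""
    rho_de = (1.0 - om) * a**(-3.0 * (1.0 + w0 + wa)) * math.exp(-3.0 * wa * (1.0 - a))
    return math.sqrt(om * a**-3 + rho_de)

def dist_lss(om, w0, wa, z_lss=1100.0, n=4000):
    """Comoving distance to last scattering (Equation 17), in units of c/H0,
    via the trapezoidal rule in the scale factor."""
    a0 = 1.0 / (1.0 + z_lss)
    h = (1.0 - a0) / n
    f = lambda a: 1.0 / (a * a * E_hub(a, om, w0, wa))
    s = 0.5 * (f(a0) + f(1.0))
    for i in range(1, n):
        s += f(a0 + i * h)
    return s * h

def solve_w0(target_d, om, wa, lo=-8.0, hi=0.0, iters=60):
    """Bisect for the w0 preserving d_LSS; the distance decreases
    monotonically as w0 increases (more early dark energy)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dist_lss(om, mid, wa) > target_d:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example pseudo-chain step: start from a constant-w chain entry (w = -1),
# draw a wa, and find the w0 that gives the same distance to last scattering.
d_target = dist_lss(0.27, -1.0, 0.0)
w0_new = solve_w0(d_target, 0.27, 0.5)
```

By construction, the pair $(w_{0}^{\rm new},w_{a})$ reproduces the original chain entry's $d_{\rm LSS}$, which is the sense in which the pseudo-chain preserves the primary CMB constraint.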
We can use the direction of the contours in the $w_{0}-w_{a}$ plane to determine the redshift at which we have the strongest constraints on the dark energy. If we change variables from $\{w(z=0),w_{a}\}$ to $\{w(z=0.4),w_{a}\}$, the likelihood contour becomes roughly vertical. This indicates that our pivot redshift, or “sweet spot” (Huterer & Turner, 2001; Weller & Albrecht, 2002; Hu, 2002; Hu & Jain, 2004), where the constraints on the dark energy are strongest, is at a redshift of about $0.4$. (Of course, the banana shape makes this impossible to do precisely, so $z_{\rm piv}=0.4$ is just an approximate value.) We constrain $w$ at this redshift, marginalizing over $w_{a}$ (and everything else) below. 4.3. Systematic Errors Systematic errors are harder to estimate than statistical errors, since by their nature they contaminate the data in unknown ways. There are four systematic errors which we investigate and attempt to estimate: residual anisotropic PSF as estimated by the residual $B$-mode, calibration error, redshift distribution error, and errors in the non-linear prediction. For shorthand, we refer to these as B, CAL, Z, and NL respectively. When estimating the contributions of these systematic uncertainties to our error budget, we limit our consideration to constraints on single parameters, fully marginalized over all the other parameters. First we calculate the 95% error bars with only the statistical errors. Then, for each systematic effect, we change our handling of the effect as described more completely below for each case. When we do this, the 95% confidence intervals move around somewhat. We define the upper systematic error to be the maximum upper limit of the confidence interval allowed by the various changes minus the nominal upper limit with only the statistical errors. Likewise the lower systematic error is the lower limit with only the statistical errors minus the minimum allowed lower limit.
Finally, we (conservatively) estimate the total error as the sum of the statistical errors and each of the systematic errors added linearly, not in quadrature. For the residual PSF (B) systematic, we implement the same technique we used in Jarvis et al. (2003), namely running the analysis with the $B$-mode contamination added to and subtracted from the $E$-mode signal. Most types of contamination either add power (roughly) equally to the $E$ and $B$ modes, in which case the subtraction is appropriate, or mix power between the two modes while conserving total signal, in which case the addition is appropriate. We allow for both possibilities to estimate how the contamination could be affecting the cosmological fits. The calibration (CAL) uncertainty includes errors in the dilution calculation, the responsivity formula, and possibly biases in the shape measurements. This systematic was the subject of significant discussion at the recent IAU Symposium 225 (July 19-23, 2004, Lausanne, Switzerland). There seem to be calibration differences of order 5% in shear estimators between different methods. Tests with simulated images with known shears indicate that we have calibration errors of less than 2% (Heymans et al., 2005), but we allow for $\pm$ 5% in our shear values as a conservative estimate of this systematic. For the redshift calibration (Z) of our survey, there are two public redshift surveys with depths similar to our observations: the Caltech Faint Galaxy Redshift Survey (Cohen et al., 2000) (CRS), and the Canada-France Redshift Survey (Lilly et al., 1995) (CFRS). We argue in Jarvis et al. (2003) that the CRS is a better choice, since it is more complete in the $R$ filter band pass used for our observations. However, switching to the CFRS distribution allows us to estimate the uncertainties due to the redshift calibration. Also, since we only have one other survey to use, we cannot run symmetric plus and minus versions of this test.
So when the 95% confidence limit moved inward for a value, we take the absolute value of the change as the measure of the systematic error, since a different redshift survey might have moved the limit a similar amount outward. Finally, for the non-linear predictions (NL), we used the Smith et al. (2003) model, which was an improvement over that of Peacock & Dodds (1996). Switching back to the older model should give us a rough (over-)estimate of the remaining uncertainties due to the non-linear modelling. Again, we cannot run symmetric tests, so we take the absolute value of any change as a measure of the systematic error. Since this technique is non-standard and may be confusing, an example with the actual values might help explain it. For the CAL test with the $\Lambda$CDM prior, when we multiplied the shear data by 1.05 (for the +5% test), the upper limit of the 95% confidence interval for $\sigma_{8}$ moved from 0.910 to 0.929, an increase of 0.019. This is our estimate of the positive systematic error. Similarly, in the -5% CAL test, the lower limit decreased, providing the negative systematic error for $\sigma_{8}$. In Table 1, we present the statistical and systematic error estimates for $\Omega_{\rm m}$, $\sigma_{8}$, $w_{0}$ and $w_{a}$, for each of our dark energy priors. The first two columns present the estimates for each value with and without the lensing data, marginalized over all other parameters, and quoting only the statistical errors. The next four columns show the estimated systematic error due to each effect listed above. The final column includes the total uncertainty with the systematic errors added linearly with the statistical uncertainty. Note that we make no attempt to estimate the systematic errors present in the CMB or SN data. It is apparent that the systematic uncertainties in our survey are smaller than the statistical uncertainties for each of the above cosmological parameters.
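The bookkeeping just described fits in a few lines. In the sketch below, a statistical 95% interval is widened by the per-systematic shifts added linearly; only the 0.910 to 0.929 calibration example comes from the text, and all other numbers are placeholders:

```python
def total_interval(stat_lo, stat_hi, sys_shifts):
    """Widen a statistical 95% confidence interval by one-sided systematic
    shifts added linearly (the conservative choice in the text), not in
    quadrature. sys_shifts holds (lower, upper) error pairs, each >= 0."""
    lo = stat_lo - sum(s[0] for s in sys_shifts)
    hi = stat_hi + sum(s[1] for s in sys_shifts)
    return lo, hi

# Worked example from the text: the +5% calibration (CAL) test moved the
# upper 95% limit on sigma_8 from 0.910 to 0.929, an upper systematic of 0.019.
cal_hi = 0.929 - 0.910

# Placeholder shifts for the remaining systematics (B, Z, NL), and a
# placeholder statistical interval, purely to show the mechanics:
systematics = [(0.02, cal_hi), (0.01, 0.01), (0.005, 0.005), (0.001, 0.001)]
lo, hi = total_interval(0.71, 0.91, systematics)
```

Linear addition gives a strictly wider interval than addition in quadrature whenever more than one systematic is non-zero, which is the sense in which the paper calls this choice conservative.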
However, there is still significant room for improvement in all of these sources of systematic errors. Future lensing surveys which expect to reduce the statistical uncertainties by a factor of order 10 will need to address these systematics so that they do not dominate the final error budget. And while the systematic errors due to the non-linear modelling are essentially completely negligible for our survey, they will be more significant for lensing surveys with smaller fields than ours. 5. Discussion We have used our measurement of the shear two-point correlations to constrain the clustering of mass at redshifts $z\sim 0.3$ and the density and equation of state of dark energy. This has been done by combining the lensing information with the CMB and supernovae data. The three probes are sufficiently complementary that the joint constraints are significantly better than from any one or two methods. We find that the primary systematic effects on our lensing data are the redshift distribution of the galaxies, the overall calibration of the shear estimates, and the systematic error due to the coherent PSF anisotropy. With our new analysis, the total effect of these three systematics is smaller than the statistical errors. The PCA technique has substantially reduced the contribution of the $B$-mode, which used to be the dominant systematic error (Jarvis et al., 2003). We are continuing to work on methods to reduce this and the calibration uncertainty. Improving the redshift distribution would require more data: either a larger redshift survey of similar depth as our data, or imaging the galaxies in three or four other filters to measure photometric redshifts. As discussed below, this second option would also allow us to bin the galaxies and make tomographic measurements. We have necessarily made some choices of priors and datasets in our parameter analysis.
The datasets we have used in addition to the lensing data are the WMAP first year extended data (Verde et al., 2003) (which also include CBI and ACBAR data) and the Riess et al. (2004) Type Ia supernova data. We have assumed that the universe is spatially flat; weakening this assumption significantly weakens constraints on dark energy, especially if $w$ is allowed to vary in time. We have also assumed no tensor contribution to the CMB power spectrum, that the primordial power spectrum is an exact power law (no running), and we have neglected the effects of massive neutrinos on the power spectrum. Current upper limits from cosmology (see e.g. Seljak et al., 2005) are below 1 electron volt for the sum of neutrino masses. Allowing for massive neutrinos could lead to (at most) a few percent increase in our estimated $\sigma_{8}$, as the presence of massive neutrinos suppresses the power spectrum on scales that affect our observed shear correlations, but would not lead to interesting constraints on the neutrino mass. Combining our data with CMB and SN data, our investigation of dark energy models shows no evidence for the dark energy being different from a standard cosmological constant. Constant $w$ models are consistent with $w=-1$, and variable $w$ models are consistent with $w_{a}=0$. The constraints on dark energy from weak lensing come from a range of redshifts centered at $z\sim 0.3$, and extending by about $0.2$ in redshift on either side. Thus the measurements of its density and of $w$ should be interpreted with this redshift range in mind. When we combine our data with supernova data, we are using information from different redshifts, and the combined data have a pivot redshift of about $0.4$, where $w$ is best measured. At this redshift, we also find that $w$ is consistent with $-1$. 
Our analysis can be compared with other recent work that combines CMB and supernova data with galaxy clustering, the abundance of galaxy clusters, the clustering of the Lyman alpha forest and other probes (Spergel et al., 2003; Weller & Lewis, 2003; Tegmark et al., 2004; Wang & Tegmark, 2004; Seljak et al., 2005; Rapetti et al., 2005). It is a powerful consistency check that these different methods appear to agree in their conclusions. It is interesting to compare the different redshift ranges probed by these methods, and explore constraints on the time dependence of the equation of state (e.g. Linder, 2003; Huterer & Cooray, 2005). The prospects for constraining dark energy with future lensing surveys are very interesting. With a well-designed survey and recent analysis techniques, systematic errors can be expected to stay at levels comparable to the statistical errors. The addition of tomographic information with photometric redshifts would allow for significantly better constraints on cosmological parameters from lensing alone (Hu, 1999). The use of three-point correlations would allow for some independent checks on systematics as well as improved constraints on cosmological parameters (see Pen et al. 2003; Jarvis, Bernstein & Jain 2004 for detections of the lensing skewness, and Takada & Jain 2004 for forecasts). Thus even with a survey of size similar to ours, significant improvements in parameter measurements are possible. With future surveys that will cover a significant fraction of the sky, weak lensing should allow for very precise measurements of the mass power spectrum, the dark energy density and its evolution. We thank Wayne Hu, Eric Linder, Masahiro Takada and Martin White for helpful discussions. We are grateful to Licia Verde and the WMAP team for making available their Markov chains and to Adam Riess, John Tonry and the High-z Supernova team for making available their data and likelihood code. 
We also thank the anonymous referee for useful comments. This work is supported in part by NASA grant NAG5-10924, NSF grant AST-0236702, and a Keck foundation grant. References Alam et al. (2004) Alam, U., Sahni, V., & Starobinsky, A. A. 2004, Journal of Cosmology and Astro-Particle Physics, 6, 8 Bacon et al. (2000) Bacon, D., Refregier, A., Clowe, D., & Ellis, R. 2000, MNRAS, 318, 625 Bacon et al. (2003) Bacon, D., Massey, R., Refregier, A., & Ellis, R. 2003, MNRAS, 344, 673 Bardeen et al. (1986) Bardeen, J. M., Bond, J. R., Kaiser, N., & Szalay, A. S. 1986, ApJ, 304, 15 Bernstein & Jarvis (2002) Bernstein, G. & Jarvis, M. 2002, AJ, 123, 583 Bridle et al. (2003) Bridle, S., Lahav, O., Ostriker, J., & Steinhardt, P. 2003, Science, 299, 1532 Brown et al. (2003) Brown, M., Taylor, A., Bacon, D., Gray, M., Dye, S., Meisenheimer, K., & Wolf, C. 2003, MNRAS, 341, 100 Caldwell et al. (2003) Caldwell, R. R., Doran, M., Müller, C. M., Schäfer, G., & Wetterich, C. 2003, ApJ, 591, L75 Caldwell & Doran (2004) Caldwell, R. & Doran, M. 2004, Phys. Rev. D69, 103517 Chevallier & Polarski (2001) Chevallier, M. & Polarski, D. 2001, Int. J. Mod. Phys., D10, 213 Cohen et al. (2000) Cohen, J. G., Hogg, D. W., Blandford, R., Cowie, L. L., Hu, E., Songaila, A., Shopbell, P., & Richberg, K. 2000, ApJ, 538, 29 Crittenden et al. (2002) Crittenden, R., Natarajan, P, Pen, U., & Theuns, T. 2002, ApJ, 568, 20 Hamana et al. (2003) Hamana, T., et al. 2003, ApJ, 597, 98 Heymans et al. (2005) Heymans, C., et al. 2005, MNRAS, 361, 160 Heymans et al. (2005) Heymans, C., et al. 2005, MNRAS, accepted, astro-ph/0506112 Hirata & Seljak (2003) Hirata, C. & Seljak, U. 2003, MNRAS, 343, 459 Hoekstra et al. (2002) Hoekstra, H., Yee, H. K. C., & Gladders, M. D. 2002, ApJ, 577, 595 Hu (1999) Hu, W. 1999, ApJ, 522, L21 Hu & Tegmark (1999) Hu, W. & Tegmark, M. 1999, ApJ, 514, L65 Huterer & Turner (2001) Huterer, D., & Turner, M. S. 2001, Phys. Rev. D, 64, 123527 Hu (2002) Hu, W. 2002, Phys. Rev. 
D66, 063506 Hu & Jain (2004) Hu, W. & Jain, B. 2004, Phys. Rev. D70, 043009 Huterer & Cooray (2005) Huterer, D., & Cooray, A., 2005, Phys. Rev. D71, 023506 Jarvis & Jain (2005) Jarvis, M. & Jain, B., 2005, ApJ, submitted, astro-ph/0412234 Jarvis et al. (2003) Jarvis, M., Bernstein, G., Fischer, P., Smith, D., Jain, B., Tyson, J. A., & Wittman, D. 2003, AJ, 125, 1014 Jarvis, Bernstein & Jain (2004) Jarvis, M., Bernstein, G. & Jain, B. 2004, MNRAS, 352, 338 Jassal et al. (2005) Jassal, H. K., Bagla, J. S., & Padmanabhan, T. 2005, MNRAS, 356, L11 Kaiser et al. (2000) Kaiser, N., Wilson, G., & Luppino, G. 2000, astro-ph/0003338 Klypin et al. (2003) Klypin, A., Macciò, A., Mainini, R., & Bonometto, S. 2003, ApJ, 599, 31 Lilly et al. (1995) Lilly, S., Le Fèvre, O., Crampton, D., Hammer, F., & Tresse, L. 1995, ApJ, 455, 50 Linder & Jenkins (2003) Linder, E. & Jenkins, A. 2003, MNRAS, 346, 573 Linder (2003) Linder, E., 2003, Phys. Rev. Lett., 90, 091301 Mainini et al. (2003) Mainini, R., Macciò, A., Bonometto, S., & Klypin, A. 2003, ApJ, 599, 24 Peacock & Dodds (1996) Peacock, J. & Dodds, S. 1996, MNRAS, 280, L19 Pen et al. (2002) Pen, U., van Waerbeke, L., & Mellier, Y. 2002, ApJ, 567, 31 Pen et al. (2003) Pen, U., Zhang, T., van Waerbeke, L., Mellier, Y., Zhang, P., & Dubinski, J. 2003, ApJ, 592, 664 Perlmutter et al. (1999) Perlmutter, S., et al. 1999, ApJ, 517, 565 Rapetti et al. (2005) Rapetti, D., Allen, S. W., & Weller, J. 2005, MNRAS, 360, 555 Refregier et al. (2002) Refregier, A., Rhodes, J., & Groth, E. J. 2002, ApJ, 572, L131 Rhodes, Refregier, & Groth (2000) Rhodes, J., Refregier, A., & Groth, E. J. 2000, ApJ, 536, 79 Riess et al. (1998) Riess, A., et al., 1998, AJ, 116, 1009 Riess et al. (2004) Riess, A. G., et al. 2004, ApJ, 607, 665 Saini et al. (2004) Saini, T., Weller, J., & Bridle, S. 2004, MNRAS, 348, 603 Schneider et al. (1998) Schneider, P., van Waerbeke, L., Jain, B., & Kruse, G. 1998, MNRAS, 296, 873 Schneider et al. 
(2002) Schneider, P., van Waerbeke, L., & Mellier, Y. 2002, A&A, 389, 729 Seljak et al. (2005) Seljak, U., et al. 2005, Phys. Rev. D, 71, 103515 Simon, Verde, & Jimenez (2005) Simon, J., Verde, L., & Jimenez, R. 2005, Phys. Rev. D, 71, 123001 Smith et al. (2003) Smith, R. E., et al. 2003, MNRAS, 341, 1311 Spergel et al. (2003) Spergel, D. N., et al. 2003, ApJS, 148, 175 Takada & Jain (2004) Takada, M. & Jain, B. 2004, MNRAS, 348, 897 Tegmark et al. (2004) Tegmark, M., et al. 2004, Phys. Rev. D, 69, 103501 Tonry et al. (2003) Tonry, J. L., et al. 2003, ApJ, 594, 1 Van Waerbeke et al. (2000) Van Waerbeke, L., et al. 2000, A&A, 358, 30 Van Waerbeke et al. (2005) Van Waerbeke, L., Mellier, Y., & Hoekstra, H. 2005, A&A, 429, 75 Verde et al. (2003) Verde, L., et al. 2003, ApJS, 148, 195 Wang & Tegmark (2004) Wang, Y. & Tegmark, M. 2004, Physical Review Letters, 92, 241302 Weller & Albrecht (2002) Weller, J., & Albrecht, A. 2002, Phys. Rev. D, 65, 103512 Weller & Lewis (2003) Weller, J. & Lewis, A. 2003, MNRAS, 346, 987 Wittman et al. (2000) Wittman, D., Tyson, J. A., Kirkman, D., Dell’Antonio, I., & Bernstein, G. 2000, Nature, 405, 143
3-manifolds and Vafa-Witten theory Sergei Gukov California Institute of Technology, Pasadena, CA, 91125. E-mail address: gukov@theory.caltech.edu ,  Artan Sheshmani Harvard University, Jefferson Laboratory, 17 Oxford St, Cambridge, MA 02138. E-mail address: artan@mit.edu  and  Shing-Tung Yau Yau Mathematical Science Center, Tsinghua University, Beijing, 10084, China. E-mail address: yau@math.harvard.edu (Date: July 12, 2022) Abstract. We initiate explicit computations of Vafa-Witten invariants of 3-manifolds, analogous to Floer groups in the context of Donaldson theory. In particular, we explicitly compute the Vafa-Witten invariants of 3-manifolds in a family of concrete examples relevant to various surgery operations (the Gluck twist, knot surgeries, log-transforms). We also describe the structural properties that are expected to hold for general 3-manifolds, including the modular group action, relation to Floer homology, infinite-dimensionality for an arbitrary 3-manifold, and the absence of instantons. S. G. was supported by the National Science Foundation under Grant No. NSF DMS 1664227 and by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award No. DE-SC0011632. Research of A. S. was partially supported by the NSF DMS-1607871, NSF DMS-1306313, the Simons 38558, and HSE University Basic Research Program. A. S. would like to sincerely thank the Center for Mathematical Sciences and Applications at Harvard University, the Harvard University Physics department, IMSA University of Miami, as well as the Laboratory of Mirror Symmetry in Higher School of Economics, Russian Federation, for the great help and support. The work of S.-T. Y. was partially supported by the Simons Foundation grant 38558. 
Contents 1 Introduction 2 Predictions from MTC$[M_{3}]$ and the equivariant Verlinde formula 2.1 Symmetries and gradings 2.2 Cohomology of $\mathcal{E}$-valued Higgs bundles 2.3 Derivation of (b) and (c) 3 Q-cohomology 3.1 Comparison to Floer homology 3.2 Differentials and spectral sequences 4 Applications and future directions A Supersymmetry algebra 1. Introduction The main goal of this paper is to compute and study invariants of 3-manifolds in Vafa-Witten theory [VW94], which is a particular generalization of the Donaldson gauge theory [Don83]. The latter involves the study of moduli spaces of solutions to the anti-self-duality equations (1.1) $$F_{A}^{+}\;=\;0$$ for the gauge connection $A$ over a 4-manifold $M_{4}$. When the 4-manifold is of the form (illustrated in Figure 1) (1.2) $$M_{4}\;=\;\mathbb{R}\times M_{3}$$ one can construct an infinite-dimensional version of Morse theory on the space of gauge connections on $M_{3}$, called the instanton Floer homology [Flo88]. Figure 1. The setup of a Floer theory. In physics, it represents the space of states of a 4-dimensional topological gauge theory on $M_{3}$. In particular, the Floer homology is the homology of a chain complex generated by $\mathbb{R}$-invariant (“stationary”) solutions to the PDEs (1.1), with the differential that comes from non-trivial $\mathbb{R}$-dependent “instanton” solutions on $\mathbb{R}\times M_{3}$. For an introduction to Floer theory we recommend the excellent book [Don02]. From the physics perspective, instanton Floer homology of a 3-manifold $M_{3}$ is the space of states in a Hamiltonian quantization of the topological Yang-Mills theory on $\mathbb{R}\times M_{3}$ [Wit88]. 
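A standard way to make this Morse-theory picture concrete (implicit in [Flo88], though not spelled out above) is to take the Chern-Simons functional as the Morse function on the space of connections on $M_{3}$; its downward gradient flow, written in the temporal gauge, reproduces the anti-self-duality equations (1.1) on $\mathbb{R}\times M_{3}$:

```latex
CS(A)\;=\;\frac{1}{8\pi^{2}}\int_{M_{3}}\mathrm{Tr}\Big(A\wedge dA+\tfrac{2}{3}\,A\wedge A\wedge A\Big)\,,
\qquad
\frac{\partial A}{\partial s}\;=\;-\ast_{3}F_{A}\,,
```

whose critical points are the flat connections on $M_{3}$, the generators of the Floer chain complex.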
Since then, many variants of Floer homology have been studied, most notably the monopole Floer homology [KM07] based on Seiberg-Witten monopole equations: (1.3) $$\displaystyle F_{A}^{+}$$ $$\displaystyle=$$ $$\displaystyle\Psi\otimes\overline{\Psi}-\frac{1}{2}(\overline{\Psi}\Psi){\rm Id},$$ $$\displaystyle D\!\!\!\!/\,\Psi$$ $$\displaystyle=$$ $$\displaystyle 0$$ where, in addition to the $U(1)$ gauge connection $A$, the configuration space (the “space of fields”) includes a section $\Psi\in\Gamma(M_{4},W^{+})$ of a complex spinor bundle $W^{+}$. The monopole Floer homology $HM(M_{3},\mathfrak{s})$ depends on a choice of additional data, namely the spin${}^{c}$ structure $\mathfrak{s}\in\operatorname{Spin}^{c}(M_{3})$, and is equivalent to the Heegaard Floer homology $HF(M_{3},\mathfrak{s})$ [OS04a] and to the embedded contact homology [Hut10, Tau09]. In fact, technical details in each of these theories lead to four different variants, which correspondingly match. Figure 2. The space $\mathcal{T}^{+}$ is isomorphic to the Hilbert space of a quantum harmonic oscillator. The variant that will be most relevant to us in what follows is the so-called “to” version of the monopole Floer homology, $\widecheck{HM}(M_{3},\mathfrak{s})$, and the corresponding “plus” version of the Heegaard Floer homology, denoted $HF^{+}(M_{3},\mathfrak{s})$. The equivalence between these Floer theories will be useful to us because the monopole Floer homology will be conceptually closer to its analogue in Vafa-Witten theory, whereas concrete calculations are usually simpler in the Heegaard Floer homology. In particular, before we turn to PDEs in Vafa-Witten theory, let us briefly mention a few concrete results in the Heegaard Floer homology which, on the one hand, will serve as a prototype in our study of Vafa-Witten invariants of 3-manifolds and, on the other hand, illustrate the general structure of Floer homology in Yang-Mills theory (1.1) and in Seiberg-Witten theory (1.3). 
Theorem 1 (based on [OS04b, Theorem 9.3]). Let $L(p,1)$ denote a Lens space and $\Sigma_{g}$ be a closed oriented surface of genus $g$. Then, (1.4a) $$\displaystyle HF^{+}(L(p,1),\mathfrak{s})$$ $$\displaystyle=$$ $$\displaystyle\mathcal{T}^{+}_{0},\qquad\qquad\forall\mathfrak{s}$$ (1.4b) $$\displaystyle HF^{+}(S^{2}\times S^{1},\mathfrak{s})$$ $$\displaystyle=$$ $$\displaystyle\mathcal{T}^{+}_{-1/2}\oplus\mathcal{T}^{+}_{1/2},\qquad\mathfrak{s}=\mathfrak{s}_{0}$$ (1.4c) $$\displaystyle HF^{+}(\Sigma_{g}\times S^{1},\mathfrak{s}_{h})$$ $$\displaystyle=$$ $$\displaystyle\bigoplus_{i=0}^{d}\Lambda^{i}H^{1}(\Sigma_{g};\mathbb{Z})\otimes\mathcal{T}^{+}_{0}/(U^{i-d-1}),\qquad h\neq 0$$ where $d=g-1-|h|$ and $\mathfrak{s}_{h}$ is the spin${}^{c}$ structure with $c_{1}(\mathfrak{s}_{h})=2h[S^{1}]$. This list of basic but important calculations in the Heegaard Floer homology illustrates well a key ingredient that plays a central role in this theory, namely the space (1.5) $$\mathcal{T}^{+}=\mathbb{C}[U,U^{-1}]/U\cdot\mathbb{C}[U]\cong H^{*}_{U(1)}(\text{pt})=H^{*}(\mathbb{C}\mathbf{P}^{\infty})\cong\mathbb{C}[u]$$ From the physics perspective [GPV17], this space can be understood as the Fock space of a single boson, $\mathcal{T}^{+}\cong\text{Sym}^{*}(\phi)$, illustrated in Figure 2. When the minimal degree (of the “ground state”) is equal to $n$ we write $\mathcal{T}^{+}_{n}$. It is often convenient to work with the corresponding Poincaré polynomial, $\frac{t^{n}}{1-t^{2}}$, where we introduced a new variable $t$ and took into consideration that in standard conventions $U^{-1}$ carries homological degree $2$. The same variable $t$, with the same meaning, will be used in the context of Vafa-Witten theory, to which we turn next. 
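In these conventions there is one generator $U^{-k}$, $k\geq 0$, in homological degree $n+2k$, so the quoted Poincaré polynomial is simply the sum of a geometric series:

```latex
\operatorname{gr-dim}\mathcal{T}^{+}_{n}
\;=\;t^{n}+t^{n+2}+t^{n+4}+\cdots
\;=\;\frac{t^{n}}{1-t^{2}}\,.
```

For example, $\operatorname{gr-dim}\mathcal{T}^{+}_{0}=1+t^{2}+t^{4}+\cdots$, one state in every even degree.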
Our main goal is to set up a concrete framework that allows one to compute analogous invariants of 3-manifolds in Vafa-Witten theory, where the relevant PDEs generalize (1.1) and (1.3): (1.6) $$\begin{aligned} F_{A}^{+}-\frac{1}{2}[B\times B]+[C,B]&=0\\ d_{A}^{*}B-d_{A}C&=0\end{aligned}\qquad\text{where}\qquad\begin{aligned} A&\in\mathcal{A}_{P}\\ B&\in\Omega^{2,+}(M_{4};\text{ad}_{P})\\ C&\in\Omega^{0}(M_{4};\text{ad}_{P})\end{aligned}$$ As explained in the main text, these equations have a number of parallels and relations to (1.1) and (1.3). This will be the basis for various structural properties of the Floer homology groups in Vafa-Witten theory which, following [GPV17], we denote by $\mathcal{H}_{\mathrm{VW}}(M_{3})$, cf. (3.2). In particular, reflecting the fact that the configuration space in (1.6) is much larger than in (1.1) and (1.3), we will see that $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is also much larger, in particular, compared to $HF^{+}(M_{3})$. Also, many challenges that one encounters in constructing 3-manifold invariants based on (1.1) and (1.3) will show up in the Vafa-Witten theory as well. Our main result is a concrete framework that allows computation of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for many simple 3-manifolds. In particular, we produce a suitable analogue of Theorem 1 in Vafa-Witten theory which, due to the large size of $\mathcal{H}_{\mathrm{VW}}(M_{3})$, we state here only at the level of the Poincaré series, relegating the full description of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ to the main text. Proposition 2. 
For $G=SU(2)$: (1.7a) $$\displaystyle\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(S^{3})$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{1-t^{2}}$$ (1.7b) $$\displaystyle\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})$$ $$\displaystyle=$$ $$\displaystyle\frac{2t^{3/2}\left(tx^{4}+1\right)}{\left(1-t^{2}\right)\left(1-t^{2}x^{4}\right)}$$ (1.7c) $$\displaystyle\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(\Sigma_{g}\times S^{1})$$ $$\displaystyle=$$ $$\displaystyle\sum_{\lambda=0}^{9}S_{0\lambda}^{2-2g}$$ where $t$ has the same meaning as in (1) and the explicit values of $S_{0\lambda}$ are summarized in (2.19). We present two approaches to these results, as part of a more general framework for computing $\mathcal{H}_{\mathrm{VW}}(M_{3})$. First, in section 2 we carefully analyze gradings on $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for general 3-manifolds as well as for circle bundles over $\Sigma_{g}$. In the latter case, the relevant moduli spaces turn out to be closely related to the moduli spaces of complex $G_{\mathbb{C}}$ connections on $\Sigma_{g}$, whose non-compactness can be compensated by additional symmetries, as in the equivariant Verlinde formula [GP17]. Then, in section 3 we reproduce the same results by a direct computation of $Q$-cohomology (a.k.a. the BRST cohomology) in the Vafa-Witten theory. Apart from analyzing the main statement of Proposition 2 via several methods, in this paper we also present evidence for a number of conjectures — Conjectures 9, 12, and 13 — all of which are concrete falsifiable statements. Part of our motivation is that future efforts to either prove or disprove these conjectures can lead to better understanding of the Vafa-Witten theory on 3-manifolds. Another motivation for this work is to develop surgery formulae in Vafa-Witten theory, analogous to those in Yang-Mills theory [Flo90, BD95] and in Seiberg-Witten theory [MMS97, FS98, KMOS07]. 
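As a quick sanity check (ours, not from the paper), one can expand (1.7a) as a formal power series: $1/(1-t^{2})$ has one state in every even degree, exactly the graded dimensions of $\mathcal{T}^{+}_{0}$ from (1.5), mirroring the Heegaard Floer result $HF^{+}(L(p,1),\mathfrak{s})=\mathcal{T}^{+}_{0}$ in (1.4a).

```python
# Coefficients of t^n / (1 - t^step) as a formal power series, up to t^order.
# A small illustrative utility, not code from the paper.
def geometric_coeffs(n, step, order):
    coeffs = [0] * (order + 1)
    d = n
    while d <= order:
        coeffs[d] = 1
        d += step
    return coeffs

# (1.7a): gr-dim H_VW(S^3) = 1/(1 - t^2) -> one state in each even degree,
# matching the graded dimensions of T^+_0 from (1.5).
print(geometric_coeffs(0, 2, 8))  # prints [1, 0, 1, 0, 1, 0, 1, 0, 1]
```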
Initial steps in this direction were made in [FG20] where simple instances of cutting and gluing along 3-manifolds were considered in the context of Vafa-Witten theory. One of our goals here is to generalize these recent developments and to bring them closer to the above-mentioned surgery formulae by studying Vafa-Witten invariants of 3-manifolds. In the long run, this could be a strategy for computing Vafa-Witten invariants on general 4-manifolds, beyond the well-studied class of Kähler surfaces. Finally, aiming to make this paper accessible to both communities, we have tried not to overload it with mathematics or physics jargon. Hopefully, we managed to strike the right balance. Acknowledgements The authors would like to thank Tobias Ekholm, Po-Shen Hsin, Martijn Kool, Nikita Nekrasov, Du Pei, and Yuuji Tanaka for useful discussions during recent years related to various aspects of this work. 2. Predictions from MTC$[M_{3}]$ and the equivariant Verlinde formula We start by summarizing the symmetries of Vafa-Witten theory and demonstrating how these symmetries can help to deal with non-compact moduli spaces. 2.1. Symmetries and gradings The holonomy group of a general 4-manifold $M_{4}$ is (2.1) $$SO(4)_{E}\;\cong\;SU(2)_{\ell}\times SU(2)_{r}$$ where we use the subscript “$E$” to distinguish it from other symmetries, which will enter the stage shortly. When $M_{4}$ is of the form (1.2), the holonomy is reduced to (2.2) $$SU(2)_{E}\;=\;\text{diag}[SU(2)_{\ell}\times SU(2)_{r}]$$ And, when $M_{3}=\Sigma_{g}\times S^{1}$, it is further reduced to $SO(2)_{E}\cong U(1)_{E}\subset SU(2)_{E}$. In addition, 4d $\mathcal{N}=4$ super-Yang-Mills theory has “internal” $R$-symmetry $SO(6)_{R}$. In the process of a topological twist, required to define the theory on a general 4-manifold [VW94], this symmetry is broken to a subgroup $SU(2)_{R}\subset SO(6)_{R}$. All fields and states in the topological theory form representations under this group. 
Its Cartan subgroup, which we denote by $U(1)_{t}\subset SU(2)_{R}$, is familiar from the Donaldson-Floer theory and also plays an important role in this paper. In particular, it provides a grading to the Floer homology groups, which are $\mathbb{Z}$-graded in the Vafa-Witten theory. (In the Donaldson-Floer theory, the grading is reduced to $\mathbb{Z}/8\mathbb{Z}$, a manifestation of an anomaly in the physical super-Yang-Mills theory; there is no such anomaly in the Vafa-Witten theory.) We use the variable $t$ to write the corresponding generating series of their graded dimensions: Definition 3. (2.3) $$\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(M_{3})\;:=\;\sum_{n}t^{n}\dim\mathcal{H}_{\mathrm{VW}}^{n}(M_{3})$$ When the 4-manifold $M_{4}$ is not generic, i.e. has reduced holonomy, a smaller subgroup of the original $R$-symmetry $SO(6)_{R}$ is used in the topological twist and, as a result, a larger part of this symmetry may remain unbroken. Specifically, on a 3-manifold Vafa-Witten theory has $SU(2)_{R}\times\overline{SU(2)}_{R}$ symmetry, and on a 2-manifold this internal symmetry is enhanced further to $Spin(4)_{R}\times U(1)_{R}$. All these cases are summarized in Table 1 where, following the notations of [DM97, BT97], we also list the number of unbroken supercharges, $N_{T}$. These numbers are easy to see by noting that the supercharges in Vafa-Witten theory transform as $({\bf 2},{\bf 2},{\bf 2})\oplus({\bf 1},{\bf 3},{\bf 2})\oplus({\bf 1},{\bf 1},{\bf 2})$ under $SU(2)_{\ell}\times SU(2)_{r}\times SU(2)_{R}$ [VW94]. Then, using the second column in Table 1 and counting the number of singlets under the holonomy group gives $N_{T}$ in each case. The symmetry $SU(2)_{\ell}\times SU(2)_{r}\times SU(2)_{R}$ of the Vafa-Witten theory is also useful for keeping track of various fields that appear in PDEs and lead to moduli spaces of solutions. 
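The counting just described can be checked by dimensions alone (a check we add for the reader): the supercharges $({\bf 2},{\bf 2},{\bf 2})\oplus({\bf 1},{\bf 3},{\bf 2})\oplus({\bf 1},{\bf 1},{\bf 2})$ total $8+6+2=16$, the 16 supercharges of 4d $\mathcal{N}=4$ super-Yang-Mills, while the bosonic content in (2.4) below totals $4+3+3=10$, matching a gauge field plus six real scalars.

```python
# Dimension bookkeeping for SU(2)_l x SU(2)_r x SU(2)_R representations.
# Illustrative check of the counting in the text, not code from the paper.
def dim(rep):
    a, b, c = rep
    return a * b * c

supercharges = [(2, 2, 2), (1, 3, 2), (1, 1, 2)]
bosons = [(2, 2, 1), (1, 3, 1), (1, 1, 3)]  # A, B, and C/phi/phibar in (2.4)

assert sum(dim(r) for r in supercharges) == 16  # 16 supercharges of 4d N=4
assert sum(dim(r) for r in bosons) == 10        # gauge field (4) + 6 scalars
```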
Apart from the gauge connection, all ordinary bosonic (as opposed to Grassmann odd) variables originate from scalar fields of the 4d $\mathcal{N}=4$ super-Yang-Mills theory. After the topological twist they transform as (2.4) $$\underbrace{({\bf 2},{\bf 2},{\bf 1})}_{A}\oplus\underbrace{({\bf 1},{\bf 3},{\bf 1})}_{B}\oplus\underbrace{({\bf 1},{\bf 1},{\bf 3})}_{C,\phi,\overline{\phi}}$$ under $SU(2)_{\ell}\times SU(2)_{r}\times SU(2)_{R}$. In particular, we see that, apart from the fields $A$, $B$, and $C$ that already appeared in the PDEs (1.6), the full theory also contains fields $\phi$ and $\overline{\phi}$ that sometimes vanish and, therefore, can be ignored on closed 4-manifolds with a special metric, but will play an important role below, in the computation of the Floer homology groups. Using (2.2) we also see that, upon reduction to three dimensions, the Vafa-Witten theory contains a gauge connection and fields that transform as $({\bf 1},{\bf 1})\oplus({\bf 3},{\bf 1})\oplus({\bf 1},{\bf 3})$ under $SU(2)_{E}\times SU(2)_{R}$, reproducing one of the results in [BT97]. In other words, the theory contains two 1-forms, which naturally combine into a complexified gauge connection, so that the space of solutions on a 3-manifold contains the space of complex flat connections, $\mathcal{M}_{\text{flat}}(M_{3},G_{\mathbb{C}})$. This also holds true for another twist of 4d $\mathcal{N}=4$ super-Yang-Mills [Mar95], which under reduction to three dimensions gives the same theory. If we continue this process further, and reduce the Vafa-Witten theory to a two-dimensional theory on $\Sigma$, from the above it follows that the resulting theory contains two 1-form fields and four complex 0-form Higgs fields, all in the adjoint representation of the gauge group $G$. 
These fields comprise an $\widetilde{\mathcal{E}}$-valued $G$-Higgs bundle on $\Sigma$, (2.5) $$\widetilde{\mathcal{E}}\;=\;L_{1}\oplus L_{2}\oplus L_{3}\oplus L_{4}\,,\qquad L_{i}=K^{R_{i}/2}\qquad(R_{i}\in\mathbb{Z})$$ with $R_{i}=(2,0,0,0)$. As in the classical work of Hitchin [Hit87], one of the Higgs fields here (say, the one associated with $L_{4}$) comes from dimensional reduction of the 4d gauge connection to two dimensions. Therefore, the Vafa-Witten theory on $M_{3}=S^{1}\times\Sigma$ can be viewed as a four-dimensional lift of the $\widetilde{\mathcal{E}}$-valued $G$-Higgs bundles on $\Sigma$, without the last term in (2.5): (2.6) $$\mathcal{E}\;=\;L_{1}\oplus L_{2}\oplus L_{3}\,,\qquad L_{i}=K^{R_{i}/2}\qquad(R_{i}\in\mathbb{Z})$$ where $R_{i}=(2,0,0)$. Put differently, on $M_{3}=S^{1}\times\Sigma$ the fields (2.4) in four-dimensional Vafa-Witten theory consist of a gauge connection and three copies of the Higgs field. Below, we shall refer to this collection of fields as the $\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$ and will be interested in the computation of its K-theoretic (three-dimensional) and elliptic (four-dimensional) equivariant character. (Note that all fields in the Vafa-Witten theory are valued in the adjoint representation of the gauge group $G$, cf. (1.6), and the terminology is such that “$\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$” in the present context describe a version of the Hitchin equations on $\Sigma$, with Higgs fields valued in $L_{1}\otimes\text{ad}_{P}\oplus L_{2}\otimes\text{ad}_{P}\oplus L_{3}\otimes\text{ad}_{P}$.) Remark 4. By definition, the generating series of graded dimensions (2.3) is the Vafa-Witten invariant on $M_{4}=S^{1}\times M_{3}$ with a holonomy for $U(1)_{t}$ symmetry along the $S^{1}$. In other words, the variable $t$ is the holonomy of a background $U(1)_{t}$ connection on the $S^{1}$. (Hopefully, this also clarifies our choice of notations.) 
Note that in a $d$-dimensional TQFT that satisfies Atiyah’s axioms, the relation $Z(S^{1}\times M_{d-1})=\dim\mathcal{H}(M_{d-1})$ is simply one of the axioms. Although the Vafa-Witten theory is not a TQFT in this sense — in part, because $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is infinite-dimensional for any $M_{3}$ — a version of the relation $Z(S^{1}\times M_{3})=\dim\mathcal{H}(M_{3})$ still holds if instead of the ordinary dimension we use the graded dimension (2.3). And, it is also useful to compare this modification of the Vafa-Witten invariant of $S^{1}\times M_{3}$, with the holonomy $t$, to the ordinary Vafa-Witten invariant of $M_{4}=S^{1}\times M_{3}$. Since this 4-manifold is Kähler for many $M_{3}$, we can use the calculation of Vafa-Witten invariants on Kähler surfaces [VW94, DPS97] (see also [TT20, GK20, GSY20, MM21] for recent work and mathematical proofs): (2.7) $$Z_{\mathrm{VW}}(M_{4})=\sum_{\genfrac{}{}{0.0pt}{2}{x:\mathrm{basic}}{\mathrm{classes}}}\mathrm{SW}_{M_{4}}(x)\Big{[}(-1)^{\frac{\chi+\sigma}{4}}\delta_{v,[x^{\prime}]}\left(\frac{G(q^{2})}{4}\right)^{\frac{\chi+\sigma}{8}}\left(\frac{\theta_{0}}{\eta^{2}}\right)^{-2\chi-3\sigma}\left(\frac{\theta_{1}}{\theta_{0}}\right)^{-x^{\prime}\cdot x^{\prime}}\\ +2^{1-b_{1}}(-1)^{[x^{\prime}]\cdot v}\left(\frac{G(q^{1/2})}{4}\right)^{\frac{\chi+\sigma}{8}}\left(\frac{\theta_{0}+\theta_{1}}{2\eta^{2}}\right)^{-2\chi-3\sigma}\left(\frac{\theta_{0}-\theta_{1}}{\theta_{0}+\theta_{1}}\right)^{-x^{\prime}\cdot x^{\prime}}\\ +2^{1-b_{1}}i^{-v^{2}}(-1)^{[x^{\prime}]\cdot v}\left(\frac{G(-q^{1/2})}{4}\right)^{\frac{\chi+\sigma}{8}}\left(\frac{\theta_{0}-i\theta_{1}}{2\eta^{2}}\right)^{-2\chi-3\sigma}\left(\frac{\theta_{0}+i\theta_{1}}{\theta_{0}-i\theta_{1}}\right)^{-x^{\prime}\cdot x^{\prime}}\Big{]}$$ which allows one to express the result in terms of the Seiberg-Witten invariants $\mathrm{SW}_{M_{4}}$ and basic topological invariants, such as the Euler characteristic $\chi$ and the signature $\sigma$. 
Because $\chi=0=\sigma$ for $M_{4}=S^{1}\times M_{3}$, we quickly conclude that the ordinary Vafa-Witten invariant (without holonomy along the $S^{1}$) is simply a number, independent of $q$ or other variables. The calculation of (2) is the analogue of (2.7) in the presence of holonomies along the $S^{1}$. In particular, much like (2.7), it has no $q$-dependence and enjoys a relation to the Seiberg-Witten theory (more precisely, $HF^{+}(M_{3})$, cf. (1)) as we explain below. Remark 5. On Kähler manifolds, the derivation of (2.7) deals with one of the main challenges in the Vafa-Witten theory: the non-compactness of moduli spaces of solutions to PDEs. This issue has many important ramifications and consequences, e.g. it prevents the theory from being a TQFT in the traditional sense. In this section, this issue is addressed with the help of symmetries and the corresponding equivariant parameters, reducing the calculations to compact fixed point sets. Therefore, comparing the results of such calculations, e.g. (2), to the partition functions on $M_{4}=S^{1}\times M_{3}$ without holonomies along the $S^{1}$, such as (2.7), cannot be achieved simply by “turning off” the holonomies, i.e. taking the limit $x\to 1$ and $t\to 1$. In addition, one needs to regularize in some way the contribution of zero-modes (non-compact directions) that make (2) singular in this limit. The difficulty is that, in a non-abelian theory on a general manifold, there is no canonical way to do this because all fields interact with one another. In the case of (2), one could simply multiply this expression by a suitable factor before taking the limit; we will determine the precise factor in Lemma 6 and then present a thorough discussion of the corresponding zero-modes in section 3. 2.2. 
Cohomology of $\mathcal{E}$-valued Higgs bundles The ring of functions on $\mathcal{M}_{\mathrm{flat}}(M_{3},G_{\mathbb{C}})$ and, more generally, cohomology of $\mathcal{E}$-valued Higgs bundles naturally appear in a slightly different but related problem. Here, we briefly outline the connection, in particular because it suggests that we should expect particular structural properties of $\mathcal{H}_{\mathrm{VW}}(M_{3})$, such as the action of the modular group $SL(2,\mathbb{Z})$ and the mapping class group of $M_{3}$: (2.8) $$\mathrm{MCG}(M_{3})\times SL(2,\mathbb{Z})\;\mathrel{\reflectbox{$\righttoleftarrow$}}\;\mathcal{H}_{\mathrm{VW}}(M_{3})$$ as well as relations to the familiar variants of the Floer homology. Starting in six dimensions and reducing on $T^{2}\times M_{3}$, with a partial topological twist along $M_{3}$, we obtain the space of states in quantum mechanics that can be viewed in several equivalent ways [GPV17]. Reducing on $T^{2}$ first (and also taking the limit $\mathrm{vol}(T^{2})\to 0$) we obtain the Vafa-Witten theory on $\mathbb{R}\times M_{3}$ that leads to $\mathcal{H}_{\mathrm{VW}}(M_{3})$. On the other hand, first reducing on $M_{3}$ gives a 3d theory $T[M_{3}]$. Its space of supersymmetric states on $T^{2}$ can also be viewed as the space of states in the 2d A-model $T^{\text{A}}[M_{3}]$, whose target space is essentially $\mathcal{M}_{\mathrm{flat}}(M_{3},G_{\mathbb{C}})$. In this way we obtain a chain of approximate relations (2.9) $$\mathcal{H}_{\mathrm{VW}}(M_{3})\;\simeq\;\mathcal{H}_{T[M_{3}]}(T^{2})\;\simeq\;\mathcal{H}_{T^{\text{A}}[M_{3}]}(S^{1})\;\cong\;QH^{*}(\mathcal{M}_{\text{flat}}(M_{3},G_{\mathbb{C}}))$$ where e.g. in the last relation we only wrote the most interesting part of the moduli space and essentially identified $T[M_{3}]$ with $T^{\text{A}}[M_{3}]$. (This point will be properly addressed in the rest of the paper, where all fields and all moduli will be taken into account. In particular, when $M_{3}=S^{1}\times\Sigma$, incorporating the extra moduli gives precisely the moduli space of $\mathcal{E}$-valued Higgs bundles, and quantum cohomology in (2.9) is replaced by the classical cohomology.)
These two theories are related by a circle compactification, and if the circle has finite size the quantum cohomology in (2.9) should be replaced by the quantum (equivariant) K-theory $QK(\mathcal{M}_{\text{flat}}(M_{3},G_{\mathbb{C}}))$. A similar approximation enters the first relation in (2.9) because $\mathcal{H}_{\mathrm{VW}}(M_{3})$ and $\mathcal{H}_{T[M_{3}]}(T^{2})$ differ by the Kaluza-Klein modes on $T^{2}$. This somewhat delicate point is often overlooked when the 6d theory on a finite-size torus is treated as 4d $\mathcal{N}=4$ super-Yang-Mills. Luckily, the role of these KK modes, which we plan to address more fully elsewhere, is not very important for the aspects of our interest here; in particular, it does not affect the symmetry (2.8). (For example, in the computation of the equivariant character of the moduli space of $\mathcal{E}$-valued Higgs bundles, taking the limit $\mathrm{vol}(S^{1})\to 0$, i.e. replacing $T^{2}$ by $S^{1}$ in the computation of (2.18)–(2.19) has the effect of reducing the set $\{\lambda\}$, so that instead of 10 possible values it contains only 5. However, since the remaining $\lambda$’s have the same values of $S_{0\lambda}$ as the eliminated ones, this changes the calculation of (2.12) only by an overall factor of 2. This is a general property of any theory with matter fields in the adjoint representation of the gauge group $G=SU(2)$, which is certainly the case for the Vafa-Witten theory we are interested in.) It was further proposed in [GPV17] that (2.9) can be categorified into a higher algebraic structure, dubbed MTC$[M_{3}]$, that also controls the BPS line operators and twisted indices in 3d theory $T[M_{3}]$. 
In other words, (2.9) should arise as the Grothendieck ring of the category MTC$[M_{3}]$, which in general may not be unitary or semi-simple: (2.10) $$K^{0}(\text{MTC}[M_{3}]^{ss})\;\cong\;QK(\mathcal{M}_{\text{flat}}(M_{3},G_{\mathbb{C}}))$$ When $G_{\mathbb{C}}$ flat connections on $M_{3}$ are isolated, they are expected to correspond to simple objects in MTC$[M_{3}]$. Moreover, its Grothendieck group is expected to enjoy the action of the modular group $SL(2,\mathbb{Z})=\mathrm{MCG}(T^{2})$, which comes from the symmetry of $T^{2}$ and in the Vafa-Witten theory can be thought of as the $S$-duality. Similarly, the other part of the symmetry in (2.8), namely $\mathrm{MCG}(M_{3})$, comes from the $M_{3}$ part of the background and can be thought of as the duality of the 3d theory $T[M_{3}]$. Although the connections between different theories outlined here will not be used in the rest of the paper, they certainly help to understand the big picture and to see the origin of various structural properties of $\mathcal{H}_{\mathrm{VW}}(M_{3})$. This includes the relation to the generalized cohomology of the moduli spaces and the action of the modular group in (2.8). Curiously, it also suggests a relation to the Heegaard Floer homology of $M_{3}$. Namely, for $G=U(1)$ the space (2.9) can be identified with $\widehat{HF}(M_{3})\otimes\mathbb{C}$, which Kronheimer and Mrowka [KM10] conjectured to be isomorphic to the framed instanton homology, $I^{\#}(M_{3}):=I(M_{3}\#T^{3})$: (2.11) $$I^{\#}(M_{3})\cong\widehat{HF}(M_{3};\mathbb{C})$$ On the other hand, because the space (2.9) with $G=U(1)$ is basically “the Cartan part” of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for $G=SU(2)$, it suggests that the latter should contain the (framed) instanton Floer homology. Notice that, while this argument used relations between various theories, the conclusion can be phrased entirely in the context of Vafa-Witten theory. 
As we shall see later, via direct analysis of the $Q$-cohomology and spectral sequences in the Vafa-Witten theory, this conclusion is on the right track (see e.g. (3.5) and discussion that follows). Figure 3. Basic 2d cobordism that defines a product. 2.3. Derivation of (2b) and (2c) As explained above, for $M_{3}=S^{1}\times\Sigma$ the graded dimension (2.3) is equal to the equivariant Verlinde formula for $\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$, with $\mathcal{E}$ as in (2.6). In general, for $K^{R/2}_{\Sigma}$-valued Higgs fields with any $R$, it has the form of the ordinary Verlinde formula or A-model partition function on $\Sigma$ [GP17] (see also [AGP16, HL16] for further mathematical developments). In other words, it is described by a 2d semisimple TQFT with a finite-dimensional space of states on $S^{1}$ (despite the fact that the space of states in Vafa-Witten theory is infinite-dimensional). Let $\{\lambda\}$ be a basis of states that diagonalizes multiplication of the corresponding Frobenius algebra, called the equivariant Verlinde algebra, associated to a pair-of-pants and illustrated in Figure 3. We denote the corresponding eigenvalues of the structure constants by $(S_{0\lambda})^{-1}$; their values will be determined shortly. Then, (2.12) $$\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(M_{3})\;=\;\sum_{\lambda}S_{0\lambda}^{2-2g}$$ The sum runs over the set of admissible solutions to the Bethe ansatz equation for $\mathbb{T}$-valued variable $z$, i.e. the set of solutions from which those fixed by the Weyl group of $G$ are removed. Just like in Donaldson theory one always starts with the gauge group $G=SU(2)$, in this paper we mainly focus on Vafa-Witten theory with $G=SU(2)$. 
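Since (2.12) is just the genus-sum of a semisimple 2d TQFT, its structure can be illustrated with the most familiar special case mentioned above, the ordinary Verlinde formula: plugging in the $S$-matrix of the $SU(2)$ level-$k$ Verlinde algebra must return non-negative integers (dimensions of spaces of conformal blocks). A minimal sketch (the function name and sample values are ours; the equivariant case below differs only in the values of $S_{0\lambda}$):

```python
import math

def verlinde_dim(k, g):
    """Genus-g partition function of a semisimple 2d TQFT,
    Z(Sigma_g) = sum_l S_{0l}^(2-2g), cf. (2.12), evaluated here with
    the S-matrix of the SU(2) level-k Verlinde algebra."""
    S0 = [math.sqrt(2.0 / (k + 2)) * math.sin(math.pi * (l + 1) / (k + 2))
          for l in range(k + 1)]
    return sum(s ** (2 - 2 * g) for s in S0)

# At g = 1 every term is S^0 = 1, so Z simply counts the k+1 vacua;
# at higher genus the sum still lands on an integer, as it must.
print(round(verlinde_dim(3, 1)))  # 4
print(round(verlinde_dim(1, 2)))  # 4
print(round(verlinde_dim(2, 2)))  # 10
```

In the equivariant Verlinde algebra relevant here, the eigenvalues $S_{0\lambda}$ are instead functions of the equivariant parameters, and integrality of the genus-sum is traded for a power series in those parameters.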
Then, $\mathbb{T}=U(1)$ and we need to solve one Bethe equation for a single variable $z$: (2.13) $$1\;=\;\exp\left(\frac{\partial\widetilde{\mathcal{W}}}{\partial\log z}\right)$$ The values of $(S_{0\lambda})^{-2}=e^{2\pi i\Omega}\frac{\partial^{2}\widetilde{\mathcal{W}}}{(\partial\log z)^{2}}\big{|}_{z_{\lambda}}$ consist of two factors, each evaluated on the solutions to (2.13). One factor is simply the second derivative of the same function $\widetilde{\mathcal{W}}(z)$, called the twisted superpotential, that determines the Bethe equation itself. Both the Bethe equation and the other factor $e^{2\pi i\Omega}$, sometimes called the effective dilaton, are multiplicative in charged matter fields, in fact, in weight spaces of $\mathfrak{g}=\mathrm{Lie}(G)$, whereas $\widetilde{\mathcal{W}}(z)$ is additive. Specifically, a $K^{R/2}_{\Sigma}$-valued Higgs field contributes to the Bethe ansatz equation a factor (2.14) $$L=K^{R/2}_{\Sigma}:\qquad\exp\left(\frac{\partial\widetilde{\mathcal{W}}}{\partial\log z}\right)\;=\;\frac{(t-z^{2})^{2}}{(tz^{2}-1)^{2}}$$ where, as before, $z$ is the equivariant parameter for the gauge symmetry, while $t$ is the analogous equivariant parameter for a $U(1)$ (or $\mathbb{C}^{*}$) symmetry acting on the adjoint Higgs field by phase rotation (and dilation). In particular, it follows that the additive contribution of a $K^{R/2}_{\Sigma}$-valued Higgs field to $\frac{\partial^{2}\widetilde{\mathcal{W}}}{(\partial\log z)^{2}}$ is $\frac{4}{z^{2}t^{-1}-1}-\frac{4}{z^{2}t-1}$. Similarly, a $K^{R/2}_{\Sigma}$-valued Higgs field contributes to $e^{2\pi i\Omega}$ a factor (2.15) $$e^{2\pi i\Omega_{\mathrm{Higgs}}}\;=\;\left(\frac{t^{3/2}z^{2}}{(t-1)(t-z^{2})(tz^{2}-1)}\right)^{R-1}$$ Note that Higgs fields with $R=0$ and $R=2$ produce opposite contributions; this feature will play a role below and can be seen directly in the calculations of one-loop determinants [GP17] that lead to the expressions quoted here. 
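The quoted second-derivative contribution can be checked directly from (2.14); a short symbolic sketch (sympy, dropping the overall normalization of $\widetilde{\mathcal{W}}$, which cancels between the two sides):

```python
import sympy as sp

z, t = sp.symbols('z t', positive=True)

# First derivative of the twisted superpotential, read off from (2.14)
dW = sp.log((t - z**2)**2 / (t*z**2 - 1)**2)

# One more application of z*d/dz gives the second derivative
d2W = z * sp.diff(dW, z)

# The additive contribution claimed in the text
claimed = 4/(z**2/t - 1) - 4/(z**2*t - 1)

assert sp.simplify(d2W - claimed) == 0
print("second-derivative contribution of (2.14) verified")
```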
While the $SU(2)$ gauge field (or, rather, superfield) does not contribute directly to the Bethe ansatz equation (2.13), it contributes to $S_{0\lambda}^{2}$ a factor (2.16) $$e^{-2\pi i\Omega_{\mathrm{gauge}}}\;=\;\frac{1}{z^{2}}-2+z^{2}$$ Now we are ready to put these ingredients together and compute (2.12) for $\mathcal{E}$-valued Higgs bundles on $\Sigma$, with $\mathcal{E}$ in the form (2.6). In particular, corresponding to the three terms in (2.6), there are three factors (2.14) in the Bethe ansatz equation, (2.17) $$1\;=\;\exp\left(\frac{\partial\widetilde{\mathcal{W}}}{\partial\log z}\right)\;=\;\frac{\left(t-z^{2}\right)^{2}\left(x-z^{2}\right)^{2}\left(y-z^{2}\right)^{2}}{\left(tz^{2}-1\right)^{2}\left(xz^{2}-1\right)^{2}\left(yz^{2}-1\right)^{2}}$$ where, in addition to $z$, we introduced three equivariant parameters $(x,y,t)$ associated with each of the terms in (2.6). Equivalently, these are the equivariant parameters for the symmetry $U(1)_{x}\times U(1)_{y}\times U(1)_{t}$ of $\mathcal{E}$-valued Higgs bundles on $\Sigma$; using Table 1 and the discussion around it, this symmetry can be identified with the maximal torus of the group $Spin(4)_{R}\times U(1)_{R}$ in the last row. From that discussion we also know that only the $U(1)_{t}$ subgroup of $U(1)_{x}\times U(1)_{y}\times U(1)_{t}$ admits a lift to the Vafa-Witten theory on a general 4-manifold. In other words, we can use $U(1)_{x}\times U(1)_{y}\times U(1)_{t}$ and the corresponding equivariant parameters for the equivariant Verlinde formula in the case of $M_{3}=S^{1}\times\Sigma$, but need to set $x=1$ and $y=1$ when we work with more general 3-manifolds and 4-manifolds. There are 10 admissible solutions to (2.17), i.e. 10 values $z_{\lambda}$ not fixed by the Weyl group of $G=SU(2)$. Aside from a simple pair of solutions $z=\pm i$ that can be seen with the naked eye, the expressions for $z_{\lambda}$ as functions of $(x,y,t)$ are not very illuminating. 
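The count of solutions is easy to confirm numerically. Clearing denominators in (2.17) gives a degree-12 polynomial in $z$; the two Weyl-fixed points $z=\pm 1$ always solve the equation and are discarded, leaving the 10 admissible roots, among them $z=\pm i$. A sketch at an arbitrarily chosen generic point $(x,y,t)=(2,3,5)$:

```python
import sympy as sp

z = sp.symbols('z')
x, y, t = 2, 3, 5  # arbitrary generic values of the equivariant parameters

# Clear denominators in the Bethe ansatz equation (2.17)
lhs = (t - z**2)**2 * (x - z**2)**2 * (y - z**2)**2
rhs = (t*z**2 - 1)**2 * (x*z**2 - 1)**2 * (y*z**2 - 1)**2
P = sp.Poly(sp.expand(lhs - rhs), z)
assert P.degree() == 12

roots = P.nroots(n=30)
# z = +-1 are fixed by the Weyl group z -> 1/z and are not admissible ...
fixed = [r for r in roots if min(abs(r - 1), abs(r + 1)) < 1e-10]
# ... while the remaining roots are the admissible solutions z_lambda
admissible = [r for r in roots if min(abs(r - 1), abs(r + 1)) >= 1e-10]

print(len(fixed), len(admissible))  # 2 10
assert any(abs(r - sp.I) < 1e-10 for r in admissible)  # z = i is always a solution
```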
In fact, to simplify things further, we often find it convenient to set $x=y$. (Recall that on general manifolds one needs to set $x=y=1$.) Indeed, we should expect a simplification in this limit because the values of $R_{i}$ corresponding to $(x,y,t)$ are $(R_{1},R_{2},R_{3})=(2,0,0)$, and the contributions to $e^{2\pi i\Omega}$ with $R=0$ and $R=2$ cancel each other, as was noted above. Combining all contributions described above, a straightforward but slightly tedious calculation gives $$S_{00}^{2}\;=\;S_{01}^{2}\;=\;\frac{t^{3/2}(x-1)(x+1)^{3}y^{3/2}}{\left(t^{2}-1\right)x^{3/2}\left(y^{2}-1\right)(t(3xy+x+y-1)+x(y-1)-y-3)}$$ (2.18) $$S_{02}^{2}\;=\;S_{03}^{2}\;=\;S_{04}^{2}\;=\;S_{05}^{2}\;=\;\\ \;=\;\frac{t^{3/2}(x-1)^{3}y^{3/2}(tx-1)(xy-1)}{4(t-1)x^{3/2}(y-1)(ty-1)(txy-1)(t(3xy+x+y-1)+x(y-1)-y-3)}$$ $$S_{06}^{2}\;=\;S_{07}^{2}\;=\;S_{08}^{2}\;=\;S_{09}^{2}\;=\;\frac{t^{3/2}\left(x^{2}-1\right)y^{3/2}(tx-1)(xy-1)}{4\left(t^{2}-1\right)x^{3/2}\left(y^{2}-1\right)(ty-1)(txy+1)}$$ for generic $(x,y,t)$. Specializing to $x=y$ we obtain much simpler, easier-to-read expressions: (2.19) $$\begin{array}[]{l}S_{00}^{2}\;=\;S_{01}^{2}\;=\;\frac{t^{3/2}(x+1)}{\left(t^{2}-1\right)(t(3x-1)+x-3)}\\ S_{02}^{2}\;=\;S_{03}^{2}\;=\;S_{04}^{2}\;=\;S_{05}^{2}\;=\;\frac{t^{3/2}(x-1)^{3}}{4(t-1)\left(tx^{2}-1\right)(t(3x-1)+x-3)}\\ S_{06}^{2}\;=\;S_{07}^{2}\;=\;S_{08}^{2}\;=\;S_{09}^{2}\;=\;\frac{t^{3/2}\left(x^{2}-1\right)}{4\left(t^{2}-1\right)\left(tx^{2}+1\right)}\end{array}$$ Evaluating (2.12) for $g=0$ with (2.18) gives (2.20) $$\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})=\\ =\frac{2t^{3/2}(x-1)y^{3/2}\left(x\left(t\left(xy\left(t\left(x^{2}+x+1\right)y-(t+1)x-xy\right)+y+1\right)-x+y-1\right)-1\right)}{\left(t^{2}-1\right)x^{3/2}\left(y^{2}-1\right)(ty-1)\left(t^{2}x^{2}y^{2}-1\right)}=\\ =2t^{3/2}\left(\frac{(x-1)y^{3/2}(x^{2}-xy+x+1)}{x^{3/2}(y-1)(y+1)}+O(t)\right)$$ which turns into a more compact expression (2b) when we specialize to $x=y$. 
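Since the specialization of (2.18) to $x=y$ is straightforward but error-prone, it is worth checking symbolically that it indeed lands on (2.19); a sympy sketch (variable names are ours):

```python
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)

den = t*(3*x*y + x + y - 1) + x*(y - 1) - y - 3   # common factor in (2.18)
R32 = sp.Rational(3, 2)

# The three distinct eigenvalues from (2.18) ...
S00 = t**R32*(x - 1)*(x + 1)**3*y**R32 / ((t**2 - 1)*x**R32*(y**2 - 1)*den)
S02 = t**R32*(x - 1)**3*y**R32*(t*x - 1)*(x*y - 1) / (
      4*(t - 1)*x**R32*(y - 1)*(t*y - 1)*(t*x*y - 1)*den)
S06 = t**R32*(x**2 - 1)*y**R32*(t*x - 1)*(x*y - 1) / (
      4*(t**2 - 1)*x**R32*(y**2 - 1)*(t*y - 1)*(t*x*y + 1))

# ... and their claimed x = y specializations from (2.19)
S00_xy = t**R32*(x + 1) / ((t**2 - 1)*(t*(3*x - 1) + x - 3))
S02_xy = t**R32*(x - 1)**3 / (4*(t - 1)*(t*x**2 - 1)*(t*(3*x - 1) + x - 3))
S06_xy = t**R32*(x**2 - 1) / (4*(t**2 - 1)*(t*x**2 + 1))

pairs = [(S00, S00_xy), (S02, S02_xy), (S06, S06_xy)]
for S, S_xy in pairs:
    assert sp.simplify(S.subs(y, x) / S_xy) == 1
print("(2.18) at y = x reproduces (2.19)")
```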
Similarly, for general $g>0$ eqs. (2.12) and (2.19) lead to the claim in (2c). The derivation of (2a) is much simpler and will be presented shortly in section 3. It is curious to note that (2.20) naively has a pole of order 4 at $x=y=t=1$ (associated with 4 non-compact complex directions in the moduli space), whereas its simplified version (2b) has only an order-2 pole. This happens because of a partial cancellation between the numerator and the denominator in (2.20) and teaches us a useful lesson. Namely, if we were to multiply (2.20) by a factor $(t-1)(y-1)(ty-1)(txy-1)$ that naively cancels the pole at generic $(x,y,t)$, we would get zero after a further specialization to $x=y=t=1$. Continuing along these lines, by a direct calculation it is not difficult to prove a general result: Lemma 6. At $x=1=t$, the asymptotic behavior of $\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}\left(S^{1}\times\Sigma_{g}\right)$ specialized to $y=x$, i.e. that of (2b) and (2c), is given by (2.21) $$\operatorname{gr-dim}\mathcal{H}_{\mathrm{VW}}\left(S^{1}\times\Sigma_{g}\right)\;\sim\;\begin{cases}4\left(8\frac{1-t}{1-x}\right)^{3g-3},&\text{if }\;g>1\\ 10,&\text{if }\;g=1\\ \frac{1}{(1-t)(1-tx^{2})},&\text{if }\;g=0\end{cases}$$ Other limits and specializations of (2.12) can be analyzed in a similar fashion. Remark 7. By construction, the graded dimension (2.12) of the Floer homology in Vafa-Witten theory reduces to the equivariant Verlinde formula for ordinary Higgs bundles with either $R=0$ or $R=2$ in suitable limits: (2.22a) $$R=0:\qquad x\to 0,\quad y\to 0,\quad t=\mathrm{fixed}$$ (2.22b) $$R=2:\qquad x=\mathrm{fixed},\quad y\to 0,\quad t\to 0$$ For example, in the case of genus $g=2$, the latter gives (2.23) $$\frac{16x^{4}+49x^{3}+81x^{2}+75x+35}{(1-x^{2})^{3}}\;=\;35+75x+186x^{2}+274x^{3}+469x^{4}+\ldots$$ up to an overall factor $\left(\frac{x}{yt}\right)^{3/2}$ which has to do with the normalization of Vafa-Witten invariants. 
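The Taylor coefficients quoted in (2.23) can be reproduced with a one-line symbolic expansion (sympy):

```python
import sympy as sp

x = sp.symbols('x')

# The genus-2 expression (2.23), with the overall (x/(y t))^{3/2} factor omitted
f = (16*x**4 + 49*x**3 + 81*x**2 + 75*x + 35) / (1 - x**2)**3

series = sp.series(f, x, 0, 5).removeO()
coeffs = [series.coeff(x, n) for n in range(5)]
print(coeffs)  # [35, 75, 186, 274, 469]
```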
The expansion (2.23) agrees with eq. (1.5) in [GP17] at level $k=4=0+2+2$. Note that although the Higgs fields with $R=0$ effectively disappear in the limit (2.22b), they each leave a trace by shifting the value of $k$ by $+2$. For gauge groups of higher rank, the shift is by the dual Coxeter number, cf. [GP17, sec. 5.1.1]. More generally, for other values of $g$ similar expressions can be obtained by specializing (2.18) to $y=0$ and $t=0$: (2.24) $$\begin{array}[]{l}S_{00}^{2}\;=\;S_{01}^{2}\;=\;\frac{(x-1)(x+1)^{3}}{x+3}\\ S_{02}^{2}\;=\;S_{03}^{2}\;=\;S_{04}^{2}\;=\;S_{05}^{2}\;=\;\frac{(x-1)^{3}}{4(x+3)}\\ S_{06}^{2}\;=\;S_{07}^{2}\;=\;S_{08}^{2}\;=\;S_{09}^{2}\;=\;\tfrac{1}{4}(x^{2}-1)\end{array}$$ where we again omitted the overall factor $\left(\frac{x}{yt}\right)^{3/2}$ related to a choice of normalization. Since the values of $S_{0\lambda}^{2}$ are pairwise equal, there are effectively 5 values of $\lambda$, in agreement with the fact that the equivariant Verlinde formula has $k+1$ Bethe vacua for general $k$. Similarly, the limit (2.22a) can be obtained by specializing (2.19) to $x=0$. Remark 8. So far we have tacitly ignored one important detail, which does not appear for $G=SU(2)$ but would enter the discussion for more general gauge groups. In general, the Vafa-Witten theory on $M_{4}$ requires a choice of a decoration (’t Hooft flux) valued in $H^{2}(M_{4};\Gamma)\cong\mathrm{Hom}\left(H^{2}(M_{4};\mathbb{Z}),\Gamma\right)$, where (2.25) $$\Gamma=\pi_{1}(G)$$ On a 4-manifold of the form $M_{4}=S^{1}\times M_{3}$ this requires a choice of a decoration valued in $H^{2}(M_{3};\Gamma)$, which can be thought of as simply the restriction of the decoration from $M_{4}$, as well as the choice of grading valued in $H^{2}(M_{3};\widehat{\Gamma})$, where (2.26) $$\widehat{\Gamma}=\mathrm{Hom}\left(\Gamma,U(1)\right)$$ is the Pontryagin dual group. 
Therefore, we conclude that, in general, $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is decorated by $H^{2}(M_{3};\Gamma)$ and graded by $H^{2}(M_{3};\widehat{\Gamma})$. We close this section by drawing a general lesson from the preliminary considerations presented here. In the simple infinite family of 3-manifolds considered here, we found that the graded dimension of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is always a power series, rather than a finite polynomial. In fact, already from the preliminary analysis in this section one can see several good reasons why $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is expected to be infinite-dimensional for a general 3-manifold, a conclusion that will be further supported by considerations in section 3. Conjecture 9. $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is infinite-dimensional for any closed 3-manifold $M_{3}$. In light of this Conjecture, the role of the equivariant parameters $(x,y,t)$ — which were central to the considerations of the present section — is to provide a way to regularize the infinity in $\dim\mathcal{H}_{\mathrm{VW}}(M_{3})$. This also illustrates well the challenge of computing $\mathcal{H}_{\mathrm{VW}}(M_{3})$ on more general 3-manifolds: in the absence of suitable symmetries and equivariant parameters, one has to work with the entire $\mathcal{H}_{\mathrm{VW}}(M_{3})$, which is infinite-dimensional. 3. Q-cohomology In this section, we pursue the same goal — to explicitly compute $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for a class of 3-manifolds — via a direct analysis of $Q$-cohomology in Vafa-Witten theory. The results agree with the preliminary considerations in section 2. 
Recall that the off-shell realization [VW94] of Vafa-Witten theory involves the following set of fields\footnote{Here, the subscript indicates the degree of the differential form on a 4-manifold, while the superscript is the $U(1)_{t}$ grading, also known as the “ghost number.”} $$\begin{array}[]{lcl}\hfil~{}~{}\text{{\bf bosons:}}\hfil\\[5.69046pt] \phi^{+2},~{}{\overline{\phi}}^{\,-2},~{}C^{0}&:&\text{scalars (0-forms)}\\ A^{0}_{1},~{}\widetilde{H}^{0}_{1}&:&\text{1-forms}\\ (B^{+}_{2})^{0},~{}(D^{+}_{2})^{0}&:&\text{self-dual 2-forms}\end{array}\qquad\begin{array}[]{lcl}\hfil~{}~{}\text{{\bf fermions:}}\hfil\\[5.69046pt] \zeta^{+1},~{}\eta^{-1}&:&\text{scalars (0-forms)}\\ \psi^{+1}_{1},~{}\widetilde{\chi}_{1}^{-1}&:&\text{1-forms}\\ (\widetilde{\psi}^{+}_{2})^{+1},~{}(\chi^{+}_{2})^{-1}&:&\text{self-dual 2-forms}\end{array}$$ and their $Q$-transformations: (3.1) $$\begin{aligned} QA&=\psi_{1}\\ Q\phi&=0\\ Q\overline{\phi}&=\eta\\ Q\eta&=i[\overline{\phi},\phi]\\ Q\psi_{1}&=d_{A}\phi\\ Q\chi^{+}_{2}&=D^{+}_{2}+s_{2}^{+}\\ QD^{+}_{2}&=i[\chi^{+}_{2},\phi]-Qs_{2}^{+}\end{aligned}\qquad\qquad\begin{aligned} QB^{+}_{2}&=\widetilde{\psi}^{+}_{2}\\ Q\widetilde{\psi}^{+}_{2}&=i[B^{+}_{2},\phi]\\ Q\widetilde{\chi}_{1}&=\widetilde{H}_{1}+s_{1}\\ Q\widetilde{H}_{1}&=i[\widetilde{\chi}_{1},\phi]-Qs_{1}\\ QC&=\zeta\\ Q\zeta&=i[C,\phi]\end{aligned}$$ where $$s_{2}^{+}=F^{+}_{\alpha\beta}+[B^{+}_{\gamma\alpha},B^{+\gamma}_{\beta}]+2i[B^{+}_{\alpha\beta},C]$$ $$s_{1}=D_{\alpha\dot{\alpha}}C+iD_{\beta\dot{\alpha}}{B^{+\beta}}_{\alpha}$$ The on-shell formulation is obtained simply by setting $D^{+}_{\alpha\beta}=0=\widetilde{H}_{\alpha\dot{\alpha}}$. Lemma 10. Up to gauge transformations, $Q^{2}=0$. Proof. This is easily demonstrated by a direct calculation. 
∎ This basic fact about topological gauge theory is the reason one can define the $Q$-cohomology groups (3.2) $$\mathcal{H}_{\mathrm{VW}}(M_{3}):=\frac{\text{ker}\;Q}{\text{im}\;Q}$$ which are the main objects of study in the present paper. Figure 4. The space of local operators is isomorphic to the space of states on a sphere. Just like the fields of Vafa-Witten theory are differential forms of various degrees graded by weights of the $U(1)_{t}$ symmetry, so are the $Q$-cohomology classes. The cohomology classes represented by differential forms of degree 0 are called local operators, i.e. operators supported at points on a 4-manifold. Since in four dimensions the link of a point is a 3-sphere, the space of local operators is naturally isomorphic to $\mathcal{H}_{\mathrm{VW}}(S^{3})$, cf. Figure 4. This relation, often called the state-operator correspondence, has obvious generalizations that will be discussed below. For example, when we talk about $M_{3}=S^{2}\times S^{1}$ we are effectively counting local operators in the three-dimensional theory obtained by dimensional reduction of the 4d theory on a circle. Such 3d local operators come either from local operators in four dimensions or from 4d line operators, i.e. $Q$-cohomology classes supported on lines (a.k.a. 1-observables). We will return to the local operators of such a 3d theory shortly, after discussing the original 4d theory first, thus providing a proof of (2a). Proposition 11. In the notations (1.5) introduced in the Introduction, the space of local observables in Vafa-Witten theory with gauge group $G=SU(2)$, i.e. the space of states on $M_{3}=S^{3}$, is (3.3) $$\mathcal{H}_{\mathrm{VW}}(S^{3})=\mathcal{T}^{+}_{0}\cong\mathbb{C}[u]$$ generated by $u=\mathrm{Tr}\,\phi^{2}$. Moreover, (3.3) transforms trivially under the modular $SL(2,\mathbb{Z})$ action discussed in section 2. Proof. To construct a local operator, we can only use 0-forms. 
We cannot use their exterior derivatives or forms of higher degrees because that would require the metric and the resulting operators would be $Q$-exact. This limits our arsenal to the 0-forms $\phi$, ${\overline{\phi}}$, $C$, $\zeta$, and $\eta$. In addition, all observables (not only local) must be gauge-invariant. Since all fields in Vafa-Witten theory transform in the adjoint representation of the gauge group, this means we need to consider traces of polynomials in $\phi$, ${\overline{\phi}}$, $C$, $\zeta$, and $\eta$. Inspecting the $Q$-action (3.1) on these fields leads to $u=\mathrm{Tr}\,\phi^{2}$ as the only independent gauge-invariant local observable, as also found e.g. in [Loz99]. Polynomials in $u$ also represent $Q$-cohomology classes, of course. This leads to (3.3). ∎ In higher rank, $\mathcal{H}_{\mathrm{VW}}(S^{3})$ is spanned by invariant polynomials of $\phi$. In the opposite direction, it is also instructive to consider a version of Proposition 11 in Vafa-Witten theory with gauge group $G=U(1)$. The arguments are similar, but the $Q$-action on fields is much simpler than (3.1). Namely, in the abelian theory commutators vanish and $d_{A}$ becomes the ordinary exterior derivative: (3.4) $$\begin{aligned} QA&=\psi_{1}\\ Q\phi&=0\\ Q\overline{\phi}&=\eta\\ Q\eta&=0\\ Q\psi_{1}&=d\phi\\ Q\chi^{+}_{2}&=D^{+}_{2}+s_{2}^{+}\\ QD^{+}_{2}&=0\end{aligned}\qquad\qquad\begin{aligned} QB^{+}_{2}&=\widetilde{\psi}^{+}_{2}\\ Q\widetilde{\psi}^{+}_{2}&=0\\ Q\widetilde{\chi}_{1}&=\widetilde{H}_{1}+s_{1}\\ Q\widetilde{H}_{1}&=0\\ QC&=\zeta\\ Q\zeta&=0\end{aligned}$$ We quickly learn that in Vafa-Witten theory with $G=U(1)$ one also has $\mathcal{H}_{\mathrm{VW}}(S^{3})=\mathcal{T}^{+}_{0}$, just as (3.3) in the theory with $G=SU(2)$. Moreover, both of these agree with the Floer homology of $S^{3}$ and with the Heegaard Floer homology of $S^{3}$, cf. (1a). 
This is not a coincidence and is a good illustration of a deeper set of relations between $\mathcal{H}_{\mathrm{VW}}(M_{3})$ and the instanton Floer homology of the same 3-manifold that will be discussed further below. These parallels with the instanton Floer homology will help us to better understand $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for $M_{3}=\Sigma_{g}\times S^{1}$. 3.1. Comparison to Floer homology The first column in (3.1) is precisely the $Q$-cohomology in Donaldson-Witten theory, with the same action of $Q$. This suggests that we should consider $\mathcal{H}_{\mathrm{VW}}(M_{3})$ and $HF(M_{3})$ in parallel, anticipating a general relation of the form (3.5) $$HF(M_{3})\subseteq\mathcal{H}_{\mathrm{VW}}(M_{3})$$ where the reduction of grading mod 8 is understood on the right-hand side. Clarifying the role of such a relation, and understanding under what conditions it should be expected, is one of the motivations in the discussion below. It will help us to understand better the structure of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for general $M_{3}$ and for $M_{3}=\Sigma_{g}\times S^{1}$ in particular. Recall that the Floer homology $HF(\Sigma_{g}\times S^{1})$ is isomorphic to the quantum cohomology of $\mathrm{Bun}_{G_{C}}$, the space of holomorphic bundles on $\Sigma_{g}$ (see e.g. [Mn99]): (3.6) $$HF^{*}(\Sigma_{g}\times S^{1})\;\cong\;QH^{*}(\mathrm{Bun}_{G_{C}})$$ which, in turn, is isomorphic to the space of flat $G$-connections, $\mathrm{Bun}_{G_{C}}\cong\mathcal{M}_{\text{flat}}(\Sigma_{g},G)$, according to the celebrated theorem of Narasimhan and Seshadri. More precisely, here by $\mathrm{Bun}_{G_{C}}$ we mean the space of bundles with fixed determinant, which is simply-connected for any $g$, and $\mathrm{Bun}_{G_{C}}=\mathrm{pt}$ when $g=1$. 
The Poincaré polynomial of $\mathrm{Bun}_{G_{C}}(\Sigma_{g})$ is given by the Harder-Narasimhan formula: (3.7) $$P(\mathrm{Bun}_{SL(2)}(\Sigma_{g}))=\frac{(1+t^{3})^{2g}-t^{2g}(1+t)^{2g}}{(1-t^{2})(1-t^{4})}$$ If we wish to work with all bundles (rather than bundles with fixed determinant), which in gauge theory language corresponds to replacing $G=SU(N)$ by $G=U(N)$, then (3.8) $$H^{*}(\mathrm{Bun}_{GL(N)}(\Sigma_{g}))\cong H^{*}(T^{2g})\otimes H^{*}(\mathrm{Bun}_{SL(N)}(\Sigma_{g}))$$ Returning to the quantum cohomology (3.6) and the corresponding Floer homology, it has the following generators: $$\displaystyle\alpha\in HF^{2}(\Sigma_{g}\times S^{1})$$ (3.9) $$\displaystyle\psi_{i}\in HF^{3}(\Sigma_{g}\times S^{1})$$ $$\displaystyle\quad 1\leq i\leq 2g$$ $$\displaystyle\beta\in HF^{4}(\Sigma_{g}\times S^{1})$$ The action of $\mathrm{Diff}(\Sigma_{g})$ on $HF^{*}(\Sigma_{g}\times S^{1})$ factors through $Sp(2g,\mathbb{Z})$ on $\psi_{i}$, so that the invariant part (3.10) $$HF^{*}(\Sigma_{g}\times S^{1})^{Sp(2g,\mathbb{Z})}\;=\;\mathbb{C}[\alpha,\beta,\gamma]/J_{g}$$ is generated by $\alpha$, $\beta$, and the $Sp(2g,\mathbb{Z})$-invariant combination (3.11) $$\gamma=-2\sum_{i=1}^{g}\psi_{i}\psi_{i+g}$$ Moreover, (3.12) $$HF^{*}(\Sigma_{g}\times S^{1})\;=\;\bigoplus_{k=0}^{g}\Lambda_{0}^{k}HF^{3}\otimes\mathbb{C}[\alpha,\beta,\gamma]/J_{g-k}$$ where $\Lambda_{0}^{k}HF^{3}=\ker(\gamma^{g-k+1}:\Lambda^{k}HF^{3}\to\Lambda^{2g-k+2}HF^{3})$ is the primitive part of $\Lambda^{k}HF^{3}$ and the explicit description of $J_{g}$ can be found e.g. in [Mn99]. Note, $p$-form observables in Donaldson-Witten theory correspond to cohomology classes of homological degree $4-p$. 
This can be understood as a consequence of the standard descent procedure, (3.13) $$0\;=\;i\{Q,W_{0}\}\;,\qquad dW_{0}\;=\;i\{Q,W_{1}\}\;,\qquad dW_{1}\;=\;i\{Q,W_{2}\}\;,\qquad dW_{2}\;=\;i\{Q,W_{3}\}\;,\qquad dW_{3}\;=\;i\{Q,W_{4}\}\;,\qquad dW_{4}\;=\;0$$ applied to the local observable $W_{0}=u=\mathrm{Tr}\,\phi^{2}$ that we already met earlier in (3.3) and that is also a generator of $HF(S^{3})\cong\mathcal{H}_{\mathrm{VW}}(S^{3})$, cf. (3.5). Indeed, in the conventions such that the homological grading in (3.9) is twice the $U(1)_{t}$ degree, $\beta=-4u$ has $U(1)_{t}$ degree $2$ and the topological supercharge $Q$ carries $U(1)_{t}$ degree $+\frac{1}{2}$. In these conventions, $W_{p}$ constructed as in (3.13) is a $p$-form on $M_{4}$ of $U(1)_{t}$ degree (3.14) $$\deg_{t}(W_{p})=\deg_{t}(W_{0})-\frac{p}{2}$$ Therefore, integrating $W_{p}$ over a $p$-cycle $\gamma$ in $M_{4}$ we obtain a topological observable with $U(1)_{t}$ grading $\deg_{t}(W_{0})-\frac{p}{2}$, (3.15) $$\mathcal{O}^{(\gamma)}\;:=\;\int_{\gamma}W_{\dim(\gamma)}$$ Since in the conventions used here the homological grading and $U(1)_{t}$ grading differ by a factor of 2, the homological degree of the observable $\mathcal{O}^{(\gamma)}$ is $2\deg_{t}(W_{0})-p$. The relation (3.6) has an analogue in the Vafa-Witten theory and can be understood in the general framework a la Atiyah-Floer. Indeed, the push-forward, or the “fiber integration,” of the 4d TQFT functor along $\Sigma_{g}$ gives a 2d TQFT, namely the A-model with target space given by the space of solutions to gauge theory PDEs on $\Sigma_{g}$. When this process, called topological reduction [BJSV95], is applied to the Donaldson-Witten theory it gives precisely the A-model with target space $\mathrm{Bun}_{G_{C}}$. 
This is in excellent agreement with the Atiyah-Floer conjecture which, among other things, asserts that upon such fiber integration (or, topological reduction) a 3-manifold $M_{3}^{b}$ with boundary $\Sigma_{g}=\partial(M_{3}^{b})$ defines a boundary condition (“brane”) $\mathcal{B}(M_{3}^{b})$ in the A-model with target space $\mathrm{Bun}_{G_{C}}(\Sigma_{g})$. In this way, the instanton Floer homology of the Heegaard decomposition (3.16) $$M_{3}=M_{3}^{+}\cup_{\Sigma_{g}}M_{3}^{-}$$ can be understood as the Lagrangian Fukaya-Floer homology of $\mathcal{B}(M_{3}^{+})$ and $\mathcal{B}(M_{3}^{-})$ in the ambient moduli space $\mathrm{Bun}_{G_{C}}\cong\mathcal{M}_{\text{flat}}(\Sigma_{g},G)$. In this relation, gauge instantons that provide a differential in the Floer complex become disk instantons in the A-model on $\mathrm{Bun}_{G_{C}}(\Sigma_{g})$. Similarly, in the Vafa-Witten theory the space of solutions to the PDE on $\Sigma_{g}$, i.e. the target space of the topological sigma-model, $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$, is the space of $\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$ that we already encountered in section 2. There are some notable differences, however. For instance, one important novelty of the Vafa-Witten theory is that it has a much larger configuration space (space of fields), larger (super)symmetry and, correspondingly, more structure in the push-forward to the topological sigma-model along $\Sigma_{g}$. In particular, the resulting sigma-model has no disk instantons (cf. e.g. [Leu02, Lemma 15]) since in the context of Vafa-Witten theory $\mathcal{B}(M_{3}^{+})$ and $\mathcal{B}(M_{3}^{-})$ are holomorphic Lagrangian submanifolds of $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$. This strongly suggests that the original 4d gauge theory also has no instantons that contribute to $\mathcal{H}_{\mathrm{VW}}(M_{3})$. 
This conclusion is also supported by the fact that on $\mathbb{R}\times M_{3}$ or on $S^{1}\times M_{3}$ Vafa-Witten theory is equivalent to another, amphicheiral twist of $\mathcal{N}=4$ super-Yang-Mills [GM01] which is known to have no instantons [LL97]. Based on all of these, we expect: Conjecture 12. In Vafa-Witten theory on 3-manifolds, there are no (disk) instantons that contribute to $\mathcal{H}_{\mathrm{VW}}(M_{3})$. In what follows we assume the validity of this conjecture which drastically simplifies the computation of the homology groups $\mathcal{H}_{\mathrm{VW}}(M_{3})$. It basically means that the chain complex underlying this homology theory is obtained by restricting the original configuration space to the space of fields on $M_{3}$, with the differential induced by the action of $Q$ in (3.1), justifying (3.2). As it often happens in topological sigma-models, the action of $Q$ can be interpreted geometrically as the suitable differential acting on the differential forms on the target manifold. In the context of Vafa-Witten theory, where the target space $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$ is the space of $\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$, this also clarifies and further justifies the claim in Conjecture 9. Namely, from the cohomological perspective discussed here, the infinite-dimensionality of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ is attributed to the non-compactness of $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$. Just like the space of holomorphic functions on $\mathbb{C}$ is infinite-dimensional, the non-compactness of $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$ leads to $\dim\mathcal{H}_{\mathrm{VW}}(M_{3})=\infty$ for any $M_{3}$. This analogy is, in fact, realized in the Vafa-Witten theory with gauge group $G=U(1)$ that we already briefly discussed around eq.(3.4) in this section. 
The computation of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for $M_{3}=\Sigma_{g}\times S^{1}$ is similar to the computation of (3.3), except that many other fields besides $\phi$ have zero-modes on $M_{3}$ and contribute to $\mathcal{H}_{\mathrm{VW}}(M_{3})$. Equivalently, as explained around Figure 4, $p$-form observables in the original theory integrated over $p$-cycles in $M_{3}$ give rise to non-trivial $Q$-cohomology classes, cf. (3.15). The counting of zero-modes is especially simple in the case of the abelian theory, on which the more general considerations can be modelled. For example, when $G=U(1)$, 1-form observables include $\psi_{1}$, which can be integrated over 1-cycles and give $2g+1$ Grassmann (odd) zero-modes on $M_{3}=\Sigma_{g}\times S^{1}$. More precisely, the zero-modes of $A$ parametrize $\mathcal{M}_{\text{flat}}(\Sigma_{g},U(1))=\mathrm{Jac}(\Sigma_{g})\cong T^{2g}$, the fields $\phi$, $\overline{\phi}$, $C$, $\eta$, and $\zeta$ have one zero-mode each, while each of the remaining fields has $2g+1$ zero-modes, modulo constraints (field equations). Altogether, these modes parametrize\footnote{Notice that bosonic (even) and Grassmann (odd) dimensions of this superspace are equal; this is a consequence of the fact that Vafa-Witten theory is balanced [DM97]. The bosonic (even) part of this superspace is $\mathbb{C}^{2}\times\mathcal{M}_{\text{flat}}(M_{3},U(1)_{\mathbb{C}})$, and the same expression holds for a general 3-manifold $M_{3}$.} (3.17) $$\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},U(1))\;=\;T^{*}\mathrm{Jac}(\Sigma_{g})\times\Pi\mathbb{C}\times\Pi\mathbb{C}\times\left(\mathbb{C}\times\Pi\mathbb{C}^{g}\right)\times\left(\mathbb{C}\times\Pi\mathbb{C}^{g}\right)$$ where $\Pi\mathbb{C}^{n}$ represents a Grassmann (odd) space and contributes to $Q$-cohomology a tensor product of $n$ copies of the fermionic Fock space, $\mathcal{F}=\Lambda^{*}[\xi]\cong H^{*}(\mathbb{C}\mathbf{P}^{1})$. 
In comparison, each copy of $\mathbb{C}$ in the moduli space (3.17) contributes to the $Q$-cohomology a factor of $\mathcal{T}^{+}$, the Fock space of a single boson described in (1.5) and illustrated in Figure 2. The result (3.17) agrees with the analysis in section 2, where it can be understood as the moduli space of $\mathcal{E}$-valued $G$-Higgs bundles on $\Sigma$, with $G=U(1)$. Indeed, a Higgs field on $\Sigma$ with $R$-charge $R$ (cf. (2.6)) contributes $H^{0}(K_{\Sigma}^{R/2})$ bosonic zero-modes and $H^{0}(K_{\Sigma}^{1-R/2})$ fermionic zero-modes, all valued in $\mathrm{Lie}(G)$. In the case of Vafa-Witten theory we have three Higgs fields with $R=2$, 0, and 0, respectively, which leads precisely to (3.17). Furthermore, in the notations of section 2, the symmetry $U(1)_{x}$ acts on the fiber of $T^{*}\mathrm{Jac}(\Sigma)$ and one of the $\Pi\mathbb{C}$ factors in (3.17), whereas symmetries $U(1)_{y}$ and $U(1)_{t}$ each act on the corresponding copy of $\left(\mathbb{C}\times\Pi\mathbb{C}^{g}\right)$ in (3.17). For a non-abelian $G$, the analysis is similar, though $\mathcal{M}_{\mathrm{VW}}(\Sigma_{g},G)$ is no longer a product à la (3.17); $T^{*}\mathrm{Jac}(\Sigma)$ is replaced by $\operatorname{Hom}\left(\pi_{1}(\Sigma),G_{\mathbb{C}}\right)$ and each copy of $\mathbb{C}$ (resp. $\Pi\mathbb{C}$) is replaced by $\mathfrak{g}_{\mathbb{C}}$ (resp. $\Pi\mathfrak{g}_{\mathbb{C}}$), subject to the constraints and gauge transformations that act simultaneously on all of the factors. Note that, unlike (3.3), the cohomology of (3.17) and its non-abelian generalization $\mathcal{H}_{\mathrm{VW}}(\Sigma_{g}\times S^{1})$ transform non-trivially under the modular $SL(2,\mathbb{Z})$ action discussed in section 2. In particular, the $S$ element of $SL(2,\mathbb{Z})$ acts on $\mathrm{Jac}(\Sigma_{g})$ as the Fourier-Mukai transform. 
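For concreteness, the zero-mode counting rule for Higgs fields quoted above (cf. (2.6)) can be tabulated for $G=U(1)$, using $h^{0}(\mathcal{O}_{\Sigma})=1$ and $h^{0}(K_{\Sigma})=g$ on a genus-$g$ surface. This tabulation is ours, spelling out the claim in the text:

```latex
% Zero-mode counts for the three Higgs fields of the G = U(1) theory on \Sigma_g:
% a field of R-charge R contributes h^0(K_\Sigma^{R/2}) bosonic and
% h^0(K_\Sigma^{1-R/2}) fermionic zero-modes.
\begin{array}{c|c|c}
R & h^{0}(K_{\Sigma}^{R/2})\ \text{(bosonic)} & h^{0}(K_{\Sigma}^{1-R/2})\ \text{(fermionic)} \\ \hline
2 & h^{0}(K_{\Sigma}) = g & h^{0}(\mathcal{O}) = 1 \\
0 & h^{0}(\mathcal{O}) = 1 & h^{0}(K_{\Sigma}) = g \\
0 & h^{0}(\mathcal{O}) = 1 & h^{0}(K_{\Sigma}) = g
\end{array}
```

The first row accounts for the $\mathbb{C}^{g}$ fiber of $T^{*}\mathrm{Jac}(\Sigma_{g})$ and one $\Pi\mathbb{C}$ factor, while the last two rows give the two $\left(\mathbb{C}\times\Pi\mathbb{C}^{g}\right)$ factors in (3.17).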
As already noted earlier, the spectrum of fields and the (super)symmetry algebra in the Vafa-Witten theory is much larger compared to what one finds in the Donaldson-Witten theory, cf. (3.5). As a result, there is more structure in $Q$-cohomology of Vafa-Witten theory, to which we turn next. 3.2. Differentials and spectral sequences There are two ways in which spectral sequences typically arise in a cohomological TQFT: one can change the differential $Q$ while keeping the theory intact, or one can deform the theory. (See [GNS${}^{+}$16] for an extensive discussion and realizations in various dimensions.) In the first case, we obtain a different cohomological invariant in the same theory, whereas in the latter case we obtain a relation between cohomological invariants of two different theories. In the present context of Vafa-Witten theory, a natural class of deformations consists of relevant deformations that, in a physical theory, initiate RG flows to new conformal fixed points. If we want the resulting SCFT to allow a topological twist on general 4-manifolds, we need to preserve at least $\mathcal{N}=2$ supersymmetry of the physical theory and the RG-flow. (This condition is necessary, but may not be sufficient; there can be further constraints.) In such a scenario, one can expect the following: Conjecture 13. A 4d $\mathcal{N}=2$ theory that can be reached from 4d $\mathcal{N}=4$ super-Yang-Mills via an RG-flow leads to a spectral sequence that starts with $\mathcal{H}_{\mathrm{VW}}(M_{3})$ and converges to Floer-like homology in that $\mathcal{N}=2$ theory. One interesting feature of spectral sequences induced by RG-flows is that a local relevant operator $\mathcal{O}$ that triggers the flow may transform non-trivially under the subgroup of $SO(6)_{R}$ that becomes the $R$-symmetry of the 4d $\mathcal{N}=2$ SCFT. If so, under the topological twist it may no longer remain a scalar (a 0-form) on $M_{4}$, thus requiring a choice of additional structure. 
For example, a mass deformation to $\mathcal{N}=2^{*}$ theory, already considered in [VW94, LL98], makes all the fields in the right column of (3.1) massive (at the level of $Q$-cohomology, it modifies the right-hand side in (3.1)) and does not require any additional choices or structures on $M_{3}$ or $M_{4}$. It initiates an RG-flow to 4d $\mathcal{N}=2$ super-Yang-Mills, whose topologically twisted version is the Donaldson-Witten theory, providing another perspective on the connection between these two topological theories, cf. (3.5). Now, let us consider the other mechanism that leads to spectral sequences, in which the theory remains unchanged, but the definition of $Q$ changes. This is only possible if a theory admits more than one BRST operator (scalar supercharge) that squares to zero. Luckily, the Vafa-Witten theory is a good example; in addition to the original differential $Q$ it has a second differential, $Q^{\prime}$, that acts as follows (3.18) $$\begin{aligned} Q^{\prime}A&=\widetilde{\chi}_{1}\\ Q^{\prime}\phi&=\zeta\\ Q^{\prime}\overline{\phi}&=0\\ Q^{\prime}\eta&=i[C,\overline{\phi}]\\ Q^{\prime}\psi_{1}&=-\widetilde{H}_{1}-s_{1}+d_{A}C\\ Q^{\prime}\chi^{+}_{2}&=i[B^{+}_{2},\overline{\phi}]\\ Q^{\prime}D^{+}_{2}&=i[\chi^{+}_{2},\phi]-Q^{\prime}s_{2}^{+}\end{aligned}\qquad\qquad\begin{aligned} Q^{\prime}B^{+}_{2}&=\chi^{+}_{2}\\ Q^{\prime}\widetilde{\psi}^{+}_{2}&=-D^{+}_{2}-s^{+}_{2}+i[B^{+}_{2},C]\\ Q^{\prime}\widetilde{\chi}_{1}&=-d_{A}\overline{\phi}\\ Q^{\prime}\widetilde{H}_{1}&=i[\widetilde{\chi}_{1},\phi]-Q^{\prime}s_{1}\\ Q^{\prime}C&=-\eta\\ Q^{\prime}\zeta&=i[\overline{\phi},\phi]\end{aligned}$$ Since Vafa-Witten theory on a general 4-manifold has $SU(2)_{R}$ symmetry, under which $Q$ and $Q^{\prime}$ transform as a two-dimensional representation, ${\bf 2}$, it is convenient to write the action of $Q$ and $Q^{\prime}$ in a way that makes this symmetry manifest. Therefore, we introduce $Q^{a}=(Q,Q^{\prime})$, where $a=1,2$. 
Similarly, we can combine all odd (Grassmann) fields of the Vafa-Witten theory into three $SU(2)_{R}$ doublets: $\zeta$ and $\eta$ into a doublet of 0-forms $\eta^{a}$, $\psi_{1}$ and $\widetilde{\chi}_{1}$ into a doublet of 1-forms $\psi_{1}^{a}$, $\widetilde{\psi}_{2}^{+}$ and $\chi_{2}^{+}$ into a doublet of self-dual 2-forms $\chi_{2}^{a}$. The bosonic (or, even) fields of the Vafa-Witten theory likewise combine into a triplet of 0-forms $\phi^{ab}=(\phi,C,\overline{\phi})$, and the rest are $SU(2)_{R}$ singlets: $A$, $D_{2}^{+}$, $B_{2}^{+}$, and $\widetilde{H}_{1}$. Then, the action of the BRST differentials (3.1) and (3.18) can be written in a more compact form: (3.19) $$\begin{aligned} Q^{a}A&=\psi_{1}^{a}\\ Q^{a}\phi^{bc}&=\tfrac{1}{2}\epsilon^{ab}\eta^{c}+\frac{1}{2}\epsilon^{ac}\eta^{b}\\ Q^{a}\psi_{1}^{b}&=d_{A}\phi^{ab}+\epsilon^{ab}\widetilde{H}_{1}\\ Q^{a}\chi^{b}_{2}&=[B_{2}^{+},\phi^{ab}]+\epsilon^{ab}G_{2}^{+}\end{aligned}\qquad\qquad\begin{aligned} Q^{a}B^{+}_{2}&=\chi_{2}^{a}\\ Q^{a}\eta^{b}&=-\epsilon_{cd}[\phi^{ac},\phi^{bd}]\\ Q^{a}\widetilde{H}_{1}&=-\tfrac{1}{2}d_{A}\eta^{a}-\epsilon_{cd}[\phi^{ac},\psi_{1}^{d}]\\ Q^{a}G_{2}^{+}&=-\tfrac{1}{2}[B_{2}^{+},\eta^{a}]-\epsilon_{bc}[\phi^{ab},\chi_{2}^{c}]\end{aligned}$$ where $\epsilon_{ab}$ is the invariant tensor of $SU(2)_{R}$, and we use the conventions $\epsilon_{12}=1$, $\epsilon^{ac}\epsilon_{cb}=-\delta^{a}_{b}$, $\varphi_{a}=\varphi^{b}\epsilon_{ba}$, $\varphi^{a}=\epsilon^{ab}\varphi_{b}$. In addition to the differentials $Q^{a}=(Q,Q^{\prime})$, the Vafa-Witten theory also has a doublet of vector supercharges (described in detail in Appendix A). On a 4-manifold of the form (1.2) they produce another doublet of BRST differentials, $\overline{Q}^{a}$, that are scalars with respect to the holonomy group of $M_{3}$. Altogether, the total number of scalar supercharges is $N_{T}=4$, as noted earlier e.g. in Table 1. 
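Before moving on, here is a quick consistency check of (3.19) (a short computation of ours, not spelled out in the text): the anticommutator of the two differentials acting on the connection $A$ closes onto a gauge transformation,

```latex
% Using Q^a A = \psi_1^a and Q^a \psi_1^b = d_A \phi^{ab} + \epsilon^{ab}\widetilde{H}_1:
\{Q^{a},Q^{b}\}\,A \;=\; Q^{a}\psi_{1}^{b}+Q^{b}\psi_{1}^{a}
\;=\; d_{A}\phi^{ab}+d_{A}\phi^{ba}
      +\big(\epsilon^{ab}+\epsilon^{ba}\big)\widetilde{H}_{1}
\;=\; 2\,d_{A}\phi^{ab}
```

since $\epsilon^{ab}=-\epsilon^{ba}$ and $\phi^{ab}=\phi^{ba}$. The result is a gauge transformation with field-dependent parameter $2\phi^{ab}$, consistent with $\{Q^{a},Q^{b}\}=0$ in (3.21) holding modulo gauge transformations.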
In order to write their action on fields in Vafa-Witten theory, it is convenient to describe the latter as forms on $M_{3}$. Since $\Omega^{0}(\mathbb{R}\times M_{3})\cong\Omega^{0}(M_{3})$, all 0-forms (scalars) remain 0-forms on $M_{3}$. In other words, $\phi^{ab}$ and $\eta^{a}$ are not affected by the reduction to $M_{3}$. In the case of 1-forms, we have $\Omega^{1}(\mathbb{R}\times M_{3})\cong\Omega^{1}(M_{3})\oplus\Omega^{0}(M_{3})$, and so $A$, $\widetilde{H}_{1}$, $\psi_{1}^{a}$ produce additional 0-forms on $M_{3}$ that, following [GM01], we denote $\rho$, $Y$, and $\overline{\eta}^{a}$, respectively. Finally, using $\Omega^{2,+}(\mathbb{R}\times M_{3})\cong\Omega^{1}(M_{3})$, we can replace the self-dual forms $B_{2}^{+}$, $G_{2}^{+}$, $\chi_{2}^{a}$ by 1-forms on $M_{3}$: $V_{1}$, $\overline{B}_{1}$, and $\overline{\psi}_{1}^{a}$, respectively. It is also convenient to denote $\widetilde{H}_{1}+[V_{1},\rho]=B_{1}$. Then, the action of all four BRST operators can be written as (3.20) $$\begin{aligned} Q^{a}A&=\psi_{1}^{a}\\ Q^{a}\phi^{bc}&=\tfrac{1}{2}\epsilon^{ab}\eta^{c}+\tfrac{1}{2}\epsilon^{ac}\eta^{b}\\ Q^{a}\eta^{b}&=-\epsilon_{cd}[\phi^{ac},\phi^{bd}]\\ Q^{a}\psi_{1}^{b}&=d_{A}\phi^{ab}-\epsilon^{ab}[V_{1},\rho]+\epsilon^{ab}B_{1}\\ Q^{a}B_{1}&=-\tfrac{1}{2}d_{A}\eta^{a}+\tfrac{1}{2}[V,\overline{\eta}^{a}]\\ &~{}-\epsilon_{cd}[\phi^{ac},\psi_{1}^{d}]-[\rho,\overline{\psi}_{1}^{a}]\\ Q^{a}V_{1}&=\overline{\psi}_{1}^{a}\\ Q^{a}\rho&=\tfrac{1}{2}\overline{\eta}^{a}\\ Q^{a}\overline{\eta}^{b}&=2[\rho,\phi^{ab}]+\epsilon^{ab}Y\\ Q^{a}\overline{\psi}^{b}_{1}&=[V_{1},\phi^{ab}]+\epsilon^{ab}d_{A}\rho+\epsilon^{ab}\overline{B}_{1}\\ Q^{a}\overline{B}_{1}&=-\tfrac{1}{2}d_{A}\overline{\eta}^{a}-\tfrac{1}{2}[V_{1},\eta^{a}]\\ &~{}-\epsilon_{cd}[\phi^{ac},\overline{\psi}_{1}^{d}]+[\rho,\psi^{a}_{1}]\\ Q^{a}Y&=-[\rho,\eta^{a}]-\epsilon_{cd}[\phi^{ac},\overline{\eta}^{d}]\end{aligned}\qquad\qquad\begin{aligned} \overline{Q}^{a}A&=\overline{\psi}_{1}^{a}\\ 
\overline{Q}^{a}\phi^{bc}&=\tfrac{1}{2}\epsilon^{ab}\overline{\eta}^{c}+\tfrac{1}{2}\epsilon^{ac}\overline{\eta}^{b}\\ \overline{Q}^{a}\overline{\eta}^{b}&=-\epsilon_{cd}[\phi^{ac},\phi^{bd}]\\ \overline{Q}^{a}\overline{\psi}_{1}^{b}&=d_{A}\phi^{ab}-\epsilon^{ab}[V_{1},\rho]-\epsilon^{ab}B_{1}\\ \overline{Q}^{a}B_{1}&=\tfrac{1}{2}d_{A}\overline{\eta}^{a}+\tfrac{1}{2}[V,\eta^{a}]\\ &~{}+\epsilon_{cd}[\phi^{ac},\overline{\psi}_{1}^{d}]-[\rho,\psi_{1}^{a}]\\ \overline{Q}^{a}V_{1}&=-\psi_{1}^{a}\\ \overline{Q}^{a}\rho&=-\tfrac{1}{2}\eta^{a}\\ \overline{Q}^{a}\eta^{b}&=-2[\rho,\phi^{ab}]-\epsilon^{ab}Y\\ \overline{Q}^{a}\psi^{b}_{1}&=-[V_{1},\phi^{ab}]-\epsilon^{ab}d_{A}\rho+\epsilon^{ab}\overline{B}_{1}\\ \overline{Q}^{a}\overline{B}_{1}&=-\tfrac{1}{2}d_{A}\eta^{a}+\tfrac{1}{2}[V_{1},\overline{\eta}^{a}]\\ &~{}-\epsilon_{cd}[\phi^{ac},\psi_{1}^{d}]-[\rho,\overline{\psi}^{a}_{1}]\\ \overline{Q}^{a}Y&=-[\rho,\overline{\eta}^{a}]+\epsilon_{cd}[\phi^{ac},\eta^{d}]\end{aligned}$$ Modulo gauge transformations, these operators obey (3.21) $$\{Q^{a},Q^{b}\}=0\,,\qquad\{Q^{a},\overline{Q}^{b}\}=-\epsilon^{ab}H\,,\qquad\{\overline{Q}^{a},\overline{Q}^{b}\}=0$$ where the “Hamiltonian” $H$ generates translations along $\mathbb{R}$ in $M_{4}=\mathbb{R}\times M_{3}$, cf. (1.2). Note, this algebra has the same structure as the familiar 2d $\mathcal{N}=(2,2)$ supersymmetry algebra [HKK${}^{+}$03]; namely, it is the 1d version of this superalgebra obtained by reduction to supersymmetric quantum mechanics. Along the same lines, the subalgebra generated by $Q^{a}$ should be compared to the 2d $\mathcal{N}=(0,2)$ supersymmetry algebra. Only this subalgebra is relevant to defining homological invariants of 3-manifolds that extend to a four-dimensional TQFT-like structure. 
However, from a purely three-dimensional perspective, one might consider a more general combination of BRST differentials (3.22) $$d=s_{a}Q^{a}+r_{b}\overline{Q}^{b}$$ Then, a simple calculation using (3.21) gives (3.23) $$d^{2}=\tfrac{1}{2}s_{a}s_{b}\{Q^{a},Q^{b}\}+s_{a}r_{b}\{Q^{a},\overline{Q}^{b}\}+\tfrac{1}{2}r_{a}r_{b}\{\overline{Q}^{a},\overline{Q}^{b}\}=-\epsilon^{ab}s_{a}r_{b}H=(s_{2}r_{1}-s_{1}r_{2})H$$ and the vanishing of the right-hand side defines a quadric in $\mathbb{C}\mathbf{P}^{3}$, parametrized by $(s_{a},r_{a})$ modulo the overall scale, cf. [GNS${}^{+}$16]. It is $S^{2}\times S^{2}$, which therefore is the space of possible choices of the BRST differential in the Vafa-Witten theory on a general 3-manifold. It would be interesting to analyze further how the computation of $Q$-cohomology varies over $S^{2}\times S^{2}$, if at all. 4. Applications and future directions We conclude with a brief discussion of various potential applications of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ and directions for future work. Figure 5. An example of a plumbing graph. More general 3-manifolds One clear goal that also motivated this work is the computation of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ for more general 3-manifolds. The infinite family considered in this paper is part of a more general family of Seifert 3-manifolds which, in turn, belongs to a larger family of plumbed 3-manifolds. The latter can be conveniently described by combinatorial data, as illustrated in Figure 5, and therefore it would be nice to formulate $\mathcal{H}_{\mathrm{VW}}(M_{3})$ directly in terms of such combinatorial data. Mapping tori In general, there could be various ways of defining $\mathcal{H}_{\mathrm{VW}}(M_{3})$, but all such versions should enjoy the action of the mapping class group $\mathrm{MCG}(M_{3})$, cf. (2.8). 
Studying this action more systematically and computing the invariants of the mapping tori of $M_{3}$, (4.1) $$M_{4}=\frac{M_{3}\times I}{(x,0)\sim(\varphi(x),1)}$$ as $\mathrm{Tr}\,_{\mathcal{H}_{\mathrm{VW}}(M_{3})}\varphi$ could help identify the particular definition of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ that matches the existing techniques of computing 4-manifold invariants $Z_{\mathrm{VW}}(M_{4})$. This could have important implications for understanding the functoriality in the Vafa-Witten theory, and developing the corresponding TQFT-like structure. Trisections Another construction of 4-manifolds that has close ties with the examples of this paper is based on decomposing $M_{4}$ into three basic pieces along a genus-$g$ central surface $\Sigma_{g}$. Such trisections, analogous to Heegaard decompositions in dimension 3, make it possible to construct an arbitrary smooth 4-manifold; the initial steps of the corresponding computation of $Z_{\mathrm{VW}}(M_{4})$ based on this technique were discussed in [FG20]. Variants It is standard in gauge theory that, depending on how certain analytical details are treated, one can obtain different versions of the homological invariant. For example, the standard definition of the $Q$-cohomology in Donaldson-Witten and in Vafa-Witten theories leads to $HF(M_{3})$ and $\mathcal{H}_{\mathrm{VW}}(M_{3})$, such that both are isomorphic to $\mathcal{T}^{+}$ for $M_{3}=S^{3}$, in particular illustrating (3.5). This should not be confused with a finite-dimensional version of the instanton Floer homology, $I_{*}(M_{3})$, such that e.g. for the Poincaré sphere $P=\Sigma(2,3,5)$: (4.2) $$I_{n}(P)\;=\;\begin{cases}\mathbb{Z},&\text{if }n=0,4\\ 0,&\text{otherwise}\end{cases}$$ Here, $n$ is the mod 8 grading, and the two generators in degree $n=0$ and 4 correspond to the two irreducible representations $\pi_{1}(P)\to SU(2)$. (See e.g. [FS90] for more details.) 
Another variant is the framed instanton homology, $I^{\#}(M_{3}):=I(M_{3}\#T^{3})$, which was already mentioned in (2.11). Similarly, in the context of the Vafa-Witten theory the computation of $Q$-cohomology naturally leads to $\mathcal{H}_{\mathrm{VW}}(M_{3})$, which is expected to be infinite-dimensional for any 3-manifold, cf. Conjecture 9. In particular, as Proposition 11 illustrates, we expect an isolated reducible representation $\pi_{1}(M_{3})\to SL(2,\mathbb{C})$ to contribute to $\mathcal{H}_{\mathrm{VW}}(M_{3})$ a tower of states $\mathcal{T}^{+}$, as in (1a). Moreover, the discussion in section 3 makes it clear that isolated irreducible representations $\pi_{1}(M_{3})\to SL(2,\mathbb{C})$ (modulo conjugation) also contribute to $\mathcal{H}_{\mathrm{VW}}(M_{3})$, though their contribution may be finite-dimensional, as also happens in the monopole Floer homology or $HF^{+}(M_{3})$. For example, $M_{3}=\Sigma(2,3,7)$ has a total of four $SL(2,\mathbb{C})$ flat connections, one of which is trivial, while the other three correspond to irreducible representations $\pi_{1}(M_{3})\to SL(2,\mathbb{C})$, modulo conjugation. Therefore, we expect four contributions to $\mathcal{H}_{\mathrm{VW}}(\Sigma(2,3,7))$: one copy of $\mathcal{T}^{+}$ as in the case of $M_{3}=S^{3}$, and three finite-dimensional contributions due to irreducible $SL(2,\mathbb{C})$ flat connections. The instanton homology $I_{*}(\Sigma(2,3,7))$ has a similar structure, except that it does not have a contribution from the trivial flat connection — something we already saw in (4.2) — and instead of three contributions from irreducible flat connections it has only two: (4.3) $$I_{n}(\Sigma(2,3,7))\;=\;\begin{cases}\mathbb{Z},&\text{if }n=2,6\\ 0,&\text{otherwise}\end{cases}$$ The reason for this is that two out of three irreducible $SL(2,\mathbb{C})$ flat connections can be conjugated to $SU(2)$, i.e. 
they correspond to irreducible representations $\pi_{1}(M_{3})\to SU(2)$, whereas the last one cannot be. On the other hand, much like $\mathcal{H}_{\mathrm{VW}}(M_{3})$, a sheaf-theoretic model for $SL(2,\mathbb{C})$ Floer homology [AM20] receives contributions from all three irreducible flat connections on $M_{3}=\Sigma(2,3,7)$, (4.4) $$HP^{*}(\Sigma(2,3,7))=\mathbb{Z}_{(0)}\oplus\mathbb{Z}_{(0)}\oplus\mathbb{Z}_{(0)}$$ and a similar consideration for other Brieskorn spheres suggests that, at least in this family, we may expect an isomorphism (4.5) $$\mathcal{H}_{\mathrm{VW}}^{*}(\Sigma(p,q,r))\;\stackrel{{\scriptstyle?}}{{\cong}}\;\mathcal{T}^{+}\oplus HP^{*}(\Sigma(p,q,r))$$ As a natural direction for future work, it would be interesting to either prove or disprove this relation (and, in the former case, generalize to more general 3-manifolds, such as plumbings mentioned earlier, cf. Figure 5). Towards surgery formulae in Vafa-Witten theory There are two types of surgery formulae that we can consider: surgeries in three dimensions relevant to the computation of $\mathcal{H}_{\mathrm{VW}}(M_{3})$, and surgeries in four dimensions that produce $Z_{\mathrm{VW}}(M_{4})$ via cutting-and-gluing along 3-manifolds. The infinite family of 3-manifolds considered in this paper has some direct connections to notable surgery operations in four dimensions. Thus, $M_{3}=S^{2}\times S^{1}$ is relevant to the Gluck twist, whereas $M_{3}=T^{3}$ is relevant to knot surgery and the log-transform. All these surgery operations consist of cutting a 4-manifold along the corresponding $M_{3}$ and then re-gluing pieces back in a new way, or gluing in new four-dimensional pieces with the same boundary $M_{3}$. This again requires understanding how a given element of the mapping class group, $\varphi\in\mathrm{MCG}(M_{3})$, acts on $\mathcal{H}_{\mathrm{VW}}(M_{3})$, cf. (2.8). 
For example, our preliminary analysis indicates that the Gluck involution that generates $\mathbb{Z}_{2}\subset\mathrm{MCG}(S^{2}\times S^{1})$, associated with $\pi_{1}(SO(3))=\mathbb{Z}_{2}$, acts trivially on $\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})$. This implies that $Z_{\mathrm{VW}}(M_{4},G)$ cannot detect the Gluck twist when $\pi_{1}(G)=1$. (The last condition appears in view of Remark 8.) We plan to return to this in future work and also analyze other important elements of $\mathrm{MCG}(M_{3})$ acting on $\mathcal{H}_{\mathrm{VW}}(M_{3})$. In the case of gluing along $M_{3}=T^{3}$, applying (2.7) to a family of elliptic fibrations $M_{4}=E(n)$ with $\chi=12n$, $\sigma=-8n$, and $n-1$ basic SW classes, we quickly learn that (2.7) cannot be consistent with a simple multiplicative gluing formula à la [MMS97, Tau01]. Indeed, representing $E(n)$ as an iterated fiber sum, we have (4.6) $$E(n)\;=\;\big{(}E(n-2)\setminus N_{F}\big{)}\cup_{T^{3}}\big{(}E(2)\setminus N_{F}\big{)}$$ where $N_{F}\cong T^{2}\times D^{2}$ is a neighborhood of a generic fiber. Then, assuming e.g. $n=\text{even}$, a simple multiplicative gluing formula along $M_{3}=T^{3}$ would imply (4.7) $$Z_{\mathrm{VW}}(E(n))=\left(\frac{Z_{\mathrm{VW}}(E(4))}{Z_{\mathrm{VW}}(E(2))}\right)^{\frac{n-2}{2}}Z_{\mathrm{VW}}(E(2))$$ which is not the case: Proposition 14. For $M_{4}=E(n)$ with $n=\text{even}$, (2.7) gives (4.8) $$Z_{\mathrm{VW}}(E(n))\;=\;\begin{cases}(-1)^{\frac{n}{2}+1}{n-2\choose\frac{n}{2}-1}\frac{1}{2}\left(\frac{G(q^{2})}{4}\right)^{\frac{n}{2}},&\text{if }\;n>2\\ \frac{1}{8}G(q^{2})+\frac{1}{4}G(q^{1/2})+\frac{1}{4}G(-q^{1/2}),&\text{if }\;n=2\end{cases}$$ where $$G(q)=\frac{1}{\eta^{24}}=\frac{(2\pi)^{12}}{\Delta(\tau)}=\frac{1}{q}\Big{(}1+24q+324q^{2}+3200q^{3}+25650q^{4}+\ldots\Big{)}$$ Proof. 
The elliptic surface $E(n)$ has the following basic topological invariants (4.9) $$b_{1}=0\,,\qquad 2\chi+3\sigma=0\,,\qquad\frac{\chi+\sigma}{4}=n$$ Moreover, $F\cdot F=0$ and the basic classes are $(n-2j)F$, with $j=1,\ldots,n-1$ and the corresponding Seiberg-Witten invariants: (4.10) $$\mathrm{SW}_{E(n)}({\mathfrak{s}}_{j})\;=\;(-1)^{j+1}{n-2\choose j-1}$$ In order to obtain the $G=SU(2)$ invariant of $M_{4}=E(n)$, we need to substitute all these into (2.7), evaluate it at $v=0$ and also multiply by a factor of $\frac{1}{2}$ (associated with the center of $SU(2)$). For $n=\text{even}$, a straightforward calculation gives (4.11) $$Z_{\mathrm{VW}}(E(n))=(-1)^{\frac{n}{2}+1}{n-2\choose\frac{n}{2}-1}\frac{1}{2}\left(\frac{G(q^{2})}{4}\right)^{\frac{n}{2}}+\\ +\sum_{j=1}^{n-1}(-1)^{j+1}{n-2\choose j-1}\Big{[}\left(\frac{G(q^{1/2})}{4}\right)^{\frac{n}{2}}+\left(\frac{G(-q^{1/2})}{4}\right)^{\frac{n}{2}}\Big{]}$$ By the binomial formula, the second line in this expression is non-zero only for $n=2$ and, therefore, we obtain (4.8). ∎ This result is not surprising because gluing along $M_{3}$ is expected [FG20] to be governed by $\text{MTC}[M_{3}]$, which is very non-trivial for $M_{3}=T^{3}$. Indeed, modulo delicate questions related to KK modes (which will be discussed elsewhere), the calculations in section 2 can be interpreted as the computation of (2.10) for $M_{3}=T^{3}$. Since $M_{4}=T^{2}\times\Sigma_{g}$ can be obtained by gluing basic pieces (products of $T^{2}$ with pairs-of-pants, illustrated in Figure 3) along 3-tori and the corresponding calculation of Vafa-Witten invariants for $G=SU(2)$ is expressed as a sum (2.12) where $\lambda$ takes 10 possible values, cf. (2.19), we expect the gluing formula for (4.6) to be also a sum over the same set of $\lambda$. In comparison, the naive multiplicative gluing formula à la (4.7) would mean that the sum over $\lambda$ consists of a single term. 
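Two computational ingredients in the proof of Proposition 14 can be checked numerically: the binomial cancellation of the second line of (4.11), and the quoted $q$-expansion of $G(q)=1/\eta^{24}$. The following standalone script is ours, purely for illustration:

```python
from math import comb

# Binomial cancellation used in the proof:
# sum_{j=1}^{n-1} (-1)^{j+1} C(n-2, j-1) = (1-1)^{n-2} = 0 for n > 2.
def second_line_sum(n: int) -> int:
    return sum((-1) ** (j + 1) * comb(n - 2, j - 1) for j in range(1, n))

assert second_line_sum(2) == 1                               # only n = 2 survives
assert all(second_line_sum(n) == 0 for n in range(4, 20, 2))

# q-expansion of G(q) = 1/eta^24 = q^{-1} prod_{k>=1} (1 - q^k)^{-24}:
# build the series of prod (1 - q^k)^{-24} up to q^4 by applying the
# in-place recurrence c[m] += c[m-k] once per factor of 1/(1 - q^k).
N = 5
coeffs = [1] + [0] * (N - 1)
for k in range(1, N):
    for _ in range(24):          # 24 factors of 1/(1 - q^k)
        for m in range(k, N):
            coeffs[m] += coeffs[m - k]

# Matches the expansion quoted below (4.8): 1 + 24q + 324q^2 + 3200q^3 + 25650q^4
assert coeffs == [1, 24, 324, 3200, 25650]
```

The second check reproduces exactly the coefficients quoted in the expression for $G(q)$ below (4.8).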
Surgery operations that involve cutting and gluing $M_{3}$ itself, i.e. surgery formulae for $\mathcal{H}_{\mathrm{VW}}(M_{3})$, are also interesting. For example, continuing the parallel with the Donaldson-Witten theory, one might expect the standard surgery exact triangles (4.12) $$\ldots\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(S^{3})\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(S^{3}_{0}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(S^{3}_{+1}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\ldots$$ or, more generally, (4.13) $$\ldots\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{0}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}i~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{+1}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}j~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y)\xrightarrow[~{}~{}~{}~{}]{~{}~{}k~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{0}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\ldots$$ where $K$ is a knot in a homology 3-sphere $Y$. Although such surgery exact triangles are ubiquitous in gauge theory — apart from the original Floer homology, they also exist in many variants of the monopole Floer homology and its close cousin, the Heegaard Floer homology — it is not clear whether they hold in Vafa-Witten theory. One important difference that already entered our analysis in a number of places has to do with degree shifts. 
Indeed, all maps in (4.13) and, similarly, in the oppositely-oriented exact sequence associated with the inverse surgery operation, (4.14) $$\ldots\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{-1}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{0}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y)\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\mathcal{H}_{\mathrm{VW}}(Y_{-1}(K))\xrightarrow[~{}~{}~{}~{}]{~{}~{}\phantom{i}~{}~{}}\ldots$$ are induced by oriented cobordisms. For example, the map $j$ in (4.13) is induced by a cobordism $W$ with $b_{1}(W)=b_{2}^{+}(W)=0$ and $b_{2}^{-}(W)=1$, cf. [BD95]. Given such topological data of a cobordism, the degree shift of the corresponding map can be computed by evaluating the virtual dimension (“ghost number” anomaly) of a given theory on $W$. However, the balanced property of the Vafa-Witten theory that appeared a number of times earlier makes this quantity vanish for any $W$, so that all maps in (4.13) and (4.14) have degree zero. Another, perhaps even more important feature of the Vafa-Witten theory in regard to the existence of surgery exact triangles directly follows from the computations in this paper. Namely, earlier we saw that $\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})$ is much larger than $\mathcal{H}_{\mathrm{VW}}(S^{3})$: while the latter is isomorphic to $\mathcal{T}^{+}$, the former has GK-dimension 4 (i.e. looks like $(\mathcal{T}^{+})^{\otimes 4}$). The reason $M_{3}=S^{3}$ and $S^{2}\times S^{1}$ are relevant to the question about surgery exact triangles is that they tell us about all terms in (4.12) when $K=\mathrm{unknot}$: in this case $S^{3}_{+1}(K)\cong S^{3}$ and $S^{3}_{0}(K)\cong S^{2}\times S^{1}$. It is not clear how such a sequence could be exact, suggesting that the standard form of the surgery exact triangles may not hold in the Vafa-Witten theory. 
One possible scenario is that a spectral sequence as in section 3 can reduce the size of $\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})$ to roughly twice the size of $\mathcal{H}_{\mathrm{VW}}(S^{3})$, thus providing a more natural definition of $\mathcal{H}_{\mathrm{VW}}(M_{3})$ in view of (4.12). This example, with $K=\mathrm{unknot}$, also illustrates well the origin of the problem: the reason $\mathcal{H}_{\mathrm{VW}}(S^{2}\times S^{1})$ is much larger than $\mathcal{H}_{\mathrm{VW}}(S^{3})$ has to do with non-compactness of the moduli spaces, as we saw earlier in section 2 and, therefore, suggests that addressing the non-compactness of the moduli spaces may help with the surgery exact triangles. It would be interesting to shed more light on this question. Appendix A Supersymmetry algebra In addition to scalar (0-form) supercharges (3.19), the topological Vafa-Witten theory also has vector (1-form) supercharges [GM01]: $$\displaystyle\overline{Q}^{a}_{\mu}A_{\nu}$$ $$\displaystyle=\delta_{\mu\nu}\eta^{a}+\chi_{\mu\nu}^{a}$$ $$\displaystyle\overline{Q}^{a}_{\mu}B_{\rho\sigma}^{+}$$ $$\displaystyle=-\delta_{\mu[\rho}\psi_{\sigma]}^{a}-\epsilon_{\mu\nu\rho\sigma}\psi^{\nu a}$$ $$\displaystyle\overline{Q}^{a}_{\mu}\phi^{bc}$$ $$\displaystyle=-\tfrac{1}{2}\epsilon^{ab}\psi^{c}_{\mu}-\tfrac{1}{2}\epsilon^{ac}\psi^{b}_{\mu}$$ $$\displaystyle\overline{Q}^{a}_{\mu}\eta^{b}$$ $$\displaystyle=D_{\mu}\phi^{ab}+\epsilon^{ab}H_{\mu}$$ $$\displaystyle\overline{Q}^{a}_{\mu}\psi_{\nu}^{b}$$ $$\displaystyle=-\epsilon^{ab}F_{\mu\nu}+\delta_{\mu\nu}\epsilon_{cd}[\phi^{ac},\phi^{bd}]+\epsilon^{ab}G^{+}_{\mu\nu}-[B^{+}_{\mu\nu},\phi^{ab}]$$ $$\displaystyle\overline{Q}^{a}_{\mu}H_{\nu}$$ $$\displaystyle=D_{\mu}\psi^{a}_{\nu}-\tfrac{1}{2}D_{\nu}\psi^{a}_{\mu}+\epsilon_{cd}[\phi^{ac},\chi^{d}_{\mu\nu}-\delta_{\mu\nu}\eta^{d}]+[B^{+}_{\mu\nu},\eta^{a}]$$ $$\displaystyle\overline{Q}^{a}_{\mu}\chi^{b}_{\rho\sigma}$$ 
$$\displaystyle=\delta_{\mu[\rho}D_{\sigma]}\phi^{ab}+\epsilon_{\mu\nu\rho\sigma}D^{\nu}\phi^{ab}-\epsilon^{ab}\delta_{\mu[\rho}H_{\sigma]}-\epsilon^{ab}\epsilon_{\mu\nu\rho\sigma}H^{\nu}-\epsilon^{ab}D_{\mu}B^{+}_{\rho\sigma}$$ $$\displaystyle\overline{Q}^{a}_{\mu}G^{+}_{\rho\sigma}$$ $$\displaystyle=D_{\mu}\chi^{a}_{\rho\sigma}-\delta_{\mu[\rho}D_{\sigma]}\eta^{a}-\epsilon_{\mu\nu\rho\sigma}D^{\nu}\eta^{a}-\epsilon_{cd}[\phi^{ac},\delta_{\mu[\rho}\psi^{d}_{\sigma]}+\epsilon_{\mu\nu\rho\sigma}\psi^{\nu d}]+\tfrac{1}{2}[\psi^{a}_{\mu},B^{+}_{\rho\sigma}]$$ The algebra of these supercharges, up to gauge transformations, is (A.1) $$\{Q^{a},Q^{b}\}=0\,,\qquad\{Q^{a},\overline{Q}^{b}_{\mu}\}=-\epsilon^{ab}\partial_{\mu}\,,\qquad\{\overline{Q}^{a}_{\mu},\overline{Q}^{b}_{\nu}\}=0$$ For $\mu=0$, it should be compared with the supersymmetry algebra in quantum mechanics. With two supercharges, the supersymmetry algebra in quantum mechanics looks like $\{Q,Q^{\dagger}\}=2H$. With the enhanced supersymmetry, it has the form (A.2) $$\{Q_{i},Q_{j}^{\dagger}\}=2\delta_{ij}H\,,\qquad\{Q_{i},Q_{j}\}=0$$ In the main text we also encountered a closely related 2d $\mathcal{N}=(2,2)$ supersymmetry algebra: (A.3) $$\{Q_{+},\overline{Q}_{+}\}=\tfrac{1}{2}(H+P)=H_{L}\,,\qquad\{Q_{-},\overline{Q}_{-}\}=\tfrac{1}{2}(H-P)=H_{R}$$ and the topological A-model, which involves linear combinations of the above supercharges, $Q_{A}=Q_{+}+\overline{Q}_{-}$ and $\overline{Q}_{A}=\overline{Q}_{+}+Q_{-}$, such that $\{Q_{A},\overline{Q}_{A}\}=H$ and $Q_{A}^{2}=0$. References [AGP16] Jørgen Ellegaard Andersen, Sergei Gukov, and Du Pei. The Verlinde formula for Higgs bundles. 8 2016. [AM20] Mohammed Abouzaid and Ciprian Manolescu. A sheaf-theoretic model for ${\rm SL}(2,\mathbb{C})$ Floer homology. J. Eur. Math. Soc. (JEMS), 22(11):3641–3695, 2020. [BD95] P. J. 
Electromagnetic $K^{+}$ production on the deuteron with hyperon recoil polarization K. Miyagawa ${}^{\rm a}$, H. Yamamura ${}^{\rm a}$, T. Mart ${}^{\rm b}$, C. Bennhold ${}^{\rm c}$, H. Haberzettl ${}^{\rm c}$  and W. Glöckle ${}^{\rm d}$ ${}^{\rm a}$Department of Applied Physics, Okayama University of Science, 1-1 Ridai-cho, Okayama 700, Japan ${}^{\rm b}$Jurusan Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia ${}^{\rm c}$Center for Nuclear Studies, The George Washington University, Washington, D.C. 20052 ${}^{\rm d}$Institut für Theoretische Physik II, Ruhr-Universität Bochum, D-44780 Bochum, Germany Abstract Photo- and electroproduction processes of $K^{+}$ on the deuteron are investigated theoretically. Modern hyperon-nucleon forces as well as an updated kaon production operator on the nucleon are used. Sizable effects of the hyperon-nucleon final state interaction are seen in various observables. Especially the photoproduction double polarization observable $C_{z}$ is shown to provide a handle to distinguish different hyperon-nucleon force models. 1 INTRODUCTION Recent rigorous calculations of light hypernuclei [1] have contributed interesting insight into low-energy properties of the $YN$ interaction above the $\Lambda$ threshold. However, no clear understanding of the $YN$ interaction has emerged around the $\Sigma$ threshold.
Electro- and photoproduction processes of $K^{+}$ on light nuclei offer a unique possibility for studying the $YN$ interaction in the continuum, especially near the $\Sigma$ threshold. An inclusive $d(e,e^{\prime}K^{+})YN$ experiment has already been performed, and the data for $d(\gamma,K^{+}Y)N$ and ${}^{3}$He$(\gamma,K^{+}Y)N$ are being analyzed at TJNAF. We have analyzed the inclusive $d(\gamma,K^{+})$ and exclusive $d(\gamma,K^{+}\Lambda(\Sigma))$ processes [2], and report here preliminary results for the electroproduction process $d(e,e^{\prime}\,K^{+})$. This study aims to investigate the coupled $\Lambda N-\Sigma N$ interaction in the final state and incorporates the modern $YN$ interactions of the Nijmegen group, NSC97f [3] and NSC89, which have been found to give a reasonable binding energy for the hypertriton. Kaon photoproduction on the deuteron is also important since it allows access to the elementary cross sections on the neutron, such as $\gamma+n\rightarrow K^{+}+\Sigma^{-}$, in kinematic regions where final-state interaction effects are small. 2 PHOTOPRODUCTION Numerical results for the inclusive $d(\gamma,K^{+})$ cross sections, using an updated production operator [4], are shown as a function of the lab momentum $P_{K}$ in Fig. 1. For the details of our theoretical formulation, we refer the reader to Ref. [2]. The incident photon energy is 1.3 GeV, while the outgoing kaon angle is fixed at 1 degree. The two pronounced peaks around $P_{K}=945$ and $809$ MeV/c are due to quasifree scattering between the photon and one of the nucleons. The results with the final-state $YN$ interaction NSC97f are compared to the PWIA results. Sizable FSI effects are seen around both the $\Lambda$ and $\Sigma$ thresholds, and to a lesser degree at the two quasifree peak positions. For the same $E_{\gamma}$ and $\theta_{K}$, the exclusive $d(\gamma,K^{+}\Lambda)n$ cross section and double polarization observable $C_{z}$ at $P_{K}=870$ MeV/c are shown in Figs.
2 and 3, respectively. Figures 4 and 5 depict these observables for $d(\gamma,K^{+}\Sigma^{-})p$ at $P_{K}=810$ MeV/c. As indicated in Fig. 1, the former value of $P_{K}$ is close to the $K^{+}\Sigma N$ threshold, while the latter corresponds to the $\Sigma$ quasifree peak position. While the values of $C_{z}$ in PWIA are almost 100%, the FSI results show dramatic deviations. Furthermore, the two $YN$ forces NSC97f and NSC89 become clearly distinguishable for this observable. Experimentally, measuring this observable involves using circularly polarized photons along with detecting the recoil polarization of the hyperon in the final state. 3 ELECTROPRODUCTION Here we present first preliminary results for the electroproduction process $d(e,e^{\prime}K^{+})$. Two sets of results are shown in Figs. 6 and 7. The incident electron energy $E_{e}$ and momentum transfer $Q^{2}$ in Fig. 6 are set to reproduce the conditions of the recent Hall C experiment at TJNAF [5]. As in the case of the photoproduction $d(\gamma,K^{+})$, $YN$ FSI effects are seen near both the $\Lambda$ and $\Sigma$ thresholds. However, in Fig. 6, the FSI has effects over a wide range above the $\Sigma$ threshold. Figure 7 shows a prominent enhancement around the $\Sigma$ threshold which is not a simple threshold effect but is caused by a $YN$ $t$-matrix pole in the complex momentum plane [6]. 4 OUTLOOK We have investigated cross sections and hyperon polarization for $K^{+}$ photoproduction on the deuteron, and found large hyperon-nucleon FSI effects in the double polarization observable $C_{z}$. Also, in electroproduction, for suitable $Q^{2}$ values, cross sections show a prominent enhancement around the $\Sigma$ threshold. A systematic analysis for a wide range of kinematics for both photo- and electroproduction processes is in progress. Future studies will investigate final-state interaction effects in kaon photo- and electroproduction on the $A=3$ system. References [1] K.
Miyagawa et al., Few-Body Systems Suppl. 12 (2000) 324, nucl-th/0002035; see also E. Hiyama in these proceedings. [2] H. Yamamura et al., Phys. Rev. C 61 (1999) 014001. [3] Th. A. Rijken et al., Phys. Rev. C 59 (1999) 21, and references therein. [4] F.X. Lee et al., nucl-th/9907119; C. Bennhold et al., nucl-th/9909022. [5] J. Reinhold et al., Nucl. Phys. A639 (1998) 197c. [6] K. Miyagawa et al., Phys. Rev. C 60 (1999) 024003.
On cube tilings of tori and classification of perfect codes in the maximum metric Claudio Qureshi Institute of Mathematics State University of Campinas, Brazil cqureshi@gmail.com  and  Sueli I.R. Costa Institute of Mathematics State University of Campinas, Brazil sueli@ime.unicamp.br Abstract. We describe odd-length-cube tilings of the $n$-dimensional $q$-ary torus, which include $q$-periodic integer lattice tilings of $\mathbb{R}^{n}$. In the language of coding theory these tilings correspond to perfect codes with respect to the maximum metric. A complete characterization of the two-dimensional tilings is presented and, in the linear case, a description of generator matrices, isometry classes and isomorphism classes is provided. Several methods to construct perfect codes from codes of smaller dimension or via sections are derived. We introduce a special type of matrices (perfect matrices) which are in correspondence with generator matrices for linear perfect codes in arbitrary dimensions. For maximal perfect codes, a parametrization is obtained which allows us to describe isomorphism classes of such codes. We also approach the problem of which isomorphism classes of abelian groups can be represented by $q$-ary $n$-dimensional perfect codes of a given cardinality $N$. Work partially supported by FAPESP grants 2012/10600-2 and 2013/25977-7 and by CNPq grants 312926/2013-8 and 158670/2015-9. 1. Introduction Despite the fact that cubes are among the simplest and most important objects in Euclidean geometry, many interesting problems arise related to them. Sometimes complicated machinery from different areas of mathematics has had to be employed to establish some of the known results related to cubes, and many basic problems still remain open, as pointed out in the survey [33] on what is known about cubes. We focus here on a particular problem: lattice cube tilings.
In 1906, an interesting conjecture was proposed by Minkowski [22] while he was considering a problem in diophantine approximation. Minkowski's conjecture states that in every tiling of $\mathbb{R}^{n}$ by cubes of the same length, where the centers of the cubes form a lattice, there exist two cubes that meet at an $(n-1)$-dimensional face. This conjecture was proved in 1942 by Hajós [8]. A similar conjecture was proposed by Keller [10], removing the restriction that the centers of the cubes form a lattice; however, this stronger version was shown to be true in dimensions $n\leq 6$ [25], false in dimensions $n\geq 8$ [19, 21], and it remains open in dimension $n=7$. Many variants and problems related to cube tilings have been considered [14, 15, 16, 28, 29], as well as applications to other areas such as combinatorics, graph theory [4], coding theory [20], algebra [31], and harmonic analysis [13], among others. In this work we fix a triplet $(n,e,q)$ of positive integers and consider all the cube tilings of length $2e+1$ whose centers form an integer $q$-periodic lattice $\Lambda$ (i.e. an additive subgroup of $\mathbb{R}^{n}$ such that $q\mathbb{Z}^{n}\subseteq\Lambda\subseteq\mathbb{Z}^{n}$), and we classify these tilings with respect to the quotient group $\Lambda/q\mathbb{Z}^{n}$. Periodic cube tilings of $\mathbb{R}^{n}$ are in natural correspondence with cube tilings of the flat torus $\mathbb{R}^{n}/q\mathbb{Z}^{n}$, and the set $\Lambda/q\mathbb{Z}^{n}$ corresponds to the centers of the projected cubes in this torus. We consider the problem of describing all the quotient groups $\Lambda/q\mathbb{Z}^{n}$ and classify them up to isometry and up to isomorphism. We also approach the problem of which group isomorphism classes are represented by these groups, obtaining structural information about these classes. One of our main motivations to consider these problems comes from coding theory, which gives us a natural framework, as used in [20].
Every $n$-dimensional $q$-ary linear code (i.e. a subgroup of $\mathbb{Z}_{q}^{n}$) with respect to the maximum metric (induced from the maximum metric in $\mathbb{Z}^{n}$) is in correspondence with a $q$-periodic integer lattice via the so-called Construction A [3]. This association has the property that each $e$-perfect code (i.e. a code such that every element of $\mathbb{Z}_{q}^{n}$ belongs to exactly one ball of radius $e$ centered at a codeword) corresponds to a cube tiling of length $2e+1$ of $\mathbb{Z}^{n}$, in such a way that its centers form a $q$-periodic integer lattice. We use the terminology of coding theory from some standard references such as [23, 18] and [3]. Notation and results from coding theory and lattices, used in this paper, are included in Section 2. In Section 3 we study two-dimensional perfect codes, characterizing them (Corollary 3.6 and Theorem 3.10), describing the group isomorphism classes represented by the linear codes (Theorem 3.14) and providing a parametrization of such codes by a ring in such a way that isometry classes and isomorphism classes correspond to certain generalized cosets (Theorem 3.22). In Section 4 several constructions of perfect codes are derived, such as the linear-construction (Proposition 4.7) and via sections (Proposition 4.23), which allow us to describe some interesting families of perfect codes such as those in Corollaries 4.8 and 4.9 and to generalize some results from dimension two to arbitrary dimensions. In Section 5 we introduce a special type of matrices (perfect matrices) which characterize the generator matrices of perfect codes (Proposition 5.13). For maximal codes (Definition 5.20) a parametrization is given (Theorem 5.27), and the induced parametrization of their isomorphism classes (Theorem 5.29) as well as the number of isomorphism classes, expressed in terms of a certain generating function (Corollary 5.37), are derived.
We also describe the group isomorphism classes that can be represented by an $n$-dimensional $q$-ary code with packing radius $e$ (admissible structures) for the maximal case (Theorem 5.34). Finally, in Section 6 we list further interesting research problems. 2. Preliminaries, definitions and notations In this section we summarize results and notations which will be used in this paper. We use the language of coding theory to approach the geometric problem of tiling a flat torus by cubes. The maximum (or Chebyshev) metric has been used in the context of coding theory and telecommunication mostly over permutation codes (rank modulation codes); some references include [12, 30, 32]. Let $\mathbb{Z}_{q}=\{0,1,\ldots,q-1\}$ be the set of integers modulo $q$. Associated with $\mathbb{Z}_{q}$ we have a (non-directed) circular graph whose vertices are the elements of $\mathbb{Z}_{q}$ and whose edges are given by $\{x,x+1\}$ for $x\in\mathbb{Z}_{q}$. The distance $d$ with respect to this graph is given by $d(x,y)=\min\{|x-y|,q-|x-y|\}$. For $p\in[1,\infty]$ and $x=(x_{1},\ldots,x_{n}),y=(y_{1},\ldots,y_{n})\in\mathbb{Z}_{q}^{n}$, the $p$-Lee metric in $\mathbb{Z}_{q}^{n}$ is given by $d_{p}(x,y)=\left\{\begin{array}[]{ll}\sqrt[p]{\sum_{i=1}^{n}d(x_{i},y_{i})^{p}}&\textrm{if }p\in[1,\infty),\\ \max_{i=1}^{n}d(x_{i},y_{i})&\textrm{if }p=\infty.\end{array}\right.$ We denote $|x|_{p}=d_{p}(x,0)$ for $x\in\mathbb{Z}_{q}^{n}$. The case $p=1$, known as the Lee metric, is, apart from the Hamming metric, one of the most frequently used metrics in coding theory due to several applications [2, 5, 26, 27]. Also of interest in this field are the cases $p=2$ (Euclidean distance) and $p=\infty$ (maximum or Chebyshev metric), which is our focus in this paper because of its association with cube packings. We denote the maximum metric simply by $d(x,y)$, and $B(x,r)$ is the ball centered at the point $x\in\mathbb{Z}_{q}^{n}$ with radius $r\in\mathbb{N}$ with respect to this metric.
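The metric definitions above translate directly into code; the following is a minimal Python sketch (the function names `circ_dist` and `d_p` are ours, not the paper's notation):

```python
from math import inf

def circ_dist(x: int, y: int, q: int) -> int:
    """Distance on the circular graph of Z_q: d(x, y) = min(|x - y|, q - |x - y|)."""
    a = abs(x - y) % q
    return min(a, q - a)

def d_p(x, y, q: int, p):
    """p-Lee metric on Z_q^n; p = inf gives the maximum (Chebyshev) metric."""
    coords = [circ_dist(xi, yi, q) for xi, yi in zip(x, y)]
    if p == inf:
        return max(coords)
    return sum(c ** p for c in coords) ** (1.0 / p)
```

For instance, in $\mathbb{Z}_{6}^{2}$ the points $(0,0)$ and $(5,1)$ are at maximum distance $1$ (each coordinate wraps around), while their Lee ($p=1$) distance is $2$.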
An $n$-dimensional $q$-ary code is a subset $C$ of $\mathbb{Z}_{q}^{n}$ and we assume here $\#C>1$. We refer to elements of $C$ as codewords and say that $C$ is a linear code when it is a subgroup of $(\mathbb{Z}_{q}^{n},+)$. The minimum distance of $C$ is given by $\mbox{dist}(C)=\min\{d(x,y):x,y\in C,x\neq y\}$ and if $C$ is linear it is also given by $\mbox{dist}(C)=\min\{|x|_{\infty}:x\in C,x\neq 0\}$. The packing radius of $C$ is the greatest non-negative integer $e=e(C)$ (sometimes denoted by $r_{p}(C)$) such that the balls $B(c,e)$ are disjoint, where $c$ runs over the codewords. An $n$-dimensional $q$-ary code with packing radius $e$ is called an $(n,e,q)$-code. The covering radius is the least positive integer $r_{c}=r_{c}(C)$ such that $\mathbb{Z}_{q}^{n}=\bigcup_{c\in C}B(c,r_{c})$. A code $C$ is perfect when $r_{p}(C)=r_{c}(C)=e$; in this case we have a partition of the space into disjoint balls of the same radius $e$, that is, $\mathbb{Z}_{q}^{n}=\biguplus_{c\in C}B(c,e)$. For perfect codes in the maximum metric we have $\mbox{dist}(C)=2e+1$. If $C\subseteq\mathbb{Z}_{q}^{n}$ is a perfect code with packing radius $e$ we have a map $f_{C}:\mathbb{Z}_{q}^{n}\rightarrow C$, called the error-correcting function of $C$, with the property $d(x,f_{C}(x))\leq e$ for $x\in\mathbb{Z}_{q}^{n}$. The set of all perfect $(n,e,q)$-codes is denoted by $PL^{\infty}(n,e,q)$ and its subset of linear codes is denoted by $LPL^{\infty}(n,e,q)$. The sphere packing condition states that for all $C\in PL^{\infty}(n,e,q)$ we have $\#C\times(2e+1)^{n}=q^{n}$, which implies that $q=(2e+1)t$ for some $t\in\mathbb{Z}^{+}$ and $\#C=t^{n}$. Conversely, if $q=(2e+1)t$ with $e,t\in\mathbb{Z}^{+}$ the code $C=(2e+1)\mathbb{Z}_{q}^{n}$ is an $(n,e,q)$-perfect code (called the cartesian code), so $LPL^{\infty}(n,e,q)\neq\emptyset$. From now on, we assume that $q=(2e+1)t$ where $e,t\in\mathbb{Z}^{+}$.
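For small parameters, the sphere packing condition and the perfection of the cartesian code $(2e+1)\mathbb{Z}_{q}^{n}$ can be confirmed by brute force; the sketch below (helper names are ours) checks $n=2$, $e=1$, $q=6$:

```python
from itertools import product

def max_dist(x, y, q):
    """Maximum (Chebyshev) metric on Z_q^n, built from the coordinate-wise Lee distance."""
    return max(min(abs(a - b), q - abs(a - b)) for a, b in zip(x, y))

def is_perfect(code, n, e, q):
    """Brute force: every point of Z_q^n lies in exactly one ball B(c, e)."""
    return all(sum(1 for c in code if max_dist(x, c, q) <= e) == 1
               for x in product(range(q), repeat=n))

# Cartesian code (2e+1)Z_q^n for n = 2, e = 1, q = 6 (so t = 2).
n, e, q = 2, 1, 6
t = q // (2 * e + 1)
cartesian = [tuple((2 * e + 1) * k for k in ks) for ks in product(range(t), repeat=n)]

assert len(cartesian) == t ** n                       # #C = t^n
assert len(cartesian) * (2 * e + 1) ** n == q ** n    # sphere packing: #C * (2e+1)^n = q^n
assert is_perfect(cartesian, n, e, q)
```

Here the four codewords $\{0,3\}\times\{0,3\}$ carry disjoint $3\times 3$ cubes that tile $\mathbb{Z}_{6}^{2}$.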
An $n$-dimensional lattice $\Lambda$ is a subset of $\mathbb{R}^{n}$ of the form $\Lambda=\nu_{1}\mathbb{Z}+\nu_{2}\mathbb{Z}+\ldots+\nu_{n}\mathbb{Z}$ where $\{\nu_{1},\ldots,\nu_{n}\}$ is a basis of $\mathbb{R}^{n}$ (as an $\mathbb{R}$-vector space). There is a strong connection between $n$-dimensional linear codes over $\mathbb{Z}_{q}$ and $q$-periodic integer lattices in $\mathbb{R}^{n}$. Associated with a linear $(n,e,q)$-code we have the lattice $\Lambda_{C}=\pi^{-1}(C)$ where $\pi:\mathbb{Z}^{n}\rightarrow\mathbb{Z}_{q}^{n}$ is the canonical projection (taking modulo $q$ in each coordinate). This way of obtaining a lattice from a code is known as Construction A [3]. On the other hand, if $\Lambda\subseteq\mathbb{R}^{n}$ is a $q$-periodic integer lattice the image $\pi(\Lambda)$ is clearly an $n$-dimensional $q$-ary linear code, and in fact the map $\pi$ induces a correspondence between both sets. Considering the maximum metric in $\mathbb{R}^{n}$, the packing radius, covering radius and perfection for lattices with respect to this metric can be defined analogously to the way it was done for codes. In this sense, the above correspondence establishes a bijection between $(n,e,q)$-perfect codes and $q$-periodic integer perfect lattices. We remark that the preservation of perfection by Construction A does not hold for $p$-Lee metrics in general. For example, when $p=1$ the existence of an $e$-perfect $q$-ary code with respect to the Lee metric does not ensure that the associated lattice is $q$-perfect if $q<2e+1$ (for example, consider the binary code $C=\{(0,0,0),(1,1,1)\}$). The above correspondence between codes and lattices allows us to use the machinery of lattices to approach problems in coding theory, especially those related to perfect codes. Next we introduce further notations. Notation 2.1. Let $A$ be a ring (in particular a $\mathbb{Z}$-module) and $a\in A$.
$\bullet$ $\mathcal{M}_{m\times n}(A)$ denotes the set of rectangular $m\times n$ matrices with coefficients in $A$. We identify $A^{n}$ with $\mathcal{M}_{1\times n}(A)$ and, when $m=n$, we set $\mathcal{M}_{n}(A)=\mathcal{M}_{n\times n}(A)$. $\bullet$ $\nabla_{n}(A)$ denotes the set of upper triangular matrices in $\mathcal{M}_{n}(A)$ and $\nabla_{n}(a,A)$ is the subset of $\nabla_{n}(A)$ whose elements in the principal diagonal are all equal to $a$. For $A=\mathbb{Z}$, we set $\nabla_{n}(a)=\nabla_{n}(a,\mathbb{Z})$. $\bullet$ For $x\in\mathbb{Z}^{n}$ we denote $\overline{x}=x+q\mathbb{Z}^{n}\in\mathbb{Z}_{q}^{n}$. If $M\in\mathcal{M}_{n}(\mathbb{Z})$ we denote by $\overline{M}$ the matrix obtained from $M$ by taking modulo $q$ in each coordinate. $\bullet$ Let $M\in\mathcal{M}_{m\times n}(A)$ whose rows are $M_{1},\ldots,M_{m}$. We denote $\mbox{span}(M)=M_{1}\mathbb{Z}+\ldots+M_{m}\mathbb{Z}$. In other words, $\mbox{span}(M)$ is the subgroup of $A^{n}$ generated by the rows of $M$. $\bullet$ We denote $[n]=\{1,2,\ldots,n\}$ and by $\{e_{i}:i\in[n]\}$ the standard basis of $\mathbb{R}^{n}$ (or, in general, $e_{i}$ denotes the element of $A^{n}$ with a $1$ in the $i$-th coordinate and $0$ elsewhere). $\bullet$ We denote by $S_{n}$ the group of bijections of the set $[n]$ (permutations). A standard way to define a code is from a generator matrix. For convenience, we will slightly modify the definition of generator matrix for a $q$-ary code. Definition 2.2. Let $C$ be an $(n,e,q)$-code and $\Lambda_{C}$ be its associated lattice via Construction A. A generator matrix for $C$ is a matrix $M\in\mathcal{M}_{n}(\mathbb{Z})$ whose rows form a $\mathbb{Z}$-basis for the lattice $\Lambda_{C}$. Remark 2.3. Let $M$ be a generator matrix for a $q$-ary code $C$ and $\overline{M}$ the matrix obtained from $M$ by taking modulo $q$ in each coordinate.
Clearly the rows of $\overline{M}$ form a generating set for $C$ (as a $\mathbb{Z}$-module), but the converse is false if $q\mathbb{Z}^{n}\not\subseteq\mbox{span}(M)$. In fact, if $M$ is a matrix such that $\overline{M}$ generates $C$ as a $\mathbb{Z}$-module, then $M$ is a generator matrix for $C$ if and only if the matrix $qM^{-1}$ has integer coefficients. In addition to the problem of describing the set $LPL^{\infty}(n,e,q)$ we are also interested in describing isometry classes and isomorphism classes of such codes. We consider here only linear isometries of $\mathbb{Z}_{q}^{n}$ (i.e. homomorphisms $f:\mathbb{Z}_{q}^{n}\rightarrow\mathbb{Z}_{q}^{n}$ which preserve the maximum norm). It is easy to see that for $q>3$ the isometry group $\mathcal{G}$ of $\mathbb{Z}_{q}^{n}$ is given by $\mathcal{G}=\{\theta\eta_{a}:\theta\in S_{n},a\in\mathbb{Z}_{2}^{n}\}$, where $\theta(e_{i})=e_{\theta(i)}$ and $\eta_{a}(e_{i})=(-1)^{a_{i}}e_{i}$ for $1\leq i\leq n$. For $q\leq 3$ every monomorphism is an isometry. Notation 2.4. Let $\phi:\mathcal{G}\times LPL^{\infty}(n,e,q)\rightarrow LPL^{\infty}(n,e,q)$ be the action given by $(f,C)\mapsto f(C)$. We denote the quotient space by this action as $LPL^{\infty}(n,e,q)/\mathcal{G}$ and the orbit of an element $C$ by $[C]_{\mathcal{G}}$. If $[C_{1}]_{\mathcal{G}}=[C_{2}]_{\mathcal{G}}$ we say that $C_{1}$ and $C_{2}$ are geometrically equivalent and we denote this relation by $C_{1}\underset{\mathcal{G}}{\sim}C_{2}$. In order to describe isomorphism classes of codes we consider the following equivalence relation in $LPL^{\infty}(n,e,q)$: $C_{1}\underset{\mathcal{A}}{\sim}C_{2}$ if there exists an isomorphism $f:\mathbb{Z}_{q}^{n}\rightarrow\mathbb{Z}_{q}^{n}$ such that $C_{1}=f(C_{2})$. When $C_{1}\underset{\mathcal{A}}{\sim}C_{2}$ we say that $C_{1}$ and $C_{2}$ are algebraically equivalent, and we denote by $LPL^{\infty}(n,e,q)/\mathcal{A}$ the quotient set and by $[C]_{\mathcal{A}}$ the equivalence class of $C$ under this relation.
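The integrality criterion of Remark 2.3 is easy to test in the $2\times 2$ case, since $M^{-1}$ has a closed form; a small sketch under our own naming (the function and the sample matrices are ours):

```python
from fractions import Fraction

def is_generator_matrix_2x2(M, q):
    """Remark 2.3 for a 2x2 integer matrix M whose rows mod q generate the code:
    M is a generator matrix iff q * M^{-1} has integer entries."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        return False
    # q * M^{-1} = (q / det) * [[d, -b], [-c, a]]
    return all(Fraction(q * v, det).denominator == 1 for v in (d, -b, -c, a))

# A Theorem 3.10-style matrix for q = 15 (e = 1, t = 5, so h1 = 3), with k = 1:
assert is_generator_matrix_2x2([[3, 3], [0, 3]], 15)
# The rows of [[2, 0], [0, 2]] mod 15 generate all of Z_15^2 (gcd(2, 15) = 1),
# yet it is not a generator matrix: 15 * M^{-1} = diag(15/2, 15/2) is not integral.
assert not is_generator_matrix_2x2([[2, 0], [0, 2]], 15)
```

The second example is exactly the failure mode the remark warns about: $\overline{M}$ generates the code, but $q\mathbb{Z}^{2}\not\subseteq\mbox{span}(M)$.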
It is important to remark that if $M$ is a generator matrix for a code $C\in LPL^{\infty}(n,e,q)$, then the group structure of $C$ is determined by the Smith normal form of $A=qM^{-1}$. Moreover, if the Smith normal form of $A$ is $D=\mbox{diag}(a_{1},a_{2},\ldots,a_{n})$ then $C\simeq\mathbb{Z}_{a_{1}}\times\mathbb{Z}_{a_{2}}\times\ldots\times\mathbb{Z}_{a_{n}}$ and $\#C=\det(A)=q^{n}/\det(M)$ (Proposition 3.1 of [1]). In Section 3 we parametrize isomorphism classes of perfect codes through certain generalized cosets of $\mathbb{Z}_{d}$, defined as follows. Definition 2.5. Let $A$ be a commutative ring with unit and $A^{*}$ the multiplicative group of its invertible elements. A generalized coset is a set of the form $xC$ where $C<A^{*}$ (i.e. $C$ is a multiplicative subgroup of $A^{*}$). We denote $A/C=\{xC:x\in A\}$. Remark 2.6. Let $C<A^{*}$ and $x,y\in A$. If $xC\cap yC\neq\emptyset$ then $xC=yC$, so $C$ induces an equivalence relation in $A$ whose quotient set coincides with $A/C$. The $n$-dimensional $q$-ary flat torus $\mathcal{T}_{q}^{n}$ is obtained from the cube $[0,q]^{n}$ by identifying its opposite faces, see Figure 1. It can also be obtained through the quotient $\mathcal{T}_{q}^{n}=\mathbb{R}^{n}/q\mathbb{Z}^{n}$, inheriting a natural group structure induced by this quotient. Given a translation-invariant metric $d$ in $\mathbb{R}^{n}$, every ball $B=B(0,r)$ in this metric has an associated polyomino $P_{B}\subseteq\mathbb{R}^{n}$ given by $$P_{B}=\biguplus_{x\in B\cap\mathbb{Z}^{n}}x+\left[-1/2,1/2\right]^{n}.$$ In this way, tiling $\mathbb{Z}^{n}$ by translated copies of $B$ is equivalent to tiling $\mathbb{R}^{n}$ by translated copies of its associated polyomino $P_{B}$. When $P_{B}\subseteq\left[\frac{-q}{2},\frac{q}{2}\right]^{n}$, $q$-periodic tilings of $\mathbb{R}^{n}$ by translated copies of $P_{B}$ are in correspondence with tilings of $\mathcal{T}_{q}^{n}$ by translated copies of $\overline{P_{B}}$.
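The Smith-normal-form description of the group structure is particularly simple in the $2\times 2$ case, where the invariant factors of an integer matrix $A$ are $a_{1}=\gcd$ of its entries and $a_{2}=|\det A|/a_{1}$. A sketch (our own helper, assuming $qM^{-1}$ is integral with positive determinant):

```python
from math import gcd

def group_structure_2x2(M, q):
    """Invariant factors (a1, a2) of A = q * M^{-1} for a 2x2 generator matrix M;
    then C ~ Z_{a1} x Z_{a2}.  For 2x2 integer A: a1 = gcd of entries, a1*a2 = |det A|."""
    (a, b), (c, d) = M
    det_m = a * d - b * c
    # Entries of A = q * M^{-1}; assumes integrality (M is a generator matrix).
    A = [q * d // det_m, -q * b // det_m, -q * c // det_m, q * a // det_m]
    det_a = abs(q * q // det_m)          # det(A) = q^2 / det(M) = #C
    a1 = gcd(gcd(A[0], A[1]), gcd(A[2], A[3]))
    return (a1, det_a // a1)

# Cartesian code, q = 9, e = 1: A = 3*I, so C ~ Z_3 x Z_3.
assert group_structure_2x2([[3, 0], [0, 3]], 9) == (3, 3)
# M = [[3, 1], [0, 3]]: A = [[3, -1], [0, 3]] has entry gcd 1, so C ~ Z_9 (cyclic).
assert group_structure_2x2([[3, 1], [0, 3]], 9) == (1, 9)
```

The two examples show that codes of the same cardinality ($t^{2}=9$) can realize different group structures, which is exactly the classification question pursued in Sections 3 and 5.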
This association provides an important geometric tool to study perfect codes over $\mathbb{Z}$. In the seminal paper of Golomb and Welch [7], the authors use this approach based on polyominoes to settle several results on perfect Lee codes over large alphabets. Since we consider the maximum metric, the polyominoes associated with balls in this metric correspond to cubes of odd length centered at points of $\mathbb{Z}^{n}$ (and only this type of cube will be considered in this paper). The condition $q=(2e+1)t$ guarantees a correspondence between tilings of the torus $\mathcal{T}_{q}^{n}$ by cubes of length $2e+1$ and perfect codes in $LPL^{\infty}(n,e,q)$. Definition 2.7. An $n$-dimensional cartesian code is a code of the form $(2e+1)\mathbb{Z}_{q}^{n}$ for some $q\in\mathbb{Z}^{+}$ and $e\in\mathbb{N}$ such that $2e+1\mid q$. A linear $q$-ary code is a subgroup of $(\mathbb{Z}_{q}^{n},+)$ (example in Figure 2). A cyclic $q$-ary code is a linear $q$-ary code which is cyclic as an abelian group. The code $C=\mathbb{Z}_{q}^{n}$ is a perfect code and we refer to it as the trivial code. Definition 2.8. We say that a code $C\in PL^{\infty}(n,e,q)$ is standard if there exists a canonical vector $e_{i}$ for some $i$, $1\leq i\leq n$, such that $C+(2e+1)e_{i}\subseteq C$. In this case we also say that $C$ is of type $i$, see Figure 3. As we will see later (Remark 3.7), a code may have no type or more than one type (for example, $n$-dimensional cartesian codes are of type $i$ for $1\leq i\leq n$). The following theorem of Hajós [31], also known as Minkowski's Conjecture, is of fundamental importance when we approach perfect codes in arbitrary dimensions. Theorem 2.9 (Minkowski-Hajós). Every tiling of $\mathbb{R}^{n}$ by cubes of the same length whose centers form a lattice contains two cubes that meet in an $(n-1)$-dimensional face. Corollary 2.10. Every linear perfect code $C\in LPL^{\infty}(n,e,q)$ is standard. 3.
Perfect codes in dimension two In this section we study two-dimensional perfect codes with respect to the maximum metric. First we describe the set of all (not necessarily linear) $(2,e,q)$-perfect codes and how they can be obtained from a one-dimensional perfect code using the horizontal or vertical construction. In particular we obtain that every two-dimensional perfect code is standard, which is not true in higher dimensions. This result in dimension $2$ is mentioned without proof in [11]. The proof presented here illustrates well the coding theory approach to be used in further results. Then we focus on the linear case, providing generator matrices for perfect codes and describing isometry classes and isomorphism classes of the $(2,e,q)$-perfect codes. The following result (whose proof is straightforward) characterizes the parameters for which a perfect code exists. Proposition 3.1. A necessary and sufficient condition for the existence of an $n$-dimensional $q$-ary $e$-perfect code in the maximum metric is that $q=(2e+1)t$ for some integer $t>1$. Moreover, if this condition is satisfied, there exists a code in $LPL^{\infty}(n,e,q)$. Corollary 3.2. There exists a non-trivial perfect code over $\mathbb{Z}_{q}$ if and only if $q$ is neither a power of $2$ nor a prime number. These results lead us to restrict to the case $q=(2e+1)t$ where $e\geq 0$ and $t>1$ are integers, and we will maintain this assumption throughout this paper. 3.1. Linear and non-linear two-dimensional perfect codes It is immediate to see that the only perfect codes $C$ in $PL^{\infty}(1,e,q)$ are of the form $a+(2e+1)\mathbb{Z}_{q}$ where $q=(2e+1)t$. If we fix a map $h:\mathbb{Z}_{t}\rightarrow\mathbb{Z}_{q}$ we can construct a two-dimensional $q$-ary perfect code as follows: $\bullet$ (Horizontal construction) $C_{1}(a,h)=\{(h(k)+(2e+1)s,a+(2e+1)k):k,s\in\mathbb{Z}_{t}\}$. $\bullet$ (Vertical construction) $C_{2}(a,h)=\{(a+(2e+1)k,h(k)+(2e+1)s):k,s\in\mathbb{Z}_{t}\}$.
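These constructions are easy to exercise numerically; here is a brute-force check of the vertical construction for small parameters (the names `C2` and `is_perfect`, and the particular map $h$, are ours):

```python
from itertools import product

def C2(a, h, e, t):
    """Vertical construction C_2(a, h) = {(a + (2e+1)k, h(k) + (2e+1)s) : k, s in Z_t}."""
    q = (2 * e + 1) * t
    return {((a + (2 * e + 1) * k) % q, (h[k] + (2 * e + 1) * s) % q)
            for k in range(t) for s in range(t)}

def is_perfect(code, e, q):
    """Every point of Z_q^2 lies in exactly one max-metric ball of radius e."""
    lee = lambda x, y: min(abs(x - y), q - abs(x - y))
    return all(sum(1 for c in code
                   if max(lee(p[0], c[0]), lee(p[1], c[1])) <= e) == 1
               for p in product(range(q), repeat=2))

e, t = 1, 3                                  # q = 9
q = (2 * e + 1) * t
code = C2(a=1, h=[0, 1, 5], e=e, t=t)        # an arbitrary shift map h: Z_t -> Z_q
assert len(code) == t ** 2
assert is_perfect(code, e, q)
```

Each column band of width $2e+1$ is filled by one coset of $(2e+1)\mathbb{Z}_{q}$ in the second coordinate, which is why an arbitrary $h$ still yields a perfect code.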
The above construction gives us $(2,e,q)$-codes of cardinality $t^{2}$ and minimum distance $d\geq 2e+1$, from which it is easy to deduce perfection. In fact we obtain $(2,e,q)$-perfect codes of type $1$ (if the horizontal construction is used) or of type $2$ (if the vertical construction is used). Moreover, every two-dimensional perfect code can be obtained in this way, as we will see next. Lemma 3.3. Let $\pi_{i}:\mathbb{Z}_{q}^{n}\rightarrow\mathbb{Z}_{q}$ be the canonical projection (i.e. $\pi_{i}(x_{1},\ldots,x_{n})=x_{i}$), $C\in PL^{\infty}(n,e,q)$, $f_{C}$ be its error-correcting function and $x$ be an element of $\mathbb{Z}_{q}^{n}$. Then, • if $f_{C}(x)\neq f_{C}(x-e_{i})$ then $\pi_{i}\circ f_{C}(x)=\pi_{i}(x)+e$; • if $f_{C}(x)\neq f_{C}(x+e_{i})$ then $\pi_{i}\circ f_{C}(x)=\pi_{i}(x)-e$. Proof. Let $c=f_{C}(x)$, $x_{i}=\pi_{i}(x)$ and $c_{i}=\pi_{i}(c)$. Denoting by $d$ the Lee metric in $\mathbb{Z}_{q}$, $f_{C}(x)=c$ implies $M_{i}=\max\{d(x_{j},c_{j}):1\leq j\leq n,j\neq i\}\leq e$ and $d(x_{i},c_{i})\leq e$. Since $f_{C}(x-e_{i})\neq c$ we have that $d(x-e_{i},c)=\max\{M_{i},d(x_{i}-1,c_{i})\}\geq e+1$ and therefore $d(x_{i}-1,c_{i})\geq e+1$. We obtain the inequalities $d(x_{i},c_{i})\leq e$ and $d(x_{i}-1,c_{i})\geq e+1$, which imply $c_{i}-x_{i}=e$. The other case can be obtained from this one by considering the isometry $\eta_{i}$ of $\mathbb{Z}_{q}^{n}$ given by $\eta_{i}(x_{1},\ldots,x_{i},\ldots,x_{n})=(x_{1},\ldots,-x_{i},\ldots,x_{n})$. ∎ Lemma 3.4. If $C\in PL^{\infty}(2,e,q)$ satisfies $(2e+1)\mathbb{Z}_{q}\times\{0\}\subseteq C$ then $C$ is a standard code of type $1$. Proof. Assume, to the contrary, that there is a codeword $c=(c_{1},c_{2})\in C$ such that $c+(2e+1)e_{1}=(c_{1}+2e+1,c_{2})\not\in C$, and take $c$ with this property such that $c_{2}$ is minimal. We claim that $c_{2}\geq 2e+1$.
Indeed, if $0\leq c_{2}<2e+1$ and we express $c_{1}=(2e+1)k+r$ with $|r|\leq e$, then $((2e+1)k,e)$ belongs to both balls $B_{\infty}(((2e+1)k,0),e)$ and $B_{\infty}(c,e)$, which is a contradiction; so $c_{2}\geq 2e+1$. We consider now $p=(c_{1},c_{2}-(e+1))$; then $f_{C}(p+e_{2})=c\neq f_{C}(p)$ and by Lemma 3.3 we have $f_{C}(p)=(a,c_{2}-(2e+1))$ for some $a\in\mathbb{Z}_{q}$. We observe that $c_{2}-(2e+1)\geq 0$ and by the minimality of $c_{2}$ we have that $(a+(2e+1)k,c_{2}-(2e+1))\in C$ for $0\leq k<t$. Consider now $p^{\prime}=(c_{1}+e+1,c_{2}-e)$ and express $c_{1}+e+1-a=(2e+1)v+w$ with $v,w\in\mathbb{Z}_{q}$ and $|w|\leq e$. Clearly, $f_{C}(p^{\prime}-e_{1})=c\neq f_{C}(p^{\prime})$ and $f_{C}(p^{\prime}-e_{2})=(a+(2e+1)v,c_{2}-(2e+1))\neq f_{C}(p^{\prime})$; thus, by Lemma 3.3 we have $f_{C}(p^{\prime})=p^{\prime}+(e,e)=c+(2e+1)e_{1}$, which is a contradiction. ∎ By the Minkowski-Hajós Theorem (Theorem 2.9) every linear $n$-dimensional perfect code in the maximum metric is standard, but in dimension two this is also true for all perfect codes. Proposition 3.5. Every two-dimensional perfect code in the maximum metric is standard. Proof. Let $C\in PL^{\infty}(2,e,q)$ with $q=(2e+1)t$ and $t>1$ an integer. Suppose that $C$ is not of type 2, so there exists a codeword $c\in C$ such that $c+(2e+1)e_{2}\not\in C$. Composing with a translation if necessary we can assume $c=0$. Consider $p=(0,e+1)$; by Lemma 3.3 we have $f_{C}(p)=(a,2e+1)$ with $|a|\leq e$ and $a\neq 0$. Composing with the isometry $(x,y)\mapsto(-x,y)$ if necessary we may assume $0<a\leq e$. We consider the following statement: $\{p_{h}=((2e+1)h,0),q_{h}=(a+(2e+1)h,2e+1)\}\subseteq C$, which is valid for $h=0$ (from above). Assume that this property holds for a fixed $h$, $0\leq h<t$, and consider $p=((2e+1)h+e+1,e)$. This point satisfies $f_{C}(p-e_{1})=p_{h}\neq f_{C}(p)$ and $f_{C}(p+e_{2})=q_{h}\neq f_{C}(p)$, so by Lemma 3.3 we have $f_{C}(p)=p+(e,-e)=((2e+1)(h+1),0)=p_{h+1}\in C$.
Now consider $p^{\prime}=(a+(2e+1)h+e+1,e+1)$, which verifies $f_{C}(p^{\prime}-e_{1})=q_{h}\neq f_{C}(p^{\prime})$ and $f_{C}(p^{\prime}-e_{2})=p_{h+1}\neq f_{C}(p^{\prime})$, so by Lemma 3.3 we have $f_{C}(p^{\prime})=p^{\prime}+(e,e)=(a+(2e+1)(h+1),2e+1)=q_{h+1}\in C$. By induction we have that $(2e+1)\mathbb{Z}_{q}\times\{0\}\subseteq C$ and by Lemma 3.4 our code $C$ is standard. ∎ Corollary 3.6. Every two-dimensional perfect code $C$ in the maximum metric is of the form $C=C_{1}(a,h)$ or $C=C_{2}(a,h)$ for some $a\in\mathbb{Z}_{q}$ and some function $h:\mathbb{Z}_{t}\rightarrow\mathbb{Z}_{q}$. Remark 3.7. Proposition 3.5 cannot be generalized to higher dimensions. For example the code $C=\{(0,0,0),(5,0,0),(1,0,5),(6,0,5),(1,5,0),(6,5,1),(1,5,5),(6,5,6)\}\in PL^{\infty}(3,2,10)$ is a three-dimensional non-standard perfect code. Corollary 3.8. The number of $(2,e,q)$-perfect codes is $(2e+1)^{2}\left(2(2e+1)^{t-1}-1\right)$. Proof. We consider the sets $L=PL^{\infty}(2,e,q)$, $L^{0}=\{C\in L:0\in C\}$ and $L_{i}^{0}=\{C\in L^{0}:C\textrm{ is of type }i\}$ for $i=1,2$. The map $L\twoheadrightarrow L^{0}$ given by $C\mapsto C-f_{C}(0)$ is $(2e+1)^{2}$ to $1$, so $\#L=(2e+1)^{2}\#L^{0}$. By Proposition 3.5, $L^{0}=L_{1}^{0}\cup L_{2}^{0}$; considering the involution $C\mapsto\theta(C)$ where $\theta(x,y)=(y,x)$, which interchanges $L_{1}^{0}$ and $L_{2}^{0}$ and satisfies $L_{1}^{0}\cap L_{2}^{0}=\{(2e+1)\mathbb{Z}_{q}^{2}\}$, we have $\#L^{0}=2\#L_{2}^{0}-1$. Finally, codes in $L_{2}^{0}$ are uniquely determined by a function $h:\mathbb{Z}_{t}\rightarrow\mathbb{Z}_{2e+1}$ verifying $h(0)=0$, thus $\#L_{2}^{0}=(2e+1)^{t-1}$ and so $\#L=(2e+1)^{2}\left(2(2e+1)^{t-1}-1\right)$. ∎ 3.2. Generator matrices and admissible structures for two-dimensional perfect codes In this part we provide generator matrices for linear perfect codes and we describe all two-dimensional cyclic perfect codes in the maximum metric.
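As a numerical sanity check before turning to generator matrices, the count of Corollary 3.8 can be reproduced by exhaustive search for the smallest non-trivial parameters $e=1$, $t=2$ (so $q=6$), where the formula predicts $3^{2}(2\cdot 3^{1}-1)=45$ codes. A minimal Python sketch (the helper names are ours, not from the text):

```python
from itertools import combinations

def lee(a, q):
    """Lee weight of a in Z_q."""
    a %= q
    return min(a, q - a)

def dist(u, v, q):
    """Maximum metric over coordinatewise Lee distances."""
    return max(lee(x - y, q) for x, y in zip(u, v))

def count_perfect_codes(e, t):
    """Count all (2, e, q)-perfect codes with q = (2e+1)t by brute force.

    By the sphere packing condition, a subset of Z_q^2 of size t^2 whose
    pairwise distances are >= 2e+1 packs the q^2 points exactly, so it is
    perfect; hence it suffices to count such subsets.
    """
    q = (2 * e + 1) * t
    points = [(x, y) for x in range(q) for y in range(q)]
    return sum(
        1
        for cand in combinations(points, t * t)
        if all(dist(u, v, q) >= 2 * e + 1 for u, v in combinations(cand, 2))
    )

e, t = 1, 2
formula = (2 * e + 1) ** 2 * (2 * (2 * e + 1) ** (t - 1) - 1)
print(count_perfect_codes(e, t), formula)
```

The two printed numbers should agree (the formula gives $45$); the search space $\binom{36}{4}$ is small enough for a direct check.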
We also describe which group structures can be represented by two-dimensional linear perfect codes. Notation 3.9. We denote by $LPL^{\infty}(2,e,q)_{o}$ the set of linear $(2,e,q)$-perfect codes of type $2$. By Proposition 3.5, every two-dimensional perfect code is of type $1$ or of type $2$, and the isometry $\theta(x,y)=(y,x)$ induces a correspondence between the codes of type $1$ and the codes of type $2$. So, without loss of generality we can restrict our study to type $2$ perfect codes. Theorem 3.10. Let $q=(2e+1)t$ with $t>1$, $d_{1}=\gcd(2e+1,t)$ and $h_{1}=\frac{2e+1}{d_{1}}$. Every integer matrix of the form $M=\left(\begin{matrix}2e+1&kh_{1}\\ 0&2e+1\end{matrix}\right)$ with $k\in\mathbb{Z}$ is the generator matrix of some type $2$ perfect code $C\in LPL^{\infty}(2,e,q)_{o}$. Conversely, every type $2$ perfect code $C\in LPL^{\infty}(2,e,q)_{o}$ has a generator matrix of this form. Proof. Let $M=\left(\begin{matrix}2e+1&kh_{1}\\ 0&2e+1\end{matrix}\right)$ with $k\in\mathbb{Z}$. Since $qM^{-1}=\left(\begin{matrix}t&-k\frac{t}{d_{1}}\\ 0&t\end{matrix}\right)$ has integer coefficients, $M$ is the generator matrix of the $q$-ary code $C=\langle\overline{c_{1}},\overline{c_{2}}\rangle\subseteq\mathbb{Z}_{q}^{2}$ where $c_{1}=(2e+1,kh_{1})$ and $c_{2}=(0,2e+1)$. Every codeword is of the form $\overline{c}=x\overline{c_{1}}+y\overline{c_{2}}$ with $x,y\in\mathbb{Z}$. Since $||\overline{c}||_{\infty}=\max\{|\overline{(2e+1)x}|_{1},|\overline{kh_{1}x+(2e+1)y}|_{1}\}$, the inequality $||\overline{c}||_{\infty}<2e+1$ implies $\overline{c}=0$, thus the minimum distance of $C$ is $\mbox{dist}(C)\geq 2e+1$. In addition, the cardinality of $C$ is $\#C=q^{2}/\det(M)=t^{2}$, so by the sphere packing condition the code $C$ is perfect with packing radius $e$, and it is of type $2$ because it is linear and $\overline{c_{2}}=(2e+1)\overline{e_{2}}\in C$.
To prove the converse we consider a code $C\in LPL^{\infty}(2,e,q)_{o}$. Since $C$ is linear, $0\in C$ and $C=C_{2}(0,h)$ for some $h:\mathbb{Z}_{t}\rightarrow\mathbb{Z}_{q}$. In particular, $C$ has two codewords $c_{1}=(\overline{2e+1},\overline{y_{1}})$ and $c_{2}=(\overline{0},\overline{2e+1})$ where $y_{1}\in\mathbb{Z}$ is such that $\overline{y_{1}}=h(1)\in\mathbb{Z}_{q}$. Let $ty_{1}=(2e+1)s+r$ with $s,r\in\mathbb{Z}$ and $0\leq r<2e+1$. By linearity $tc_{1}-sc_{2}=(\overline{0},\overline{r})\in C$; since $C$ has minimum distance $2e+1$, this forces $r=0$ and $y_{1}=h_{1}k$ for some integer $k$. The code $C^{\prime}$ generated by $c_{1}$ and $c_{2}$ has generator matrix $M=\left(\begin{matrix}2e+1&kh_{1}\\ 0&2e+1\end{matrix}\right)$; therefore, by the first part, $C^{\prime}$ is a $(2,e,q)$-perfect code contained in $C$, so $C^{\prime}=C$. ∎ Notation 3.11. We denote by $LC_{q}(e,k)$ the $q$-ary perfect code whose generator matrix is given by $\left(\begin{matrix}2e+1&kh_{1}\\ 0&2e+1\end{matrix}\right)$. Remark 3.12. Replacing the first row by its sum with an integer multiple of the second row if necessary, we can always suppose that the number $k$ in the statement of Theorem 3.10 verifies $0\leq k<d_{1}$. In fact, it is possible to replace $k$ by any integer congruent to $k$ modulo $d_{1}$ with this elementary row operation, so $LC_{q}(e,k)=LC_{q}(e,k_{0})$ if $k\equiv k_{0}\pmod{d_{1}}$. Now we approach the problem of what group isomorphism classes are represented by linear $(2,e,q)$-perfect codes (admissible structures). By the sphere packing condition, if $C\in LPL^{\infty}(2,e,q)$ then $\#C=t^{2}$. The structure theorem for finitely generated abelian groups [6, p. 338] in this case establishes that $C\simeq\mathbb{Z}_{t/d}\times\mathbb{Z}_{dt}$ for some $d|t$ (where $d$ determines the isomorphism class of $C$).
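The generator-matrix description of Theorem 3.10 (together with Remark 3.12) can be checked computationally for small parameters. The following Python sketch (our own helper names, not from the text) builds the code generated by the rows of $M$ for $q=9$, $e=1$ (so $t=3$, $d_{1}=3$, $h_{1}=1$) and verifies that each $LC_{9}(1,k)$ is a type $2$ perfect code of cardinality $t^{2}$:

```python
from math import gcd

def lee(a, q):
    """Lee weight of a in Z_q."""
    a %= q
    return min(a, q - a)

def span(rows, q):
    """Z_q-span of the given row vectors of Z_q^2."""
    code = {(0, 0)}
    for r in rows:
        code = {((c[0] + x * r[0]) % q, (c[1] + x * r[1]) % q)
                for c in code for x in range(q)}
    return code

def is_perfect(code, e, q):
    """Sphere packing check in dimension 2: |C|(2e+1)^2 = q^2 together with
    pairwise maximum-Lee distance >= 2e+1 implies perfection."""
    if len(code) * (2 * e + 1) ** 2 != q ** 2:
        return False
    return all(max(lee(u[0] - v[0], q), lee(u[1] - v[1], q)) >= 2 * e + 1
               for u in code for v in code if u != v)

q, e = 9, 1
t = q // (2 * e + 1)
d1 = gcd(2 * e + 1, t)
h1 = (2 * e + 1) // d1
for k in range(d1):            # by Remark 3.12, k only matters modulo d1
    M = [(2 * e + 1, k * h1), (0, 2 * e + 1)]   # generator matrix of LC_q(e, k)
    C = span(M, q)
    assert len(C) == t * t
    assert (0, 2 * e + 1) in C  # type 2: contains (2e+1) * e_2
    assert is_perfect(C, e, q)
```

Here $k=0$ gives the cartesian code $3\mathbb{Z}_{9}^{2}$, while $k=1,2$ give the two non-cartesian type $2$ codes.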
In this way, the question of what isomorphism classes are represented by two-dimensional perfect codes in the maximum metric is equivalent to determining for what values of $d|t$ there exists $C\in LPL^{\infty}(2,e,q)$ such that $C\simeq\mathbb{Z}_{t/d}\times\mathbb{Z}_{dt}$. Lemma 3.13. Let $q=(2e+1)t$, $d_{1}=\gcd(2e+1,t)$, $h_{1}=\frac{2e+1}{d_{1}}$, $d_{2}=\gcd(d_{1},k)$, $h_{2}=\frac{d_{1}}{d_{2}}$, $k_{1}=\frac{k}{d_{2}}$ and $k^{\prime}\in\mathbb{Z}$ such that $k_{1}k^{\prime}\equiv 1\pmod{h_{2}}$. Then $N=\left(\begin{matrix}(2e+1)h_{2}&0\\ (2e+1)k^{\prime}&h_{1}d_{2}\end{matrix}\right)$ is a generator matrix for $LC_{q}(e,k)$. Proof. Let $M$ be the generator matrix for $LC_{q}(e,k)$ given in Theorem 3.10 and $U=\left(\begin{matrix}h_{2}&-k_{1}\\ k^{\prime}&\frac{1-k_{1}k^{\prime}}{h_{2}}\end{matrix}\right)$. Since $\det(U)=1$ and $UM=N$ we have that $N$ is also a generator matrix for $LC_{q}(e,k)$. ∎ Theorem 3.14. Let $q=(2e+1)t$, $k$ be an integer and $h_{2}=\frac{\gcd(2e+1,t)}{\gcd(2e+1,t,k)}$. (i) $LC_{q}(e,k)\simeq\mathbb{Z}_{{t}/{h_{2}}}\times\mathbb{Z}_{th_{2}}$ (isomorphic as groups). (ii) There exists $C\in LPL^{\infty}(2,e,q)$ such that $C\simeq\mathbb{Z}_{{t}/{d}}\times\mathbb{Z}_{td}$ if and only if $d\mid\gcd(2e+1,t)$. Proof. To prove (i) we consider the homomorphism $T:\mathbb{Z}^{2}\rightarrow\mathbb{Z}_{q}^{2}$ given by $T(x)=x\overline{N}$, where $N$ is as in Lemma 3.13. We have that $\ker(T)=\frac{t}{h_{2}}\mathbb{Z}\times th_{2}\mathbb{Z}$ and by the referred lemma $\mbox{Im}(T)=LC_{q}(e,k)$, so (i) follows from the first group isomorphism theorem [6, p. 307]. To prove (ii), observe that for every $k$ we have $h_{2}\mid\gcd(2e+1,t)$; conversely, if $d\mid d_{1}$ where $d_{1}=\gcd(2e+1,t)$, then $LC_{q}(e,\frac{d_{1}}{d})\simeq\mathbb{Z}_{{t}/{d}}\times\mathbb{Z}_{td}$. ∎ Corollary 3.15. There exists a two-dimensional perfect code $C\simeq\mathbb{Z}_{a}\times\mathbb{Z}_{b}$ with $a\mid b$ if and only if $ab$ is a perfect square and $b/a$ is an odd number. Proof.
($\Rightarrow$) By Theorems 3.10 and 3.14, if $\mathbb{Z}_{a}\times\mathbb{Z}_{b}\simeq C$ with $a|b$ for some perfect code $C$, then there exist integers $t,h_{2}$ and $e$ such that $a=\frac{t}{h_{2}}$, $b=th_{2}$ and $h_{2}\mid 2e+1$ (in particular $h_{2}$ is odd), thus $ab=t^{2}$ is a perfect square and $\frac{b}{a}=h_{2}^{2}$ is odd. ($\Leftarrow$) Let $ab=t^{2}$ and $b=as$ with $s$ odd. Since $a^{2}s=t^{2}$, $s$ is an odd perfect square, say $s=(2e+1)^{2}$, and $(2e+1)a=t$. Defining $q=(2e+1)t$, by Theorem 3.14 we have $LC_{q}(e,1)\simeq\mathbb{Z}_{a}\times\mathbb{Z}_{b}$. ∎ Corollary 3.16. Let $C\in LPL^{\infty}(2,e,q)$ with $q=(2e+1)t$. Then, $C\simeq\mathbb{Z}_{t}\times\mathbb{Z}_{t}\Leftrightarrow C$ is the cartesian code $C=(2e+1)\mathbb{Z}_{q}^{2}$. Proof. By Theorem 3.10 and Remark 3.12 every code is of the form $C=LC_{q}(e,k)$ for some $k$ with $0\leq k<d_{1}$, and by Theorem 3.14 we have $C\simeq\mathbb{Z}_{t}\times\mathbb{Z}_{t}\Leftrightarrow h_{2}=1\Leftrightarrow d_{1}=d_{2}\Leftrightarrow d_{1}\mid k\Leftrightarrow 2e+1\mid kh_{1}\Leftrightarrow k=0\Leftrightarrow C=(2e+1)\mathbb{Z}_{q}^{2}$. ∎ Corollary 3.17. There exists a linear two-dimensional $q$-ary perfect code $C$ that is non-cartesian if and only if $q=p^{2}a$ where $p$ is an odd prime number and $a$ is a positive integer. Proof. By Theorem 3.14 part (ii), there exists a $q$-ary non-cartesian perfect code if and only if $q=(2e+1)t$ for some integers $e$ and $t$ such that $\gcd(2e+1,t)>1$. This last condition is equivalent to $2e+1=pm$ and $t=pn$ for some odd prime $p$ and $m,n\in\mathbb{Z}^{+}$, thus $q=p^{2}a$ where $p$ is an odd prime and $a$ is a positive integer. ∎ Example 3.18. The first value of $q$ for which there exists a two-dimensional $q$-ary perfect code that is neither cartesian nor cyclic is $3^{2}\cdot 2$. An example of such a code has generators $\{(0,9),(1,3)\}\subseteq\mathbb{Z}_{18}^{2}$, see Figure 4. Corollary 3.19.
There exists a two-dimensional cyclic $q$-ary perfect code if and only if $q=p^{2}a$ where $p$ is an odd prime number and $a$ is an odd positive integer. Proof. By Theorem 3.14 part (ii), there exists a $q$-ary cyclic perfect code if and only if $q=(2e+1)t$ for some integers $e$ and $t$ such that $\gcd(2e+1,t)=t>1$. This last condition is equivalent to $2e+1=mt$ for some odd integer $m$, thus $q=mt^{2}$ where $m$ is an odd integer and $t>1$, which is equivalent to $q=ap^{2}$ where $a$ is an odd integer and $p$ is an odd prime number. ∎ Corollary 3.20. Let $q=(2e+1)t$. There exists a cyclic code in $LPL^{\infty}(2,e,q)$ if and only if $t\mid 2e+1$. Under this condition $LC_{q}(e,k)$ is cyclic if and only if $\gcd(k,t)=1$. Proof. By Theorem 3.14 part (ii), there exists a cyclic code in $LPL^{\infty}(2,e,q)$ if and only if $\gcd(2e+1,t)=t$, if and only if $t\mid 2e+1$. In this case, by Theorem 3.14 part (i), we have $LC_{q}(e,k)\simeq\mathbb{Z}_{t^{2}}\Leftrightarrow h_{2}=t\Leftrightarrow\gcd(2e+1,t,k)=\gcd(t,k)=1$. ∎ 3.3. Isometry and isomorphism classes of two-dimensional perfect codes Since every linear $(2,e,q)$-perfect code is isometric to a type $2$ linear perfect code, we can restrict to $LPL^{\infty}(2,e,q)_{o}$. Our main result here is a parametrization of the set $LPL^{\infty}(2,e,q)_{o}$ by the ring $\mathbb{Z}_{d_{1}}$ (where $d_{1}=\gcd(2e+1,t)$) in such a way that isometry classes and isomorphism classes correspond to certain generalized cosets. The following lemma can be obtained from the Chinese remainder theorem. Lemma 3.21. There exists $u\in\mathbb{Z}_{d}^{*}$ such that $a\equiv ub\pmod{d}$ if and only if $\gcd(a,d)=\gcd(b,d)$. Theorem 3.22. Let $q=(2e+1)t$, $d_{1}=\gcd(2e+1,t)$ and $h_{1}=\frac{2e+1}{d_{1}}$.
We have the parametrization (bijection): $$\psi:\mathbb{Z}_{d_{1}}\rightarrow LPL^{\infty}(2,e,q)_{o}$$ $$k+d_{1}\mathbb{Z}\mapsto LC_{q}(e,k),$$ which induces the parametrizations: $$\psi_{\mathcal{G}}:\frac{\mathbb{Z}_{d_{1}}}{\{1,-1\}}\rightarrow LPL^{\infty}(2,e,q)_{o}/\mathcal{G}$$ $$k\cdot\{1,-1\}\mapsto[\psi(k)]_{\mathcal{G}},$$ and $$\psi_{\mathcal{A}}:\frac{\mathbb{Z}_{d_{1}}}{\mathbb{Z}_{d_{1}}^{*}}\rightarrow LPL^{\infty}(2,e,q)_{o}/\mathcal{A}$$ $$k\cdot\mathbb{Z}_{d_{1}}^{*}\mapsto[\psi(k)]_{\mathcal{A}}.$$ Proof. By Theorem 3.10 and Remark 3.12 the map $\psi$ is well defined and is a surjection, so it remains to prove that $LC_{q}(e,k_{1})=LC_{q}(e,k_{2})\Leftrightarrow k_{1}\equiv k_{2}\pmod{d_{1}}$. Since both codes have the same cardinality $t^{2}$, we have $$LC_{q}(e,k_{1})=LC_{q}(e,k_{2})\Leftrightarrow LC_{q}(e,k_{1})\subseteq LC_{q}(e,k_{2})\Leftrightarrow(2e+1,h_{1}k_{1})\in LC_{q}(e,k_{2})$$ $$\Leftrightarrow\exists x,y\in\mathbb{Z}:\left\{\begin{array}[]{l}(2e+1)x+h_{1}k_{2}y\equiv h_{1}k_{1}\pmod{q}\\ (2e+1)y\equiv 2e+1\pmod{q}\end{array}\right.$$ $$\Leftrightarrow\exists x,y\in\mathbb{Z}:\left\{\begin{array}[]{l}y\equiv 1\pmod{t}\\ d_{1}x+k_{2}y\equiv k_{1}\pmod{td_{1}}\end{array}\right.\Rightarrow\exists y\in\mathbb{Z}:\left\{\begin{array}[]{l}y\equiv 1\pmod{d_{1}}\\ k_{2}y\equiv k_{1}\pmod{d_{1}}\end{array}\right.$$ which implies $k_{1}\equiv k_{2}\pmod{d_{1}}$. Let $\eta_{i}:\mathbb{Z}_{q}^{2}\rightarrow\mathbb{Z}_{q}^{2}$ for $i=1,2$ be given by $\eta_{1}(x,y)=(-x,y)$ and $\eta_{2}(x,y)=(x,-y)$. We have that $\eta_{1}(LC_{q}(e,k))=\langle(-(2e+1),kh_{1}),(0,2e+1)\rangle=LC_{q}(e,-k)$ and the same is valid for $\eta_{2}$, thus $[\psi(k)]_{\mathcal{G}}=\{\psi(-k),\psi(k)\}$ and so $\psi_{\mathcal{G}}$ is well defined and is a bijection.
By Theorem 3.14 we have $\psi(k)=LC_{q}(e,k)\simeq\mathbb{Z}_{t/h_{2}}\times\mathbb{Z}_{th_{2}}$ where $h_{2}=\frac{d_{1}}{\gcd(d_{1},k)}$, therefore $[\psi(k_{1})]_{\mathcal{A}}=[\psi(k_{2})]_{\mathcal{A}}\Leftrightarrow\gcd(k_{1},d_{1})=\gcd(k_{2},d_{1})\Leftrightarrow k_{1}\equiv uk_{2}\pmod{d_{1}}$ for some $u\in\mathbb{Z}$ with $\gcd(u,d_{1})=1$ (Lemma 3.21), which is equivalent to $k_{1}\mathbb{Z}_{d_{1}}^{*}=k_{2}\mathbb{Z}_{d_{1}}^{*}$. ∎ Example 3.23. Let $p>2$ be a prime number and take $q=p^{2}$ and $e\geq 1$ such that $2e+1=p$. In this case $d_{1}=p$ and we have exactly $p$ codes in $LPL^{\infty}(2,e,p^{2})_{o}$, given by $LC_{p^{2}}(e,k)$ for $0\leq k<p$, where the code $LC_{p^{2}}(e,k)$ has generator matrix $M_{k}=\left(\begin{array}[]{cc}p&k\\ 0&p\end{array}\right)\in\mathcal{M}_{2\times 2}(\mathbb{Z}_{p^{2}})$. There exist exactly $\frac{p+1}{2}$ of such perfect codes up to isometry, given by $LC_{p^{2}}(e,k)$ for $0\leq k\leq\frac{p-1}{2}$. Since $p$ is prime, we have $\mathbb{Z}_{p}=\{0\}\uplus\mathbb{Z}_{p}^{*}$, so there exist exactly $2$ perfect codes in $LPL^{\infty}(2,e,p^{2})_{o}$ up to isomorphism, one of which is the cartesian code (which corresponds to $k=0$) and the other is $LC_{p^{2}}(e,1)$ (which is isomorphic to $LC_{p^{2}}(e,k)$ for $1<k<p$). Corollary 3.24. The set $\{LC_{q}(e,k):0\leq k\leq\frac{d_{1}-1}{2}\}$ is a set of representatives of $LPL^{\infty}(2,e,q)_{o}/\mathcal{G}$ and the set $\{LC_{q}(e,d):d\mid d_{1}\}$ is a set of representatives of $LPL^{\infty}(2,e,q)_{o}/\mathcal{A}$. Corollary 3.25. There exist exactly $d_{1}=\gcd(2e+1,t)$ codes in $LPL^{\infty}(2,e,q)_{o}$ where $q=(2e+1)t$. There exist exactly $\frac{d_{1}+1}{2}$ of such codes up to isometry and there exist exactly $\sigma_{0}(d_{1})$ of such codes up to isomorphism, where $\sigma_{0}$, as usual, denotes the number-of-divisors function. Proof. The first two assertions are immediate.
For the third assertion we use that $\mathbb{Z}_{d_{1}}=\biguplus_{d\mid d_{1}}d\mathbb{Z}_{d_{1}}^{*}$ and apply Theorem 3.22. ∎ 4. Constructions of perfect codes in arbitrary dimensions In this section we give some constructions of perfect codes in the maximum metric from perfect codes of smaller dimension. We also present a section construction which plays an important role in the next section. The simplest way to obtain perfect codes is using the cartesian product. Using the sphere packing condition we obtain the following proposition. Proposition 4.1 (Cartesian product construction). If $C_{1}\in PL^{\infty}(n_{1},e,q)$ and $C_{2}\in PL^{\infty}(n_{2},e,q)$ then $C_{1}\times C_{2}\in PL^{\infty}(n_{1}+n_{2},e,q)$. This construction preserves linearity. Corollary 4.2. There exists a linear non-cartesian $n$-dimensional $q$-ary perfect code if and only if $q=p^{2}a$ where $p$ is an odd prime number and $a$ is a positive integer. Corollary 4.3. If $q=(2e+1)t$ and $d_{1},d_{2},\ldots,d_{k}$ are divisors (not necessarily distinct) of $\gcd(2e+1,t)$, there exists a code $C\in LPL^{\infty}(2k,e,q)$ such that $$C\simeq\mathbb{Z}_{\frac{t}{d_{1}}}\times\mathbb{Z}_{\frac{t}{d_{2}}}\times\cdots\times\mathbb{Z}_{\frac{t}{d_{k}}}\times\mathbb{Z}_{d_{1}t}\times\mathbb{Z}_{d_{2}t}\times\cdots\times\mathbb{Z}_{d_{k}t}$$ and a code $C\in LPL^{\infty}(2k+1,e,q)$ such that $$C\simeq\mathbb{Z}_{\frac{t}{d_{1}}}\times\mathbb{Z}_{\frac{t}{d_{2}}}\times\cdots\times\mathbb{Z}_{\frac{t}{d_{k}}}\times\mathbb{Z}_{t}\times\mathbb{Z}_{d_{1}t}\times\mathbb{Z}_{d_{2}t}\times\cdots\times\mathbb{Z}_{d_{k}t}.$$ Remark 4.4. There are other linear perfect codes whose group structure is not of the form given in Corollary 4.3 (for example those in Corollary 4.8). The next construction is specific to linear codes; it allows us to construct a linear perfect $q$-ary code from codes of smaller dimension. Notation 4.5.
If $H$ is a subgroup of an abelian group $G$ and $t\in\mathbb{Z}^{+}$, we denote by $t^{-1}H=\{g\in G:tg\in H\}$. Remark 4.6. With the above notation, $t^{-1}H$ is a subgroup of $G$ that contains $H$. Proposition 4.7 (Linear construction). If $C\in LPL^{\infty}(n,e,q)$ with $q=(2e+1)t$ and $x\in t^{-1}C$, then $\widetilde{C}=C\times\{0\}+(x,2e+1)\mathbb{Z}\in LPL^{\infty}(n+1,e,q)$. Proof. Since $tx\in C$, every codeword $v\in\widetilde{C}$ can be written as $v=(c+xk,(2e+1)k)$ with $c\in C$ and $0\leq k<t$, and we have (1) $$\|(c+xk,(2e+1)k)\|_{\infty}=\max\{\|c+xk\|_{\infty},\|(2e+1)k\|_{\infty}\}.$$ If $k=0$, then $\|(c+xk,(2e+1)k)\|_{\infty}=\|c\|_{\infty}\geq 2e+1$ if $c\neq 0$ (because $C$ has packing radius $e$). If $0<k<t$, then $\|(2e+1)k\|_{\infty}\geq 2e+1$ and by (1) we have $\|(c+xk,(2e+1)k)\|_{\infty}\geq 2e+1$. We conclude that $\widetilde{C}$ has packing radius at least $e$. We now calculate the cardinality of $\widetilde{C}$, that is (2) $$\#\widetilde{C}=\frac{\#(C\times\{0\})\cdot\#((x,2e+1)\mathbb{Z})}{\#(C\times\{0\}\cap(x,2e+1)\mathbb{Z})}.$$ We have $\#(C\times\{0\})=\#C=t^{n}$. Let $\theta$ be the additive order of $tx$ in $\mathbb{Z}_{q}^{n}$ (i.e. the least positive integer $\theta$ such that $\theta tx=0$). It is straightforward to check that the order of $(x,2e+1)$ in $\mathbb{Z}_{q}^{n+1}$ is $t\theta$ and that $C\times\{0\}\cap(x,2e+1)\mathbb{Z}=(tx,0)\mathbb{Z}$. Using equation (2) we have $\#\widetilde{C}=\frac{t^{n}\cdot t\theta}{\theta}=t^{n+1}$ and by the sphere packing condition the code $\widetilde{C}\subseteq\mathbb{Z}_{q}^{n+1}$ is perfect with packing radius $e$. ∎ Corollary 4.8. If $q=(2e+1)t$ with $t^{n-1}\mid 2e+1$ and $n\geq 1$, then the $q$-ary cyclic code $$\mathcal{C}_{n,e,q}=\left\langle\left(\frac{2e+1}{t^{n-1}},\frac{2e+1}{t^{n-2}},\ldots,\frac{2e+1}{t},2e+1\right)\right\rangle\in LPL^{\infty}(n,e,q).$$ Proof. We denote by $p_{n}=\left(\frac{2e+1}{t^{n-1}},\frac{2e+1}{t^{n-2}},\ldots,\frac{2e+1}{t},2e+1\right)\in\mathbb{Z}_{q}^{n}$ and proceed by induction.
For $n=1$ it is clear. If $\mathcal{C}_{n,e,q}\in LPL^{\infty}(n,e,q)$ holds for some $n\geq 1$, we apply the linear construction with $x=\left(\frac{2e+1}{t^{n}},\frac{2e+1}{t^{n-1}},\ldots,\frac{2e+1}{t}\right)$. Since $tx=p_{n}\in\mathcal{C}_{n,e,q}$, we get $\widetilde{C}=\langle(p_{n},0),(x,2e+1)\rangle\in LPL^{\infty}(n+1,e,q)$, where $(x,2e+1)=p_{n+1}$. We remark that $tp_{n+1}=(p_{n},0)$ (because $(2e+1)t\equiv 0\pmod{q}$), so $\widetilde{C}=\langle p_{n+1}\rangle$. ∎ In particular, if $2e+1=t^{n-1}$ we obtain the following family of cyclic perfect codes. Corollary 4.9. If $q=t^{n}$ where $t$ is an odd number, then the $q$-ary code $C=\langle(1,t,t^{2},\ldots,t^{n-1})\rangle\in LPL^{\infty}(n,e,q)$ with packing radius $e=(t^{n-1}-1)/2$. Proposition 4.10. Let $q=(2e+1)t$. There exists a cyclic code in $LPL^{\infty}(n,e,q)$ if and only if $t^{n-1}\mid 2e+1$. Proof. If $C\in LPL^{\infty}(n,e,q)$ is cyclic, there exists $c\in C$ with order $t^{n}=\#C$. Since $qc=0$ we have $t^{n}\mid q$, and so $t^{n-1}\mid 2e+1$. The converse follows from Corollary 4.8. ∎ The next construction generalizes the horizontal and vertical constructions for two-dimensional perfect codes in the maximum metric presented in the previous section. Proposition 4.11 (Non-linear construction). Let $C\in PL^{\infty}(n,e,q)$ and $h:C\rightarrow\mathbb{Z}_{q}$ be a map (called a height function). If $\widehat{C}=\{(c,h(c)+(2e+1)k):c\in C,k\in\mathbb{Z}\}$, then $\widehat{C}\in PL^{\infty}(n+1,e,q)$. Proof. Since $(2e+1)t=q$ we have $\#\widehat{C}=\#C\cdot t=t^{n+1}$, thus it suffices to prove that the minimum distance of $\widehat{C}$ is at least $2e+1$. Let $\widehat{c}_{i}=(c_{i},h(c_{i})+(2e+1)k_{i})\in\widehat{C}$ with $c_{i}\in C$ for $i=1,2$ and suppose that $\parallel\widehat{c_{1}}-\widehat{c_{2}}\parallel_{\infty}<2e+1$.
The relation $$\parallel\widehat{c_{1}}-\widehat{c_{2}}\parallel_{\infty}=\max\{\parallel c_{1}-c_{2}\parallel_{\infty},\parallel(h(c_{1})-h(c_{2}))+(2e+1)(k_{1}-k_{2})\parallel_{\infty}\}$$ implies $\parallel c_{1}-c_{2}\parallel_{\infty}<2e+1$ and $\parallel(h(c_{1})-h(c_{2}))+(2e+1)(k_{1}-k_{2})\parallel_{\infty}<2e+1$, and so $c_{1}=c_{2}$ (because the minimum distance of $C$ is $2e+1$) and $k_{1}\equiv k_{2}\pmod{t}$ (because a nonzero multiple of $2e+1$ in $\mathbb{Z}_{q}$ has Lee norm at least $2e+1$), hence $\widehat{c_{1}}=\widehat{c_{2}}$. Therefore the minimum distance of $\widehat{C}$ is also $2e+1$ and $\widehat{C}\in PL^{\infty}(n+1,e,q)$. ∎ Remark 4.12. The non-linear construction generalizes the horizontal and vertical constructions. Indeed, let $NL(C,h)$ be the code obtained by applying the non-linear construction to the code $C$ and the height function $h$. Considering $C_{a}=a+(2e+1)\mathbb{Z}_{q}\in PL^{\infty}(1,e,q)$ and $h_{a}(k)=h(a+(2e+1)k)$, then $C_{2}(a,h_{a})=NL(C_{a},h)$ and $C_{1}(a,h_{a})=\sigma NL(C_{a},h)$ where $\sigma=(1\ 2)$. Remark 4.13. If $C$ is linear, it is possible to choose a height function in such a way that $\widehat{C}$ is also linear, but for an arbitrary choice of $h$ this is not true. Remark 4.14. Every code constructed from the non-linear construction is standard. Consequently, there are codes that cannot be constructed from the non-linear construction (for example the code given in Remark 3.7). On the other hand, by Corollary 2.10 we can obtain every linear perfect code using this construction (with suitable choices for the height functions) in a finite number of steps. The next construction allows us to obtain perfect codes in lower dimension from a given perfect code via cartesian sections. This construction plays a fundamental role in the next section, when we introduce the concept of ordered code. Definition 4.15. Let $S\subseteq\mathbb{Z}_{q}^{n}$. A perfect code in $S$ is a subset $C\subseteq S$ for which there exists $e\in\mathbb{N}$ such that $S=\biguplus_{c\in C}\left(B(c,e)\cap S\right)$.
In this case, $e$ is determined by $C$ (by the sphere packing condition) and we call it the packing radius of $C$. Notation 4.16. Let $[n]=\{1,2,\ldots,n\}$. If $I\subseteq[n]$, we denote by $H_{I}=\{x\in\mathbb{Z}_{q}^{n}:x_{i}=0,\forall i\in I\}$ (these sets are called cartesian subgroups). We define its dimension as $\dim(H_{I})=n-\#I$. Definition 4.17. Let $I\subseteq[n]$. The orthogonal projection over $H_{I}$ is the unique morphism $\pi_{I}:\mathbb{Z}_{q}^{n}\rightarrow\mathbb{Z}_{q}^{n}$ verifying $\pi_{I}(e_{i})=\left\{\begin{array}[]{ll}0&\forall i\in I\\ e_{i}&\forall i\in I^{c}\end{array}\right.$. Notation 4.18 (Generalized balls). If $H\subseteq\mathbb{Z}_{q}^{n}$ and $e\in\mathbb{N}$, we denote by $$B(H,e)=\{x\in\mathbb{Z}_{q}^{n}:d(x,h)\leq e\textrm{ for some }h\in H\}=\bigcup_{h\in H}B(h,e).$$ The following lemmas can be obtained easily from the previous definition. Lemma 4.19. Let $I\subseteq[n]$. For $h\in H_{I}$ and $x\in\mathbb{Z}_{q}^{n}$ we have $d(x,h)\geq d(\pi_{I}(x),h)$. Lemma 4.20. If $x\in\mathbb{Z}_{q}^{n}$, then $x\in B\left(H_{I},e\right)\Leftrightarrow|x_{i}|_{1}\leq e$ $\forall i\in I$. Remark 4.21. If $C\in LPL^{\infty}(n,e,q)$ and $f_{C}:\mathbb{Z}_{q}^{n}\rightarrow C$ is the associated error-correcting function, then $B(H,e)\cap C=f_{C}(H)$ for every subset $H\subseteq\mathbb{Z}_{q}^{n}$. Definition 4.22. Let $C\subseteq\mathbb{Z}_{q}^{n}$ and $I\subseteq[n]$. The cartesian section of $C$ (with respect to the cartesian subgroup $H_{I}$) is given by $C\langle I\rangle:=\pi_{I}(B(H_{I},e)\cap C)=\pi_{I}\circ f_{C}(H_{I})$. Proposition 4.23 (Section construction). If $C\in PL^{\infty}(n,e,q)$ and $I\subseteq[n]$, then $C\langle I\rangle$ is a perfect code in $H_{I}$ with packing radius $e$. Proof. Let $h\in H_{I}$ and $c=\pi_{I}(f_{C}(h))\in C\langle I\rangle$. By Lemma 4.19, $d(c,h)\leq d(f_{C}(h),h)\leq e$ and we have that $h\in B(c,e)$, so $H_{I}\subseteq\bigcup_{c\in C\langle I\rangle}B(c,e)$.
Since $H_{I}=\bigcup_{c\in C\langle I\rangle}\left(B(c,e)\cap H_{I}\right)$, the covering radius of $C\langle I\rangle$ is at most $e$ (as a code in $H_{I}$). On the other hand, if $\widehat{c}_{1},\widehat{c}_{2}\in C\langle I\rangle$ verify $d(\widehat{c}_{1},\widehat{c}_{2})\leq 2e$ and we express $\widehat{c}_{i}=\pi_{I}(c_{i})$ for $i=1,2$ where $c_{i}\in B(H_{I},e)\cap C$, we have (3) $$|c_{1}(i)-c_{2}(i)|_{1}=|\widehat{c}_{1}(i)-\widehat{c}_{2}(i)|_{1}\leq 2e,\ \forall i\in I^{c},$$ and by Lemma 4.20 we have (4) $$|c_{1}(i)-c_{2}(i)|_{1}\leq|c_{1}(i)|_{1}+|c_{2}(i)|_{1}\leq 2e\ \forall i\in I.$$ Equations (3) and (4) imply $d(c_{1},c_{2})\leq 2e$ and since $C$ has packing radius $e$ we have $c_{1}=c_{2}$, so $\widehat{c}_{1}=\widehat{c}_{2}$. Therefore $C\langle I\rangle$ has minimum distance $d\geq 2e+1$ and its packing radius is at least $e$. We conclude that $C\langle I\rangle$ is a perfect code in $H_{I}$ with packing radius $e$. ∎ In the linear case, under some conditions we can prove that the resulting code is also linear. Proposition 4.24. If $C\in LPL^{\infty}(n,e,q)$ is of type $i$ for all $i\in I$, then $C\langle I\rangle$ is a linear perfect code in $H_{I}$ with packing radius $e$. Moreover, $C\langle I\rangle=\pi_{I}(C)$. Proof. We just need to check linearity, and for this it suffices to prove that $\pi_{I}(C)=C\langle I\rangle$. It is clear that $C\langle I\rangle=\pi_{I}(C\cap B(H_{I},e))\subseteq\pi_{I}(C)$. For the other inclusion, let $c\in C$ and for each $i\in I$ consider $k_{i}\in\mathbb{Z}$ such that $|c_{i}+(2e+1)k_{i}|_{1}\leq e$. Since $C$ is of type $i$ for all $i\in I$, then $(2e+1)k_{i}e_{i}\in C$ and also the vector $v=\sum_{i\in I}(2e+1)k_{i}e_{i}\in C$. By Lemma 4.20, $c+v\in C\cap B(H_{I},e)$ and $(c+v)_{j}=c_{j}$ for all $j\in I^{c}$, therefore $\pi_{I}(c)=\pi_{I}(c+v)\in\pi_{I}(B(H_{I},e)\cap C)=C\langle I\rangle$ and we have $\pi_{I}(C)\subseteq C\langle I\rangle$. ∎ 5. Perfect codes in arbitrary dimensions 5.1.
Permutation associated with perfect codes The type of a code is an important concept when we deal with two-dimensional perfect codes, in part because every two-dimensional linear perfect code is isometric to a code of type $2$ which has an upper triangular generator matrix (Theorem 3.10). This last property is false in higher dimensions, so we need a more general concept in order to describe all perfect codes with given parameters $(n,e,q)$. Notation 5.1. For $C\in LPL^{\infty}(n,e,q)$ we denote by $$\tau(C)=\max\{i:1\leq i\leq n,C\textrm{ is of type }i\}.$$ Definition 5.2. Let $C\in LPL^{\infty}(n,e,q)$ with $q=(2e+1)t$ and $t>1$. We consider the following sequence: $$\left\{\begin{array}[]{ll}\wp_{1}=\tau(C),\ J_{1}=\{\wp_{1}\},\ C_{1}=C\langle J_{1}\rangle&\\ \wp_{i+1}=\tau(C_{i}),\ J_{i+1}=J_{i}\cup\{\wp_{i+1}\},\ C_{i+1}=C\langle J_{i+1}\rangle&\textrm{for $1\leq i<n$}\end{array}\right.$$ The permutation of $[n]$ associated with $C$ is $\wp(C)=(\wp_{1},\wp_{2},\ldots,\wp_{n})$. Remark 5.3. Since we start from a linear code $C$, the Minkowski-Hajos Theorem (Theorem 2.9) and Proposition 4.24 guarantee the existence of $\wp_{i}$ in each step and the linearity of the corresponding code $C_{i}$. Since $(2e+1)e_{k}\not\in H_{I}$ for $k\in I$, we have that the numbers $\wp_{i}$ are pairwise different, so $\wp\in S_{n}$. Example 5.4. We consider the code $C=\mbox{span}\left(\begin{matrix}1&3&0&0\\ 0&0&1&3\\ 3&0&1&0\\ 0&0&3&0\end{matrix}\right)$ in $\mathbb{Z}_{81}^{4}$. This code is perfect with parameters $(n,e,q)=(4,1,81)$. Let us calculate its associated permutation.
In the first step we have: $$\bullet\quad\wp_{1}=\tau(C)=3,\ J_{1}=\{3\},\ C_{1}=C\langle 3\rangle=\mbox{span}\left(\begin{matrix}1&3&0&0\\ 0&0&0&3\\ 3&0&0&0\\ 0&0&0&0\end{matrix}\right),$$ in the second step we have: $$\bullet\quad\wp_{2}=\tau(C_{1})=4,\ J_{2}=\{3,4\},\ C_{2}=C\langle 3,4\rangle=\mbox{span}\left(\begin{matrix}1&3&0&0\\ 0&0&0&0\\ 3&0&0&0\\ 0&0&0&0\end{matrix}\right),$$ in the third step we have: $$\bullet\quad\wp_{3}=\tau(C_{2})=1,\ J_{3}=\{1,3,4\},\ C_{3}=C\langle 1,3,4\rangle=\mbox{span}\left(\begin{matrix}0&3&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{matrix}\right),$$ and in the last step we have $\wp_{4}=\tau(C_{3})=2$, so the permutation associated with $C$ is $\wp(C)=(3,4,1,2)$. Definition 5.5. We say that a perfect code $C\in LPL^{\infty}(n,e,q)$ is ordered if its associated permutation is given by $\wp(C)=(n,n-1,\ldots,2,1)$. Proposition 5.6. For all $C\in LPL^{\infty}(n,e,q)$ there exists $\theta=\theta_{C}\in S_{n}$ such that the code $\theta(C):=\left\{\left(c_{\theta^{-1}(1)},\ldots,c_{\theta^{-1}(n)}\right):(c_{1},\ldots,c_{n})\in C\right\}$ is ordered. Proof. Let $C\in LPL^{\infty}(n,e,q)$ with $\wp(C)=(\wp_{1},\wp_{2},\ldots,\wp_{n})$ and $\theta$ be the permutation given by $\theta(\wp_{i})=n+1-i$. For all $i$ with $1\leq i<n$ we have $(2e+1)e_{\wp_{i+1}}\in C\langle\wp_{1},\ldots,\wp_{i}\rangle=\pi_{I}(C)$ where $I=\{\wp_{1},\ldots,\wp_{i}\}$, so there exists $c\in C\cap\pi_{I}^{-1}((2e+1)e_{\wp_{i+1}})$. We have $c_{\wp_{i+1}}=2e+1$ and $c_{k}=0$ for $k\not\in\{\wp_{1},\wp_{2},\ldots,\wp_{i+1}\}$. Since $\theta(c)_{i}=c_{\theta^{-1}(i)}$, we have $\theta(c)_{n-i}=2e+1$ and $\theta(c)_{k}=0$ for $1\leq k<n-i$, so $$(2e+1)e_{n-i}=\pi_{\theta(I)}\left(\theta(c)\right)\in\theta(C)\langle n,n-1,\ldots,n+1-i\rangle$$ for $1\leq i<n$. This last condition, together with the fact that $(2e+1)e_{n}=\theta\left((2e+1)e_{\wp_{1}}\right)\in\theta(C)$, implies $\wp\left(\theta(C)\right)=(n,n-1,\ldots,1)$. ∎ Notation 5.7.
We denote by $LPL^{\infty}(n,e,q)_{o}=\{C\in LPL^{\infty}(n,e,q):C\textrm{ is ordered}\}$. Example 5.8. Let $C$ be the code defined in Example 5.4. The permutation $\theta=(1\ 2)(3\ 4)$ verifies $\theta(\wp_{i})=5-i$, so the resulting code $\theta(C)\in LPL^{\infty}(4,1,81)_{o}$. We remark that this permutation is not unique; for example, if we take $\rho=(1\ 3\ 4\ 2)$ we have $\rho(C)\in LPL^{\infty}(4,1,81)_{o}$ and $\rho(C)\neq\theta(C)$. 5.2. Perfect matrices In this section we characterize the matrices associated with perfect codes in the maximum metric. Definition 5.9. Let $q=(2e+1)t$. A matrix $M\in\nabla_{n}(2e+1)$ is an $(e,q)$-perfect matrix if there exist matrices $A\in\nabla_{n}(t)$ and $B\in\nabla_{n}(1)$ such that $AM=qB$. Remark 5.10. For $n=2$ a matrix $M=\left(\begin{matrix}2e+1&a\\ 0&2e+1\end{matrix}\right)$ is $(e,q)$-perfect if and only if there exist $x,z\in\mathbb{Z}$ satisfying $\left(\begin{matrix}t&x\\ 0&t\end{matrix}\right)\left(\begin{matrix}2e+1&a\\ 0&2e+1\end{matrix}\right)=\left(\begin{matrix}q&qz\\ 0&q\end{matrix}\right)$, and this is equivalent to $ta+(2e+1)x=qz$, i.e. $qz-(2e+1)x=ta$. This last diophantine equation has a solution if and only if $\gcd(q,2e+1)=2e+1$ divides $ta$, which is equivalent to $a=kh_{1}$ for some $k\in\mathbb{Z}$ (where $h_{1}=\frac{2e+1}{\gcd(2e+1,t)}$). In summary, a $2\times 2$ matrix is an $(e,q)$-perfect matrix if and only if it is the generator matrix of a type $2$ perfect code in $LPL^{\infty}(2,e,q)$. Proposition 5.11. If $q=(2e+1)t$ and $M$ is an $n\times n$ integer matrix with rows $M_{1},M_{2},\ldots,M_{n}$, then $M$ is an $(e,q)$-perfect matrix if and only if the following conditions are satisfied: (1) $M$ is upper triangular, (2) $M_{ii}=2e+1$, (3) $t\overline{M_{i}}\in\textrm{span}(\overline{M_{i+1}},\ldots,\overline{M_{n}})$ for $1\leq i<n$, where for $x\in\mathbb{Z}^{n}$ we denote by $\overline{x}=x+q\mathbb{Z}^{n}\in\mathbb{Z}_{q}^{n}$ the residual class of $x$ modulo $q$. Proof.
Conditions (1) and (2) are equivalent to $M\in\nabla_{n}(2e+1)$, and condition (3) is equivalent to the existence of integers $\alpha_{ij}\in\mathbb{Z}$ for $1\leq i<j\leq n$ and vectors $B_{i}\in\mathbb{Z}^{n}$ for $1\leq i<n$ verifying $tM_{i}=\sum_{j=i+1}^{n}\alpha_{ij}M_{j}+qB_{i}$. These equations can be expressed in matrix form as $AM=qB$, where the matrix $A$ is upper triangular with $A_{ij}=\left\{\begin{array}[]{ll}t&\textrm{for }i=j\\ -\alpha_{ij}&\textrm{for }i<j\end{array}\right.$ and $B$ has rows $B_{1},B_{2},\ldots,B_{n}$. ∎ Lemma 5.12. Let $q=(2e+1)t$, $C\in LPL^{\infty}(n,e,q)$ and $H$ be a $k$-dimensional cartesian subgroup of $\mathbb{Z}_{q}^{n}$. If $S\subseteq C\cap H$ and $\#S=t^{k}$, then $S=C\cap H$ and the code $S$ is a perfect code in $H$ with packing radius $e$. Proof. We have that $t^{k}=\#S\leq\#(C\cap H)\leq\frac{q^{k}}{(2e+1)^{k}}=t^{k}$ (the last inequality is a consequence of the sphere packing condition), so $S=C\cap H$. Let $e^{\prime}$ be the packing radius of $S$. Since $C$ has packing radius $e$ we have $e^{\prime}\geq e$. By the sphere packing condition $(2e^{\prime}+1)^{k}\leq\frac{q^{k}}{\#S}=(2e+1)^{k}$, hence $e^{\prime}=e$ and $(2e+1)^{k}\cdot\#S=q^{k}$; therefore $S=C\cap H$ is a perfect code in $H$ with packing radius $e$. ∎ Proposition 5.13. Every ordered perfect code $C\in LPL^{\infty}(n,e,q)$ has a generator matrix which is an $(e,q)$-perfect matrix. Proof. Let $C\in LPL^{\infty}(n,e,q)$ be ordered. By the Hermite normal form theorem we have a generator matrix $M$ for $C$ which is upper triangular. Let $M_{1},M_{2},\ldots,M_{n}$ be the rows of $M$ and denote by $m_{i}=M_{ii}$ the elements on the principal diagonal. Multiplying by $-1$ if necessary, we can suppose that each $m_{i}>0$ ($m_{i}\neq 0$ because $M$ is non-singular).
We will prove the following assertion by induction: (5) $$\left\{\begin{array}[]{l}t\overline{M_{n-i}}\in\mbox{span}\left(\overline{M_{n-(i-1)}},\overline{M_{n-(i-2)}},\ldots,\overline{M_{n}}\right)\\ m_{n-(i-1)}=m_{n-(i-2)}=\ldots=m_{n}=2e+1\end{array}\right.$$ for $1\leq i<n$, where as usual $\overline{X}=X+q\mathbb{Z}^{n}\in\mathbb{Z}_{q}^{n}$ is the residual class modulo $q$ of $X\in\mathbb{Z}^{n}$. For $i=1$ we express $m_{n}=(2e+1)a+r$ with $a$ and $r$ non-negative integers and $0\leq r<2e+1$. Since $C$ is of type $n$ (because $C$ is ordered) we have that $v=(2e+1)e_{n}\in\Lambda_{C}$ and also $re_{n}=M_{n}-av\in\Lambda_{C}$. The packing radius of $\Lambda_{C}$ (which is equal to the packing radius of $C$) is $e$ and consequently its minimum distance is $2e+1$, but $||re_{n}||_{\infty}=r$, which implies $r=0$ and $M_{n}=av$. Replacing $M_{n}$ by $v$ we have another generator matrix $M^{\prime}$ for $C$; since $\det(M)=\det(M^{\prime})=\det(\Lambda_{C})$ and $\det(M)=a\det(M^{\prime})$, we have $a=1$ and $m_{n}=2e+1$. Since $C$ is ordered, the code $C\langle n\rangle$ is of type $n-1$ and, since $m_{n-1}e_{n-1}\in C\langle n\rangle$, using a similar argument as in the proof of $m_{n}=2e+1$ we can prove that $m_{n-1}=2e+1$, so $t\overline{M_{n-1}}\in H_{\{1,\ldots,n-1\}}\cap C$. On the other hand, since $M_{n}=(2e+1)e_{n}$ we have $\mbox{span}(\overline{M_{n}})\subseteq C\cap H_{\{1,\ldots,n-1\}}$ and $\#\mbox{span}(\overline{M_{n}})=t$, thus by Lemma 5.12 we have $H_{\{1,\ldots,n-1\}}\cap C=\mbox{span}(\overline{M_{n}})$, so the assertion (5) is true for $i=1$. Now consider $j$ with $2\leq j<n$ and let us suppose that the assertion (5) is true for $i$ with $1\leq i<j$. By the inductive hypothesis $t\overline{M_{n-i}}\in\mbox{span}\left(\overline{M_{n-(i-1)}},\overline{M_{n-(i-2)}},\ldots,\overline{M_{n}}\right)$ for $1\leq i<j$, so the linear construction (Prop.
4.7) guarantees that $C^{\prime}=\mbox{span}\left(\overline{M_{n-(j-1)}},\overline{M_{n-(j-2)}},\ldots,\overline{M_{n}}\right)$ is a perfect code in $H_{\{1,2,\ldots,n-j\}}$ with packing radius $e$. In particular $\#C^{\prime}=t^{j}$ and by Lemma 5.12 we have that $C^{\prime}=C\cap H_{\{1,2,\ldots,n-j\}}$. Since $C$ is ordered, $C\langle n-(j-1),\ldots,n\rangle$ is of type $n-j$; using that $m_{n-j}e_{n-j}\in C\langle n-(j-1),\ldots,n\rangle$ and an argument similar to the one used in the proof of $m_{n}=2e+1$, we can prove that $m_{n-j}=2e+1$, then $t\overline{M_{n-j}}\in H_{\{1,2,\ldots,n-j\}}\cap C=\mbox{span}\left(\overline{M_{n-(j-1)}},\overline{M_{n-(j-2)}},\ldots,\overline{M_{n}}\right)$, so assertion (5) is true for $i=j$. Finally, assertion (5) for $1\leq i<n$ and Proposition 5.11 imply that $M$ is an $(e,q)$-perfect matrix. ∎ Remark 5.14. In order to obtain an $(e,q)$-perfect generator matrix for an ordered perfect code $C\in LPL^{\infty}(n,e,q)$ from a given generator matrix, we can apply the Hermite normal form algorithm and multiply some rows by $-1$, if necessary. Definition 5.15. We say that a matrix $M\in\nabla_{n}(2e+1)$ is reduced if $|M_{ij}|\leq e$ for $1\leq i<j\leq n$. Notation 5.16. We denote by $\mathcal{P}_{n}(e,q)=\{M\in\nabla_{n}(2e+1):M\textrm{ is }(e,q)\textrm{-perfect}\}$. The subsets of reduced matrices in $\nabla_{n}(2e+1)$ and $\mathcal{P}_{n}(e,q)$ are denoted by $\nabla_{n}(2e+1)_{\textrm{red}}$ and $\mathcal{P}_{n}(e,q)_{\textrm{red}}$ respectively. Proposition 5.17. Let $M,M^{\prime}\in\mathcal{P}_{n}(e,q)$. If $M_{ij}\equiv M^{\prime}_{ij}\pmod{2e+1}$ for all $i,j$, then $\mbox{span}(M)=\mbox{span}(M^{\prime})$. Proof. We observe that a reduced $(e,q)$-perfect generator matrix for a code $C\in LPL^{\infty}(n,e,q)$ is just a modified version of the Hermite normal form, so $\mbox{span}(M)=\mbox{span}(M^{\prime})$ is a consequence of the uniqueness of the Hermite normal form. ∎ Corollary 5.18.
There is a surjection $\mathcal{P}_{n}(e,q)_{\textrm{red}}\twoheadrightarrow LPL^{\infty}(n,e,q)_{o}$ given by $M\mapsto\mbox{span}(M)/q\mathbb{Z}^{n}$. Proof. By Proposition 5.13 and Remark 5.14 we can obtain an $(e,q)$-perfect generator matrix $M$ from the Hermite normal form of any generator matrix, with the condition $0\leq M_{ij}<2e+1$ for $i<j$. For $i=2,3,\ldots,n$ and for $1\leq j<i$, if the entry $M_{ji}$ is greater than $e$ we can subtract the $i$-th row from the $j$-th row, obtaining a new equivalent matrix whose entry in position $(j,i)$ has absolute value at most $e$. Repeating this process we obtain a reduced $(e,q)$-perfect generator matrix for any given ordered code $C\in LPL^{\infty}(n,e,q)_{o}$. ∎ Corollary 5.19. Let $q=(2e+1)t$. We have the following inequality: (6) $$\log_{2e+1}\left(\#LPL^{\infty}(n,e,q)\right)\leq\binom{n}{2}$$ Proof. Using Corollary 5.18 we obtain: $$\#LPL^{\infty}(n,e,q)\leq\#\mathcal{P}_{n}(e,q)_{\mbox{red}}\leq\#\nabla_{n}(2e+1)_{\mbox{red}}=(2e+1)^{\binom{n}{2}}.$$ ∎ Definition 5.20. If the parameters $(n,e,q)$ verify equality in Corollary 5.19, we say that the parameters $(e,q)$ are $n$-maximal and we call a code with these parameters a maximal code. Remark 5.21. If $(e,q)$ is $n$-maximal then $\mathcal{P}_{n}(e,q)_{\mbox{red}}=\nabla_{n}(2e+1)_{\mbox{red}}$. 5.3. The $n$-maximal case In this subsection we show that there are infinitely many maximal codes in each dimension by establishing conditions which guarantee maximality. We extend some results obtained for two-dimensional codes in Section 3 to maximal codes, including a parametrization theorem for such codes and for their isometry and isomorphism classes. Lemma 5.22. If $(e,q)$ is $n$-maximal, then $(e,q)$ is $i$-maximal for all $i$, $1\leq i\leq n$. Proof. Let $(e,q)$ be an $n$-maximal pair and $M^{\prime}\in\nabla_{i}(2e+1)$ with $1\leq i<n$.
We consider the matrix $M=\left(\begin{matrix}(2e+1)I_{n-i}&0\\ 0&M^{\prime}\end{matrix}\right)\in\nabla_{n}(2e+1)$; since $(e,q)$ is $n$-maximal, there exist $A\in\nabla_{n}(t)$, $B\in\nabla_{n}(1)$ such that $AM=qB$. Denote by $A^{\prime}$ and $B^{\prime}$ the submatrices consisting of the last $i$ rows and the last $i$ columns of $A$ and $B$ respectively. Clearly $A^{\prime}\in\nabla_{i}(t)$, $B^{\prime}\in\nabla_{i}(1)$ and $A^{\prime}M^{\prime}=qB^{\prime}$, therefore $M^{\prime}$ is $(e,q)$-perfect, so the pair $(e,q)$ is $i$-maximal. ∎ Lemma 5.23. Let $q=(2e+1)t$ and let $\overline{X}=X+q\mathbb{Z}^{n}\in\mathbb{Z}_{q}^{n}$ be the residual class of $X\in\mathbb{Z}^{n}$ modulo $q$. The following assertions are equivalent: (i) $(e,q)$ is $n$-maximal. (ii) For all $M\in\nabla_{n}(2e+1)$, there exist $A\in\nabla_{n}(t)$, $B\in\nabla_{n}(1)$ such that $AM=qB$. (iii) $t\mathbb{Z}_{q}^{i}\subseteq\mbox{span}(\overline{M})$ for all $M\in\nabla_{i}(2e+1)$ and for all $i$, $1\leq i<n$. Proof. We have (ii) $\Leftrightarrow\nabla_{n}(2e+1)=\mathcal{P}_{n}(e,q)\Leftrightarrow$ (i). Note that if (iii) holds then condition (3) in Proposition 5.11 is always satisfied, thus (iii) $\Rightarrow$ (ii). In order to prove (ii) $\Rightarrow$ (iii), by Lemma 5.22 it suffices to prove (ii) $\Rightarrow t\mathbb{Z}_{q}^{n-1}\subseteq\mbox{span}(\overline{M})$ for all $M\in\nabla_{n-1}(2e+1)$. Let $M^{\prime}\in\nabla_{n-1}(2e+1)$ and $w\in\mathbb{Z}^{n-1}$; we want to prove that $t\overline{w}\in\mbox{span}(\overline{M^{\prime}})$. Consider the matrix $M=\left(\begin{matrix}2e+1&w\\ 0^{t}&M^{\prime}\end{matrix}\right)\in\nabla_{n}(2e+1)$. By (ii) there exist matrices $A\in\nabla_{n}(t)$, $B\in\nabla_{n}(1)$ such that $AM=qB$.
Expressing $A=\left(\begin{matrix}t&v\\ 0^{t}&A^{\prime}\end{matrix}\right)$ and $B=\left(\begin{matrix}1&u\\ 0^{t}&B^{\prime}\end{matrix}\right)$ with $A^{\prime}\in\nabla_{n-1}(t)$ and $B^{\prime}\in\nabla_{n-1}(1)$, from the equality $AM=qB$ we obtain $tw+vM^{\prime}=qu$, thus $t\overline{w}=-v\overline{M^{\prime}}\in\mbox{span}(\overline{M^{\prime}})$. ∎ Lemma 5.24. If $(2e+1)^{n}\mid q$, then $(2e+1)^{n-1}\mathbb{Z}_{q}^{n-1}\subseteq\mbox{span}(\overline{M})$ for all $M\in\nabla_{n-1}(2e+1)$. Proof. For $n=1$ the assertion is true because $(0)\subseteq\mbox{span}(\overline{M})$, and for $n=2$ the assertion is true because $(2e+1)\mathbb{Z}_{q}\subseteq\mbox{span}(2e+1)=(2e+1)\mathbb{Z}_{q}$. Let us suppose that the assertion is true for $n-1$, where $n\geq 3$ and $(2e+1)^{n}\mid q$. Let $M\in\nabla_{n-1}(2e+1)$ be written as $M=\left(\begin{matrix}2e+1&w\\ 0^{t}&M^{\prime}\end{matrix}\right)$ with $M^{\prime}\in\nabla_{n-2}(2e+1)$ and $w\in\mathbb{Z}^{n-2}$. Since $(2e+1)^{n-1}\mid q$, we have $(2e+1)^{n-2}\mathbb{Z}_{q}^{n-2}\subseteq\mbox{span}(\overline{M^{\prime}})$, then $(2e+1)^{n-2}H_{\{1\}}\subseteq\mbox{span}\overline{(0^{t},M^{\prime})}$ and we obtain the following chain of inclusions: $$(2e+1)^{n-1}H_{\{1\}}\subseteq(2e+1)^{n-2}H_{\{1\}}\subseteq\mbox{span}\overline{(0^{t},M^{\prime})}\subseteq\mbox{span}(\overline{M}).$$ To conclude the proof we need to show that $(2e+1)^{n-1}\overline{e_{1}}\in\mbox{span}(\overline{M})$. We have that $(2e+1)^{n-1}\overline{e_{1}}-(2e+1)^{n-2}\overline{(2e+1,w)}=\overline{(0,-(2e+1)^{n-2}w)}\in(2e+1)^{n-2}H_{\{1\}}\subseteq\mbox{span}(\overline{M}),$ so $(2e+1)^{n-1}\overline{e_{1}}\in\mbox{span}(\overline{M})$. In conclusion, we have that $$(2e+1)^{n-1}\mathbb{Z}_{q}^{n-1}=(2e+1)^{n-1}\mathbb{Z}\overline{e_{1}}\oplus(2e+1)^{n-1}H_{\{1\}}\subseteq\mbox{span}(\overline{M}).$$ ∎ Corollary 5.25. If $(2e+1)^{n}\mid q$ then $(2e+1)^{i}\mathbb{Z}_{q}^{i}\subseteq\mbox{span}(\overline{M})$ for all $M\in\nabla_{i}(2e+1)$ and for all $i$, $1\leq i<n$.
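Lemma 5.24 (and hence Corollary 5.25) can also be confirmed by direct enumeration in the smallest nontrivial case. The sketch below uses assumed illustrative parameters, not taken from the paper's examples: $n=3$, $e=1$ (so $2e+1=3$) and $q=27=(2e+1)^{n}$; it checks that $(2e+1)^{n-1}\mathbb{Z}_{q}^{n-1}\subseteq\mbox{span}(\overline{M})$ for every $M\in\nabla_{2}(3)$, working modulo $q$.

```python
# Brute-force check of Lemma 5.24 with assumed parameters n = 3, e = 1,
# so 2e+1 = 3, and q = 27 = (2e+1)^n, working with 2x2 matrices over Z_27.
e, n = 1, 3
m = 2 * e + 1            # 2e + 1 = 3
q = m ** n               # 27, so (2e+1)^n | q holds

def span_mod_q(r1, r2, q):
    """All Z-linear combinations of the rows r1, r2, reduced modulo q."""
    return {((a * r1[0] + b * r2[0]) % q, (a * r1[1] + b * r2[1]) % q)
            for a in range(q) for b in range(q)}

# (2e+1)^{n-1} Z_q^{n-1}: the subgroup generated by 9*e_1 and 9*e_2 in Z_27^2.
target = {(m ** (n - 1) * i % q, m ** (n - 1) * j % q)
          for i in range(q) for j in range(q)}

# Modulo q, every M in nabla_2(3) has the form [[3, a], [0, 3]].
ok = all(target <= span_mod_q((m, a), (0, m), q) for a in range(q))
print(ok)  # expected: True
```

The enumeration is feasible only because $q$ is tiny; for larger parameters one would argue structurally, exactly as the proof of the lemma does.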
Theorem 5.26. Let $q=(2e+1)t$. The pair $(e,q)$ is $n$-maximal if and only if $(2e+1)^{n-1}\mid t$. Proof. First, we suppose that $(2e+1)^{n-1}\mid t$ (or equivalently $(2e+1)^{n}\mid q$). By Corollary 5.25, for all $M\in\nabla_{i}(2e+1)$ and $1\leq i<n$ we have: $$t\mathbb{Z}_{q}^{i}\subseteq(2e+1)^{n-1}\mathbb{Z}_{q}^{i}\subseteq(2e+1)^{i}\mathbb{Z}_{q}^{i}\subseteq\mbox{span}(\overline{M}),$$ and by Lemma 5.23 the pair $(e,q)$ is $n$-maximal. Now we suppose that $(e,q)$ is $n$-maximal and consider the bidiagonal matrix $M\in\nabla_{n}(2e+1)$ which has $1$ in the secondary diagonal (i.e. in the diagonal above the principal diagonal). Since $(e,q)$ is $n$-maximal, there exist $A\in\nabla_{n}(t)$, $B\in\nabla_{n}(1)$ such that $AM=qB$. If we denote the first row of $A$ by $A_{1}=(a_{11},a_{12},\ldots,a_{1n})$ and the first row of $B$ by $B_{1}$, we have that $qB_{1}=(q,a_{11}+(2e+1)a_{12},a_{12}+(2e+1)a_{13},\ldots,a_{1,n-1}+(2e+1)a_{1n})$; using $a_{11}=t$ we deduce that: $$t+(-1)^{n}(2e+1)^{n-1}a_{1n}=\sum_{i=1}^{n-1}(-2e-1)^{i-1}(a_{1i}+(2e+1)a_{1,i+1})\equiv 0\pmod{q}.$$ If $h\in\mathbb{Z}$ is such that $t+(-1)^{n}(2e+1)^{n-1}a_{1n}=qh$, we have $t(1-(2e+1)h)=(-1)^{n+1}(2e+1)^{n-1}a_{1n}$, and since $\gcd(1-(2e+1)h,2e+1)=1$ we have $(2e+1)^{n-1}\mid t$. ∎ Since $\nabla_{n}(2e+1)=\mathcal{P}_{n}(e,q)$ holds in the maximal case, we obtain the following parametrization for ordered codes, which generalizes the first part of Theorem 3.22. Theorem 5.27. Let $(e,q)$ be an $n$-maximal pair. There is a parametrization $$\psi:\nabla_{n}(2e+1)_{\mbox{red}}\rightarrow LPL^{\infty}(n,e,q)_{o}$$ given by $\psi(M)=\mbox{span}(M)/q\mathbb{Z}^{n}$. Next we study isomorphism classes of perfect codes. Notation 5.28. A unimodular integer matrix is a square matrix with determinant $1$ or $-1$. We denote by $\Gamma_{n}=\{M\in M_{n}(\mathbb{Z}):M\textrm{ is unimodular}\}$.
If $A,B\in M_{n}(\mathbb{Z})$ we say that $A$ and $B$ are $\Gamma_{n}$-equivalent if there exist $U,V\in\Gamma_{n}$ such that $A=UBV$, and we write $A\underset{\Gamma}{\sim}B$ for this equivalence relation. We remark that two matrices $A$ and $B$ are $\Gamma$-equivalent if and only if we can obtain one from the other through a finite number of elementary operations on the rows and on the columns. For $X\subseteq M_{n}(\mathbb{Z})$ we denote by $X/\Gamma_{n}$ the quotient space for this equivalence relation. Theorem 5.29. Let $(e,q)$ be an $n$-maximal pair. There is a parametrization $$\psi_{\mathcal{A}}:\frac{\nabla_{n}(2e+1)_{\mbox{red}}}{\Gamma_{n}}\rightarrow\frac{LPL^{\infty}(n,e,q)_{o}}{\mathcal{A}}$$ given by $\psi_{\mathcal{A}}(M)=[\psi(M)]_{\mathcal{A}}$ (where $\psi$ is as in Theorem 5.27 and $[C]_{\mathcal{A}}$ denotes the isomorphism class of $C$). Proof. If $M$ is the generator matrix for a linear code $C\subseteq\mathbb{Z}_{q}^{n}$ then the matrix $qM^{-1}$ has integer coefficients and its Smith normal form determines the isomorphism class of $C$ (as an abelian group). On the other hand, for $M_{1},M_{2}\in\nabla_{n}(2e+1)_{\textrm{red}}$ we have the following equivalences: $$\mbox{span}(\overline{M_{1}})\underset{\mathcal{A}}{\sim}\mbox{span}(\overline{M_{2}})\Leftrightarrow qM_{1}^{-1}\underset{\Gamma}{\sim}qM_{2}^{-1}\Leftrightarrow\exists U,V\in\Gamma_{n}:UqM_{1}^{-1}V=qM_{2}^{-1}$$ $$\Leftrightarrow\exists U,V\in\Gamma_{n}:V^{-1}M_{1}U^{-1}=M_{2}\Leftrightarrow M_{1}\underset{\Gamma}{\sim}M_{2},$$ so $\psi_{\mathcal{A}}$ is well defined and injective. Since $\psi$ is surjective, $\psi_{\mathcal{A}}$ is also surjective, therefore $\psi_{\mathcal{A}}$ is a bijection. ∎ The next goal is to characterize which group isomorphism classes can be represented by maximal perfect codes. Definition 5.30. Let $G=\mathbb{Z}_{d_{1}}\times\ldots\times\mathbb{Z}_{d_{m}}$ with $d_{1}|d_{2}|\ldots|d_{m}$.
We say that $G$ is an $(n,e,q)$-admissible structure if there exists $C\in LPL^{\infty}(n,e,q)$ such that $C\simeq G$ as abelian groups. Lemma 5.31. For $a,x$ and $y$ non-zero integers we have $\left(\begin{matrix}a&0\\ 0&axy\end{matrix}\right)\underset{\Gamma}{\sim}\left(\begin{matrix}ay&a\\ 0&ax\end{matrix}\right).$ Proof. We have the following chain of $\Gamma$-equivalences: $$\left(\begin{matrix}a&0\\ 0&axy\end{matrix}\right)\underset{\Gamma}{\sim}\left(\begin{matrix}a&a\\ 0&axy\end{matrix}\right)\underset{\Gamma}{\sim}\left(\begin{matrix}a&a\\ ax&axy+ax\end{matrix}\right)\underset{\Gamma}{\sim}\left(\begin{matrix}a&a-ay\\ ax&ax\end{matrix}\right)$$ $$\underset{\Gamma}{\sim}\left(\begin{matrix}ay&a-ay\\ 0&ax\end{matrix}\right)\underset{\Gamma}{\sim}\left(\begin{matrix}ay&a\\ 0&ax\end{matrix}\right)$$ ∎ Lemma 5.32. If $M$ is an $n\times n$ integer matrix with determinant $(2e+1)^{n}$, then $\Gamma_{n}M\Gamma_{n}\cap\nabla_{n}(2e+1)_{\mbox{red}}\neq\emptyset$. Moreover, $M$ is $\Gamma$-equivalent to a bidiagonal matrix $A\in\nabla_{n}(2e+1)_{\mbox{red}}$. Proof. For $n=1$, $\det(M)=2e+1$ implies $M=(2e+1)\in\nabla_{1}(2e+1)_{\mbox{red}}$. Let us suppose that the result is true for $n-1$ and consider an $n\times n$ integer matrix $M$ with $\det(M)=(2e+1)^{n}$. By the Smith normal form theorem, $M\underset{\Gamma}{\sim}D$ where $D=\mbox{diag}(d_{1},d_{2},\ldots,d_{n})$ is a diagonal matrix with $d_{1}\mid d_{2}\mid\ldots\mid d_{n}$ and $d_{1}d_{2}\ldots d_{n}=(2e+1)^{n}$; in particular $d_{n}=(2e+1)x$ and $2e+1=d_{1}y$ for some integers $x$ and $y$. Permuting the second and $n$th rows of $D$ and then the second and $n$th columns, we have $D\underset{\Gamma}{\sim}\widetilde{D}:=\mbox{diag}(d_{1},d_{n},d_{3},\ldots,d_{n-1},d_{2})$.
Applying Lemma 5.31 with $d_{n}=d_{1}xy$ we obtain $\widetilde{D}\underset{\Gamma}{\sim}\left(\begin{matrix}2e+1&v\\ 0^{t}&D_{0}\end{matrix}\right)$ where $v=(d_{1},0,\ldots,0)\in\mathbb{Z}^{n-1}$ and $D_{0}=\mbox{diag}(d_{1}x,d_{3},\ldots,d_{n-1},d_{2})$. By the inductive hypothesis there exist unimodular matrices $U_{0},V_{0}\in\Gamma_{n-1}$ such that $U_{0}D_{0}V_{0}\in\nabla_{n-1}(2e+1)$ with $U_{0}D_{0}V_{0}$ bidiagonal, thus $$\left(\begin{matrix}1&0\\ 0^{t}&U_{0}\end{matrix}\right)\left(\begin{matrix}2e+1&v\\ 0^{t}&D_{0}\end{matrix}\right)\left(\begin{matrix}1&0\\ 0^{t}&V_{0}\end{matrix}\right)=\left(\begin{matrix}2e+1&vV_{0}\\ 0^{t}&U_{0}D_{0}V_{0}\end{matrix}\right)$$ is a bidiagonal matrix in $\nabla_{n}(2e+1)$. Since this matrix has $2e+1$ on the main diagonal, we can obtain a reduced matrix from it by applying some elementary row operations, thus the result holds for $n$. ∎ Corollary 5.33. If we denote by $M_{n}(\mathbb{Z},\det=D)$ the set of matrices $M\in M_{n}(\mathbb{Z})$ with $\det(M)=D$, then each equivalence class in $\nabla_{n}(2e+1)_{\textrm{red}}/\Gamma_{n}$ is contained in exactly one equivalence class in $M_{n}(\mathbb{Z},\det=(2e+1)^{n})/\Gamma_{n}$. Moreover, both quotient sets have the same number of elements. Theorem 5.34. Let $(e,q)$ be an $n$-maximal pair where $q=(2e+1)t$ and let $G=\mathbb{Z}_{d_{1}}\times\ldots\times\mathbb{Z}_{d_{n}}$ with $d_{1}|d_{2}|\ldots|d_{n}$. Then $G$ is an $(n,e,q)$-admissible structure if and only if $d_{1}d_{2}\ldots d_{n}=t^{n}$ and $d_{n}|q$. Proof. The direct implication follows from the fact that if $C\in LPL^{\infty}(n,e,q)$ then $\#C=t^{n}$ and $qC=\{0\}$ (because $C\subseteq\mathbb{Z}_{q}^{n}$). We denote by $\mathcal{D}=\{(d_{1},\ldots,d_{n})\in\mathbb{N}^{n}:d_{1}|\ldots|d_{n},d_{1}\ldots d_{n}=t^{n},d_{n}|q\}$.
To prove the converse implication it suffices to prove that $\#\mathcal{D}=\#LPL^{\infty}(n,e,q)_{o}/\mathcal{A}$ (where $X/\mathcal{A}$ denotes the set of isomorphism classes of codes in $X$). By Theorem 5.29, Lemma 5.32 and the Smith normal form theorem we have that $\#LPL^{\infty}(n,e,q)_{o}/\mathcal{A}=\#\mathcal{F}$ where $\mathcal{F}=\{(f_{1},\ldots,f_{n})\in\mathbb{N}^{n}:f_{1}|\ldots|f_{n},f_{1}\ldots f_{n}=(2e+1)^{n}\}$, so it suffices to prove that $\#\mathcal{D}=\#\mathcal{F}$. We consider $X=\{(x_{1},\ldots,x_{n})\in\mathbb{N}^{n}:x_{1}\mid\ldots\mid x_{n}\mid q\}$ and the involution $\psi:X\rightarrow X$ defined by $\psi(x_{1},\ldots,x_{n})=(y_{1},\ldots,y_{n})$ where $x_{i}y_{j}=q$ if $i+j=n+1$. For $x=(x_{1},\ldots,x_{n})\in X$ we denote $p(x)=x_{1}\ldots x_{n}$. Since $(2e+1)^{n-1}\mid t$ we have $\mathcal{F}\subseteq X$, and the property $p(\psi(a))\cdot p(a)=q^{n}$ implies $\psi(\mathcal{F})=\mathcal{D}$ and $\#\mathcal{F}=\#\mathcal{D}$. ∎ The involution argument in the above proof gives us the following corollary. Corollary 5.35. Let $q=(2e+1)t$ with $(2e+1)^{n-1}\mid t$ and let $C\in LPL^{\infty}(n,e,q)$ be a code with generator matrix $M$. If the Smith normal form of $M$ is given by $D=\mbox{diag}(d_{1},\ldots,d_{n})$, then $C\simeq\mathbb{Z}_{{q}/{d_{n}}}\times\mathbb{Z}_{{q}/{d_{n-1}}}\times\ldots\times\mathbb{Z}_{{q}/{d_{1}}}$. Remark 5.36. The previous corollary does not hold for non-maximal codes. The following corollary gives us the number of isomorphism classes of maximal perfect codes in $LPL^{\infty}(n,e,q)$. Corollary 5.37. Let $q=(2e+1)t$ with $(2e+1)^{n-1}\mid t$ and let $f(x)$ be the generating function $f(x)=\frac{1}{(1-x)(1-x^{2})\ldots(1-x^{n})}$.
If $\nu_{p}(m)$ denotes the exponent of the prime $p$ in the prime factorization of $m$, then the number of isomorphism classes of perfect codes in $LPL^{\infty}(n,e,q)$ is given by: $$\prod_{p\mid 2e+1}[x^{n\nu_{p}(2e+1)}]f(x).$$ In particular for $n=2$ this number is given by $$\prod_{p}[x^{2\nu_{p}(2e+1)}]\frac{1}{(1-x)(1-x^{2})}=\prod_{p}\left(\nu_{p}(2e+1)+1\right)=\sigma_{0}(2e+1),$$ the number of divisors of $2e+1$ (in accordance with Corollary 3.25, since $\gcd(2e+1,t)=2e+1$). For $n=3$ this number is given by $$\prod_{p}[x^{3\nu_{p}(2e+1)}]\frac{1}{(1-x)(1-x^{2})(1-x^{3})}=\prod_{p}\lceil{3}/{4}\cdot(\nu_{p}(2e+1)+1)^{2}\rfloor$$ where $\lceil x\rfloor$ denotes the nearest integer to $x$. In particular, when $2e+1$ is square-free this number is $3^{\omega(2e+1)}$ where $\omega(n)$ is the number of distinct prime divisors of $n$. Proof. Let $X(\alpha)=\{(x_{1},\ldots,x_{n})\in\mathbb{N}^{n}:x_{1}\leq\ldots\leq x_{n},x_{1}+\ldots+x_{n}=n\alpha\}$ for $\alpha\in\mathbb{Z}^{+}$ and $\nu_{p}(a_{1},\ldots,a_{n}):=(\nu_{p}(a_{1}),\ldots,\nu_{p}(a_{n}))$ (where $\nu_{p}(m)$ denotes the exponent of $p$ in $m$). If $\mathcal{F}$ is as in the proof of Theorem 5.34, then for each prime divisor $p\mid 2e+1$ and for each $a\in\mathcal{F}$ we have $\nu_{p}(a)\in X(\nu_{p}(2e+1))$. In this way we have a bijection between $\mathcal{F}$ and $\prod_{p}X(\nu_{p}(2e+1))$, where $p$ runs over the prime divisors of $2e+1$; in particular the number of isomorphism classes of $(n,e,q)$-codes (with $(2e+1)^{n-1}\mid t$) is given by $\#\mathcal{F}=\prod_{p\mid 2e+1}\#X(\nu_{p}(2e+1))$. With the standard change of variables $x_{i}=y_{n}+\ldots+y_{n+1-i}$ for $1\leq i\leq n$ we have $\#X(\alpha)=\#\{(y_{1},\ldots,y_{n})\in\mathbb{N}^{n}:y_{1}+2y_{2}+\ldots+ny_{n}=n\alpha\}$, which is clearly the coefficient of $x^{n\alpha}$ in the generating function $f(x)=\frac{1}{(1-x)(1-x^{2})\ldots(1-x^{n})}$.
For $n=2$ and $n=3$ we have the well known formulas $f(x)=\sum_{n=0}^{\infty}\left[\frac{n+2}{2}\right]x^{n}$ and $f(x)=\sum_{n=0}^{\infty}\lceil\frac{(n+3)^{2}}{12}\rfloor x^{n}$ (see for example p.10 of [9]) respectively. ∎ 5.4. The $n$-cyclic case By Proposition 4.10, the set $LPL^{\infty}(n,e,q)$ contains a cyclic code if and only if $t^{n-1}|2e+1$. If this condition is satisfied we say that $(e,q)$ is an $n$-cyclic pair. In this case we can also obtain a characterization of the admissible structures. Theorem 5.38. Let $(e,q)$ be an $n$-cyclic pair where $q=(2e+1)t$ and let $d_{1},d_{2},\ldots,d_{n}$ be positive integers verifying $d_{1}|\ldots|d_{n}$ and $d_{1}\cdots d_{n}=t^{n}$. Then there exists $C\in LPL^{\infty}(n,e,q)$ such that $C\simeq\mathbb{Z}_{d_{1}}\times\ldots\times\mathbb{Z}_{d_{n}}$. Proof. By Lemma 5.32 there exists an integer matrix $A=(a_{ij})_{1\leq i,j\leq n}\in\nabla_{n}(t)$ with $A\underset{\Gamma}{\sim}\mbox{diag}(d_{1},\ldots,d_{n})$. We define $M\in\nabla_{n}(2e+1,\mathbb{Q})$ recursively as follows: (7) $$\left\{\begin{array}[]{ll}M_{n}=(2e+1)e_{n}&\\ M_{i}=(2e+1)e_{i}-\sum_{k=i+1}^{n}({a_{ik}}/{t})M_{k}&\textrm{for }1\leq i<n,\end{array}\right.$$ where $M_{i}$ denotes the $i$th row of $M$. Using $t^{n-1}\mid 2e+1$, it is not difficult to prove by induction that $M_{i}\in t^{i-1}\mathbb{Z}^{n}$ for $1\leq i\leq n$, which implies that the matrix $M$ has integer coefficients, hence $M\in\nabla_{n}(2e+1)$. Equation (7) can be written in matrix form as $AM=qI$; in particular $M\in\mathcal{P}_{n}(e,q)$ (that is, $M$ is $(e,q)$-perfect) and $qM^{-1}\in M_{n}(\mathbb{Z})$. This last fact implies that $M$ is the generator matrix of a code $C\in LPL^{\infty}(n,e,q)$ whose group structure is given by the Smith normal form of $qM^{-1}=A$, that is, $C\simeq\mathbb{Z}_{d_{1}}\times\ldots\times\mathbb{Z}_{d_{n}}$.
∎ We remark that in the $n$-cyclic case, by the structure theorem for finitely generated abelian groups, every abelian group $G$ of order $t^{n}$ is represented by a code $C\in LPL^{\infty}(n,e,q)$ (in the sense that $C\simeq G$). 6. Concluding remarks In this paper we derive several results on $q$-ary perfect codes in the maximum metric or, equivalently, on tilings of the torus $\mathcal{T}_{q}^{n}$ (or $q$-periodic lattices) by cubes of odd side length. A class of matrices (perfect matrices), which provide generator matrices of a special form for perfect codes, is introduced. We describe isometry and isomorphism classes of two-dimensional perfect codes, extending some results to maximal codes in general dimensions. Several constructions of perfect codes from codes of smaller dimension and via section are given. Through these constructions we extend results obtained for dimension two to arbitrary dimensions, and interesting families of $n$-dimensional perfect codes are obtained (such as those in Corollaries 4.8 and 4.9). A characterization of which group isomorphism classes can be represented by $(n,e,q)$-perfect codes is derived for the two-dimensional case, for the maximal case and for the cyclic case. Potential further problems related to this work are interesting to investigate. The fact that every linear perfect code is standard (which is a consequence of the Minkowski-Hajós theorem) guarantees that the permutation associated with a code (Definition 5.2) is well defined. It is likely to be possible to extend some of our results to non-linear codes for which the permutation associated with the code is well defined. We could also study isometry classes of perfect non-linear codes ([17, 24] could be helpful).
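As an illustration of how concrete the constructions above are, the recursion (7) from the proof of Theorem 5.38 can be checked numerically. The following sketch uses assumed illustrative parameters, not taken from the paper's examples: $n=3$, $t=3$, $2e+1=9$ (so $e=4$, $q=27$ and $t^{n-1}\mid 2e+1$ holds) and an arbitrary choice of $A\in\nabla_{3}(3)$; it confirms that the resulting $M$ has integer entries and satisfies $AM=qI$.

```python
# Sanity check of the recursive construction (7) in Theorem 5.38,
# with assumed parameters n = 3, t = 3, 2e+1 = 9 (so t^{n-1} | 2e+1), q = 27.
from fractions import Fraction

n, t = 3, 3
m = 9                  # 2e + 1
q = m * t              # 27

# A in nabla_3(t): upper triangular with t on the diagonal; the entries
# above the diagonal are an arbitrary illustrative choice.
A = [[3, 1, 2],
     [0, 3, 1],
     [0, 0, 3]]

# Build M bottom-up: M_n = (2e+1) e_n and
# M_i = (2e+1) e_i - sum_{k > i} (a_ik / t) M_k.
M = [[Fraction(0)] * n for _ in range(n)]
for i in range(n - 1, -1, -1):
    row = [Fraction(0)] * n
    row[i] = Fraction(m)
    for k in range(i + 1, n):
        row = [r - Fraction(A[i][k], t) * mk for r, mk in zip(row, M[k])]
    M[i] = row

# Every entry of M is an integer (this uses t^{n-1} | 2e+1) ...
assert all(x.denominator == 1 for row in M for x in row)

# ... and A M = q I, so M is (e,q)-perfect and q M^{-1} = A.
prod = [[sum(A[i][k] * M[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
ok = prod == [[q if i == j else 0 for j in range(n)] for i in range(n)]
print(ok)  # expected: True
```

The Smith normal form of $A$ then gives the group structure of the resulting code, exactly as in the last step of the proof.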
It would be interesting to obtain for higher dimensions a result analogous to the parametrization theorem (Theorem 3.22), that is, one in which isometry and isomorphism classes correspond to certain generalized cosets (Theorems 5.27 and 5.29 provide a partial answer for the maximal case). In [13], tilings by the notched cube and by the extended cube were considered; it may be possible to extend some of the results obtained here to these more general shapes. Another remarkable fact is related to structural properties of the admissible structures (isomorphism classes that can be represented by a perfect code in the maximum metric). It is possible to define a natural poset structure on the set of isomorphism classes of abelian groups of order $t^{n}$ in such a way that the cyclic group $\mathbb{Z}_{t^{n}}$ and the cartesian group $(\mathbb{Z}_{t})^{n}$ correspond to the maximum and minimum elements of this poset. The admissible structures form an ideal in this poset in specific situations (for example the two-dimensional case, and the maximal and cyclic cases in arbitrary dimensions), and we wonder if this is always the case. References [1] C. Alves, S. Costa, Commutative group codes in $\mathbb{R}^{4},\mathbb{R}^{6},\mathbb{R}^{8}$ and $\mathbb{R}^{16}$ - Approaching the bound, Discrete Mathematics 313: 1677-1687, 2013. [2] M. Blaum, J. Bruck, A. Vardy, Interleaving schemes for multidimensional cluster errors, IEEE Transactions on Information Theory 44(2): 730-743, 1998. [3] J. H. Conway, N. J. A. Sloane, Sphere Packings, Lattices and Groups, Springer-Verlag, New York, USA, 1998. [4] K. Corrádi, S. Szabó, A combinatorial approach for Keller’s conjecture, Periodica Mathematica Hungarica 21(2): 95-100, 1990. [5] T. Etzion, E. Yaakobi, Error-correction of multidimensional bursts, IEEE Transactions on Information Theory 55: 961-976, 2009. [6] J. B. Fraleigh, First Course in Abstract Algebra, Pearson New International Edition, 2013. [7] S. W. Golomb, L. R.
Welch, Perfect Codes in the Lee metric and the packing of polyominoes, SIAM Journal on Applied Mathematics 18(2): 302-317, 1970. [8] G. Hajós, Über einfache und mehrfache Bedeckung des n-dimensionalen Raumes mit einem Würfelgitter, Mathematische Zeitschrift 47(1): 427-467, 1942. [9] G. H. Hardy, Some Famous Problems of the Theory of Numbers and in Particular Waring’s Problem, Oxford University Press, UK, 1920. [10] O. H. Keller, Über die lückenlose Einfüllung des Raumes mit Würfeln, Journal für die Reine und Angewandte Mathematik 163: 231-248, 1930. [11] A. P. Kisielewicz, On the structure of cube tilings of $R^{3}$ and $R^{4}$, Journal of Combinatorial Theory, Series A 120(1): 1-10, 2013. [12] T. Klove, T. Lin, D. Tsai, W. Tzeng, Permutation arrays under the Chebyshev distance, IEEE Transactions on Information Theory 56(6): 2611-2617, 2010. [13] M. N. Kolountzakis, Lattice tilings by cubes: whole, notched and extended, The Electronic Journal of Combinatorics 5(1): $\#$R14, 1998. [14] A. P. Kisielewicz, K. Przeslawski, Polyboxes, cube tilings and rigidity, Discrete & Computational Geometry 40(1): 1–30, 2008. [15] A. P. Kisielewicz, K. Przeslawski, Rigidity and the chessboard theorem for cube packings, European Journal of Combinatorics 33(6): 1113-1119, 2012. [16] A. P. Kisielewicz, K. Przeslawski, The coin exchange problem and the structure of cube tilings, The Electronic Journal of Combinatorics 19: $\#$R26, 2012. [17] A. Kisielewicz, K. Przeslawski, The structure of cube tilings under symmetry conditions, Discrete & Computational Geometry 48(3): 777-782, 2012. [18] J. H. van Lint, Introduction to Coding Theory, Springer-Verlag, New York, 1982. [19] J. C. Lagarias, P. W. Shor, Keller’s cube-tiling conjecture is false in high dimensions, Bulletin of the American Mathematical Society 27: 279-283, 1992. [20] J. C. Lagarias, P. W. Shor, Cube tilings of $\mathbb{R}^{n}$ and nonlinear codes, Discrete & Computational Geometry 11: 359–391, 1994. [21] J.
Mackey, A cube tiling of dimension eight with no facesharing, Discrete & Computational Geometry 28(2): 275–279, 2002. [22] H. Minkowski, Diophantische Approximationen, Teubner, Leipzig, 1907. Reprint: Physica-Verlag, Würzburg, 1961. [23] F. J. MacWilliams, N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1978. [24] M. Muniz, S. I. R. Costa, Labeling of Lee and Hamming spaces, Discrete Mathematics 260(1): 119-134, 2003. [25] O. Perron, Über lückenlose Ausfüllung des n-dimensionalen Raumes durch kongruente Würfel, Mathematische Zeitschrift 46(1): 1-26, 1940. [26] R. M. Roth, P. H. Siegel, Lee-metric BCH codes and their application to constrained and partial-response channels, IEEE Transactions on Information Theory 40(4): 1083-1096, 1994. [27] K. U. Schmidt, Complementary sets, generalized Reed-Muller codes, and power control for OFDM, IEEE Transactions on Information Theory 53(2): 808-814, 2007. [28] M. D. Sikiric, Y. Itoh, Combinatorial cube packings in the cube and the torus, European Journal of Combinatorics 31(2): 517-534, 2010. [29] M. D. Sikiric, Y. Itoh, A. Poyarkov, Cube packings, second moment and holes, European Journal of Combinatorics 28: 715-725, 2007. [30] M. Shieh, S. Tsai, Decoding frequency permutation arrays under Chebyshev distance, IEEE Transactions on Information Theory 56(11): 5730-5737, 2010. [31] S. Szabó, Topics in Factorization of Abelian Groups, Hindustan Book Agency, New Delhi, India, 2004. [32] I. Tamo, M. Schwartz, Correcting limited-magnitude errors in the rank-modulation scheme, IEEE Transactions on Information Theory 56: 2551-2560, 2010. [33] C. Zong, What is known about unit cubes, Bulletin of the American Mathematical Society 42(2): 181-211, 2005.
The Post-Decoherence Density Matrix Propagator for Quantum Brownian Motion Jonathan Halliwell and Andreas Zoupas Theory Group, Blackett Laboratory Imperial College, London SW7 2BZ UK Preprint IC 96–95/67, August, 1996 Submitted to Physical Review D ABSTRACT: Using the path integral representation of the density matrix propagator of quantum Brownian motion, we derive its asymptotic form for times greater than the so-called localization time, $(\hbar/\gamma kT)^{{1\over 2}}$, where $\gamma$ is the dissipation and $T$ the temperature of the thermal environment. The localization time is typically greater than the decoherence time, but much shorter than the relaxation time, $\gamma^{-1}$. We use this result to show that the reduced density operator rapidly evolves into a state which is approximately diagonal in a set of generalized coherent states. We thus reproduce, using a completely different method, a result we previously obtained using the quantum state diffusion picture (Phys.Rev.D52, 7294 (1995)). We also go beyond this earlier result, in that we derive an explicit expression for the weighting of each phase space localized state in the approximately diagonal density matrix, as a function of the initial state. For sufficiently long times it is equal to the Wigner function, and we confirm that the Wigner function is positive for times greater than the localization time (multiplied by a number of order $1$). 1. INTRODUCTION One of the simplest open systems that is amenable to straightforward analysis is the quantum Brownian motion model. This model consists of a non-relativistic point particle, possibly in a potential, coupled to a bath of harmonic oscillators in a thermal state. The quantum Brownian motion model has been used very extensively in studies of decoherence and emergent classicality (see for example, Refs.[1,2,3,4,5,6,7,8]). 
In the simplest case of a free particle of mass $m$ in a high temperature bath, with negligible dissipation, the master equation for the reduced density matrix $\rho(x,y)$ of the point particle is, $${\partial\rho\over\partial t}={i\hbar\over 2m}\left({\partial^{2}\rho\over\partial x^{2}}-{\partial^{2}\rho\over\partial y^{2}}\right)-{1\over 2}a^{2}(x-y)^{2}\rho$$ (1.1) where $a^{2}=4m\gamma kT/\hbar^{2}$. (More general forms of this equation, together with derivations of it, may be found in many places; see, for example, Refs.[2,9,5].) One of the most important properties of (1.1) (and also of its more general forms) is that the density operator tends to become approximately diagonal in both position and momentum after a short time. This has been seen in numerical solutions and in the evolution of particular types of initial states for which analytic solution is possible [10,6,11,7,12,8,13,14,15]. A more precise demonstration of this statement was given in Ref.[16] by appealing to an alternative description of open systems known as the quantum state diffusion picture [17,18,19,20,21]. In that picture, the density operator $\rho$ satisfying (1.1) is regarded as a mean over a distribution of pure state density operators, $$\rho=M|\psi{\rangle}{\langle}\psi|$$ (1.2) where $M$ denotes the mean (defined below), with the pure states evolving according to a non-linear stochastic Langevin-Ito equation, which for the model of this paper is, $$|d\psi{\rangle}=-{i\over\hbar}H|\psi{\rangle}dt-{1\over 2}\left(L-{\langle}L{\rangle}\right)^{2}|\psi{\rangle}dt+\left(L-{\langle}L{\rangle}\right)|\psi{\rangle}\ d\xi(t)$$ (1.3) for the normalized state vector $|\psi{\rangle}$, where $H=p^{2}/2m$ and $L=a{\hat{x}}$. Here, $d\xi$ is a complex differential random variable representing a complex Wiener process.
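Ignoring the kinetic term, the non-unitary part of (1.1) suppresses each off-diagonal element as $e^{-a^{2}(x-y)^{2}t/2}$, so a superposition of packets separated by $\ell$ loses coherence on a timescale $2/(a^{2}\ell^{2})=\hbar^{2}/(2m\gamma kT\ell^{2})$, of the same order as the decoherence time discussed in Section 1. A minimal numeric sketch; all parameter values below are illustrative choices, not taken from the text:

```python
import numpy as np

# Decoherence factor exp(-a^2 (x-y)^2 t / 2) from the non-unitary part of (1.1),
# with a^2 = 4 m gamma k T / hbar^2.  Illustrative units, not from the paper.
hbar, m, gamma, kT = 1.0, 1.0, 0.1, 100.0
a2 = 4 * m * gamma * kT / hbar**2

ell = 1.0                      # separation of the superposed packets
t_dec = 2.0 / (a2 * ell**2)    # timescale on which coherence is suppressed

def coherence(t, separation=ell):
    """|rho(x, y, t) / rho(x, y, 0)| for |x - y| = separation, kinetic term neglected."""
    return np.exp(-0.5 * a2 * separation**2 * t)

print(coherence(t_dec))        # e^{-1}: coherence reduced to ~0.37
print(coherence(10 * t_dec))   # e^{-10}: essentially gone
```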
The linear and quadratic means are, $$M[d\xi d\xi^{*}]=dt,\quad M[d\xi d\xi]=0,\quad M[d\xi]=0$$ (1.4) The appeal of this picture is that the solutions to the stochastic equation (1.3) appear to describe the expected behaviour of an individual history of the system, and have been seen to correspond to single runs of laboratory experiments. For example, for the quantum Brownian motion model, the solutions tend to phase space localized states of constant width whose centres undergo classical Brownian motion [20,16,22,23,24]. The timescale of this process, the localization time, is, at its slowest, of order $(\hbar/\gamma kT)^{{1\over 2}}$, which is the timescale on which the thermal fluctuations overtake the quantum fluctuations [25,26,27]. For an initial superposition of localized states a distance $\ell$ apart, localization initially proceeds on a much shorter timescale, of order $\hbar^{2}/(\ell^{2}m\gamma kT)$ (which is often called the decoherence time [8,14]), thereafter going over to the slower timescale above. For us, the interesting feature of the quantum state diffusion picture is that it gives some useful information about the form of the density operator on time scales greater than the localization time. Given a set of localized phase space solutions $|\Psi_{pq}{\rangle}$, the density operator may be reconstructed via (1.2). This, it may be shown [16], may be written explicitly as $$\rho=\int dpdq\ f(p,q,t)|\Psi_{pq}{\rangle}{\langle}\Psi_{pq}|$$ (1.5) Here, $f(p,q,t)$ is a non-negative, normalized solution to the Fokker-Planck equation describing the classical Brownian motion undergone by the centres of the stationary solutions. This is therefore an explicit, albeit indirect, demonstration of the approach to approximately phase space diagonal form on short time scales. The above demonstration was described by us in detail in Ref.[16].
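The means (1.4) are realized by the standard representation $d\xi=(dW_{1}+i\,dW_{2})/\sqrt{2}$, with $dW_{1}$, $dW_{2}$ independent real Wiener increments of variance $dt$; this representation is an assumption consistent with (1.4), not spelled out in the text. A quick Monte Carlo check:

```python
import numpy as np

# Realize the complex Wiener increment as d_xi = (dW1 + i dW2)/sqrt(2),
# with dW1, dW2 independent real Gaussians of variance dt.  This is the
# standard representation consistent with the means in (1.4).
rng = np.random.default_rng(0)
dt, n = 1e-3, 200_000
dW1 = rng.normal(0.0, np.sqrt(dt), n)
dW2 = rng.normal(0.0, np.sqrt(dt), n)
d_xi = (dW1 + 1j * dW2) / np.sqrt(2)

print(np.mean(d_xi * np.conj(d_xi)).real / dt)  # M[d_xi d_xi*] = dt -> ~1
print(np.mean(d_xi * d_xi) / dt)                # M[d_xi d_xi]   = 0  -> ~0
print(np.mean(d_xi) / np.sqrt(dt))              # M[d_xi]        = 0  -> ~0
```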
However, we were not able to deduce an explicit form for the function $f(p,q,t)$ using the quantum state diffusion picture. That is, we know that it is a solution to the Fokker-Planck equation, but it was not clear how to pick out the particular solution corresponding to a particular initial density operator. Intuitively, it is clear that $f(p,q,t)$ is something like the Wigner function of the initial state, coarse-grained sufficiently to make it positive, evolved forwards in time, and with the interference terms thrown away. We would like to be able to show this explicitly. The aim of the present paper is to derive the form (1.5) for times greater than the localization time directly from the path integral representation of the density matrix propagator corresponding to (1.1), without using the quantum state diffusion picture. As we shall see, this derivation has the advantage that it gives an explicit expression for $f(p,q,t)$. In particular, we shall show that $f(p,q,t)$ coincides with the Wigner function $W_{t}(p,q)$ of the density operator at time $t$, for sufficiently large times. 2. THE DENSITY MATRIX PROPAGATOR The solution to the master equation (1.1) may be written in terms of the propagator, $J$, $$\rho_{t}(x,y)=\int dx_{0}dy_{0}\ J(x,y,t|x_{0},y_{0},0)\ \rho_{0}(x_{0},y_{0})$$ (2.1) (see, for example, Refs.[2,27] for further details of the quantum Brownian motion model).
The propagator may be given in general by a path integral expression, which for the particular case considered here is $$J(x_{f},y_{f},t|x_{0},y_{0},0)=\int{D}x{D}y\ \exp\left({im\over 2\hbar}\int dt({\dot{x}}^{2}-{\dot{y}}^{2})-{a^{2}\over 2}\int dt(x-y)^{2}\right)$$ (2.2) This is readily evaluated, with the result, $$\eqalignno{J(x_{f},y_{f},t|x_{0},y_{0},0)=&\exp\left({im\over 2\hbar t}\left[(x_{f}-x_{0})^{2}-(y_{f}-y_{0})^{2}\right]\right.\cr&\left.-{a^{2}t\over 6}\left[(x_{f}-y_{f})^{2}+(x_{f}-y_{f})(x_{0}-y_{0})+(x_{0}-y_{0})^{2}\right]\right)&(2.3)\cr}$$ (For convenience we will ignore prefactors in what follows. They may be recovered where required by appropriate normalizations.) The main result of the present paper comes from the simple observation that the real part of the exponent in the path integral (2.2) may be written $$\exp\left(-{a^{2}\over 2}\int dt\ (x-y)^{2}\right)=\int{D}{\bar{x}}\ \exp\left(-a^{2}\int dt\ (x-{\bar{x}})^{2}-a^{2}\int dt\ (y-{\bar{x}})^{2}\right)$$ (2.4) The path integral representation of the propagator may therefore be written, $$J(x_{f},y_{f},t|x_{0},y_{0},0)=\int{D}{\bar{x}}\ K_{{\bar{x}}}(x_{f},t|x_{0},0)\ K_{{\bar{x}}}^{*}(y_{f},t|y_{0},0)$$ (2.5) where $$K_{{\bar{x}}}(x_{f},t|x_{0},0)=\int{D}x\ \exp\left({im\over 2\hbar}\int dt\ {\dot{x}}^{2}-a^{2}\int dt\ (x-{\bar{x}})^{2}\right)$$ (2.6) For a pure initial state, $\rho_{0}(x,y)=\Psi_{0}(x)\Psi_{0}^{*}(y)$, the density operator at time $t$ may therefore be written, $$\rho_{t}(x,y)=\int{D}{\bar{x}}\ \Psi_{{\bar{x}}}(x,t)\Psi^{*}_{{\bar{x}}}(y,t)$$ (2.7) where the (unnormalized) wave function $\Psi_{{\bar{x}}}$ is given by $$\Psi_{{\bar{x}}}(x_{f},t)=\int dx_{0}\ K_{{\bar{x}}}(x_{f},t|x_{0},0)\ \Psi_{0}(x_{0})$$ (2.8) (Wave functions of this type often appear in discussions of systems undergoing continuous measurement [28,29,30,31].)
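The functional identity (2.4) is just the product over time slices of an elementary Gaussian identity: for a single slice, $\int d{\bar{x}}\ e^{-a^{2}(x-{\bar{x}})^{2}-a^{2}(y-{\bar{x}})^{2}}=\sqrt{\pi/2a^{2}}\ e^{-a^{2}(x-y)^{2}/2}$, the prefactor being exactly the kind of normalization dropped above. A single-slice numeric check:

```python
import numpy as np
from scipy.integrate import quad

# Single-time-slice version of identity (2.4):
#   integral over xbar of exp(-a^2 (x-xbar)^2 - a^2 (y-xbar)^2)
#     = sqrt(pi/(2 a^2)) * exp(-a^2 (x-y)^2 / 2)
# The prefactor is the kind of normalization the paper discards.
a2 = 1.7          # illustrative value of a^2
x, y = 0.4, -0.9  # arbitrary test points

lhs, _ = quad(lambda xb: np.exp(-a2 * (x - xb)**2 - a2 * (y - xb)**2),
              -np.inf, np.inf)
rhs = np.sqrt(np.pi / (2 * a2)) * np.exp(-0.5 * a2 * (x - y)**2)
print(lhs, rhs)   # agree to quadrature accuracy
```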
Our strategy is to first evaluate the quantity $K_{{\bar{x}}}$, examine its asymptotic form for times greater than the localization time, and then use it to reconstruct the density matrix propagator, $J$. The reason we expect this to yield the desired result is that, up to normalization factors and ignoring the fact that ${\bar{x}}$ is real not complex, Eq.(2.8) is essentially the solution to the Langevin-Ito equation, (1.3), so the phase space localization effect should be visible in its long time limit. Moreover, Eq.(2.7) is the analogue of (1.2) or (1.5), so by reorganizing the functional integral over ${\bar{x}}(t)$, we might reasonably expect to derive (1.5). The path integral (2.6) is essentially the same as that for a harmonic oscillator coupled to an external source, with the complication that the frequency is complex. The path integral is therefore readily carried out (see Ref.[32], for example), with the result, $$\eqalignno{K_{{\bar{x}}}(x_{f},t|x_{0},0)=N&\exp\left({i\over\hbar}c_{1}(x_{f}^{2}+x_{0}^{2})+{i\over\hbar}c_{2}x_{f}x_{0}\right.\cr&\left.+c_{3}x_{f}+c_{4}x_{0}+c_{5}-a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(2.9)\cr}$$ where, $$\eqalignno{c_{1}&={m\omega\cos\omega t\over 2\sin\omega t}&(2.10)\cr c_{2}&=-{m\omega\over\sin\omega t}&(2.11)\cr c_{3}&={2a^{2}\over\sin\omega t}\int_{0}^{t}ds\ {\bar{x}}(s)\ \sin\omega s&(2.12)\cr c_{4}&={2a^{2}\over\sin\omega t}\int_{0}^{t}ds\ {\bar{x}}(s)\ \sin\omega(t-s)&(2.13)\cr c_{5}&={4i\hbar a^{2}\over m\omega\sin\omega t}\int_{0}^{t}ds\int_{0}^{s}ds^{\prime}\ {\bar{x}}(s){\bar{x}}(s^{\prime})\ \sin\omega(t-s)\ \sin\omega s^{\prime}&(2.14)\cr}$$ Here $\omega={\alpha}(1-i)$ and $${\alpha}=\left({\hbar a^{2}\over 4m}\right)^{{1\over 2}}=\left({\gamma kT\over\hbar}\right)^{{1\over 2}}$$ (2.15) The timescale of evolution according to (2.9) is therefore ${\alpha}^{-1}$, which coincides with the localization time discussed in Ref.[16].
The asymptotic properties of $K_{{\bar{x}}}$ are now easily seen. As $t{\rightarrow}\infty$, $c_{2}{\rightarrow}0$ and $c_{1}{\rightarrow}{1\over 2}m{\alpha}(1+i)$ like $e^{-{\alpha}t}$. Since $c_{2}{\rightarrow}0$, the propagator $K_{{\bar{x}}}$ factors into a product of functions of $x_{0}$ and $x_{f}$. The wave function (2.8) therefore “forgets” its initial conditions and becomes proportional to a Gaussian of the form $$\exp\left({i\over\hbar}c_{1}x_{f}^{2}+c_{3}x_{f}\right)$$ (2.16) on a timescale ${\alpha}^{-1}$. This is in complete agreement with the quantum state diffusion picture analysis of Refs.[20,16]. Now introduce $${\bar{q}}={\hbar\over m{\alpha}}{\rm Re}\ c_{3},\quad{\bar{p}}=\hbar\left({\rm Re}\ c_{3}+{\rm Im}\ c_{3}\right)$$ (2.17) Then the Gaussian may be written $$\eqalignno{\exp\left({i\over\hbar}c_{1}x^{2}+c_{3}x\right)&=\exp\left(-{m{\alpha}\over 2\hbar}(1-i)(x-{\bar{q}})^{2}+{i\over\hbar}{\bar{p}}x+{m{\alpha}\over 2\hbar}(1-i){\bar{q}}^{2}\right)\cr&\equiv{\langle}x|\Psi_{{\bar{p}}{\bar{q}}}{\rangle}\ e^{{m{\alpha}\over 2\hbar}(1-i){\bar{q}}^{2}}&(2.18)\cr}$$ The propagator $K_{{\bar{x}}}$ therefore has the form $$\eqalignno{K_{{\bar{x}}}(x_{f},t|x_{0},0)=&N\ {\langle}x_{f}|\Psi_{{\bar{p}}{\bar{q}}}{\rangle}\ e^{{m{\alpha}\over 2\hbar}(1-i){\bar{q}}^{2}}\cr&\times\ \exp\left({i\over\hbar}c_{1}x_{0}^{2}+c_{4}x_{0}+c_{5}-a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(2.19)\cr}$$ The generalized coherent states $|\Psi_{{\bar{p}}{\bar{q}}}{\rangle}$ depend on ${\bar{x}}(t)$ only through ${\bar{p}}$ and ${\bar{q}}$, which are functionals of ${\bar{x}}(t)$. They are close to minimal uncertainty states, satisfying $\Delta p\Delta q=\hbar/\sqrt{2}$ [20,16].
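The limits $c_{2}{\rightarrow}0$ and $c_{1}{\rightarrow}{1\over 2}m{\alpha}(1+i)$ follow from $\cot\omega t{\rightarrow}i$ for $\omega={\alpha}(1-i)$, and are easy to confirm numerically; the units with $m={\alpha}=1$ are our illustrative choice:

```python
import numpy as np

# Check the claimed large-t limits of the propagator coefficients (2.10)-(2.11):
# with omega = alpha*(1 - 1j), c2 = -m*omega/sin(omega t) -> 0 and
# c1 = m*omega*cos(omega t)/(2 sin(omega t)) -> (1/2) m alpha (1 + 1j),
# the approach being exponential on the timescale 1/alpha.
m, alpha = 1.0, 1.0            # illustrative units
omega = alpha * (1 - 1j)

def c1(t):
    return m * omega * np.cos(omega * t) / (2 * np.sin(omega * t))

def c2(t):
    return -m * omega / np.sin(omega * t)

limit = 0.5 * m * alpha * (1 + 1j)
for t in (5.0, 10.0, 20.0):
    print(t, abs(c1(t) - limit), abs(c2(t)))   # both deviations shrink exponentially
```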
The desired form of the propagator is now obtained by inserting (2.19) in (2.5), but reorganizing the functional integral over ${\bar{x}}(t)$ into ordinary integrations over ${\bar{p}}$ and ${\bar{q}}$ and functional integrations over the remaining parts of ${\bar{x}}(t)$. This may be achieved by writing the functional integral over ${\bar{x}}(t)$ as $$\int{D}{\bar{x}}=\int dpdq\ \int{D}{\bar{x}}\ \delta(p-{\bar{p}})\ \delta(q-{\bar{q}})$$ (2.20) with ${\bar{p}}$ and ${\bar{q}}$ given in terms of ${\bar{x}}$ by (2.17). We thus obtain, $$\eqalignno{J(x_{f},y_{f},t|x_{0},y_{0},0)=&\int dpdq\ \int{D}{\bar{x}}\ \delta(p-{\bar{p}})\ \delta(q-{\bar{q}})\ {\langle}x_{f}|\Psi_{pq}{\rangle}\ {\langle}\Psi_{pq}|y_{f}{\rangle}\ e^{{m{\alpha}\over\hbar}q^{2}}\cr&\times\ \exp\left({i\over\hbar}c_{1}x_{0}^{2}-{i\over\hbar}c_{1}^{*}y_{0}^{2}+c_{4}x_{0}+c_{4}^{*}y_{0}\right)\cr&\times\ \exp\left(c_{5}+c_{5}^{*}-2a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(2.21)\cr}$$ This may be written, $$J(x_{f},y_{f},t|x_{0},y_{0},0)=\int dpdq\ f(p,q,t|x_{0},y_{0})\ {\langle}x_{f}|\Psi_{pq}{\rangle}\ {\langle}\Psi_{pq}|y_{f}{\rangle}$$ (2.22) where $$\eqalignno{f(p,q,t|x_{0},y_{0})=&\int{D}{\bar{x}}\ \delta(p-{\bar{p}})\ \delta(q-{\bar{q}})\ e^{{m{\alpha}\over\hbar}q^{2}}\cr&\times\ \exp\left({i\over\hbar}c_{1}x_{0}^{2}-{i\over\hbar}c_{1}^{*}y_{0}^{2}+c_{4}x_{0}+c_{4}^{*}y_{0}\right)\cr&\times\exp\left(c_{5}+c_{5}^{*}-2a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(2.23)\cr}$$ We have clearly cast the result in the desired form. Folding an arbitrary initial state into the expression for the density matrix propagator (2.22), we obtain an expression of the desired form (1.5), where $f(p,q,t)$ is given explicitly by, $$f(p,q,t)=\int dx_{0}dy_{0}\ f(p,q,t|x_{0},y_{0})\ \rho_{0}(x_{0},y_{0})$$ (2.24) This is our first main result. 3. THE PHASE SPACE DISTRIBUTION FUNCTION It remains to evaluate the path integral expression (2.23).
To do this, first notice that (2.23) may be written $$\eqalignno{f(p,q,t|x_{0},y_{0})=&\ \exp\left({i\over\hbar}c_{1}x_{0}^{2}-{i\over\hbar}c_{1}^{*}y_{0}^{2}+{m{\alpha}\over\hbar}q^{2}\right)\ \int dkdk^{\prime}\ e^{{i\over\hbar}kp+{i\over\hbar}k^{\prime}q}\cr&\times\ \int{D}{\bar{x}}\ \exp\left(-{i\over\hbar}k{\bar{p}}-{i\over\hbar}k^{\prime}{\bar{q}}+c_{4}x_{0}+c_{4}^{*}y_{0}\right)\cr&\quad\times\exp\left(c_{5}+c_{5}^{*}-2a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(3.1)\cr}$$ The functional integral over ${\bar{x}}$ is a Gaussian, since $c_{5}$ is quadratic in ${\bar{x}}$ and ${\bar{p}}$, ${\bar{q}}$ and $c_{4}$ are linear in ${\bar{x}}$, but it involves inverting the functional matrix contained in the last exponential in (3.1), which does not look particularly easy. However, we are saved from having to do this calculation by the following observation. From Eq.(2.5) and Eq.(2.9) (for ${\alpha}t>>1$), we see that $$\eqalignno{J(x_{f},y_{f},t|x_{0},y_{0},0)=&\exp\left({i\over\hbar}c_{1}(x_{f}^{2}+x_{0}^{2})-{i\over\hbar}c_{1}^{*}(y_{f}^{2}+y_{0}^{2})\right)\cr&\times\ \int{D}{\bar{x}}\ \exp\left(c_{3}x_{f}+c_{3}^{*}y_{f}+c_{4}x_{0}+c_{4}^{*}y_{0}\right)\cr&\quad\times\exp\left(c_{5}+c_{5}^{*}-2a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)&(3.2)\cr}$$ The functional integral over ${\bar{x}}$ in this expression is very similar in form to (3.1), but we already know what the answer is: it is Eq.(2.3).
In particular, equating (3.2) and (2.3), we obtain $$\eqalignno{\int{D}{\bar{x}}\ \exp\left(c_{3}x_{f}+c_{3}^{*}y_{f}\right.&\left.+c_{4}x_{0}+c_{4}^{*}y_{0}+c_{5}+c_{5}^{*}-2a^{2}\int_{0}^{t}ds\ {\bar{x}}^{2}(s)\right)\cr=&\exp\left({im\over 2\hbar t}\left[(x_{f}-x_{0})^{2}-(y_{f}-y_{0})^{2}\right]\right.\cr&\quad\left.-{a^{2}t\over 6}\left[(x_{f}-y_{f})^{2}+(x_{f}-y_{f})(x_{0}-y_{0})+(x_{0}-y_{0})^{2}\right]\right)\cr&\times\exp\left(-{i\over\hbar}c_{1}(x_{f}^{2}+x_{0}^{2})+{i\over\hbar}c_{1}^{*}(y_{f}^{2}+y_{0}^{2})\right)&(3.3)\cr}$$ Now the point is that the formula (3.3) is true for arbitrary $x_{f}$, $y_{f}$. In particular, using (2.17), we see that $$c_{3}x_{f}+c_{3}^{*}y_{f}={m{\alpha}\over\hbar}\left[x_{f}+y_{f}-i(x_{f}-y_{f})\right]{\bar{q}}+{i\over\hbar}{\bar{p}}(x_{f}-y_{f})$$ (3.4) Hence the functional integral (3.3) is exactly the same as the one appearing in (3.1) if, in (3.3), we make the substitutions $$(x_{f}-y_{f})\ {\rightarrow}\ -k,\quad{m{\alpha}\over\hbar}\left[x_{f}+y_{f}-i(x_{f}-y_{f})\right]\ {\rightarrow}\ -{i\over\hbar}k^{\prime}$$ (3.5) Inverting for $x_{f}$ and $y_{f}$, we therefore find that the functional integral over ${\bar{x}}(t)$ in (3.1) is equal to the right-hand side of (3.3) with $$\eqalignno{x_{f}&=-{(1+i)\over 2}k-{i\over 2m{\alpha}}k^{\prime}&(3.6)\cr y_{f}&={(1-i)\over 2}k-{i\over 2m{\alpha}}k^{\prime}&(3.7)\cr}$$ Using this result, and changing variables from $k^{\prime}$ to $K=k+k^{\prime}/m{\alpha}$ in (3.1), we obtain $$\eqalignno{f(p,q,t|x_{0},y_{0})=&\exp\left({m{\alpha}\over\hbar}q^{2}+i{mX_{0}\xi_{0}\over\hbar t}-{a^{2}t\over 6}\xi_{0}^{2}\right)\cr&\times\int dkdK\ \exp\left(-\left({a^{2}t\over 6}-{m{\alpha}\over 4\hbar}\right)k^{2}-{m{\alpha}\over 4\hbar}K^{2}+\left({m{\alpha}\over 2\hbar}-{m\over 2\hbar t}\right)kK\right)\cr&\quad\quad\times\exp\left({i\over\hbar}k\left(p-m{\alpha}q+{mX_{0}\over t}-i{\hbar a^{2}t\over 6}\xi_{0}\right)\right)\cr&\quad\quad\times\exp\left({i\over\hbar}K\left(m{\alpha}q+i{m\xi_{0}\over 2t}\right)\right)&(3.8)\cr}$$ where $X_{0}={1\over 2}(x_{0}+y_{0})$ and $\xi_{0}=x_{0}-y_{0}$. This may now be evaluated. An alternative way of writing (3.8) is to carry out the same steps, but to change variables in (3.1) from $k$, $k^{\prime}$ to $x_{f}$, $y_{f}$, with the formal result, $$\eqalignno{f(p,q,t|x_{0},y_{0})=&\int dx_{f}dy_{f}\exp\left({m{\alpha}\over 2\hbar}(1-i)(x_{f}-q)^{2}-{i\over\hbar}px_{f}\right)\cr&\times\exp\left({m{\alpha}\over 2\hbar}(1+i)(y_{f}-q)^{2}+{i\over\hbar}py_{f}\right)J(x_{f},y_{f},t|x_{0},y_{0},0)&(3.9)\cr}$$ Folding in the initial state via (2.24), we obtain, $$\eqalignno{f(p,q,t)=&\int dx_{f}dy_{f}\exp\left({m{\alpha}\over 2\hbar}(1-i)(x_{f}-q)^{2}-{i\over\hbar}px_{f}\right)\cr&\times\exp\left({m{\alpha}\over 2\hbar}(1+i)(y_{f}-q)^{2}+{i\over\hbar}py_{f}\right)\rho_{t}(x_{f},y_{f})&(3.10)\cr}$$ which has the appearance of a formal inversion of the relation (1.5). Because the coordinate transformation (3.6), (3.7) is complex, some attention to the integration contour is necessary. In particular, $k$ and $k^{\prime}$ are integrated along the real axis, so $x_{f}+y_{f}$ is integrated along a purely imaginary contour and $x_{f}-y_{f}$ along a real contour. More precisely, let $X={1\over 2}(x_{f}+y_{f})$ and $\xi=x_{f}-y_{f}$.
Then (3.9) becomes $$\eqalignno{f(p,q,t|x_{0},y_{0})=\int_{-i\infty}^{i\infty}dX\int_{-\infty}^{+\infty}d\xi&\ \exp\left({m{\alpha}\over\hbar}\left((X-q)^{2}+{\xi^{2}\over 4}\right)-{i\over\hbar}m{\alpha}\xi(X-q)-{i\over\hbar}p\xi\right)\cr&\times\ J(X+{\xi\over 2},X-{\xi\over 2},t|x_{0},y_{0},0)&(3.11)\cr}$$ Explicitly, this integral reads, $$\eqalignno{f(p,q,t|x_{0},y_{0})=&\int_{-i\infty}^{i\infty}dX\int_{-\infty}^{+\infty}d\xi\ \exp\left({m{\alpha}\over\hbar}\left((X-q)^{2}+{\xi^{2}\over 4}\right)-{i\over\hbar}m{\alpha}\xi(X-q)-{i\over\hbar}p\xi\right)\cr&\times\exp\left({im\over\hbar t}(X-X_{0})(\xi-\xi_{0})-{2m{\alpha}^{2}t\over 3\hbar}(\xi^{2}+\xi\xi_{0}+\xi_{0}^{2})\right)&(3.12)\cr}$$ where $X_{0}$ and $\xi_{0}$ are defined in the same way as $X$ and $\xi$. The $X$ integral will clearly converge, since the contour is along the imaginary axis, and the $\xi$ integral will converge for sufficiently large ${\alpha}t$. Letting $X{\rightarrow}X+q$, the integral over $X$ is readily carried out, with the result $$\eqalignno{f(p,q,t|x_{0},y_{0})=\int d\xi&\exp\left(-{i\over\hbar}p\xi+{im\over\hbar t}(q-X_{0})(\xi-\xi_{0})-{2m{\alpha}^{2}t\over 3\hbar}(\xi^{2}+\xi\xi_{0}+\xi_{0}^{2})\right)\cr\times&\exp\left({m{\alpha}\over 4\hbar}\left[\xi^{2}+\left(\xi-{(\xi-\xi_{0})\over{\alpha}t}\right)^{2}\right]\right)&(3.13)\cr}$$ The integral over $\xi$ may now be evaluated, but it is not necessary to do this, since the form of the answer is now clear. For ${\alpha}t>>1$, the terms in the second exponential are negligible compared to the similar terms in the first. Furthermore, the remaining terms have the form of the Wigner transform of the propagator [27,33].
We thus have the simple result, $$f(p,q,t|x_{0},y_{0})\ \approx\ \int d\xi\ e^{-{i\over\hbar}p\xi}\ J(q,\xi,t|X_{0},\xi_{0},0)$$ (3.14) Attaching an arbitrary initial density matrix, it then follows from (2.24) that $$\eqalignno{f(p,q,t)&\approx\ \int d\xi\ e^{-{i\over\hbar}p\xi}\ \rho_{t}(q+{1\over 2}\xi,q-{1\over 2}\xi)\cr&=W_{t}(p,q)&(3.15)\cr}$$ That is, for ${\alpha}t>>1$, $f(p,q,t)$ is the Wigner function of the density operator at time $t$. This is the second main result of this paper. From any of the above representations of $f(p,q,t)$ (other than (3.14)), or from Ref.[16], it is straightforward to show that $f(p,q,t)$ obeys the Fokker-Planck equation, $${\partial f\over\partial t}=-{p\over m}{\partial f\over\partial q}+2m\gamma kT{\partial^{2}f\over\partial p^{2}}+(2\hbar\gamma kT)^{{1\over 2}}{\partial^{2}f\over\partial p\partial q}+{\hbar\over 2m}{\partial^{2}f\over\partial q^{2}}$$ (3.16) As we have seen, $f(p,q,t)$ approaches the Wigner function $W_{t}(p,q)$ for ${\alpha}t>>1$, which obeys the Fokker-Planck equation of classical Brownian motion: $${\partial W\over\partial t}=-{p\over m}{\partial W\over\partial q}+2m\gamma kT{\partial^{2}W\over\partial p^{2}}$$ (3.17) What happens is that the last two terms in Eq.(3.16) become negligible for large ${\alpha}t$, as may be seen by studying the Wigner function propagator (below). 4. THE POSITIVITY OF THE WIGNER FUNCTION We have shown that the density operator approaches the form (1.5), where $f(p,q,t)$ is given by the Wigner function. However, $f(p,q,t)$ is by construction positive, yet the Wigner function is not guaranteed to be positive in general [33]. What happens is that the Wigner function becomes strictly non-negative after a period of time, under evolution according to (the Wigner transform of) Eq.(1.1), as we now show.
The Wigner transform of the relation (2.1) yields, $$W_{t}(p,q)=\int dp_{0}dq_{0}\ K(p,q,t|p_{0},q_{0},0)\ W_{0}(p_{0},q_{0})$$ (4.1) where $K(p,q,t|p_{0},q_{0},0)$ is the Wigner function propagator, given by [27], $$K(p,q,t|p_{0},q_{0},0)=\exp\left(-\mu(p-p_{0})^{2}-\nu\left(q-q_{0}-{p_{0}t\over m}\right)^{2}+\sigma(p-p_{0})\left(q-q_{0}-{p_{0}t\over m}\right)\right)$$ (4.2) where, introducing $D=2m\gamma kT$, $$\mu={1\over Dt},\quad\nu={3m^{2}\over Dt^{3}},\quad\sigma={3m\over Dt^{2}}$$ (4.3) It is well-known that the Wigner function may take negative values only through oscillations in $\hbar$-sized regions of phase space, and that it may be rendered positive by coarse-graining over such a region. Consider, for example, the smeared Wigner function $$W_{H}(p,q)=2\int dp^{\prime}dq^{\prime}\ \exp\left(-{2\sigma_{q}^{2}(p-p^{\prime})^{2}\over\hbar^{2}}-{(q-q^{\prime})^{2}\over 2\sigma_{q}^{2}}\right)\ W_{\rho}(p^{\prime},q^{\prime})$$ (4.4) This object is called the Husimi function [34]. It is equal to the expectation value of the corresponding density operator in a coherent state (of position width $\sigma_{q}$), ${\langle}p,q|\rho|p,q{\rangle}$, and so is non-negative. Loosely speaking, what happens during time evolution according to (4.1) is that, after a certain amount of time, the propagator effectively smears the Wigner function over a region of phase space greater than $\hbar$, and it becomes positive, in the manner of (4.4). We will now show this explicitly.
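A concrete instance of the smearing (4.4): the Wigner function of the first excited harmonic oscillator state, $W(p,q)=(1/\pi)(2(q^{2}+p^{2})-1)e^{-(q^{2}+p^{2})}$ in units $\hbar=m=\omega=1$, is negative at the origin, but its smearing with $\sigma_{q}^{2}=1/2$ is non-negative everywhere. A small grid check; the example state and units are illustrative choices, not taken from the text:

```python
import numpy as np

# Wigner function of the n=1 harmonic oscillator state (units hbar=m=omega=1):
# negative at the origin, but its Husimi smearing (4.4) with sigma_q^2 = 1/2
# is non-negative everywhere.  Illustrative example, not from the paper.
L, N = 7.0, 141
q = np.linspace(-L, L, N)
p = np.linspace(-L, L, N)
Q, P = np.meshgrid(q, p, indexing="ij")
W = (1 / np.pi) * (2 * (Q**2 + P**2) - 1) * np.exp(-(Q**2 + P**2))

dq = q[1] - q[0]

def husimi(q0, p0, sigma_q2=0.5, hbar=1.0):
    """Smear W with the Gaussian kernel of (4.4), centred at (p0, q0)."""
    kern = np.exp(-2 * sigma_q2 * (p0 - P)**2 / hbar**2
                  - (q0 - Q)**2 / (2 * sigma_q2))
    return 2 * np.sum(kern * W) * dq * dq

print(W[N // 2, N // 2])              # -1/pi: negative at the origin
vals = [husimi(q0, p0) for q0 in (-2.0, 0.0, 1.5) for p0 in (-1.0, 0.0, 2.0)]
print(min(vals))                      # non-negative (up to grid error)
```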
Letting $p_{0}{\rightarrow}p_{0}+p$ and $q_{0}{\rightarrow}q_{0}+q-p_{0}t/m$ in (4.1) yields, $$W_{t}(p,q)=\int dp_{0}dq_{0}\ \exp\left(-\mu p_{0}^{2}-\nu q_{0}^{2}+\sigma p_{0}q_{0}\right)\ W_{0}(p_{0}+p,q_{0}+q-{p_{0}t\over m})$$ (4.5) The further transformation $p_{0}{\rightarrow}p_{0}+{\sigma\over 2\mu}q_{0}$ yields, $$W_{t}(p,q)=\int dp_{0}dq_{0}\ \exp\left(-\mu p_{0}^{2}-{\beta}q_{0}^{2}\right)\ W_{0}(p_{0}+{\sigma\over 2\mu}q_{0}+p,q_{0}+q-{p_{0}t\over m})$$ (4.6) where ${\beta}=\left(\nu-{\sigma^{2}\over 4\mu}\right)$. These two transformations are canonical, and therefore the transformed Wigner function appearing in the integrand of (4.6) is still the Wigner function of some state (unitarily related to the original one). Hence, $$W_{t}(p,q)=\int dp_{0}dq_{0}\ \exp\left(-\mu p_{0}^{2}-{\beta}q_{0}^{2}\right)\ \tilde{W}_{pq}(p_{0},q_{0})$$ (4.7) for some Wigner function ${\tilde{W}}_{pq}$ depending on $p,q$. This may now be recast as the smearing of a Husimi function: $$\eqalignno{W_{t}(p,q)=&\int dp^{\prime}\ \exp\left(-{{p^{\prime}}^{2}\over(\mu^{-1}-\hbar^{2}{\beta})}\right)\cr&\int dp_{0}dq_{0}\exp\left(-{(p^{\prime}-p_{0})^{2}\over\hbar^{2}{\beta}}-{\beta}q_{0}^{2}\right)\ \tilde{W}_{pq}(p_{0},q_{0})&(4.8)\cr}$$ The integral over $p_{0}$, $q_{0}$ is a Husimi function with $\sigma_{q}^{2}=1/(2{\beta})$. Hence $W_{t}(p,q)\geq 0$ provided the integral over $p^{\prime}$ in (4.8) exists. This will be the case if $\mu^{-1}>\hbar^{2}{\beta}$, that is, if $$t>\left({\sqrt{3}\over 2}\right)^{{1\over 2}}\left({\hbar\over\gamma kT}\right)^{{1\over 2}}$$ (4.9) The Wigner function will therefore be non-negative for times greater than the localization time (multiplied by a number of order $1$). 5.
DISCUSSION We have shown that for times greater than the localization time, $(\hbar/\gamma kT)^{{1\over 2}}$, the density operator satisfying (1.1) approaches the form $$\rho=\int dpdq\ W_{t}(p,q)|\Psi_{pq}{\rangle}{\langle}\Psi_{pq}|$$ (5.1) where $W_{t}(p,q)$ is the Wigner function and the $|\Psi_{pq}{\rangle}$ are close to minimum uncertainty generalized coherent states. The Wigner function is strictly non-negative for times greater than the localization time (times a number of order $1$). Diósi has also discussed the possibility of the phase space diagonal form (1.5) under evolution according to the master equation (1.1) [35]. His method was very different from ours, in that he used the properties of the coherent states to regard (1.5) as an expansion of the density operator. He found that such an expansion is possible for times greater than the localization time, times a number of order $1$, in tune with our results. An advantage of deriving (5.1) using path integral methods, rather than quantum state diffusion, is that it yields an explicit expression for the phase space distribution function $f(p,q,t)$. Another advantage is that it is not obviously restricted to Markovian master equations. The quantum state diffusion picture, in its current state of development, exists only for systems described by a Markovian master equation. It may exist in the non-Markovian case, but is yet to be developed. The exact propagator for quantum Brownian motion, for quadratic potentials, can be given in terms of a path integral [5], and is (mildly) non-Markovian. Since the method described here utilizes path integrals, rather than the quantum state diffusion picture, there is a chance that our method may be valid in the non-Markovian case also, but this is still to be investigated. We have concentrated in this paper on the simplest possible model of quantum Brownian motion: the free particle in a high temperature environment with negligible dissipation $\gamma$.
It is clear, however, that, remaining in the context of a Markovian master equation, it would be straightforward (although perhaps tedious) to extend our considerations to the case of a harmonic oscillator with non-trivial dissipation. In the quantum state diffusion picture analysis, this case was covered in Ref.[16], and we expect the path integral treatment of the present paper to yield comparable results. It is perhaps enlightening to comment on the various timescales involved in a more general quantum Brownian motion model, and to sketch the expected general physical picture, part of which is described by the results of this paper. In this paper, we have largely been concerned with the localization time, $(\hbar/\gamma kT)^{{1\over 2}}$, which is the timescale on which an arbitrary initial density operator approaches the form (1.5). The nomenclature “localization time” comes from the quantum state diffusion picture, which was the picture first used to derive some of the results described in this paper. It is so named because it is the time scale on which an arbitrary initial wave function becomes localized in phase space under evolution according to Eq.(1.3) [16,20]. Also relevant is the decoherence time, $\hbar^{2}/(\ell^{2}m\gamma kT)$, which is the timescale on which the off-diagonal terms of the density matrix are suppressed (in the position representation) [14]. The decoherence time necessarily involves a length scale $\ell$, which comes from the initial state. It could, for example, be the separation of a superposition of localized wave packets, and the decoherence time is then the time scale on which the interference between these packets is suppressed. If one is interested in emergent classicality for macroscopic systems, it is appropriate to choose values of order $1$ in c.g.s. units for $\ell$, $T$, $m$ and $\gamma$. The decoherence time is then typically much shorter than the localization time.
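Inserting the suggested order-$1$ c.g.s. values makes the hierarchy of timescales explicit; a rough numeric sketch (the relaxation time $\gamma^{-1}$ is included for comparison):

```python
# Compare the three timescales of the model for the "macroscopic" choice
# l = T = m = gamma = 1 in c.g.s. units suggested in the text.
hbar = 1.05e-27   # erg s
k = 1.38e-16      # erg / K
l = T = m = gamma = 1.0

t_dec = hbar**2 / (l**2 * m * gamma * k * T)   # decoherence time
t_loc = (hbar / (gamma * k * T))**0.5          # localization time
t_rel = 1.0 / gamma                            # relaxation time

print(f"{t_dec:.1e} s << {t_loc:.1e} s << {t_rel:.1e} s")
```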
This is in turn typically much shorter than the relaxation time, $\gamma^{-1}$, which is the time scale on which the system approaches thermal equilibrium (when this is possible). Hence the general picture we have is as follows. Suppose the initial state of the system is a superposition of localized wave packets. Then the interference terms between these wave packets are destroyed on the decoherence timescale. After a few localization times, the density matrix approaches the phase space diagonal form (1.5). After a much longer time, of order the relaxation time, the system reaches thermal equilibrium. Discussions of emergent classicality usually concern times between the decoherence time and the relaxation time, and it is this range of times which has been the primary concern of this paper. ACKNOWLEDGEMENTS We would like to thank Todd Brun, Lajos Diósi, Juan Pablo Paz, Ian Percival and Wojtek Zurek for useful conversations. REFERENCES 1. G.S.Agarwal, Phys. Rev. A3, 828 (1971); Phys. Rev. A4, 739 (1971); H.Dekker, Phys. Rev. A16, 2116 (1977); Phys. Rep. 80, 1 (1991); G.W.Ford, M.Kac and P.Mazur, J. Math. Phys. 6, 504 (1965); H.Grabert, P.Schramm and G-L. Ingold, Phys. Rep. 168, 115 (1988); V.Hakim and V.Ambegaokar, Phys. Rev. A32, 423 (1985); J.Schwinger, J. Math. Phys. 2, 407 (1961); I.R.Senitzky, Phys. Rev. 119, 670 (1960). 2. A.O.Caldeira and A.J.Leggett, Physica 121A, 587 (1983). 3. H.F.Dowker and J.J.Halliwell, Phys. Rev. D46, 1580 (1992). 4. M.R.Gallis, Phys. Rev. A48, 1023 (1993). 5. B.L.Hu, J.P.Paz and Y.Zhang, Phys. Rev. D45, 2843 (1992); Phys. Rev. D47, 1576 (1993). 6. J.P.Paz, S.Habib and W.Zurek, Phys. Rev. D47, 488 (1993). 7. W.G.Unruh and W.Zurek, Phys. Rev. D40, 1071 (1989). 8. W.Zurek, Physics Today 40, 36 (1991). 9. J.J.Halliwell and T.Yu, “Alternative Derivation of the Hu-Paz-Zhang Master Equation of Quantum Brownian Motion”, accepted for publication in Physical Review D. 10. E.Joos and H.D.Zeh, Z. Phys. B59, 223 (1985). 11.
J.P.Paz and W.Zurek, Phys. Rev. D48, 2728 (1993). 12. W.Zurek, S.Habib and J.P.Paz, Phys. Rev. Lett. 70, 1187 (1993). 13. W.Zurek, Phys. Rev. Lett. 53, 391 (1984). 14. W.Zurek, in Frontiers of Nonequilibrium Statistical Physics, edited by G.T.Moore and M.O.Scully (Plenum, 1986). 15. W.Zurek, Prog. Theor. Phys. 89, 281 (1993); and in Physical Origins of Time Asymmetry, edited by J.J.Halliwell, J.Perez-Mercader and W.Zurek (Cambridge University Press, Cambridge, 1994). 16. J.J.Halliwell and A.Zoupas, Phys. Rev. D52, 7294 (1995). 17. N.Gisin and I.C.Percival, J. Phys. A25, 5677 (1992); see also Phys. Lett. A167, 315 (1992). 18. N.Gisin and I.C.Percival, J. Phys. A26, 2233 (1993). 19. N.Gisin and I.C.Percival, J. Phys. A26, 2245 (1993). 20. L.Diósi, Phys. Lett. 132A, 233 (1988). 21. L.Diósi, N.Gisin, J.Halliwell and I.C.Percival, Phys. Rev. Lett. 74, 203 (1995). 22. D.Gatarek and N.Gisin, J. Math. Phys. 32, 2152 (1991). 23. Y.Salama and N.Gisin, Phys. Lett. 181A, 269 (1993). 24. T.Brun, I.C.Percival, R.Schack and N.Gisin, QMW preprint (1996). 25. B.L.Hu and Y.Zhang, Mod. Phys. Lett. A8, 3575 (1993). 26. A.Anderson and J.J.Halliwell, Phys. Rev. 48, 2753 (1993). 27. C.Anastopoulos and J.J.Halliwell, Phys. Rev. D51, 6870 (1995). 28. C.M.Caves and G.J.Milburn, Phys. Rev. A36, 5543 (1987). 29. M.B.Mensky, Continuous Measurement and Path Integrals (IOP Publishing, Bristol, 1993). 30. L.Diósi, Phys. Lett. 129A, 419 (1988). 31. G.C.Ghirardi, A.Rimini and T.Weber, Phys. Rev. D34, 470 (1986); G.C.Ghirardi, P.Pearle and A.Rimini, Phys. Rev. A42, 78 (1990). 32. L.Schulman, Techniques and Applications of Path Integration (Wiley, New York, 1981). 33. N.Balazs and B.K.Jennings, Phys. Rep. 104, 347 (1984); M.Hillery, R.F.O’Connell, M.O.Scully and E.P.Wigner, Phys. Rep. 106, 121 (1984); V.I.Tatarskii, Sov. Phys. Usp. 26, 311 (1983). 34. K.Husimi, Proc. Phys. Math. Soc. Japan 22, 264 (1940). 35. L.Diósi, Phys. Lett. A122, 221 (1987).
Pell numbers whose Euler function is a Pell number Bernadette Faye and Florian Luca École Doctorale de Mathématiques et d’Informatique Université Cheikh Anta Diop de Dakar BP 5005, Dakar Fann, Senegal and School of Mathematics University of the Witwatersrand Private Bag X3, Wits 2050, South Africa bernadette@aims-senegal.org School of Mathematics, University of the Witwatersrand Private Bag X3, Wits 2050, South Africa Florian.Luca@wits.ac.za Abstract. In this paper, we show that the only Pell numbers whose Euler function is also a Pell number are $1$ and $2$. AMS Subject Classification 2010: Primary 11B39; Secondary 11A25 Keywords: Pell numbers; Euler function; Applications of sieve methods. 1. Introduction Let $\phi(n)$ be the Euler function of the positive integer $n$. Recall that if $n$ has the prime factorization $$n=p_{1}^{a_{1}}\cdots p_{k}^{a_{k}}$$ with distinct primes $p_{1},\ldots,p_{k}$ and positive integers $a_{1},\ldots,a_{k}$, then $$\phi(n)=p_{1}^{a_{1}-1}(p_{1}-1)\cdots p_{k}^{a_{k}-1}(p_{k}-1).$$ There are many papers in the literature dealing with Diophantine equations involving the Euler function of members of a binary recurrent sequence. For example, in [11], it is shown that $1$, $2$ and $3$ are the only Fibonacci numbers whose Euler function is also a Fibonacci number, while in [4] it is shown that the Diophantine equation $\phi(5^{n}-1)=5^{m}-1$ has no positive integer solutions $(m,n)$. Furthermore, the divisibility relation $\phi(n)\mid n-1$ when $n$ is a Fibonacci number, or a Lucas number, or a Cullen number (that is, a number of the form $m2^{m}+1$ for some positive integer $m$), or a rep-digit $(g^{m}-1)/(g-1)$ in some integer base $g\in[2,1000]$ has been investigated in [10], [5], [7] and [3], respectively. Here we look at a similar equation with members of the Pell sequence. The Pell sequence $(P_{n})_{n\geq 0}$ is given by $P_{0}=0$, $P_{1}=1$ and $P_{n+1}=2P_{n}+P_{n-1}$ for all $n\geq 1$.
Its first terms are $$0,1,2,5,12,29,70,169,408,985,2378,5741,13860,33461,80782,195025,470832,\ldots$$ We have the following result. Theorem 1. The only solutions in positive integers $(n,m)$ of the equation (1) $$\phi(P_{n})=P_{m}$$ are $(n,m)=(1,1),(2,1).$ For the proof, we begin by following the method from [11], but we add to it some ingredients from [10]. 2. Preliminary results Let $(\alpha,\beta)=(1+{\sqrt{2}},1-{\sqrt{2}})$ be the roots of the characteristic equation $x^{2}-2x-1=0$ of the Pell sequence $\{P_{n}\}_{n\geq 0}$. The Binet formula for $P_{n}$ is (2) $$P_{n}=\frac{\alpha^{n}-\beta^{n}}{\alpha-\beta}\quad\text{for all}\quad n\geq 0.$$ This easily implies that the inequalities (3) $$\alpha^{n-2}\leq P_{n}\leq\alpha^{n-1}$$ hold for all positive integers $n$. We let $\{Q_{n}\}_{n\geq 0}$ be the companion Lucas sequence of the Pell sequence, given by $Q_{0}=2$, $Q_{1}=2$ and $Q_{n+2}=2Q_{n+1}+Q_{n}$ for all $n\geq 0$. Its first few terms are $$2,2,6,14,34,82,198,478,1154,2786,6726,16238,39202,94642,228486,551614,\ldots$$ The Binet formula for $Q_{n}$ is (4) $$Q_{n}=\alpha^{n}+\beta^{n}\quad\text{for all}\quad n\geq 0.$$ We shall make use of the following well-known result. Lemma 2. The relations (i) $P_{2n}=P_{n}Q_{n}$, (ii) $Q_{n}^{2}-8P_{n}^{2}=4(-1)^{n}$ hold for all $n\geq 0$. For a prime $p$ and a nonzero integer $m$ let $\nu_{p}(m)$ be the exponent with which $p$ appears in the prime factorization of $m$. The following result is well-known and easy to prove. Lemma 3. The relations (i) $\nu_{2}(Q_{n})=1$, (ii) $\nu_{2}(P_{n})=\nu_{2}(n)$ hold for all positive integers $n$. The following divisibility relations among the Pell numbers are well-known. Lemma 4. Let $m$ and $n$ be positive integers. We have: (i) If $m\mid n$ then $P_{m}\mid P_{n}$, (ii) $\gcd(P_{m},P_{n})=P_{\gcd(m,n)}$.
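The recurrences and the identities of Lemmas 2, 3 and 4 are easy to check numerically. The following is a small Python sketch (the helper names are ours, not from the paper) verifying them over an initial range:

```python
from math import gcd

def pell_pair(n):
    """Return (P_n, Q_n); both satisfy x_{k+1} = 2x_k + x_{k-1}."""
    if n == 0:
        return 0, 2
    p_prev, p_cur = 0, 1   # P_0, P_1
    q_prev, q_cur = 2, 2   # Q_0, Q_1
    for _ in range(n - 1):
        p_prev, p_cur = p_cur, 2 * p_cur + p_prev
        q_prev, q_cur = q_cur, 2 * q_cur + q_prev
    return p_cur, q_cur

def nu2(m):
    """2-adic valuation nu_2(m) of a positive integer m."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

for n in range(1, 60):
    P_n, Q_n = pell_pair(n)
    assert pell_pair(2 * n)[0] == P_n * Q_n             # Lemma 2 (i)
    assert Q_n ** 2 - 8 * P_n ** 2 == 4 * (-1) ** n     # Lemma 2 (ii)
    assert nu2(Q_n) == 1                                # Lemma 3 (i)
    assert nu2(P_n) == nu2(n)                           # Lemma 3 (ii)

for m in range(1, 40):
    for n in range(1, 40):
        assert gcd(pell_pair(m)[0], pell_pair(n)[0]) == pell_pair(gcd(m, n))[0]  # Lemma 4 (ii)
```

Lemma 4 (i) is the special case $\gcd(m,n)=m$ of part (ii).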
For each positive integer $n$, let $z(n)$ be the smallest positive integer $k$ such that $n\mid P_{k}.$ It is known that this exists and that $n\mid P_{m}$ if and only if $z(n)\mid m$. This number is referred to as the order of appearance of $n$ in the Pell sequence. Clearly, $z(2)=2$. Further, for an odd prime $p$ we have $z(p)\mid p-\left(\frac{2}{p}\right)$, where $\left(\frac{2}{p}\right)$ stands for the Legendre symbol of $2$ with respect to $p$. A prime factor $p$ of $P_{n}$ such that $z(p)=n$ is called primitive for $P_{n}$. It is known that $P_{n}$ has a primitive divisor for all $n\geq 2$ (see [2] or [1]). Write $P_{z(p)}=p^{e_{p}}m_{p}$, where $m_{p}$ is coprime to $p$. It is known that if $p^{k}\mid P_{n}$ for some $k>e_{p}$, then $pz(p)\mid n$. In particular, (5) $$\nu_{p}(P_{n})\leq e_{p}\quad\text{whenever}\quad p\nmid n.$$ We need a bound on $e_{p}$. We have the following result. Lemma 5. The inequality (6) $$e_{p}\leq\frac{(p+1)\log\alpha}{2\log p}$$ holds for all primes $p$. Proof. Since $e_{2}=1$, the inequality holds for the prime $2$. Assume that $p$ is odd. Then $z(p)\mid p+\varepsilon$ for some $\varepsilon\in\{\pm 1\}$. Furthermore, by Lemmas 2 and 4, we have $$p^{e_{p}}\mid P_{z(p)}\mid P_{p+\varepsilon}=P_{(p+\varepsilon)/2}Q_{(p+\varepsilon)/2}.$$ By Lemma 2, it follows easily that $p$ cannot divide both $P_{n}$ and $Q_{n}$ for $n=(p+\varepsilon)/2$, since otherwise $p$ would also divide $$Q_{n}^{2}-8P_{n}^{2}=\pm 4,$$ a contradiction since $p$ is odd. Hence, $p^{e_{p}}$ divides one of $P_{(p+\varepsilon)/2}$ or $Q_{(p+\varepsilon)/2}$. If $p^{e_{p}}$ divides $P_{(p+\varepsilon)/2}$, we have, by (3), that $$p^{e_{p}}\leq P_{(p+\varepsilon)/2}\leq P_{(p+1)/2}<\alpha^{(p+1)/2},$$ which leads to the desired inequality (6) upon taking logarithms of both sides. In case $p^{e_{p}}$ divides $Q_{(p+\varepsilon)/2}$, we use the fact that $Q_{(p+\varepsilon)/2}$ is even by Lemma 3 (i).
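The quantities $z(n)$ and $e_{p}$ can be computed directly for small primes. The sketch below (helper names are ours) checks the divisibility $z(p)\mid p-\left(\frac{2}{p}\right)$ for a few odd primes, as well as the values $e_{13}=e_{31}=2$ used later in the proof:

```python
def z(n):
    """Order of appearance: least k >= 1 with n | P_k (Pell recurrence mod n)."""
    a, b, k = 0, 1, 1          # P_0 mod n, P_1 mod n
    while b % n != 0:
        a, b = b, (2 * b + a) % n
        k += 1
    return k

def legendre2(p):
    """Legendre symbol (2/p) for an odd prime p, by Euler's criterion."""
    return 1 if pow(2, (p - 1) // 2, p) == 1 else -1

def e(p):
    """Exponent e_p of p in P_{z(p)}, computed on the exact integer P_{z(p)}."""
    a, b = 0, 1
    for _ in range(z(p) - 1):
        a, b = b, 2 * b + a    # exact Pell recurrence
    v = 0
    while b % p == 0:
        b //= p
        v += 1
    return v

for p in [3, 5, 7, 11, 13, 29, 31, 37]:
    assert (p - legendre2(p)) % z(p) == 0   # z(p) divides p - (2/p)

assert z(2) == 2 and z(13) == 7 and z(31) == 30
assert e(13) == 2                            # P_7 = 169 = 13^2
assert e(31) == 2                            # 31^2 | Q_15 | P_30
```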
Hence, $p^{e_{p}}$ divides $Q_{(p+\varepsilon)/2}/2$, and therefore, by formula (4), we have $$p^{e_{p}}\leq\frac{Q_{(p+\varepsilon)/2}}{2}\leq\frac{Q_{(p+1)/2}}{2}<\frac{\alpha^{(p+1)/2}+1}{2}<\alpha^{(p+1)/2},$$ which again leads to the desired conclusion by taking logarithms of both sides.     $\sqcap$$\sqcup$ For a positive real number $x$ we use $\log x$ for the natural logarithm of $x$. We need some inequalities from prime number theory. For a positive integer $n$ we write $\omega(n)$ for the number of distinct prime factors of $n$. The following inequalities (i), (ii) and (iii) are inequalities (3.13), (3.29) and (3.41) in [15], while (iv) is Théorème 13 from [6]. Lemma 6. Let $p_{1}<p_{2}<\cdots$ be the sequence of all prime numbers. We have: (i) The inequality $$p_{n}<n(\log n+\log\log n)$$ holds for all $n\geq 6$. (ii) The inequality $$\prod_{p\leq x}\left(1+\frac{1}{p-1}\right)<1.79\log x\left(1+\frac{1}{2(\log x)^{2}}\right)$$ holds for all $x\geq 286$. (iii) The inequality $$\phi(n)>\frac{n}{1.79\log\log n+2.5/\log\log n}$$ holds for all $n\geq 3$. (iv) The inequality $$\omega(n)<\frac{\log n}{\log\log n-1.1714}$$ holds for all $n\geq 26$. For a positive integer $n$, we put ${\mathcal{P}}_{n}=\{p:z(p)=n\}$. We need the following result. Lemma 7. Put $$S_{n}:=\sum_{p\in{\mathcal{P}}_{n}}\frac{1}{p-1}.$$ For $n>2$, we have (7) $$S_{n}<\min\left\{\frac{2\log n}{n},\frac{4+4\log\log n}{\phi(n)}\right\}.$$ Proof. Since $n>2$, it follows that every prime factor $p\in{\mathcal{P}}_{n}$ is odd and satisfies the congruence $p\equiv\pm 1\pmod{n}$.
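The analytic inequalities of Lemma 6 are classical, and (ii) and (iii) are easy to spot-check numerically. The sketch below verifies (ii) at the sample point $x=1000$ and (iii) for all $3\leq n\leq 2000$ (sieve and $\phi$ written from scratch so that only the standard library is needed):

```python
from math import log

def primes_upto(x):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return [i for i in range(2, x + 1) if sieve[i]]

def phi(n):
    """Euler's function by trial-division factorization."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# Lemma 6 (ii) at the sample point x = 1000 (the lemma requires x >= 286)
x = 1000
prod = 1.0
for p in primes_upto(x):
    prod *= 1 + 1 / (p - 1)
assert prod < 1.79 * log(x) * (1 + 1 / (2 * log(x) ** 2))

# Lemma 6 (iii) for 3 <= n <= 2000
for n in range(3, 2001):
    assert phi(n) > n / (1.79 * log(log(n)) + 2.5 / log(log(n)))
```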
Further, putting $\ell_{n}:=\#{\mathcal{P}}_{n}$, we have $$(n-1)^{\ell_{n}}\leq\prod_{p\in{\mathcal{P}}_{n}}p\leq P_{n}<\alpha^{n-1}$$ (by inequality (3)), giving (8) $$\ell_{n}\leq\frac{(n-1)\log\alpha}{\log(n-1)}.$$ Thus, the inequality (9) $$\ell_{n}<\frac{n\log\alpha}{\log n}$$ holds for all $n\geq 3$, since it follows from (8) for $n\geq 4$ via the fact that the function $x\mapsto x/\log x$ is increasing for $x\geq 3$, while for $n=3$ it can be checked directly. To prove the first bound, we use (9) to deduce that (10) $$S_{n}\leq\sum_{1\leq\ell\leq\ell_{n}}\left(\frac{1}{n\ell-2}+\frac{1}{n\ell}\right)\leq\frac{2}{n}\sum_{1\leq\ell\leq\ell_{n}}\frac{1}{\ell}+\sum_{m\geq n}\left(\frac{1}{m-2}-\frac{1}{m}\right)\leq\frac{2}{n}\left(\int_{1}^{\ell_{n}}\frac{dt}{t}+1\right)+\frac{1}{n-2}+\frac{1}{n-1}\leq\frac{2}{n}\left(\log\ell_{n}+1+\frac{n}{n-2}\right)\leq\frac{2}{n}\log\left(n\left(\frac{(\log\alpha)e^{2+2/(n-2)}}{\log n}\right)\right).$$ Since the inequality $$\log n>(\log\alpha)e^{2+2/(n-2)}$$ holds for all $n\geq 800$, (10) implies that $$S_{n}<\frac{2\log n}{n}\quad\text{for}\quad n\geq 800.$$ The remaining range for $n$ can be checked on an individual basis.
For the second bound on $S_{n}$, we follow the argument from [10] and split the primes in ${\mathcal{P}}_{n}$ into three groups: (i) $p<3n$; (ii) $p\in(3n,n^{2})$; (iii) $p>n^{2}$. We have (11) $$T_{1}=\sum_{\begin{subarray}{c}p\in{\mathcal{P}}_{n}\\ p<3n\end{subarray}}\frac{1}{p-1}\leq\left\{\begin{matrix}{\displaystyle{\frac{1}{n-2}+\frac{1}{n}+\frac{1}{2n-2}+\frac{1}{2n}+\frac{1}{3n-2}}}&<&{\displaystyle{\frac{10.1}{3n}}},&n\equiv 0\pmod{2},\\ {\displaystyle{\frac{1}{2n-2}+\frac{1}{2n}}}&<&{\displaystyle{\frac{7.1}{3n}}},&n\equiv 1\pmod{2},\\ \end{matrix}\right.$$ where the last inequalities above hold for all $n\geq 84$. For the remaining primes in ${\mathcal{P}}_{n}$, we have (12) $$\sum_{\begin{subarray}{c}p\in{\mathcal{P}}_{n}\\ p>3n\end{subarray}}\frac{1}{p-1}<\sum_{\begin{subarray}{c}p\in{\mathcal{P}}_{n}\\ p>3n\end{subarray}}\frac{1}{p}+\sum_{m\geq 3n+1}\left(\frac{1}{m-1}-\frac{1}{m}\right)=T_{2}+T_{3}+\frac{1}{3n},$$ where $T_{2}$ and $T_{3}$ denote the sums of the reciprocals of the primes in ${\mathcal{P}}_{n}$ satisfying (ii) and (iii), respectively. The sum $T_{2}$ was estimated in [10] using the large sieve inequality of Montgomery and Vaughan [13] (see also page 397 in [11]), and the bound on it is (13) $$T_{2}=\sum_{3n<p<n^{2}}\frac{1}{p}<\frac{4}{\phi(n)\log n}+\frac{4\log\log n}{\phi(n)}<\frac{1}{\phi(n)}+\frac{4\log\log n}{\phi(n)},$$ where the last inequality holds for $n\geq 55$. Finally, for $T_{3}$, we use the estimate (9) on $\ell_{n}$ to deduce that (14) $$T_{3}<\frac{\ell_{n}}{n^{2}}<\frac{\log\alpha}{n\log n}<\frac{0.9}{3n},$$ where the last bound holds for all $n\geq 19$. To summarize, for $n\geq 84$, we have, by (11), (12), (13) and (14), $$S_{n}<\frac{10.1}{3n}+\frac{1}{3n}+\frac{0.9}{3n}+\frac{1}{\phi(n)}+\frac{4\log\log n}{\phi(n)}=\frac{4}{n}+\frac{1}{\phi(n)}+\frac{4\log\log n}{\phi(n)}\leq\frac{3+4\log\log n}{\phi(n)}$$ for $n$ even, which is stronger than the desired inequality.
Here, we used that $\phi(n)\leq n/2$ for even $n$. For odd $n$, we use the same argument, except that the first fraction $10.1/(3n)$ on the right–hand side above gets replaced by $7.1/(3n)$ (by (11)), and we only have $\phi(n)\leq n$ for odd $n$. This was for $n\geq 84$. For $n\in[3,83]$, the desired inequality can be checked on an individual basis.     $\sqcap$$\sqcup$ The next lemma from [9] gives an upper bound on sums of the type appearing in the right–hand side of (7). Lemma 8. We have $$\sum_{d\mid n}\frac{\log d}{d}<\left(\sum_{p\mid n}\frac{\log p}{p-1}\right)\frac{n}{\phi(n)}.$$ Throughout the rest of this paper we use $p,~q,~r$ with or without subscripts to denote prime numbers. 3. Proof of the Theorem 3.1. Some lower bounds on $m$ and $\omega(P_{n})$ We start with a computation showing that there are no solutions other than $n=1,~2$ when $n\leq 100$. So, from now on, $n>100$. We write (15) $$P_{n}=q_{1}^{\alpha_{1}}\ldots q_{k}^{\alpha_{k}},$$ where $q_{1}<\cdots<q_{k}$ are primes and $\alpha_{1},\ldots,\alpha_{k}$ are positive integers. Clearly, $m<n$. McDaniel [12] proved that $P_{n}$ has a prime factor $q\equiv 1\pmod{4}$ for all $n>14$. Thus, McDaniel’s result applies to us, showing that $$4\mid q-1\mid\phi(P_{n})\mid P_{m},$$ so $4\mid m$ by Lemma 3. Further, it follows from a result of the second author [5] that $\phi(P_{n})\geq P_{\phi(n)}.$ Hence, $m\geq\phi(n)$. Thus, (16) $$m\geq\phi(n)\geq\frac{n}{1.79\log\log n+2.5/\log\log n},$$ by Lemma 6 (iii). The function $$x\mapsto\frac{x}{1.79\log\log x+2.5/\log\log x}$$ is increasing for $x\geq 100$. Since $n\geq 100$, inequality (16), together with the fact that $4\mid m$, shows that $m\geq 24$. Put $\ell=n-m$.
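The initial computation ruling out small $n$ is easy to reproduce. The following sketch performs the same search for $n\leq 30$ rather than $n\leq 100$, so that naive trial-division factorization of $P_{n}$ suffices:

```python
def pell_list(N):
    """P_0, ..., P_N."""
    P = [0, 1]
    while len(P) <= N:
        P.append(2 * P[-1] + P[-2])
    return P

def phi(n):
    """Euler's function by trial-division factorization (adequate up to ~10^12)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

N = 30
P = pell_list(N)
pell_values = set(P[1:])
# phi(P_n) < P_n, so any Pell value it hits has index m < n <= N
solutions = [(n, P.index(phi(P[n]))) for n in range(1, N + 1) if phi(P[n]) in pell_values]
assert solutions == [(1, 1), (2, 1)]
```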
Since $m$ is even, we have $\beta^{m}>0$, therefore (17) $$\frac{P_{n}}{P_{m}}=\frac{\alpha^{n}-\beta^{n}}{\alpha^{m}-\beta^{m}}>\frac{\alpha^{n}-\beta^{n}}{\alpha^{m}}\geq\alpha^{\ell}-\frac{1}{\alpha^{m+n}}>\alpha^{\ell}-10^{-40},$$ where we used the fact that $$\frac{1}{\alpha^{m+n}}\leq\frac{1}{\alpha^{124}}<10^{-40}.$$ We are now ready to provide a large lower bound on $n$. We distinguish the following cases. Case 1: $n$ is odd. Here, we have $\ell\geq 1$. So, $$\frac{P_{n}}{P_{m}}>\alpha-10^{-40}>2.4142.$$ Since $n$ is odd, it follows that $P_{n}$ is divisible only by primes $q$ such that $z(q)$ is odd. Among the first $10000$ primes, there are precisely $2907$ of them with this property. They are $$\mathcal{F}_{1}=\{5,13,29,37,53,61,101,109,\ldots,104597,104677,104693,104701,104717\}.$$ Since $$\prod_{p\in{\mathcal{F}}_{1}}\left(1-\frac{1}{p}\right)^{-1}<1.963<2.4142<\frac{P_{n}}{P_{m}}=\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)^{-1},$$ we get that $k>2907$. Since $2^{k}\mid\phi(P_{n})\mid P_{m}$, we get, by Lemma 3, that (18) $$n>m>2^{2907}.$$ Case 2: $n\equiv 2\pmod{4}$. Since both $m$ and $n$ are even, we get $\ell\geq 2.$ Thus, (19) $$\frac{P_{n}}{P_{m}}>\alpha^{2}-10^{-40}>5.8284.$$ If $q$ is a prime factor of $P_{n}$, then, as in Case 1, we have that $z(q)$ is not divisible by $4$. Among the first $10000$ primes, there are precisely $5815$ of them with this property. They are $$\mathcal{F}_{2}=\{2,5,7,13,23,29,31,37,41,47,53,61,\ldots,104693,104701,104711,104717\}.$$ Writing $p_{j}$ for the $j$th prime number in $\mathcal{F}_{2}$, we check with Mathematica that $$\prod_{i=1}^{415}\left(1-\frac{1}{p_{i}}\right)^{-1}=5.82753\ldots\quad\text{and}\quad\prod_{i=1}^{416}\left(1-\frac{1}{p_{i}}\right)^{-1}=5.82861\ldots,$$ which via inequality (19) shows that $k\geq 416$.
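The membership tests behind ${\mathcal{F}}_{1}$ and ${\mathcal{F}}_{2}$ only require the order of appearance. Here is a sketch reproducing the initial elements of both lists, over the primes up to $300$ rather than the first $10000$ primes to keep it fast:

```python
def z(n):
    """Order of appearance: least k >= 1 with n | P_k (Pell recurrence mod n)."""
    a, b, k = 0, 1, 1
    while b % n != 0:
        a, b = b, (2 * b + a) % n
        k += 1
    return k

def primes_upto(x):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return [i for i in range(2, x + 1) if sieve[i]]

ps = primes_upto(300)
F1 = [p for p in ps if z(p) % 2 == 1]   # z(p) odd: possible divisors of P_n, n odd
F2 = [p for p in ps if z(p) % 4 != 0]   # z(p) not divisible by 4: case n = 2 (mod 4)
assert F1[:8] == [5, 13, 29, 37, 53, 61, 101, 109]
assert F2[:12] == [2, 5, 7, 13, 23, 29, 31, 37, 41, 47, 53, 61]
```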
Of the $k$ prime factors of $P_{n}$, only $k-1$ are odd ($q_{1}=2$ because $n$ is even), but one of those is congruent to $1$ modulo $4$ by McDaniel’s result. Hence, $2^{k}\mid\phi(P_{n})\mid P_{m}$, which shows, via Lemma 3, that (20) $$n>m\geq 2^{416}.$$ Case 3: $4\mid n$. In this case, since both $m$ and $n$ are multiples of $4$, we get that $\ell\geq 4$. Therefore, $$\frac{P_{n}}{P_{m}}>\alpha^{4}-10^{-40}>33.97.$$ Letting $p_{1}<p_{2}<\cdots$ be the sequence of all primes, we have that $$\prod_{i=1}^{2000}\left(1-\frac{1}{p_{i}}\right)^{-1}<17.41\ldots<33.97<\frac{P_{n}}{P_{m}}=\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)^{-1},$$ showing that $k>2000$. Since $2^{k}\mid\phi(P_{n})=P_{m}$, we get (21) $$n>m\geq 2^{2000}.$$ To summarize, from (18), (20) and (21), we get the following result. Lemma 9. If $n>2$, then (1) $2^{k}\mid m$; (2) $k\geq 416$; (3) $n>m\geq 2^{416}$. 3.2. Bounding $\ell$ in terms of $n$ We saw in the preceding section that $k\geq 416$. Since $n>m\geq 2^{k}$, we have (22) $$k<k(n):=\frac{\log n}{\log 2}.$$ Let $p_{j}$ be the $j$th prime number. Lemma 6 shows that $$p_{k}\leq p_{\lfloor k(n)\rfloor}\leq k(n)(\log k(n)+\log\log k(n)):=q(n).$$ We then have, using Lemma 6 (ii), that $$\frac{P_{m}}{P_{n}}=\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)\geq\prod_{2\leq p\leq q(n)}\left(1-\frac{1}{p}\right)>\frac{1}{1.79\log q(n)(1+1/(2(\log q(n))^{2}))}.$$ Inequality (ii) of Lemma 6 requires that $x\geq 286$, which holds for us with $x=q(n)$ because $k(n)\geq 416$. Hence, we get $$1.79\log q(n)\left(1+\frac{1}{2(\log q(n))^{2}}\right)>\frac{P_{n}}{P_{m}}>\alpha^{\ell}-10^{-40}>\alpha^{\ell}\left(1-\frac{1}{10^{40}}\right).$$ Since $k\geq 416$, we have $q(n)>3256$.
Hence, we get $$1.79\left(1-\frac{1}{10^{40}}\right)^{-1}\left(1+\frac{1}{2(\log 3256)^{2}}\right)\log q(n)>\alpha^{\ell},$$ which yields, after taking logarithms, (23) $$\ell\leq\frac{\log\log q(n)}{\log\alpha}+0.67.$$ The inequality (24) $$q(n)<(\log n)^{1.45}$$ holds in our range for $n$ (in fact, it holds for all $n>10^{83}$, which is our case, since for us $n>2^{416}>10^{125}$). Inserting inequality (24) into (23), we get $$\ell<\frac{\log\log(\log n)^{1.45}}{\log\alpha}+0.67<\frac{\log\log\log n}{\log\alpha}+1.1.$$ Thus, we have proved the following result. Lemma 10. If $n>2$, then (25) $$\ell<\frac{\log\log\log n}{\log\alpha}+1.1.$$ 3.3. Bounding the primes $q_{i}$ for $i=1,\ldots,k$ Write (26) $$P_{n}=q_{1}\cdots q_{k}B,\quad\text{where}\quad B=q_{1}^{\alpha_{1}-1}\cdots q_{k}^{\alpha_{k}-1}.$$ Clearly, $B\mid\phi(P_{n})$, therefore $B\mid P_{m}$. Since also $B\mid P_{n}$, we have, by Lemma 4, that $B\mid\gcd(P_{n},P_{m})=P_{\gcd(n,m)}\mid P_{\ell},$ where the last relation follows again by Lemma 4 because $\gcd(n,m)\mid\ell.$ Using inequality (3) and Lemma 10, we get (27) $$B\leq P_{n-m}\leq\alpha^{n-m-1}\leq\alpha^{0.1}\log\log n.$$ To bound the primes $q_{i}$ for all $i=1,\ldots,k$, we use the inductive argument from Section 3.3 in [11].
We write $$\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)=\frac{\phi(P_{n})}{P_{n}}=\frac{P_{m}}{P_{n}}.$$ Therefore, $$1-\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)=1-\frac{P_{m}}{P_{n}}=\frac{P_{n}-P_{m}}{P_{n}}\geq\frac{P_{n}-P_{n-1}}{P_{n}}>\frac{P_{n-1}}{P_{n}}.$$ Using the inequality (28) $$1-(1-x_{1})\cdots(1-x_{s})\leq x_{1}+\cdots+x_{s},\quad\text{valid for all}\quad x_{i}\in[0,1]~\text{for}~i=1,\ldots,s,$$ we get $$\frac{P_{n-1}}{P_{n}}<1-\prod_{i=1}^{k}\left(1-\frac{1}{q_{i}}\right)\leq\sum_{i=1}^{k}\frac{1}{q_{i}}<\frac{k}{q_{1}},$$ therefore (29) $$q_{1}<k\left(\frac{P_{n}}{P_{n-1}}\right)<3k.$$ Using the method of the proof of inequality (13) in [11], one proves by induction on the index $i\in\{1,\ldots,k\}$ that if we put $$u_{i}:=\prod_{j=1}^{i}q_{j},$$ then (30) $$u_{i}<\left(2\alpha^{2.1}k\log\log n\right)^{(3^{i}-1)/2}.$$ In particular, $$q_{1}\cdots q_{k}=u_{k}<(2\alpha^{2.1}k\log\log n)^{(3^{k}-1)/2},$$ which together with (27) gives $$P_{n}=q_{1}\cdots q_{k}B<(2\alpha^{2.1}k\log\log n)^{1+(3^{k}-1)/2}=(2\alpha^{2.1}k\log\log n)^{(3^{k}+1)/2}.$$ Since $P_{n}>\alpha^{n-2}$ by inequality (3), we get $$(n-2)\log\alpha<\frac{3^{k}+1}{2}\log(2\alpha^{2.1}k\log\log n).$$ Since $k<\log n/\log 2$ (see (22)), we get $$3^{k}>(n-2)\left(\frac{2\log\alpha}{\log(2\alpha^{2.1}(\log n)(\log\log n)(\log 2)^{-1})}\right)-1>0.17(n-2)-1>\frac{n}{6},$$ where the last two inequalities above hold because $n>2^{416}$. So, we have proved the following result. Lemma 11. If $n>2$, then $$3^{k}>n/6.$$ 3.4. The case when $n$ is odd Assume that $n>2$ is odd and let $q$ be any prime factor of $P_{n}$. Reducing the relation (31) $$Q_{n}^{2}-8P_{n}^{2}=4(-1)^{n}$$ of Lemma 2 (ii) modulo $q$, we get $Q_{n}^{2}\equiv-4\pmod{q}$. Since $q$ is odd (note that $P_{n}$ is odd because $n$ is odd), it follows that $-1$ is a quadratic residue modulo $q$, so $q\equiv 1\pmod{4}$. This is true for all prime factors $q$ of $P_{n}$.
Hence, $$4^{k}\mid\prod_{i=1}^{k}(q_{i}-1)\mid\phi(P_{n})\mid P_{m},$$ which, by Lemma 3 (ii), gives $4^{k}\mid m$. Thus, $$n>m\geq 4^{k},$$ an inequality which, together with Lemma 11, gives $$n>\left(3^{k}\right)^{\log 4/\log 3}>\left(\frac{n}{6}\right)^{\log 4/\log 3},$$ so $$n<6^{\log 4/\log(4/3)}<5621,$$ in contradiction with Lemma 9. 3.5. Bounding $n$ From now on, $n>2$ is even. We write it as $$n=2^{s}r_{1}^{\lambda_{1}}\cdots r_{t}^{\lambda_{t}}=:2^{s}n_{1},$$ where $s\geq 1$, $t\geq 0$ and $3\leq r_{1}<\cdots<r_{t}$ are odd primes. Thus, by inequality (17), we have $$\alpha^{\ell}\left(1-\frac{1}{10^{40}}\right)<\alpha^{\ell}-\frac{1}{10^{40}}<\frac{P_{n}}{\phi(P_{n})}=\prod_{p\mid P_{n}}\left(1+\frac{1}{p-1}\right)=2\prod_{\begin{subarray}{c}d\geq 3\\ d\mid n\end{subarray}}\prod_{p\in{\mathcal{P}}_{d}}\left(1+\frac{1}{p-1}\right),$$ and taking logarithms we get (32) $$\ell\log\alpha-\frac{1}{10^{39}}<\log\left(\alpha^{\ell}\left(1-\frac{1}{10^{40}}\right)\right)<\log 2+\sum_{\begin{subarray}{c}d\geq 3\\ d\mid n\end{subarray}}\sum_{p\in{\mathcal{P}}_{d}}\log\left(1+\frac{1}{p-1}\right)<\log 2+\sum_{\begin{subarray}{c}d\geq 3\\ d\mid n\end{subarray}}S_{d}.$$ In the above, we used the inequality $\log(1-x)>-10x$, valid for all $x\in(0,1/2)$, with $x=1/10^{40}$, and the inequality $\log(1+x)\leq x$, valid for all $x>-1$, with $x=1/(p-1)$ for all $p\in{\mathcal{P}}_{d}$ and all divisors $d\mid n$ with $d\geq 3$. Let us deduce that the case $t=0$ is impossible. Indeed, if this were so, then $n$ would be a power of $2$ and so, by Lemma 9, both $m$ and $n$ would be divisible by $2^{416}$. Thus, $\ell\geq 2^{416}$.
Inserting this into (32), and using Lemma 7, we get $$2^{416}\log\alpha-\frac{1}{10^{39}}<\sum_{a\geq 1}\frac{2\log(2^{a})}{2^{a}}=4% \log 2,$$ a contradiction. Thus, $t\geq 1$ so $n_{1}>1$. We now put $${\mathcal{I}}:=\{i:r_{i}\mid m\}\quad{\text{\rm and}}\quad{\mathcal{J}}=\{1,% \ldots,t\}\backslash{\mathcal{I}}.$$ We put $$M=\prod_{i\in{\mathcal{I}}}r_{i}.$$ We also let $j$ be minimal in ${\mathcal{J}}$. We split the sum appearing in (32) in two parts: $$\sum_{d\mid n}S_{d}=L_{1}+L_{2},$$ where $$L_{1}:=\sum_{\begin{subarray}{c}d\mid n\\ r\mid d\Rightarrow r\mid 2M\end{subarray}}S_{d}\quad{\text{\rm and}}\quad L_{2% }:=\sum_{\begin{subarray}{c}d\mid n\\ r_{u}\mid d~{}{\text{\rm for~{}some}}~{}u\in{\mathcal{J}}\end{subarray}}S_{d}.$$ To bound $L_{1}$, we note that all divisors involved divide $n^{\prime}$, where $$n^{\prime}=2^{s}\prod_{i\in{\mathcal{I}}}r_{i}^{\lambda_{i}}.$$ Using Lemmas 7 and 8, we get (33) $$\displaystyle L_{1}$$ $$\displaystyle\leq$$ $$\displaystyle 2\sum_{d\mid n^{\prime}}\frac{\log d}{d}$$ $$\displaystyle<$$ $$\displaystyle 2\left(\sum_{r\mid n^{\prime}}\frac{\log r}{r-1}\right)\left(% \frac{n^{\prime}}{\phi(n^{\prime})}\right)$$ $$\displaystyle=$$ $$\displaystyle 2\left(\sum_{r\mid 2M}\frac{\log r}{r-1}\right)\left(\frac{2M}{% \phi(2M)}\right).$$ We now bound $L_{2}$. If ${\mathcal{J}}=\emptyset$, then $L_{2}=0$ and there is nothing to bound. So, assume that ${\mathcal{J}}\neq\emptyset$. We argue as follows. Note that since $s\geq 1$, by Lemma 2 (i), we have $$P_{n}=P_{n_{1}}Q_{n_{1}}Q_{2n_{1}}\cdots Q_{2^{s-1}n_{1}}.$$ Let $q$ be any odd prime factor of $Q_{n_{1}}$. By reducing relation (ii) of Lemma 2 modulo $q$ and using the fact that $n_{1}$ and $q$ are both odd, we get $2P_{n_{1}}^{2}\equiv 1\pmod{q}$, therefore ${\displaystyle{\left(\frac{2}{q}\right)=1}}$. Hence, $z(q)\mid q-1$ for such primes $q$. Now let $d$ be any divisor of $n_{1}$ which is a multiple of $r_{j}$. 
The number of them is $\tau(n_{1}/r_{j})$, where $\tau(u)$ is the number of divisors of the positive integer $u$. For each such $d$, there is a primitive prime factor $q_{d}$ of $Q_{d}\mid Q_{n_{1}}$. Thus, $r_{j}\mid d\mid q_{d}-1$. This shows that (34) $$\nu_{r_{j}}(\phi(P_{n}))\geq\nu_{r_{j}}(\phi(Q_{n_{1}}))\geq\tau(n_{1}/r_{j})\geq\tau(n_{1})/2,$$ where the last inequality follows from the fact that $$\frac{\tau(n_{1}/r_{j})}{\tau(n_{1})}=\frac{\lambda_{j}}{\lambda_{j}+1}\geq\frac{1}{2}.$$ Since $r_{j}$ does not divide $m$, it follows from (5) that (35) $$\nu_{r_{j}}(P_{m})\leq e_{r_{j}}.$$ Hence, (34), (35) and (1) imply that (36) $$\tau(n_{1})\leq 2e_{r_{j}}.$$ Invoking Lemma 5, we get (37) $$\tau(n_{1})\leq\frac{(r_{j}+1)\log\alpha}{\log r_{j}}.$$ Now every divisor $d$ participating in $L_{2}$ is of the form $d=2^{a}d_{1}$, where $0\leq a\leq s$ and $d_{1}$ is a divisor of $n_{1}$ divisible by $r_{u}$ for some $u\in{\mathcal{J}}$. Thus, (38) $$L_{2}\leq\sum_{0\leq a\leq s}\ \sum_{\begin{subarray}{c}d_{1}\mid n_{1}\\ r_{u}\mid d_{1}~\text{for~some}~u\in{\mathcal{J}}\end{subarray}}S_{2^{a}d_{1}}=:g(n_{1},s,r_{j}).$$ In particular, $d_{1}\geq 3$, and since, by Lemma 7, $S_{d}<2\log d/d$, the function $x\mapsto\log x/x$ is decreasing for $x\geq 3$, and the number of divisors $d_{1}$ participating above is at most $\tau(n_{1})$, we have that (39) $$g(n_{1},s,r_{j})\leq 2\tau(n_{1})\sum_{0\leq a\leq s}\frac{\log(2^{a}r_{j})}{2^{a}r_{j}}.$$ Putting also $s_{1}:=\min\{s,416\}$, we get, by Lemma 9, that $2^{s_{1}}\mid\ell$.
Thus, inserting this as well as (33) and (38) into (32), we get (40) $$\ell\log\alpha-\frac{1}{10^{39}}<2\left(\sum_{r\mid 2M}\frac{\log r}{r-1}\right)\left(\frac{2M}{\phi(2M)}\right)+g(n_{1},s,r_{j}).$$ Since (41) $$\sum_{0\leq a\leq s}\frac{\log(2^{a}r_{j})}{2^{a}r_{j}}<\frac{4\log 2+2\log r_{j}}{r_{j}},$$ inequalities (41), (37) and (39) give us that $$g(n_{1},s,r_{j})\leq 2\left(1+\frac{1}{r_{j}}\right)\left(2+\frac{4\log 2}{\log r_{j}}\right)\log\alpha:=g(r_{j}).$$ The function $g(x)$ is decreasing for $x\geq 3$. Thus, $g(r_{j})\leq g(3)<10.64$. For a positive integer $N$ put (42) $$f(N):=N\log\alpha-\frac{1}{10^{39}}-2\left(\sum_{r\mid N}\frac{\log r}{r-1}\right)\left(\frac{N}{\phi(N)}\right).$$ Then inequality (40) implies that both inequalities (43) $$f(\ell)<g(r_{j})\quad\text{and}\quad(\ell-M)\log\alpha+f(M)<g(r_{j})$$ hold. Assuming that $\ell\geq 26$, we get, by Lemma 6, that $$\ell\log\alpha-\frac{1}{10^{39}}-2(\log 2)\frac{(1.79\log\log\ell+2.5/\log\log\ell)\log\ell}{\log\log\ell-1.1714}\leq 10.64.$$ Mathematica confirmed that the above inequality implies $\ell\leq 500$. Another calculation with Mathematica showed that the inequality (44) $$f(\ell)<10.64$$ for even values of $\ell\in[1,500]\cap{\mathbb{Z}}$ implies that $\ell\in[2,18]$. The minimum of the function $f(2N)$ for $N\in[1,250]\cap{\mathbb{Z}}$ is at $N=3$ and $f(6)>-2.12$. For the remaining positive integers $N$, we have $f(2N)>0$. Hence, inequality (43) implies $$(2^{s_{1}}-2)\log\alpha<10.64\quad\text{and}\quad(2^{s_{1}}-2)3\log\alpha<10.64+2.12=12.76,$$ according to whether $M\neq 3$ or $M=3$, and either one of the above inequalities implies that $s_{1}\leq 3$. Thus, $s=s_{1}\in\{1,2,3\}$. Since $2M\mid\ell$, $2M$ is square-free and $\ell\leq 18$, we have that $M\in\{1,3,5,7\}$. Assume $M>1$ and let $i$ be such that $M=r_{i}$. Let us show that $\lambda_{i}=1$.
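The Mathematica computation behind (44) is easy to reproduce. The sketch below evaluates $f$ of (42) directly (the $10^{-39}$ term is numerically negligible) and confirms that $f(\ell)<10.64$ forces the even integer $\ell$ to be at most $18$:

```python
from math import log, sqrt

ALPHA = 1 + sqrt(2)

def prime_divisors(n):
    """Distinct prime factors of n, by trial division."""
    ps, p = [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def phi(n):
    result = n
    for p in prime_divisors(n):
        result -= result // p
    return result

def f(N):
    """The function f(N) of (42)."""
    s = sum(log(r) / (r - 1) for r in prime_divisors(N))
    return N * log(ALPHA) - 1e-39 - 2 * s * (N / phi(N))

small = [l for l in range(2, 501, 2) if f(l) < 10.64]
assert max(small) == 18          # f(l) < 10.64 forces even l <= 18
assert min(small, key=f) == 6    # the minimum of f over even l is at l = 6
```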
Indeed, if $\lambda_{i}\geq 2$, then $$199\mid Q_{9}\mid P_{n},\quad 29201\mid P_{25}\mid P_{n},\quad 1471\mid Q_{49}\mid P_{n},$$ according to whether $r_{i}=3,~5,~7$, respectively, and $3^{2}\mid 199-1,~5^{2}\mid 29201-1,~7^{2}\mid 1471-1$. Thus, we get that $3^{2},~5^{2},~7^{2}$ divide $\phi(P_{n})=P_{m}$, showing that $3^{2},~5^{2},~7^{2}$ divide $\ell$. Since $\ell\leq 18$, only the case $\ell=18$ is possible. In this case, $r_{j}\geq 5$, and inequality (43) gives $$8.4<f(18)\leq g(5)<7.9,$$ a contradiction. Let us record what we have deduced so far. Lemma 12. If $n>2$ is even, then $s\in\{1,2,3\}$. Further, if ${\mathcal{I}}\neq\emptyset$, then ${\mathcal{I}}=\{i\}$, $r_{i}\in\{3,5,7\}$ and $\lambda_{i}=1$. We now deal with ${\mathcal{J}}$. For this, we return to (32) and use the better inequality, namely $$2^{s}M\log\alpha-\frac{1}{10^{39}}\leq\ell\log\alpha-\frac{1}{10^{39}}\leq\log\left(\frac{P_{n}}{\phi(P_{n})}\right)\leq\sum_{d\mid 2^{s}M}\sum_{p\in{\mathcal{P}}_{d}}\log\left(1+\frac{1}{p-1}\right)+L_{2},$$ so (45) $$L_{2}\geq 2^{s}M\log\alpha-\frac{1}{10^{39}}-\sum_{d\mid 2^{s}M}\sum_{p\in{\mathcal{P}}_{d}}\log\left(1+\frac{1}{p-1}\right).$$ In the right–hand side above, $M\in\{1,3,5,7\}$ and $s\in\{1,2,3\}$, so its values are among the numbers $$h(u):=u\log\alpha-\frac{1}{10^{39}}-\log(P_{u}/\phi(P_{u}))$$ for $u=2^{s}M\in\{2,4,6,8,10,12,14,20,24,28,40,56\}$. Computing, we get $$h(u)\geq H_{s,M}\left(\frac{M}{\phi(M)}\right)\quad\text{for}\quad M\in\{1,3,5,7\},\quad s\in\{1,2,3\},$$ where $$H_{1,1}>1.069,\quad H_{1,M}>2.81\quad\text{for}\quad M>1,\quad H_{2,M}>2.426,\quad H_{3,M}>5.8917.$$ We now exploit the relation (46) $$H_{s,M}\left(\frac{M}{\phi(M)}\right)<L_{2}.$$ Our goal is to prove that $r_{j}<10^{6}$. Assume this is not so.
We use the bound $$L_{2}<\sum_{\begin{subarray}{c}d\mid n\\ r_{u}\mid d~\text{for~some}~u\in{\mathcal{J}}\end{subarray}}\frac{4+4\log\log d}{\phi(d)}$$ of Lemma 7. Each divisor $d$ participating in $L_{2}$ is of the form $2^{a}d_{1}$, where $a\in[0,s]\cap{\mathbb{Z}}$ and $d_{1}$ is a multiple of a prime at least as large as $r_{j}$. Thus, $$\frac{4+4\log\log d}{\phi(d)}\leq\frac{4+4\log\log 8d_{1}}{\phi(2^{a})\phi(d_{1})}\quad\text{for}\quad a\in\{0,1,\ldots,s\},$$ and $$\frac{d_{1}}{\phi(d_{1})}\leq\frac{n_{1}}{\phi(n_{1})}\leq\frac{M}{\phi(M)}\left(1+\frac{1}{r_{j}-1}\right)^{\omega(n_{1})}.$$ Using (37), we get $$2^{\omega(n_{1})}\leq\tau(n_{1})\leq\frac{(r_{j}+1)\log\alpha}{\log r_{j}}<r_{j},$$ where the last inequality holds because $r_{j}$ is large. Thus, (47) $$\omega(n_{1})<\frac{\log r_{j}}{\log 2}<2\log r_{j}.$$ Hence, (48) $$\frac{n_{1}}{\phi(n_{1})}\leq\frac{M}{\phi(M)}\left(1+\frac{1}{r_{j}-1}\right)^{\omega(n_{1})}<\frac{M}{\phi(M)}\left(1+\frac{1}{r_{j}-1}\right)^{2\log r_{j}}<\frac{M}{\phi(M)}\exp\left(\frac{2\log r_{j}}{r_{j}-1}\right)<\frac{M}{\phi(M)}\left(1+\frac{4\log r_{j}}{r_{j}-1}\right),$$ where we used the inequalities $1+x<e^{x}$, valid for all real numbers $x\neq 0$, as well as $e^{x}<1+2x$, which is valid for $x\in(0,1/2)$, with $x=2\log r_{j}/(r_{j}-1)$, which belongs to $(0,1/2)$ because $r_{j}$ is large. Thus, the inequality $$\frac{4+4\log\log d}{\phi(d)}\leq\left(\frac{4+4\log\log 8d_{1}}{d_{1}}\right)\left(1+\frac{4\log r_{j}}{r_{j}-1}\right)\left(\frac{1}{\phi(2^{a})}\right)\frac{M}{\phi(M)}$$ holds for $d=2^{a}d_{1}$ participating in $L_{2}$. The function $x\mapsto(4+4\log\log(8x))/x$ is decreasing for $x\geq 3$.
Hence, (49) $$L_{2}\leq\left(\frac{4+4\log\log(8r_{j})}{r_{j}}\right)\tau(n_{1})\left(1+\frac{4\log r_{j}}{r_{j}-1}\right)\left(\sum_{0\leq a\leq s}\frac{1}{\phi(2^{a})}\right)\left(\frac{M}{\phi(M)}\right).$$ Inserting inequality (37) into (49) and using (46), we get (50) $$\log r_{j}<4\left(1+\frac{1}{r_{j}}\right)\left(1+\frac{4\log r_{j}}{r_{j}-1}\right)(1+\log\log(8r_{j}))(\log\alpha)\left(\frac{G_{s}}{H_{s,M}}\right),$$ where $$G_{s}=\sum_{0\leq a\leq s}\frac{1}{\phi(2^{a})}.$$ For $s=2,~3$, inequality (50) implies $r_{j}<900,000$ and $r_{j}<300$, respectively. For $s=1$ and $M>1$, inequality (50) implies $r_{j}<5000$. When $M=1$ and $s=1$, we get $n=2n_{1}$ and $j=1$. Here, inequality (50) implies that $r_{1}<8\times 10^{12}$. This is too big, so instead we use the bound $$S_{d}<\frac{2\log d}{d}$$ of Lemma 7 for the divisors $d$ participating in $L_{2}$, which in this case are all the divisors of $n$ larger than $2$. We deduce that $$1.06<L_{2}<2\sum_{\begin{subarray}{c}d\mid 2n_{1}\\ d>2\end{subarray}}\frac{\log d}{d}<4\sum_{d_{1}\mid n_{1}}\frac{\log d_{1}}{d_{1}}.$$ The last inequality above follows from the fact that all divisors $d>2$ of $n$ are either of the form $d_{1}$ or $2d_{1}$ for some divisor $d_{1}\geq 3$ of $n_{1}$, and the function $x\mapsto\log x/x$ is decreasing for $x\geq 3$. Using Lemma 8 and inequalities (47) and (48), we get $$1.06<4\left(\sum_{r\mid n_{1}}\frac{\log r}{r-1}\right)\left(\frac{n_{1}}{\phi(n_{1})}\right)<\left(\frac{4\log r_{1}}{r_{1}-1}\right)\omega(n_{1})\left(1+\frac{4\log r_{1}}{r_{1}-1}\right)<\left(\frac{4\log r_{1}}{r_{1}-1}\right)\left(\frac{\log r_{1}}{\log 2}\right)\left(1+\frac{4\log r_{1}}{r_{1}-1}\right),$$ which gives $r_{1}<159$. So, in all cases, $r_{j}<10^{6}$. Here, we checked that $e_{r}=1$ for all such $r$ except $r\in\{13,31\}$, for which $e_{r}=2$.
If $e_{r_{j}}=1$, we then get $\tau(n_{1}/r_{j})\leq 1$, so $n_{1}=r_{j}$. Thus, $n\leq 8\cdot 10^{6}$, in contradiction with Lemma 9. Assume now that $r_{j}\in\{13,31\}$. Say $r_{j}=13$. In this case, $79$ and $599$ divide $Q_{13}$, which divides $P_{n}$, therefore $13^{2}\mid(79-1)(599-1)\mid\phi(P_{n})=P_{m}$. Thus, if there is some other prime factor $r^{\prime}$ of $n_{1}/13$, then $13r^{\prime}\mid n_{1}$, and $Q_{13r^{\prime}}$ has a primitive prime factor $q\equiv 1\pmod{13r^{\prime}}$. In particular, $13\mid q-1$. Thus, $\nu_{13}(\phi(P_{n}))\geq 3$, showing that $13^{3}\mid P_{m}$. Hence, $13\mid m$, therefore $13\mid M$, a contradiction. A similar contradiction is obtained if $r_{j}=31$, since $Q_{31}$ has two primitive prime factors, namely $424577$ and $865087$, so $31\mid M$. This finishes the proof. Acknowledgments B. F. thanks OWSD and Sida (Swedish International Development Cooperation Agency) for a scholarship during her Ph.D. studies at Wits. References [1] Y. F. Bilu, G. Hanrot and P. M. Voutier, “Existence of Primitive Divisors of Lucas and Lehmer Numbers. With an appendix by M. Mignotte”, J. Reine Angew. Math. 539 (2001), 75–122. [2] R. D. Carmichael, “On the numerical factors of the arithmetic forms $\alpha^{n}\pm\beta^{n}$”, Ann. Math. (2) 15 (1913), 30–70. [3] J. Cilleruelo and F. Luca, “Repunit Lehmer numbers”, Proc. Edinb. Math. Soc. (2) 54 (2011), no. 1, 55–65. [4] B. Faye, F. Luca and A. Tall, “On the equation $\phi(5^{m}-1)=5^{n}-1$”, Bull. Korean Math. Soc. 52 (2015), 513–524. [5] B. Faye and F. Luca, “Lucas numbers with the Lehmer property”, Preprint, 2015. [6] G. Robin, “Estimation de la fonction de Tchebychef $\theta$ sur le $k$-ième nombre premier et grandes valeurs de la fonction $\omega(n)$ nombre de diviseurs premiers de $n$”, Acta Arith. 42 (1983), no. 4, 367–389. [7] J. M. Grau Ribas and F. Luca, “Cullen numbers with the Lehmer property”, Proc. Amer. Math. Soc. 140 (2012), no. 1, 129–134; Corrigendum 141 (2013), no. 8, 2941–2943.
[8] F. Luca, “Arithmetic functions of Fibonacci numbers”, Fibonacci Quart. 37 (1999), no. 3, 265–268. [9] F. Luca, “Multiply perfect numbers in Lucas sequences with odd parameters”, Publ. Math. Debrecen 58 (2001), no. 1–2, 121–155. [10] F. Luca, “Fibonacci numbers with the Lehmer property”, Bull. Pol. Acad. Sci. Math. 55 (2007), 7–15. [11] F. Luca and F. Nicolae, “$\phi(F_{m})=F_{n}$”, Integers 9 (2009), A30. [12] W. McDaniel, “On Fibonacci and Pell Numbers of the Form $kx^{2}$”, Fibonacci Quart. 40 (2002), 41–42. [13] H. L. Montgomery and R. C. Vaughan, “The large sieve”, Mathematika 20 (1973), 119–134. [14] A. Pethő, “The Pell sequence contains only trivial perfect powers”, in Sets, Graphs and Numbers (Budapest, 1991), Colloq. Math. Soc. János Bolyai 60, North-Holland, Amsterdam, (1992), 561–568. [15] J. B. Rosser and L. Schoenfeld, “Approximate formulas for some functions of prime numbers”, Illinois J. Math. 6 (1962), 64–94.
Millimeter-Wave Antenna Array Diagnosis with Partial Channel State Information George Medina${}^{*}$, Akashdeep Singh Jida${}^{*}$, Sravan Pulipati${}^{\ddagger}$, Rohith Talwar${}^{*}$, Nancy Amala J${}^{*}$, Tareq Y. Al-Naffouri${}^{\S}$, Arjuna Madanayake${}^{\ddagger}$ and Mohammed E. Eltayeb${}^{*}$ ${}^{*}$Department of Electrical & Electronic Engineering, California State University, Sacramento, USA. Email: mohammed.eltayeb@csus.edu ${}^{\ddagger}$Electrical and Computer Engineering, Florida International University, Miami, USA. Email: amadanay@fiu.edu ${}^{\S}$King Abdullah University of Science and Technology, Thuwal, KSA. Email: tareq.alnaffouri@kaust.edu.sa Abstract Large antenna arrays enable directional precoding for Millimeter-Wave (mmWave) systems and provide sufficient link budget to combat the high path-loss at these frequencies. Due to atmospheric conditions and hardware malfunction, outdoor mmWave antenna arrays are prone to blockages or complete failures. This results in a modified array geometry, a distorted far-field radiation pattern, and system performance degradation. Recent remote array diagnostic techniques have emerged as an effective way to detect defective antenna elements in an array with few diagnostic measurements. These techniques, however, require full and perfect channel state information (CSI), which can be challenging to acquire in the presence of antenna faults. This paper proposes a new remote array diagnosis technique that relaxes the need for full CSI and only requires knowledge of the incident angles-of-arrival, i.e., partial channel knowledge. Numerical results demonstrate the effectiveness of the proposed technique and show that fault detection can be achieved with a number of diagnostic measurements comparable to that required by diagnostic techniques based on full channel knowledge.
In the presence of channel estimation errors, the proposed technique is shown to outperform recently proposed array diagnostic techniques. Index Terms: Antenna arrays, fault diagnosis, compressed sensing, millimeter-wave communication. I Introduction Communication in the Millimeter-Wave (mmWave) band is the new frontier for next generation wireless systems [1]-[4]. To provide sufficient link budget for these systems, large antenna arrays will be required to enable directional precoding [4]-[6]. Thanks to the small carrier wavelength at mmWave frequencies, multiple antenna elements can be packaged onto a small chip, possibly with other RF components [7]. The densely packed antenna elements, however, introduce new challenges for these systems. Blockages of comparable size, e.g., dirt, water droplets, and ice, can completely (or partially) block mmWave signals incident on a single or multiple antenna elements. Manufacturing imperfections can also lead to antenna element failure. Antenna element blockage or failure randomizes the array’s geometry and, as a result, distorts its radiation pattern and causes uncertainties in the mmWave channel [8]. Therefore, it is crucial to design remote array diagnosis techniques that continuously monitor the performance of mmWave antenna arrays and minimize the effects of antenna element failures. Several techniques based on sparse signal recovery have recently emerged in the literature to remotely diagnose antenna arrays in a fast and reliable manner [8]-[16]. These techniques formulate the fault detection problem as a sparse signal recovery problem in compressed sensing. Specifically, a sparse difference response vector is generated by subtracting the response of a reference fault-free antenna array from the response of a potentially faulty antenna array, commonly known as the Array-Under-Test (AUT). Using this sparse difference response vector, sparse signal recovery algorithms, see e.g.
[17]-[20], are then applied to recover the identity of the faulty antenna elements. We refer to such techniques as difference-based techniques in this paper. Other techniques adopt a deep learning approach to diagnose mmWave antenna arrays [21], [22]. These techniques apply machine learning algorithms to identify faulty antenna elements by measuring distortions in the far-field radiation pattern. Despite their excellent performance, the above referenced diagnosis techniques require full and perfect channel state information (CSI) to generate and update the response of the reference fault-free antenna array in a timely manner. This is challenging since perfect CSI estimation depends on many factors, e.g., link quality, number of scatterers, and estimation errors, and the faulty array itself distorts the communication channel estimate [13]. To overcome this limitation, it is crucial that new array diagnosis techniques be designed to be independent of prior communication channel knowledge. In this paper, we propose a new technique for remote array diagnosis. The proposed technique only requires knowledge of the set of all possible Angles-of-Arrival (AoAs) the diagnostic signals take, and does not require full channel knowledge. The idea is to design the combining vector (or antenna weights) at the AUT to null diagnostic signals from all incident AoAs. In the presence of antenna faults, the receive beam pattern is distorted, and diagnostic signals cannot be nulled. These received (or leaked) diagnostic signals are exploited to formulate the diagnosis problem as a sparse signal recovery problem in compressed sensing. As we will show in Section III, this technique enables antenna fault detection with just a few diagnostic measurements. The main contributions of this paper can be summarized as follows: (i) We present a new array diagnosis formulation that takes the effect of the communication channel into account.
Prior work assumes perfect knowledge of the far-field beam pattern and does not take the effect of the communication channel into account. (ii) We present a novel array diagnosis technique that relaxes the need for full channel knowledge. To the best of our knowledge, this is the first paper that proposes an array diagnosis technique that requires only partial channel knowledge. The remainder of this paper is organized as follows. In Section II, we formulate the mmWave antenna array diagnosis problem in the presence of multipath. In Section III, we present the proposed array diagnosis technique. In Section IV, we present some numerical results, and we conclude our work in Section V. II Problem Formulation We consider a transceiver equipped with a uniform linear antenna array which consists of $N$ equally spaced elements and $S\ll N$ possibly faulty elements. A fault is defined as any impairment that causes an antenna element to function abnormally. A fault can result from either physical blockage of an antenna element or circuit failure. While a linear array is adopted in this paper for simplicity, other antenna structures can be equally adopted. To initiate antenna diagnosis, a probe is used to transmit $M$ diagnosis symbols to the transceiver with the AUT. In the absence of antenna faults, the $m$th received diagnosis symbol can be written as $$y_{m}=\mathbf{w}_{m}^{*}\mathbf{h}s+z_{m},$$ (1) where the entries of the combining vector $\mathbf{w}_{m}\in\mathcal{C}^{N\times 1}$ represent the complex weights applied to the receive antennas for the $m$th measurement, $\mathbf{h}$ is the mmWave channel between the transceiver and the probe, $s$ is the transmitted diagnosis symbol, and $z_{m}\sim\mathcal{CN}(0,\sigma^{2})$ is the additive noise at the transceiver. A geometric channel model with $L$ scatterers is adopted in this paper [4], [5], [23].
Under this model, the channel can be expressed as $$\mathbf{h}=\sqrt{\frac{N}{L}}\sum_{\ell=1}^{L}\alpha_{\ell}{\mathbf{a}}(\theta_{\ell}),$$ (2) where $\alpha_{\ell}\sim\mathcal{CN}(0,1)$ is the complex gain of the $\ell$th path, $\theta_{\ell}$ is the $\ell$th path AoA, and the vector ${\mathbf{a}}(\theta_{\ell})$ represents the transceiver’s antenna array response corresponding to the $\ell$th AoA $\theta_{\ell}$. For simplicity, we set $s=1$ in (1) and omit it from the subsequent analysis. In the presence of antenna faults, the received diagnosis symbol becomes $$\hat{y}_{m}=\mathbf{w}_{m}^{*}{\mathbf{B}}{\mathbf{h}}+z_{m}$$ (3) $$\phantom{\hat{y}_{m}}=\mathbf{w}_{m}^{*}\hat{\mathbf{h}}+z_{m},$$ (4) where $\hat{\mathbf{h}}={\mathbf{B}}{\mathbf{h}}$ is the equivalent mmWave channel. The entries of the diagonal matrix $\mathbf{B}\in\mathcal{C}^{N\times N}$ are $$\text{B}_{n,n}=\left\{\begin{array}{ll}\alpha_{n},&\hbox{if the $n$th antenna element is faulty,}\\ 1,&\hbox{otherwise,}\end{array}\right.$$ (5) where $\alpha_{n}=\kappa_{n}e^{j\Phi_{n}}$, $0\leq\kappa_{n}\leq 1$ and $0\leq\Phi_{n}\leq 2\pi$. A value of $\kappa_{n}=0$ represents maximum blockage (or complete failure), and $\Phi_{n}$ captures the phase-shift caused by the fault at the $n$th antenna element. The diagonal matrix $\mathbf{B}$ captures failures that can result from the internal circuitry of the antenna element itself, or from external blockages caused by, for example, weather. From equations (3) and (4), we observe that faults modify the antenna array manifold and cause uncertainty in the mmWave channel. To determine the identity of the faulty antenna elements, prior work in the literature proposed several techniques which are based on subtracting $M$ received diagnosis symbols in (4) from $M$ ideal (fault-free) diagnosis symbols in (1).
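As a concrete illustration, the geometric channel of (2) and the blockage model of (5) can be simulated in a few lines. This is only a sketch under assumed example values ($N=64$ antennas, $L=3$ paths, $S=4$ faults, a half-wavelength ULA response); none of these numbers are taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, S = 64, 3, 4                     # antennas, paths, faults (assumed)

def steer(theta, N):
    """Unit-norm ULA response for half-wavelength element spacing."""
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

# Geometric channel, eq. (2): h = sqrt(N/L) * sum_l alpha_l * a(theta_l)
thetas = rng.uniform(-np.pi / 2, np.pi / 2, L)
alphas = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
h = np.sqrt(N / L) * sum(a * steer(t, N) for a, t in zip(alphas, thetas))

# Blockage matrix B, eq. (5): S faulty elements with random gain and phase
faulty = rng.choice(N, S, replace=False)
kappa = rng.uniform(0, 1, S)           # kappa = 0 would be a complete failure
phi = rng.uniform(0, 2 * np.pi, S)
B = np.eye(N, dtype=complex)
B[faulty, faulty] = kappa * np.exp(1j * phi)

h_hat = B @ h                          # equivalent faulty channel, eqs. (3)-(4)
h_e = h_hat - h                        # sparse error channel of eq. (9)
```

The error channel `h_e` is non-zero only on the faulty elements, which is exactly the sparsity that the diagnosis stage later exploits.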
The ideal diagnosis symbols in (1) can be generated offline if the channel $\mathbf{h}$ is fully known at the receiver. This subtraction results in the following difference vector $\mathbf{y}_{\text{d}}\in\mathcal{C}^{M\times 1}$ $$\mathbf{y}_{\text{d}}=\mathbf{y}-\hat{\mathbf{y}}$$ (6) $$\phantom{\mathbf{y}_{\text{d}}}=\mathbf{W}^{*}{\mathbf{h}}-\mathbf{W}^{*}\hat{\mathbf{h}}+\mathbf{z}$$ (7) $$\phantom{\mathbf{y}_{\text{d}}}=\mathbf{W}^{*}{\mathbf{h}}_{\text{d}}+\mathbf{z}.$$ (8) In (7) and (8), the matrix $\mathbf{W}=[\mathbf{w}_{1},\mathbf{w}_{2},\ldots,\mathbf{w}_{M}]$, $\mathbf{z}$ is the additive noise vector, and the vector ${\mathbf{h}}_{\text{d}}$ is sparse, with the non-zero entries corresponding to the identities of the faulty antenna elements. Applying any sparse recovery technique, see, e.g., [17]-[20], ${\mathbf{h}}_{\text{d}}$ can be recovered with overwhelming probability from $\mathbf{y}_{\text{d}}$ and $\mathbf{W}$ provided that the number of diagnosis symbols satisfies $M>2S\log{N}$. While effective, the requirement of perfect channel knowledge is not practical and poses a serious challenge for difference-based techniques. Perfect channel knowledge might not be readily available in practice, and as shown in [13], acquiring a perfect channel estimate is not possible with faulty antenna hardware. In the following section, we propose a new approach to identify the faulty antenna elements. The proposed approach relaxes the need for full channel knowledge and only requires the set of all possible angles of arrival. The receive angles of arrival can be easily obtained by, for example, the beam training techniques outlined in [24]-[28] and references therein, or provided by the finger-printing techniques outlined in [29]-[31] and references therein. III Antenna Fault Detection at the Receiver In this section we mathematically formulate and outline the proposed antenna fault detection technique.
For this technique, we assume that the receiver is only equipped with knowledge of its angles of arrival $\theta_{\ell}\in\Theta$, where $\Theta$ is a set that contains all possible AoAs. The receiver has no knowledge of the complex path gains nor their corresponding delays (if any). To mathematically formulate the problem solution, we first rewrite (4) as $$\hat{y}_{m}=\mathbf{w}_{m}^{*}(\mathbf{h}+\mathbf{h}_{\text{e}})+z_{m},$$ (9) where $\mathbf{h}_{\text{e}}$ is the error in the mmWave channel caused by faulty receive antennas. Observe that $\mathbf{h}_{\text{e}}$ is sparse, with the non-zero elements corresponding to the fault locations. If the AoAs are quantized to $N$ points, the channel in (9) can be expressed in matrix form as $$\hat{y}_{m}=\mathbf{w}_{m}^{*}(\mathbf{A}+\mathbf{A}_{\text{e}})\mathbf{x}+z_{m}$$ (10) $$\phantom{\hat{y}_{m}}=\mathbf{w}_{m}^{*}\mathbf{A}\mathbf{x}+\mathbf{w}_{m}^{*}\mathbf{A}_{\text{e}}\mathbf{x}+z_{m},$$ (11) where the matrix $\mathbf{A}$ is the DFT matrix with its $i$th column corresponding to the array response of the $i$th quantized AoA. The $L$ non-zero entries of the sparse vector $\mathbf{x}$ correspond to the complex gains of the $L$ paths. The matrix $\mathbf{A}_{\text{e}}$ is row sparse, with its non-zero row entries corresponding to the error imposed by the faulty antenna elements. As the objective of this paper is to detect antenna faults (the second term in (11)), it is imperative that the weights $\mathbf{w}_{m}$ are designed to be in the null-space of the column vectors of the DFT matrix $\mathbf{A}$ that correspond to the $L$ AoAs. There are two main ways to achieve this. If the AoAs are quantized to $N$ points, then one can exploit the orthogonality property of the DFT matrix $\mathbf{A}$ in (10) to select the columns that do not correspond to the $L$ AoAs as the receive beamforming (or measurement) weights, i.e.
$\mathbf{w}_{m}\in[\mathbf{A}]_{:,m},m\not=l$. If the AoAs are not quantized, the vector $\mathbf{w}_{m}$ needs to be orthogonal to all AoAs. Exploiting the large antenna dimensions available in mmWave systems, one can generate $M$ receive antenna weights (or beam vectors) that are orthogonal to the array responses corresponding to the directions in $\Theta$. To achieve this, let the matrix $\mathbf{D}=[{\mathbf{a}}(\theta_{1}),{\mathbf{a}}(\theta_{2}),\ldots,{\mathbf{a}}(\theta_{L})]$ contain the array response vectors that correspond to the $L$ AoAs in $\Theta$. Using the Householder transformation [32], the orthogonal beam matrix $\mathbf{Q}\in\mathcal{C}^{N\times N}$ can be obtained as follows $$\mathbf{Q}=\mathbf{I}-\mathbf{D}(\mathbf{D}^{*}\mathbf{D})^{-1}\mathbf{D}^{*}.$$ (12) The combining matrix ${\mathbf{W}}$ is then formed by selecting $M$ columns from the matrix $\mathbf{Q}$. To this end, note that the combining matrix $\mathbf{W}$ is used to receive the $M$ diagnosis symbols as follows $$\hat{\mathbf{y}}=\mathbf{W}^{*}\mathbf{A}\mathbf{x}+\mathbf{W}^{*}\mathbf{A}_{\text{e}}\mathbf{x}+\mathbf{z}$$ (13) $$\phantom{\hat{\mathbf{y}}}=\mathbf{W}^{*}\mathbf{A}_{\text{e}}\mathbf{x}+\tilde{\mathbf{z}}+\mathbf{z}$$ (14) $$\phantom{\hat{\mathbf{y}}}=\mathbf{W}^{*}\mathbf{h}_{\text{e}}+\tilde{\mathbf{z}}+\mathbf{z}.$$ (15) Note that, as the columns of $\mathbf{W}$ are orthogonal to the columns in $\mathbf{A}$ corresponding to the $L$ AoAs, the first term in (13) cancels out. The interference vector $\tilde{\mathbf{z}}$ accounts for the interference that arises when $\mathbf{W}^{*}$ and $\mathbf{Ax}$ are not orthogonal. This situation could arise due to imperfect channel estimates at the receiver.
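A minimal numerical check of eq. (12): under an assumed half-wavelength ULA and example sizes ($N=64$, $L=3$, $M=16$, not from the paper), the columns of $\mathbf{Q}$ are orthogonal to every array response in $\Theta$, so any $M$ of them null the incident diagnostic signals:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, M = 64, 3, 16                    # antennas, AoAs, measurements (assumed)

def steer(theta, N):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

thetas = rng.uniform(-np.pi / 2, np.pi / 2, L)
D = np.stack([steer(t, N) for t in thetas], axis=1)          # N x L

# Eq. (12): Q = I - D (D* D)^{-1} D* projects onto the orthogonal
# complement of span(D), so Q a(theta_l) = 0 for every l.
Q = np.eye(N) - D @ np.linalg.inv(D.conj().T @ D) @ D.conj().T

W = Q[:, :M]                           # combining matrix: M columns of Q

# Nulling check: W* a(theta_l) vanishes (up to round-off) for all AoAs
leak = np.abs(W.conj().T @ D).max()
print(f"max |W* a(theta_l)| = {leak:.2e}")
```

Because $\mathbf{Q}$ is Hermitian and $\mathbf{Q}\mathbf{D}=\mathbf{0}$, the leakage is at the level of floating-point round-off; any signal that does reach the combiner output must therefore come from the fault term in (13).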
As the error vector $\mathbf{h}_{\text{e}}$ is sparse, with non-zero entries corresponding to the faulty antenna elements, the compressed sensing techniques outlined in [17]-[20] can be used to recover $\mathbf{h}_{\text{e}}$ from $\hat{\mathbf{y}}$ and $\mathbf{W}$ as follows: $$\min\ ||\tilde{\mathbf{h}}_{\text{e}}||_{1}\quad\text{s.t.}\quad||\hat{\mathbf{y}}-\mathbf{W}^{*}\tilde{\mathbf{h}}_{\text{e}}||_{2}\leq\epsilon.$$ For simplicity, we employ the orthogonal matching pursuit (OMP) algorithm [20] to solve the above optimization problem. The non-zero entries of the recovered vector $\tilde{\mathbf{h}}_{\text{e}}\in\mathcal{C}^{N\times 1}$ correspond to the identities of the faulty antenna elements. As outlined, the proposed technique only requires knowledge of the incident AoAs, and not full channel knowledge, to recover the identity of the faulty antenna elements. This partial channel requirement reduces the implementation complexity of remote array diagnostic techniques. This, however, comes at the expense of additional complexity in the design of the receive antenna weights. IV Numerical Results and Discussions In this section, we conduct numerical simulations to evaluate the efficacy of the proposed technique. We consider a receiver with a uniform linear array with half-wavelength separation and $S$ faulty antenna elements. We adopt the blockage and channel model presented in Section II. To generate complete antenna element blockages, $S$ randomly selected diagonal entries in the blockage matrix $\mathbf{B}$ in (3) are set to zero. To generate partial blockages, $S$ diagonal entries in the blockage matrix $\mathbf{B}$ are set to have a random phase shift and amplitude (see (5)). We adopt the probability of success $\text{P}_{\text{success}}$, i.e., the probability that all faulty antennas are detected, as a performance measure to quantify the error in detecting the faulty antenna locations.
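The chain of (12)-(15) followed by OMP can be sketched end-to-end. This is an illustrative toy under assumed values ($N=64$, $L=2$, $M=32$, $S=3$, random combiners drawn inside the null space of the AoA responses, and, for visibility of the leakage, faults placed on strongly illuminated elements), not the paper's exact simulation setup:

```python
import numpy as np

rng = np.random.default_rng(3)
N, L, M, S = 64, 2, 32, 3              # antennas, paths, measurements, faults
sigma = 1e-3                           # noise standard deviation (assumed)

def steer(theta, N):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N)

# Known AoAs and the null-space projector of eq. (12)
thetas = rng.uniform(-np.pi / 2, np.pi / 2, L)
D = np.stack([steer(t, N) for t in thetas], axis=1)
Q = np.eye(N) - D @ np.linalg.inv(D.conj().T @ D) @ D.conj().T

# Random combiners inside the null space, so that W* h = 0 exactly
G = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
W = Q @ G

# True channel and a sparse fault error h_e (complete blockages, kappa = 0);
# for this demo the faults sit on strongly illuminated elements
alphas = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
h = np.sqrt(N / L) * (D @ alphas)
faulty = np.sort(np.argsort(np.abs(h))[-S:])
h_e = np.zeros(N, dtype=complex)
h_e[faulty] = -h[faulty]

# Eq. (15): only the fault term survives the combining
z = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = W.conj().T @ (h + h_e) + z

def omp(Phi, y, S):
    """Greedy OMP: recover the support of an S-sparse x from y = Phi x + z."""
    r, sup = y.copy(), []
    for _ in range(S):
        sup.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        x, *_ = np.linalg.lstsq(Phi[:, sup], y, rcond=None)
        r = y - Phi[:, sup] @ x
    return sorted(sup)

detected = omp(W.conj().T, y, S)       # indices of the faulty elements
```

Here `detected` recovers the fault locations from the leaked diagnostic signals alone, without the path gains ever being known to the receiver.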
This probability is defined as $$\text{P}_{\text{success}}=\text{Pr}(\mathcal{I}_{S}=\hat{\mathcal{I}}_{S}),$$ where the entries of the set $\mathcal{I}_{S}$ represent the true identities of the faulty antennas and the entries of the set $\hat{\mathcal{I}}_{S}$ represent the identities of the detected faulty antennas. For benchmarking purposes, we compare the probability of success achieved by the proposed technique with that achieved by the difference-based technique proposed in [8]. In all simulations, we set $N=128$ antennas and consider both single and multi-path channel cases. For the single path case, the transmitting probe is situated so as to correspond to one of the $N=128$ quantized AoAs in the DFT matrix $\mathbf{A}$ (see (10)). The receive antenna weights are selected from the columns of $\mathbf{A}$ that do not correspond to the receiver’s quantized AoA. The performance of the proposed technique for this scenario is illustrated in Fig. 1 and Fig. 2. In Fig. 1, we plot the probability of success versus the number of measurements (or diagnosis time) for different numbers of antenna faults. Fig. 1 shows that the proposed technique is able to successfully detect antenna faults without additional diagnostic measurements when compared to difference-based techniques. This is achieved without the need for prior knowledge of the receiver’s channel gain (or path-loss). In Fig. 2, we study the effect of AoA estimation errors on the performance of the proposed technique. Specifically, we plot the probability of success versus the AoA estimation error when the array is subjected to both complete and partial blockages. In the presence of partial blockages, Fig. 2 shows that both the proposed and difference-based techniques experience a slight loss in detection performance when compared to complete blockages.
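For the single-path, on-grid scenario just described (probe aligned with a DFT direction, combiners drawn from the remaining DFT columns), $\text{P}_{\text{success}}$ can be estimated by a small Monte Carlo loop. This is a hedged sketch with assumed sizes ($N=64$, $M=32$, $S=2$, 30 trials) rather than the paper's $N=128$ setup:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, S, trials = 64, 32, 2, 30        # assumed sizes, not the paper's
sigma = 1e-3

A = np.fft.fft(np.eye(N)) / np.sqrt(N) # unitary DFT: quantized-AoA dictionary

def omp(Phi, y, S):
    r, sup = y.copy(), []
    for _ in range(S):
        sup.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        x, *_ = np.linalg.lstsq(Phi[:, sup], y, rcond=None)
        r = y - Phi[:, sup] @ x
    return set(sup)

hits = 0
for _ in range(trials):
    l = int(rng.integers(N))           # probe's quantized AoA index
    alpha = rng.standard_normal() + 1j * rng.standard_normal()
    h = np.sqrt(N) * alpha * A[:, l]   # single on-grid path

    # Combiner: M DFT columns excluding the incident AoA -> W* h = 0
    cols = rng.choice([c for c in range(N) if c != l], M, replace=False)
    W = A[:, cols]

    faulty = set(int(i) for i in rng.choice(N, S, replace=False))
    h_e = np.zeros(N, dtype=complex)
    for f in faulty:
        h_e[f] = -h[f]                 # complete blockage of element f

    z = sigma * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    y = W.conj().T @ (h + h_e) + z     # eq. (15): only the fault term survives
    hits += omp(W.conj().T, y, S) == faulty

p_success = hits / trials              # estimate of Pr(I_S = I_S_hat)
```

At high SNR the estimate is close to one; sweeping `M`, `S`, or the noise level reproduces the qualitative trends of Fig. 1.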
This is mainly due to the fact that the magnitudes of the entries of the error vector $\mathbf{h}_{\text{e}}$ in (15) are smaller when compared to complete blockages. This effectively reduces the detection capability in the presence of noise when compared to the case of full blockages, which results in higher error magnitudes. Fig. 2 also shows that both the proposed and difference-based techniques are sensitive to AoA estimation errors. Nonetheless, the proposed technique is superior in the sense that it can tolerate significantly higher AoA errors. In Fig. 3 and Fig. 4 we study the performance of the proposed technique in the presence of multi-path, complete and partial blockages, and non-quantized AoAs. Specifically, each random path corresponds to a random AoA and complex gain. Based on knowledge of all AoAs, and using the Householder transformation, the measurement matrix $\mathbf{W}$ is designed to result in a beam pattern that is orthogonal to the receiver’s AoAs (see (12)-(15)). In Fig. 3 we plot the success probability versus the receive signal-to-noise ratio (SNR) in the presence of complete and partial blockages. Fig. 3 shows that both the proposed and the difference-based technique require high SNR to successfully detect antenna faults, and that the proposed technique is superior in the sense that it is less sensitive to the system noise. Note that noise affects both the path gains and the AoAs. As the proposed technique is mainly sensitive to AoA errors, and not path gain errors, it experiences less performance degradation when compared to the difference-based technique. To draw some insights into the effect of channel estimation errors on the performance of the proposed technique, we plot the probability of success versus the variance of the path gain and AoA estimation errors at the receiver in Fig. 4. The estimation errors are assumed to be Gaussian distributed with zero mean and variance as indicated in Fig. 4. Interestingly, and as evident from Fig.
4, the proposed technique permits successful antenna fault detection irrespective of the path gain error magnitude. This performance gain is attributed to the fact that the proposed technique does not require knowledge of the channel gain for fault detection. Hence channel gain estimation errors do not affect the performance of the proposed technique. Nonetheless, the proposed technique is shown in Fig. 4 to be sensitive to AoA mismatch. As the mismatch increases, the orthogonality between the designed beamforming weights and the true channel response diminishes. This increases the noise at the receiver. Fig. 4 shows that the probability of success for the difference-based technique deteriorates drastically with slight path gain or AoA mismatch. The reason for this is that any mismatch between the generated channel and the true channel would destroy the sparsity property of the difference channel response $\mathbf{h}_{\text{d}}$, and hence sparse recovery would not be possible in this case. V Conclusion In this paper, we proposed a novel array diagnosis technique for mmWave systems with large antenna arrays. The proposed technique is able to identify the locations of antenna faults with only partial channel knowledge. For both the single path and multipath cases, the proposed technique is shown to be less sensitive to channel estimation errors when compared to the widely adopted difference-based technique. This improvement comes at the expense of additional complexity in the design of the receive beamforming weights. Due to its robustness against channel estimation errors, the proposed technique can be deployed to perform real-time array diagnosis. Future work will focus on array diagnosis in the absence of any channel knowledge and on applying this technique to a practical set-up. Acknowledgment This material is based upon work supported in part by the Sacramento State Research and Creative Activity Faculty Awards Program. References [1] F. Boccardi, R. Heath, A.
Lozano, T. Marzetta, and P. Popovski, “Five disruptive technology directions for 5G,” IEEE Commun. Mag., vol. 52, no. 2, pp. 74-80, Feb. 2014. [2] L. U. Khan, I. Yaqoob, M. Imran, Z. Han and C. S. Hong, “6G Wireless Systems: A Vision, Architectural Elements, and Future Directions,” IEEE Access, vol. 8, pp. 147029-147044, 2020. [3] Z. Pi and F. Khan, “An introduction to millimeter-wave mobile broadband systems,” IEEE Commun. Mag., vol. 49, no. 6, pp. 101-107, 2011. [4] R. W. Heath, N. Gonzalez-Prelcic, S. Rangan, W. Roh and A. M. Sayeed, “An Overview of Signal Processing Techniques for Millimeter Wave MIMO Systems,” IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 3, pp. 436-453, April 2016. [5] A. Alkhateeb, O. Ayach, and R. W. Heath Jr., “Channel estimation and hybrid precoding for millimeter wave cellular systems,” IEEE J. on Select. Areas in Commun., vol. 8, no. 5, pp. 831-846, Oct. 2014. [6] H. Sarieddeen, M. Alouini and T. Y. Al-Naffouri, “Terahertz-Band Ultra-Massive Spatial Modulation MIMO,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 9, pp. 2040-2052, Sept. 2019. [7] T. S. Rappaport, J. N. Murdock and F. Gutierrez, “State of the Art in 60-GHz Integrated Circuits and Systems for Wireless Communications,” Proceedings of the IEEE, vol. 99, no. 8, pp. 1390-1436, Aug. 2011. [8] M. E. Eltayeb, T. Y. Al-Naffouri and R. W. Heath, “Compressive Sensing for Millimeter Wave Antenna Array Diagnosis,” IEEE Transactions on Communications, vol. 66, no. 6, pp. 2708-2721, June 2018. [9] M. E. Eltayeb, T. Y. Al-Naffouri and R. W. Heath, “Compressive Sensing for Blockage Detection in Vehicular Millimeter Wave Antenna Arrays,” in IEEE Global Communications Conference (GLOBECOM), Washington, DC, 2016, pp. 1-6. [10] W. Li, W. Deng and M. D. Migliore, “A Deterministic Far-Field Sampling Strategy for Array Diagnosis Using Sparse Recovery,” IEEE Antennas and Wireless Propagation Letters, vol. 17, no. 7, pp. 1261-1265, July 2018. [11] B.
Fuchs, L. L. Coq and M. D. Migliore, “Fast Antenna Array Diagnosis from a Small Number of Far-Field Measurements,” IEEE Transactions on Antennas and Propagation, vol. 64, no. 6, pp. 2227-2235, June 2016. [12] M. Salucci, A. Gelmini, G. Oliveri and A. Massa, “Planar Array Diagnosis by Means of an Advanced Bayesian Compressive Processing,” IEEE Transactions on Antennas and Propagation, vol. 66, no. 11, pp. 5892-5906, Nov. 2018. [13] M. E. Eltayeb, “Relay-Aided Channel Estimation for mmWave Systems with Imperfect Antenna Arrays,” in 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-5. [14] O. J. Famoriji, Z. Zhang, A. Fadamiro, R. Zakariyya, and F. Lin, “Planar Array Diagnostic Tool for Millimeter-Wave Wireless Communication Systems,” Electronics, vol. 7, 383, 2018. [15] S. Ma, W. Shen, J. An and L. Hanzo, “Antenna Array Diagnosis for Millimeter-Wave MIMO Systems,” IEEE Transactions on Vehicular Technology, vol. 69, no. 4, pp. 4585-4589, April 2020. [16] M. D. Migliore, “A compressed sensing approach for array diagnosis from a small set of near-field measurements,” IEEE Trans. Antennas Propag., vol. 59, no. 6, pp. 2127-2133, 2011. [17] E. Candes and Y. Plan, “Near-ideal model selection by l1 minimization,” Ann. Statist., vol. 37, pp. 2145-2177, 2009. [18] A. Fletcher, S. Rangan, and V. Goyal, “Necessary and sufficient conditions for sparsity pattern recovery,” IEEE Trans. Inform. Theory, vol. 55, no. 12, pp. 5758-5772, Dec. 2009. [19] M. E. Eltayeb, T. Y. Al-Naffouri and H. R. Bahrami, “Compressive Sensing for Feedback Reduction in MIMO Broadcast Channels,” IEEE Transactions on Communications, vol. 62, no. 9, pp. 3209-3222, Sept. 2014. [20] J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007. [21] K. Chen, W. Wang, X. Chen and H.
Yin, “Deep Learning Based Antenna Array Fault Detection,” in 2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 2019, pp. 1-5. [22] M. H. Nielsen, M. H. Jespersen and M. Shen, “Remote Diagnosis of Fault Element in Active Phased Arrays using Deep Neural Network,” in 2019 27th Telecommunications Forum (TELFOR), Belgrade, Serbia, 2019, pp. 1-4. [23] M. Akdeniz, Y. Liu, M. Samimi, S. Sun, S. Rangan, T. Rappaport, and E. Erkip, “Millimeter wave channel modeling and cellular capacity evaluation,” IEEE J. on Selected Areas in Commun., vol. 32, no. 6, pp. 1164-1179, June 2014. [24] J. Wang, Z. Lan, C. Pyo, T. Baykas, C. Sum, M. Rahman, J. Gao, R. Funada, F. Kojima, H. Harada et al., “Beam codebook based beamforming protocol for multi-Gbps millimeter-wave WPAN systems,” IEEE J. on Select. Areas in Commun., vol. 27, no. 8, pp. 1390-1399, 2009. [25] M. E. Eltayeb, A. Alkhateeb, R. W. Heath Jr., and T. Y. Al-Naffouri, “Opportunistic Beam Training with Hybrid Analog/Digital Codebooks for mmWave Systems,” in IEEE Global Conf. on Signal and Information Processing, Dec. 2015, Orlando, Florida, USA. [26] IEEE, “IEEE802.11-10/0433r2, PHY/MAC Complete Proposal Specification (TGad D0.1),” 2010. [27] S. Hur, T. Kim, D. Love, J. Krogmeier, T. Thomas, A. Ghosh, “Millimeter wave beamforming for wireless backhaul and access in small cell networks,” IEEE Trans. Commun., vol. 61, no. 10, pp. 4391-4403, Oct. 2013. [28] A. Alkhateeb, O. Ayach, G. Leus and R. W. Heath Jr., “Single-sided adaptive estimation of multi-path millimeter wave channels,” in the 15th Int. Workshop on Sig. Proc. Adv. in Wireless Commun., June 2014, pp. 125-129. [29] P. Bahl and V. N. Padmanabhan, “RADAR: an in-building RF-based user location and tracking system,” in Proc. of the IEEE INFOCOM, Mar. 2000, pp. 775-784. [30] A. Alkhateeb, “DeepMIMO: A Generic Deep Learning Dataset for Millimeter Wave and Massive MIMO Applications,” arXiv:1902.06435, Feb. 2019. [31] V. Va, J. Choi, T.
Shimizu, G. Bansal, and R. W. Heath, “Inverse Multipath Fingerprinting for Millimeter Wave V2I Beam Alignment,” in IEEE Transactions on Vehicular Technology, vol. 67, no. 5, pp. 4042-4058, May 2018. [32] F. Rotella and I. Zambettakis, “Block householder transformation for parallel qr factorization,” Applied mathematics letters, vol. 12, no. 4, pp. 29-34, 1999.
On the origin of the magnetic energy in the quiet solar chromosphere Juan Martínez-Sykora^{1,2,3,4}, Viggo H. Hansteen^{3,4}, Boris Gudiksen^{3,4}, Mats Carlsson^{3,4}, Bart De Pontieu^{2,3,4}, and Milan Gošić^{1,2} (juanms@lmsal.com) ^{1} Bay Area Environmental Research Institute, Moffett Field, CA 94035, USA; ^{2} Lockheed Martin Solar and Astrophysics Laboratory, Palo Alto, CA 94304, USA; ^{3} Rosseland Centre for Solar Physics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway; ^{4} Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo, Norway Abstract The presence of magnetic field is crucial for the transport of energy through the solar atmosphere. Recent ground-based and space-borne observations of the quiet Sun have revealed that magnetic field accumulates at photospheric heights, via a local dynamo or from small-scale flux emergence events. However, most of this small-scale magnetic field may not expand into the chromosphere due to the entropy drop with height at the photosphere. Here we present a study that uses a high-resolution 3D radiative MHD simulation of the solar atmosphere with non-grey and non-LTE radiative transfer and thermal conduction along the magnetic field to reveal that: 1) the net magnetic flux from the simulated quiet photosphere is not sufficient to maintain a chromospheric magnetic field (on average), 2) processes in the lower chromosphere, in the region dominated by magneto-acoustic shocks, are able to convert kinetic energy into magnetic energy, 3) the magnetic energy in the chromosphere increases linearly in time until the r.m.s.
of the magnetic field strength saturates at roughly 4 to 30 G (horizontal average) due to conversion from kinetic energy, and 4) the magnetic features formed in the chromosphere are localized to this region. Subject headings: Magnetohydrodynamics (MHD) — Methods: numerical — Radiative transfer — Sun: atmosphere — Sun: corona 1. Introduction The turbulent convection zone of the Sun continuously stretches, bends, and reconnects the magnetic field. This process converts kinetic into magnetic energy and is known as the local dynamo. The convection zone time and length scales increase with depth, so that local dynamo time-scales and growth rates also vary with depth (Nordlund, 2008; Rempel, 2014; Kitiashvili et al., 2015). Consequently, and due to the relatively short timescales at the surface compared to deeper layers, the magnetic energy growth is fastest close to the solar surface. Studying the small-scale dynamo believed to operate in the solar convection zone is inherently difficult (Cattaneo, 1999). The turbulent dynamo relies on the transfer of kinetic energy, in the form of turbulent motions, into magnetic energy (Parker, 1955; Childress & Gilbert, 1995). Identifying a working dynamo is a question of how efficient the amplification of the magnetic field is compared to the efficiency of the cascade of magnetic field down to the resistive scale (Finn & Ott, 1988). Note that any work done on the magnetic field will produce more magnetic energy (for instance, the compression of a uniform magnetic field will locally increase the magnetic energy). In local dynamo theory, one expects the magnetic energy in a closed system, where a magnetic dynamo is at work, to grow initially polynomially, followed by an exponential increase over time, before the magnetic energy finally saturates at the kinetic energy of the system (e.g., Brandenburg & Subramanian, 2005).
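The growth-then-saturation behaviour described above can be illustrated with a toy quenched-dynamo model. This is our own illustrative construction, not an equation from this work: the magnetic energy $E_m$ grows at a kinematic rate $\gamma$ until the Lorentz-force back-reaction quenches the growth as $E_m$ approaches the kinetic energy $E_k$ (all parameter values are arbitrary).

```python
import numpy as np

# Toy quenched-dynamo model (illustrative only, not from this paper):
# exponential kinematic growth at rate gamma, quenched as the magnetic
# energy E_m approaches the kinetic energy E_k of the flow.
def dynamo_growth(E0=1e-6, E_k=1.0, gamma=2.0, dt=1e-3, t_max=20.0):
    n = int(t_max / dt)
    E = np.empty(n)
    E[0] = E0
    for i in range(1, n):
        # dE/dt = gamma * E * (1 - E / E_k), forward-Euler step
        E[i] = E[i-1] + dt * gamma * E[i-1] * (1.0 - E[i-1] / E_k)
    return E

E = dynamo_growth()
# Early on E(t) grows roughly as E0 * exp(gamma * t);
# at late times E saturates near the kinetic energy E_k.
```

The linear (kinematic) regime corresponds to $E_m \ll E_k$, where the quenching factor is close to one; the non-linear regime is the approach to saturation.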
This comes about because at this point the magnetic field is strong enough to hinder the motions of the plasma to a degree where the amplification of the magnetic field decreases. If one looks at a system that is not isolated, the transport of magnetic flux in and out of the system can modify the growth rate. Usually the turbulent dynamo is modelled using numerical simulations based on magnetohydrodynamics (MHD), assuming that a weak initial small-scale magnetic field is present from the onset. In reality, magnetic field is always present. In theory, however, a seed field is not strictly necessary if one takes into account Biermann’s battery effect (Biermann, 1950), i.e., the seed of the magnetic field can be generated by local imbalances in electron pressure (Khomenko et al., 2017); this effect is not included in MHD. While the magnetic energy increases exponentially, it remains smaller than the kinetic energy of the fluid, and the magnetic dynamo is in the linear regime. Once the magnetic energy approaches the kinetic energy of the fluid, the Lorentz force is able to modify the flow of the fluid significantly, quenching the dynamo action, which therefore enters a non-linear regime (see the review by Childress & Gilbert, 1995). Non-linear dynamos are present not only in turbulent flows, but also in more organized flows, which at low Reynolds number can lead to greater growth rates (Alexakis, 2011). For instance, laminar flows can also lead to a non-linear dynamo process (Archontis et al., 2007). These authors also found that vortices can concentrate non-linear dynamo processes. The local dynamo in the convection zone is under vigorous debate. Radiative-MHD simulations performed by Vögler & Schüssler (2007); Rempel (2014); Hotta et al.
(2015) show the existence of a local dynamo, but the magnetic Prandtl number — the ratio between the magnetic ($Re_{m}$) and viscous ($Re$) Reynolds numbers ($Pr=Re_{m}/Re$) — in those numerical simulations is believed to be larger than the one estimated for the solar atmosphere (Archontis et al., 2003; Schekochihin et al., 2004; Brandenburg, 2011, 2014). The efficiency of energy conversion from kinetic into magnetic energy decreases when the magnetic Prandtl number decreases. This dependence differs across various types of flows and turbulence, which may suggest that the local dynamo at the solar surface is minor. Still, high resolution observations show very confined and small-scale magnetic features within the photosphere, contrary to what is found in those simulations. Another consideration to bear in mind is that studies with varying $Pr$ numbers pay the penalty of being confined to extremely low, and perhaps unrealistic, $Re_{m}$ in order to achieve the small Prandtl numbers believed to characterize solar conditions (Brandenburg, 2011). It is thus clear that further studies are required to settle the debate about the presence and nature of the local dynamo in the photosphere. It is of course of great interest to study how the local dynamo impacts the solar atmosphere. In the quiet Sun, there is no observational evidence of magnetic field in the chromosphere resulting from photospheric small-scale flux emergence events (e.g., Martínez González & Bellot Rubio, 2009; Martínez González et al., 2010). These studies used magnetic field extrapolations from photospheric features in a few events and spectroscopic signals of the chromosphere in Ca ii. It is unclear whether the lack of strong observational evidence for the effects of small-scale photospheric fields on the chromosphere is caused by insufficient sensitivity, or because weak fields do not reach greater heights. In the absence of conclusive observations, models can help. Recently, Amari et al.
(2015) studied the impact of the local photospheric dynamo on the corona. They found that the energy flux generated by photospheric motions, including the local dynamo, is large enough to play a major role in heating the corona. However, their model does not include a detailed description of the upper solar atmosphere: the chromosphere is not well resolved and a proper radiative transfer treatment of the atmosphere is lacking. An accurate treatment of the radiative losses in the photosphere and chromosphere will impact the Poynting flux through the solar atmosphere. Here we study the impact of the photospheric quiet-Sun magnetic field on the chromosphere using a state-of-the-art numerical model that incorporates many physical processes that have previously been ignored. We will show in this work how magnetic field rising into the chromosphere is by itself not able to maintain the field strength there. The consequence must be that an amplification of the magnetic field occurs in the chromosphere itself. Our model reveals that the kinetic energy in the chromosphere can produce magnetic field in-situ. We first provide a short description of the radiative-MHD code and the numerical model. The results are discussed in the next section, with the analysis divided between the magnetic and kinetic energy in the chromosphere (Section 3.1), the Poynting flux (Section 3.2), magnetic field structures in the chromosphere (Section 3.3) and reconnection (Section 3.4). We end this manuscript with a conclusion and discussion section. 2. Numerical model We use a 3D radiative-MHD numerical simulation computed with the Bifrost code (Gudiksen et al., 2011). The model includes radiative transfer with scattering in the photosphere and lower chromosphere (Skartlien, 2000; Hayek et al., 2010).
In the middle and upper chromosphere, radiation from specific species such as hydrogen, calcium and magnesium is computed following the recipes of Carlsson & Leenaarts (2012), while optically thin radiative losses are used in the transition region and corona. Thermal conduction along the magnetic field, important for the energetics of the transition region and corona, is solved following the formulation of Rempel (2017). Initially, the simulation spans only a vertical domain stretching from 2.5 Mm below the photosphere up to 0.7 Mm above it. The photosphere is located at $z\sim 0$ (where $\tau_{500}=1$). The horizontal domain spans $6\times 6$ Mm in the $x$ and $y$ directions with 5 km resolution; the simulation thus includes on the order of $\sim 4$ granular cells along each axis. Initially, the simulation box is seeded with a uniform weak vertical magnetic field of 2.5 G. The model is run for 51 minutes, until the magnetic field complexity and strength reach a statistically steady state with $B_{\rm rms}=56$ G at photospheric heights ($z=0$ Mm). The magnetic energy increases in the convection zone and photosphere due to the workings of the convective motion (similar to that described by Vögler & Schüssler, 2007; Rempel, 2014; Cameron & Schüssler, 2015). At this point in time we extend the outer solar atmosphere up to $z=8$ Mm. The vertical $z$-axis is non-uniform, with the smallest grid size in regions of high vertical gradients, i.e., in the photosphere, chromosphere and transition region ($dz=4$ km), and larger spacing in the corona and in the convection zone. The density and temperature structure of the outer atmosphere was originally set using the horizontal mean density and temperature stratification found in the pre-existing so-called “non-GOL” model described in Martínez-Sykora et al. (2017a), while the velocities were initialized to zero.
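The quoted diagnostic $B_{\rm rms}$ is simply the root-mean-square field strength over a horizontal plane. A minimal sketch, using synthetic random values rather than Bifrost output (the grid size and field amplitudes here are illustrative assumptions, not the model's):

```python
import numpy as np

# Sketch: r.m.s. field strength on a horizontal plane, as quoted for the
# photosphere (B_rms = 56 G at z = 0). Synthetic data, illustration only.
rng = np.random.default_rng(0)
nx = ny = 64                                     # stand-in for the 6x6 Mm plane
Bx, By, Bz = rng.normal(0.0, 30.0, (3, nx, ny))  # Gauss, arbitrary amplitude

B_rms = np.sqrt(np.mean(Bx**2 + By**2 + Bz**2))  # r.m.s. over the plane
```

On real simulation output the same one-liner would be applied to the three field components interpolated to a constant-$z$ slice.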
Thanks to the strong density increase into the convection zone, the transients generated by attaching an outer atmosphere do not impact deeper layers and mostly propagate outwards (see Figure 1 and Movie 1). The total energy of these transients is also negligible compared to the steady-state situation that develops in the chromosphere (see also Section 3.1.1). In other words, the initial stratification of the outer modeled atmosphere did not impact the steady-state solution found at later times. The magnetic field in the extended domain was computed using a potential field extrapolation, starting at the height where the lowest magnetic energy in the initial model was found ($z\sim 0.42$ Mm). The initial magnetic field in the chromosphere is therefore very weak (2.5 G) and nearly vertical (see Figure 1 and Movie 1). The horizontal boundaries are periodic. The top and bottom boundaries are open, allowing shocks and flows to pass through. At the bottom boundary, the entropy is set so as to maintain the convective motion with an effective temperature of $\sim 5750$ K. In this model we do not inject new magnetic flux but let the magnetic field exit the simulation at the top and bottom boundaries. 3. Results We find that in this high-resolution quiet Sun model the chromospheric magnetic energy content increases substantially with time. This could in principle be caused by 1) small-scale magnetic flux emergence, 2) diffusion of magnetic field through the atmosphere, and/or 3) magnetic energy growing in place, fed by the dynamics of the chromosphere. In the remainder of this manuscript we will show that the increase of magnetic energy is the result of the latter scenario. In the following we describe the evolution of the magnetic field and kinetic energy of the plasma and their propagation through the atmosphere. In the later part of this section we will focus on the magnetic structures that appear, their evolution within the chromosphere itself, and reconnection.
Before describing our findings in detail we list, in Table 1, the heights used to separate the various layers of the atmosphere. These cuts are based on properties of our simulation and not on observational properties. 3.1. Growth of the chromospheric magnetic energy At the start of the simulation the kinetic energy of the chromosphere increases as acoustic waves from the photosphere propagate into the initial atmosphere. The magnetic energy in the modeled chromosphere starts to increase after only a few minutes of the initial start time ($t=50$ min; note that initially we started with a model that spanned only up to 0.7 Mm above the photosphere, and it took $\sim 50$ min for the convective motion to build up the magnetic field in the photosphere, at which point we expanded the domain to include the chromosphere and corona). Figure 2 shows the mean in horizontal planes of the magnetic (top) and kinetic energies as a function of height and time. For the first fifteen minutes ($t<65$ min), the magnetic energy increases throughout the chromosphere with a local maximum centered at $z=0.8$ Mm and a local minimum near $z\approx 0.52$ Mm. At $z=0.8$ Mm, the magnetic energy first increases once the mean in horizontal planes of the magnetic energy reaches $\sim 1$ erg cm${}^{-3}$. At later times ($t>65$ min), the local minimum in the magnetic energy found at $z\approx 0.52$ Mm disappears and reaches values that are at least as large as, or even greater than, those found just above. Previous simulations of the local dynamo in the convection zone, such as those carried out by Vögler & Schüssler (2007) and Rempel (2014), and this simulation at early times ($t<50$ min, before we added the outer chromosphere and low corona), show a sharp decrease in magnetic energy with height around $z\approx 0.3$ Mm (see blue lines in the top panel of Figure 2). This may be due to the location of the open upper boundary in these models.
When the boundary is placed at greater heights we find a much higher value of the magnetic field strength at this height (yellow-red lines in the top panel of Figure 2). We carry out our analysis by subtracting the advected magnetic flux, but also, alternately, the full Poynting flux. Figure 3 shows, as a function of time, the mean magnetic energy within the domain $z=[0.52,1.5]$ Mm. In order to quantify the magnetic energy built in-situ in the chromosphere, we have removed any incoming or outgoing magnetic flux or Poynting flux at the boundaries of this domain, integrated in time, i.e., by subtracting $P_{a}\equiv\int_{t_{0}}^{t_{1}}B_{h}^{2}\,u_{z}\,\Delta t$ for the magnetic flux, or, for the Poynting flux, $P_{t}\equiv\int_{t_{0}}^{t_{1}}[B_{h}^{2}\,u_{z}-B_{z}\,(u_{x}\,B_{x}+u_{y}\,B_{y})]\,\Delta t$ at the boundaries, where $B_{x}$, $B_{y}$, and $B_{z}$ are the components of the magnetic field, $u_{x}$, $u_{y}$, and $u_{z}$ are the components of the velocity field, and $B_{h}$ is the horizontal magnetic field at the boundaries of this selected domain. The resulting evolution of the mean magnetic energy within the domain is shown with blue and black lines, respectively. Note that both methods of analysis give qualitatively the same result. In both cases, the magnetic energy increases linearly in time until reaching a steady state some 16 minutes later. Therefore, the linear increase of the magnetic energy seen in the chromosphere (blue and black lines in Figure 3) is not due to flux transported into the chromosphere, generated by conversion of kinetic into magnetic energy by the convective motion, nor due to the Poynting flux through the upper and lower boundaries of the chromosphere. The second term of the Poynting flux ($B_{z}\,(u_{x}\,B_{x}+u_{y}\,B_{y})$), associated with horizontal motions shearing the vertical field, contributes to the stretching of the magnetic field, one of the processes needed to increase the magnetic energy through the dynamics. This mechanism converts kinetic energy into magnetic energy.
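The two subtracted quantities can be evaluated point-by-point on a boundary plane. A minimal sketch (the function names are ours; the $1/4\pi$ cgs prefactor is omitted, matching the normalization of the expressions in the text):

```python
import numpy as np

# Sketch of the two vertical flux terms subtracted in the energy budget.
# Inputs may be scalars or numpy arrays sampled on a constant-z plane.
def advective_flux(Bx, By, uz):
    """B_h^2 u_z: vertical advection of horizontal-field energy."""
    return (Bx**2 + By**2) * uz

def total_poynting_flux(Bx, By, Bz, ux, uy, uz):
    """B_h^2 u_z - B_z (u_x B_x + u_y B_y): advection minus the shear term."""
    return (Bx**2 + By**2) * uz - Bz * (ux * Bx + uy * By)
```

The time integrals $P_a$ and $P_t$ would then be accumulated by summing these planes, multiplied by the snapshot cadence $\Delta t$, at the $z=0.52$ and $z=1.5$ Mm boundaries.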
However, as mentioned in the introduction, the local dynamo must occur in a closed and self-contained system, i.e., one with no energy flux at the boundaries. Such a requirement is not achievable in the solar atmosphere: the Sun is highly connected throughout its atmosphere, including the lower chromosphere. Therefore, the convection zone will disturb the magnetic field configuration in the layers above. Consequently, this cannot be considered a closed and self-contained system such as that covered by local dynamo theory. Furthermore, since the magnetic energy growth is linear and not exponential, this field amplification does not correspond to a fast dynamo process (Finn & Ott, 1988; Cattaneo, 1999). In an attempt to isolate the magnetic energy growth process in the chromosphere, as mentioned, we include the magnetic energy where we have subtracted the Poynting flux at the boundaries of this selected domain (black line in Figure 3). Note that in that case the increase is even faster. This has to do with the fact that the initial vertical magnetic field is negative. In the opposite scenario, i.e., assuming that on average the sign of the vertical magnetic field is reversed, the Poynting flux due to horizontal motions ($-B_{z}(u_{x}B_{x}+u_{y}B_{y})$) will become negative. In that case, the magnetic energy increase will be smaller than when subtracting the magnetic flux (blue line in Figure 3). The magnetic energy increase in the lower chromosphere comes neither from the emergence of photospheric field nor from diffusion, but instead represents the growth of field in-situ. First, as mentioned above, Figure 3 already takes into account any incoming or outgoing magnetic flux. Second, magnetic diffusion cannot be responsible, as the magnetic energy at $z\approx 0.8$ Mm is larger than at the neighboring heights during the first 15 min. In fact, magnetic energy is transported away from the chromosphere while the magnetic energy in the chromosphere is still increasing (see Section 3.2).
It is interesting that the lower-chromospheric magnetic energy increases in-situ (Figures 2 and 3), since the lower chromosphere is not, in principle, dominated by the same physical processes as those governing the convection zone. The dynamics in the lower chromosphere are dominated by shocks driven by the convective motions occurring below. These shocks have many sources and propagate in many directions, as seen in Figure 4 and the corresponding movie. Both the figure and the movie show the temperature, horizontal velocity, and vertical and horizontal magnetic field maps at $z=0.8$ Mm. The equipartition between magnetic and kinetic energy is shown with black contours, i.e., locations where $E_{k}=E_{m}$. At this height, most of the time and almost everywhere, plasma $\beta$ is higher than 1, where plasma $\beta$ is the ratio of the gas pressure to the magnetic pressure. The magnetic field is compressed and advected with the shock fronts. In the movie one can also see how both the horizontal and vertical magnetic fields increase with time. The magnetic field orientation becomes horizontal more rapidly in the lower chromosphere than in the layers above and below. Note that initially the model contained a weak vertical field configuration. Figure 5 illustrates the magnetic field orientation ($<|B_{h}|/|B|>$) as a function of height (horizontal axis) and time (color scheme). It is interesting to see that during the first 10 to 15 minutes the field is more horizontal in the same height range within the chromosphere where the magnetic field grows fastest. When the simulation reaches a steady state, the magnetic field has almost the same orientation from the upper photosphere ($z=0.3$ Mm) to the lower chromosphere ($z\sim 0.9$ Mm), i.e., it is highly horizontal in this height range. This is evidence that the potential field extrapolation is far from the final state, in which the field is largely horizontal (see also Abbett, 2007).
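The two diagnostics used in Figure 4 follow from standard definitions. A minimal sketch in cgs units (the function names and values are ours, for illustration): plasma $\beta = p_{\rm gas}/(B^2/8\pi)$, and the equipartition contour is the set of points where $E_k = \frac{1}{2}\rho u^2$ equals $E_m = B^2/8\pi$, i.e., where the flow speed equals the Alfvén speed $B/\sqrt{4\pi\rho}$.

```python
import numpy as np

# Sketch (cgs units): plasma beta and the kinetic/magnetic energy densities
# used for the equipartition (E_k = E_m) contours at z = 0.8 Mm.
def plasma_beta(p_gas, B):
    """Ratio of gas pressure to magnetic pressure B^2 / (8 pi)."""
    return p_gas / (B**2 / (8.0 * np.pi))

def energy_densities(rho, u, B):
    E_k = 0.5 * rho * u**2        # kinetic energy density
    E_m = B**2 / (8.0 * np.pi)    # magnetic energy density
    return E_k, E_m
```

Equipartition holds exactly when $u = B/\sqrt{4\pi\rho}$, the Alfvén speed.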
This seems to be due to: 1) the conversion of kinetic energy into magnetic energy, which mixes the direction of the magnetic field, 2) the fact that the upper photosphere is super-adiabatic, so plasma tends to expand horizontally, stretching the magnetic field, and 3) shocks stretching the magnetic field, as shown in Section 3.3. Note that these three processes are highly connected. As mentioned above, the magnetic energy increase in the chromosphere is not exponential, but linear in time. This is due to the fact that the lower chromosphere is not a turbulent plasma, as detailed in the following section. Note also that the simulation reaches a steady state with a more or less constant magnetic field energy faster (15 minutes) than in the photosphere, and much faster than in deeper layers (compare with Vögler & Schüssler, 2007; Rempel, 2014). 3.1.1 Kinetic energy The increase of the magnetic energy in the lower chromosphere shown in Figure 3 is due to a transfer of kinetic energy. The initial chromospheric velocity field was set to zero, and thus the simulation started from a static atmosphere. Very rapidly, from the beginning of the simulation, photospheric convective motions drive a kinetic energy increase in the chromosphere. A steady state is reached in the first couple of minutes, as shown in the two bottom panels of Figure 2. The mean kinetic energy of the initial transients in the chromosphere is low enough, compared to the steady state reached later in time, to be excluded as a source of the increase in energy; that source is rather the continual pumping of acoustic energy from the photosphere. Consequently, the steady-state simulated atmosphere is not a result of our initial conditions but rather of the self-consistent physical processes happening in this region. The kinetic energy within the lower chromosphere is of the same magnitude as the magnetic energy, which suggests that the kinetic energy is responsible for the growth of magnetic energy.
The fact that the kinetic and magnetic energies reach similar magnitudes within the lower chromosphere in such a short period of time ($\sim 15$ minutes) means that the conversion from kinetic to magnetic energy is rather efficient. The kinetic energy decreases more slowly with height in the lower chromosphere ($z\sim 0.8$ Mm) than it does in the upper photosphere ($z\sim 0.5$ Mm), i.e., the gradient of the kinetic energy is smaller in the lower chromosphere than in the upper photosphere. In fact, in the upper photosphere the kinetic energy per gram (bottom panel) drops as a function of height and reaches a local minimum at $z\approx 0.5$ Mm. Above that height, the kinetic energy per gram increases rapidly from $z=0.5$ to $z=0.8$ Mm, after the simulation has reached a steady state. In the middle and upper chromosphere the kinetic energy per gram keeps increasing with height, which is due to the density drop and to flows moving along the magnetic field lines. Figure 6 shows the normalized power spectrum of the kinetic energy at three different heights. Note that none of these follow a power-law slope. Consequently, the convective motion in the photosphere and the shocks in the chromosphere are not, properly speaking, turbulent flows. Instead these flows have preferential scales set by the dominant processes at each location, and those processes are very different: in the convection zone slow convective up-flows carry entropy upwards while faster down-flow lanes are the sites of falling cool gas; in the photosphere convective motions and radiative losses dominate; while in the lower chromosphere propagating shocks are the main source of dynamic motions.
Therefore, at intergranular length scales ($\sim 100$ km, $k\approx 10$ Mm${}^{-1}$) the normalized power spectrum of the kinetic energy is greater in the convection zone ($z=-1$ Mm) and in the photosphere ($z=0$) than in the lower chromosphere ($z=0.8$ Mm). This may be because intergranular lane structures are not present in the lower chromosphere. The main contribution from photospheric granular scales present in the chromosphere comes from the wave patterns generated via photospheric motions (p-modes). Because of the density drop with height, these waves become shocks in the chromosphere. Since these shocks often collide, the shock pattern in the chromosphere reveals extremely thin elongated structures, which are much thinner (a few km) than photospheric structures such as the intergranular lanes ($\sim 100$ km). The colliding shocks produce very narrow structures, and this may explain why the relative power spectrum of the kinetic energy is greater at very small scales ($\sim 10$ km) in the lower chromosphere than at lower heights. In the chromosphere, the magnetic field is strongest behind shock fronts or where different shock fronts have previously collided, both regions containing relatively mild flows. The regions with strong magnetic field are thus not at the location of shocks, nor in the regions with the fastest flows. Though it is a subtle difference, one may be able to see this in Movie 2. In addition, we find no correlation between the magnetic field strength and the velocity (not shown here). We also investigate shear, stretching, and compression of the plasma in the lower chromosphere. Figure 7 shows maps of the magnetic field strength as well as the divergence, shear and rotation of the plasma flow (see the corresponding Movie 3). These components of the velocity gradient (panels B-D) nicely show the locations of shocks. In the shock fronts, and where the shocks collide, there are large compression, shear and rotational motions.
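The three flow diagnostics mapped in Figure 7 can be formed from finite differences of a horizontal velocity slice. A minimal 2D sketch (the function and the default grid spacing are our own illustrative choices; the model's horizontal resolution is 5 km):

```python
import numpy as np

# Sketch: decompose a 2D horizontal velocity slice into divergence,
# rotation (vorticity) and shear, the quantities mapped at z = 0.8 Mm.
def velocity_gradients(ux, uy, dx=5e5):       # dx in cm (5 km), illustrative
    dux_dx = np.gradient(ux, dx, axis=0)
    dux_dy = np.gradient(ux, dx, axis=1)
    duy_dx = np.gradient(uy, dx, axis=0)
    duy_dy = np.gradient(uy, dx, axis=1)
    divergence = dux_dx + duy_dy              # compression at shock fronts
    rotation   = duy_dx - dux_dy              # z-component of the curl
    shear      = dux_dy + duy_dx              # symmetric off-diagonal part
    return divergence, rotation, shear
```

A quick sanity check: solid-body rotation ($u_x=-y$, $u_y=x$) has zero divergence and shear and uniform vorticity of 2.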
Close visual inspection of Movie 3 shows that the magnetic field increases after a shock passes through. This is in agreement with the 2D histograms of the magnetic field versus the divergence, shear and rotation of the velocity shown in Figure 8. Because the magnetic field strength increases behind the shocks, these 2D histograms show that regions with large magnetic field strength are located where the velocity gradients are low. It is also very important to note that the various components of the velocity gradient show a strong increase in the lower chromosphere, as shown in Figure 9, suggesting that these gradients are stretching, twisting, and folding the magnetic field lines in the chromosphere. In other words, the magnetic field increases because the velocity field changes spatially. In this figure we have added the gradient of the velocity perpendicular to the magnetic field (red). This term is directly linked to the process that bends and stretches the magnetic field, i.e., variations of the velocity parallel to the magnetic field will not bend or stretch the field. 3.2. Poynting flux through the atmosphere Let us consider the transport of magnetic energy through the solar atmosphere. It is well known that in quiet Sun regions the Poynting flux in the convection zone is negative, i.e., magnetic field is being carried downwards from the photosphere through the convection zone (Nordlund, 2008). Less clear is what happens within the chromosphere and how magnetic field is transported there. The numerical model of Abbett & Fisher (2012) shows that the Poynting flux changes sign in the photosphere and becomes positive, while in the upper photosphere it drops to almost zero. Note, however, that the treatment of radiative losses in their model is rather simplified.
Figure 10 shows the total Poynting flux (in black; $P_{t}=\int_{t_{0}}^{t_{1}}[B_{h}^{2}\,u_{z}-B_{z}\,(u_{x}\,B_{x}+u_{y}\,B_{y})]\,\Delta t$) and its advective component only ($P_{a}=\int_{t_{0}}^{t_{1}}B_{h}^{2}\,u_{z}\,\Delta t$) as a function of height, averaged over the horizontal plane and over 4 minutes in time. The shock-dominated nature of the chromosphere, induced by the p-mode waves from the photosphere, makes the time average necessary, as these fluxes vary strongly in time. The 4 minute time average is taken from $t=66$ min, where the chromospheric field has reached a steady state. This can be clearly seen in Movie 4, which shows the same lines as Figure 10 as they evolve in time. The figure shows that the advective component (blue) is on average negative up to $z\approx 0.2$ Mm. Likewise, the total Poynting flux $P_{t}$ is negative in the photosphere. Above this point the advective component $P_{a}$ becomes positive, i.e., magnetic field is emerging, and increases in amplitude up to $z=0.5$ Mm. This region is where the overshoot of photospheric granular motions ends and where plasma motions become shock dominated. At greater heights, the advective component $P_{a}$ is negative and very close to zero within $z\approx[0.55,0.75]$ Mm. The total Poynting flux $P_{t}$ in this region also goes to zero. We find that the net upper-photospheric magnetic field is not expanding to higher layers. In fact, on average a small amount of magnetic field is transported from the lower chromosphere ($z\approx[0.55,0.75]$ Mm) to deeper layers. Note that, as mentioned in the previous section, the averaged magnetic field is negative; in a more mixed-polarity scenario, i.e., $<B_{z}>\approx 0$, the total Poynting flux as a function of height (black line) would be dominated by the advection of magnetic field, and closer to the blue line.
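The split between the total flux and its advective part follows from expanding the vertical component of the ideal-MHD Poynting flux. A short derivation, with the $c$ and $1/4\pi$ factors absorbed into the normalization as in the expressions above:

```latex
\begin{align*}
\vec{E} &= -\vec{u}\times\vec{B} \quad\text{(ideal MHD)}\\
S_z = (\vec{E}\times\vec{B})_z
  &= E_x B_y - E_y B_x\\
  &= (u_z B_y - u_y B_z)\,B_y - (u_x B_z - u_z B_x)\,B_x\\
  &= \underbrace{(B_x^2 + B_y^2)\,u_z}_{\text{advection, } P_a}
     \;-\; \underbrace{B_z\,(u_x B_x + u_y B_y)}_{\text{shear term}}
\end{align*}
```

The first term transports horizontal-field energy vertically with the flow; the second is the work done by horizontal motions shearing the vertical field.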
In Movie 4, the advected magnetic flux ($P_{a}$) within $z\approx[0.55,0.75]$ Mm is seen to increase from relatively large negative values (at $t\approx 56$ min) to nearly zero towards the end of the simulation. Consequently, during this period, at heights of $z\approx 0.5$ Mm the magnetic field grows due to advected flux coming from both above and below. Above $z=0.75$ Mm, $P_{a}$ is positive and the magnetic field expands towards the corona. By comparing Figure 3 and Figure 10 one can see how some of the magnetic energy generated within the lower chromosphere is both expelled to lower layers ($z\sim 0.5$ Mm) and transported to greater heights ($z>0.8$ Mm). This confirms that the lower chromosphere is continuously converting kinetic energy into magnetic energy. In the simulation, the net advective component of the Poynting flux, $P_{a}$, is downwards in the upper photosphere due to the entropy drop and the turnover of the convective motions. In the photosphere, the plasma is super-adiabatic; consequently, most of the plasma is advected (together with the magnetic field) into the downflows. Therefore, the magnetic field must be quite buoyant to cross this region (Acheson, 1979; Archontis et al., 2004). Despite this, a small fraction of the magnetic field does reach the chromosphere (as seen in the red line in Figure 10), and there are emerging events that reach the chromosphere. However, the net transport of magnetic field is downward. (Investigating the dynamics and evolution of the photospheric magnetic field that does reach the chromosphere requires a separate study and is outside the scope of this work. This component of the field is not important in the simulation described here.) 3.3. Field structure An important aspect of this study lies in revealing the magnetic field structure and its evolution within the lower atmosphere. We focus on how chromospheric dynamics are able to stretch, twist, fold, and reconnect the magnetic field.
In this section we describe the different types of magnetic structures formed in-situ in the chromosphere as a result of these actions. 3.3.1 Photospheric field structures Before describing the structures in the chromosphere, let us for comparison give a very brief description of the structures found in the photosphere. We find that our modeled photosphere is filled with thin emerging flux-tube loops that cover only portions of the granules, like the example shown in the left panel of Figure 11. We also find examples of weak flux sheets that cover entire granular cells, e.g., as shown in the middle panel of Figure 11. Such structures have been analyzed in depth by Moreno-Insertis et al. (2018) and observed by De Pontieu (2002) and Centeno et al. (2017). An example of a third type of field structure that often appears in the photosphere is the magnetic field canopy that expands from the photosphere to greater heights (seen in the right panel of Figure 11; see also Sainz Dalda et al., 2012; de la Cruz Rodríguez et al., 2013). Since this numerical model has very weak large-scale connectivity, the magnetic field in the canopies is rooted in intergranular lanes and spreads out strongly in the upper photosphere and lower chromosphere instead of reaching all the way to the corona. The example shown in Figure 11 reaches a height of $z=0.5$ Mm. Aside from spatial resolution, the main differences between the model shown here and the two models described in Moreno-Insertis et al. (2018) and Sainz Dalda et al. (2012) are that the model in the current paper does not include magnetic flux injected through the bottom boundary and does not have an initially strong magnetic field configuration. Instead, the magnetic field has been built up in the photosphere, starting from a very weak vertical field, through the conversion of kinetic energy from the convective motions into magnetic energy. 
In addition, due to the high spatial resolution and the resulting small-scale structures found in the downflows, we find even thinner flux tubes than in the previous models. Another difference is that many of the flux sheets and tubes in our model have highly curved and reconnected field, i.e., the magnetic field lines have a highly complex connectivity within the photosphere. 3.3.2 Chromospheric field structures Consider now the chromosphere. The first thing one notices is that the magnetic field structure there is very different from the photospheric structures described above. The magnetic field connectivity between photospheric and chromospheric structures is very complex and difficult to trace directly (see also Abbett, 2007). Chromospheric magnetic features can remain in the chromosphere without any clear connection to the photosphere at all: a field line typically covers distances of many megameters at chromospheric heights before descending (or ascending) and connecting to the photospheric (or coronal) field. Let us now turn to the structures found in the lower chromosphere. Two types of structures are found to dominate the very complex field configurations associated with the local growth of chromospheric magnetic energy. Both cases share some properties: at one or both ends, the magnetic field lines defining the structures expand like the petals of a flower, i.e., the field lines spread out in a plane or cup-shaped envelope. One or both ends of the structure can go to greater or lower heights. Both types of structure are formed in-situ in the chromosphere. Finally, they have in common that they are strongly confined along at least one axis, i.e., the structures appear as thin (elongated or short) sheets or thin flux tubes. The widths can be as small as a few tens of kilometers. 
The first type of structure, usually stronger and reaching greater heights within the chromosphere than the other, is oriented in the vertical direction with field lines localized as in a flux tube. These structures are usually found behind shock fronts or where different shock fronts have collided. One example can be seen in the top panels of Figure 12 and Movie 5. The structure shown reaches field strengths of up to $\sim 65$ G and heights of up to $1.5$ Mm before weakening due to expansion. The structures of this type that we have managed to follow have lifetimes from a few tens of seconds up to a few minutes and travel horizontally through the chromosphere at apparent speeds between 0 and $4$ km s${}^{-1}$. In order to measure the apparent horizontal velocities of the vertical magnetic field structures visible at chromospheric heights, we applied Local Correlation Tracking (LCT; November & Simon, 1988). LCT is a technique that correlates small regions in two consecutive images to determine displacement vectors; these regions are defined by a Gaussian tracking window whose full width at half maximum (FWHM) is large enough to contain significant structures to be tracked. Here we use a tracking window with FWHM $=0.6$ Mm, and, to make a smooth transition between images, a moving average over three consecutive frames with a cadence of 10 seconds. The apparent speeds of the magnetic field structures are roughly similar to the real plasma flows. The second type of structure is a thin sheet lying almost parallel to the surface in the lower chromosphere ($z\approx 0.6-0.8$ Mm), as shown in the bottom row of Figure 12 and Movie 6. Note that these sheets are very different from the photospheric flux sheets described at the beginning of this section (Moreno-Insertis et al., 2018). These structures form in-situ and not through flux emergence and compression in the overshoot layer. In addition, the structure’s scale size and shape are not associated with granular cells. 
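To make the LCT step concrete, the following minimal sketch finds, for a single Gaussian-windowed patch, the displacement that maximizes the cross-correlation between two consecutive frames. It is an illustration of the idea only; the function name and the brute-force search are assumptions, not the implementation of November & Simon (1988):

```python
import numpy as np

def lct_displacement(frame1, frame2, x0, y0, fwhm_px, max_shift=5):
    """Displacement (dx, dy) of the patch around (x0, y0) between two frames.

    A Gaussian window of the given FWHM (in pixels) selects the local region;
    the integer shift maximizing the windowed cross-correlation is returned.
    """
    sigma = fwhm_px / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ny, nx = frame1.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    w = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # shift frame2 back by the trial displacement and correlate
            shifted = np.roll(np.roll(frame2, -dy, axis=0), -dx, axis=1)
            c = np.sum(w * frame1 * shifted)
            if c > best:
                best, best_shift = c, (dx, dy)
    return best_shift
```

A production implementation would subtract local means, interpolate to sub-pixel accuracy, and evaluate the window on a grid of tracking points; the sketch above keeps only the core correlation-maximization step.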
These structures are flat, with horizontal widths of several hundred kilometers, occurring over a narrow height range (a few tens of km), and as long as a couple of granular cells. The examples of such sheets that we have analyzed are seen to be created as a consequence of interactions between the tube-like structures described previously and horizontally traveling shocks. In this case the magnetic field is strongly stretched, deforming tubes into sheet-like structures. These can live up to $\sim 15$ minutes. They weaken towards the end of their lifetime due to reconnection and plasma expansion. The sheet-like structures can travel at horizontal speeds $\leq 2$ km s${}^{-1}$ and can also be moved to deeper layers, where reconnection occurs more easily with photospheric field lines. The example shown in the lower panels of Figure 12 reaches magnetic field strengths of up to $65$ G and, towards the end of its life, is found between $z=0.5$ and $0.65$ Mm above the surface. As in the first case, the connectivity to the photosphere is broadly spread, or the structure might not be properly connected to the lower layers at all. Consequently, both ends of the field structure spread over many different locations. In this particular case, one end is in the photosphere, straddling a granule, while the other end spreads over a wide region in the corona. The flux tubes described above are formed by passing shock fronts or by collisions between shocks. Consequently, they are not connected to any single location in the photosphere; instead they spread over a large area, into many complex photospheric flux tubes and sheets. Sometimes these structures travel in a different direction than the patterns in the photosphere below them. 3.4. Reconnection The complexity of the magnetic field in the chromosphere does not allow us to discern simple magnetic reconnection events. Figure 13 shows an example of reconnection in the chromosphere. 
This magnetic feature corresponds to the same feature shown in the top panels of Figure 12. The magnetic field lines of the feature are folded and reconnected with the same feature within the chromosphere. This happens continuously in the proximity of shock fronts and in the vicinity of the two types of structures described in the previous section. In order to estimate the amount of reconnection within the 3D numerical domain we measure the Joule heating per particle, the ratio of the current to the magnetic field strength ($|J|/|B|$), and the current parallel to the magnetic field. All three parameters are indicative of magnetic reconnection and qualitatively lead to the same conclusions in our simulation, as detailed below. We find that the magnetic field reconnects more often in the lower chromosphere than in its neighboring layers in the simulated atmosphere. The top panel of Figure 14 shows the horizontal mean of the Joule heating per particle as a function of height and time. The Joule heating per particle drops in the upper photosphere and shows a drastic increase with height in the lower chromosphere. Note the similarities, as a function of height within the lower atmosphere, between the Joule heating per particle, the kinetic energy per particle (Figure 2), and the various components of the velocity gradient (Figure 9). This shows the strong relationship between these three quantities in the lower chromosphere. Even though both the current and the magnetic field strength decay with height, in the lower chromosphere the ratio of the current to the magnetic field strength (middle panel of Figure 14) has a local maximum, which is indicative of the large number of reconnection events there. It is also remarkable that in the upper photosphere (0.3-0.5 Mm) the heating per particle and $|J|/|B|$ are relatively large compared to neighboring layers. This suggests a higher reconnection rate. 
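As an illustration of how such proxies can be evaluated, the sketch below computes $|J|/|B|$ and the field-aligned current from a snapshot of $\vec{B}$ by finite differences. It is a minimal, assumption-laden example (uniform grid, overall normalization constants dropped), not the diagnostics code used for Figure 14:

```python
import numpy as np

def reconnection_proxies(B, dx):
    """Proxies |J|/|B| and J_parallel from B of shape (3, nx, ny, nz).

    J is taken proportional to curl(B); the constant prefactor (e.g. c/4pi)
    is dropped since only the relative variation with height matters here.
    Assumes a uniform grid spacing dx in all three directions.
    """
    gx = lambda f: np.gradient(f, dx, axis=0)
    gy = lambda f: np.gradient(f, dx, axis=1)
    gz = lambda f: np.gradient(f, dx, axis=2)
    # components of curl(B)
    Jx = gy(B[2]) - gz(B[1])
    Jy = gz(B[0]) - gx(B[2])
    Jz = gx(B[1]) - gy(B[0])
    J = np.stack([Jx, Jy, Jz])
    Jmag = np.sqrt((J ** 2).sum(axis=0))
    Bmag = np.sqrt((B ** 2).sum(axis=0))
    eps = 1e-30  # guard against division by zero in field-free cells
    return Jmag / (Bmag + eps), (J * B).sum(axis=0) / (Bmag + eps)
```

Horizontally averaging the two returned arrays then yields height profiles analogous to the middle and bottom panels of Figure 14.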
The current parallel to the magnetic field (bottom panel), in agreement with the other two measures described above, shows lower values in the upper photosphere and a local peak in the lower chromosphere. Reconnection is thus one of the processes responsible for the lack of connectivity between the photosphere and the chromosphere. Since the reconnection occurs in a high plasma-$\beta$ regime, i.e., where gas pressure dominates, the reconnection processes in the lower chromosphere and upper photosphere do not show clear reconnection flows. Instead, one can see in Movie 3 that the lower chromosphere is dominated by magneto-acoustic shocks. 4. Discussion and conclusions We have performed a 3D radiative MHD numerical simulation spanning the upper convection zone to the lower corona. The model has high spatial resolution, a weak initial magnetic field configuration, and does not include the introduction of new magnetic flux through the boundaries. As a result this experiment allows us to study in detail the impact of the quiet-Sun photospheric magnetic field on the chromosphere, as well as the growth of chromospheric magnetic energy, in a self-consistent simulated solar atmosphere. However, given the very specific setup of the simulation, one must be aware that the scenario modeled here is rather simplified: the unsigned magnetic field strength in this model resembles that of a very quiet internetwork region. Further comparisons with photospheric observations are needed to establish how well this model represents the solar atmosphere. Many studies have shown that large-scale connectivity plays a key role in the energy transfer (e.g., Gudiksen & Nordlund, 2005; Hansteen et al., 2007; Peter et al., 2004). Note that our simulation aims to represent a small region of internetwork field. 
However, despite the simplified scenario studied here, we can address how a photospheric field impacts the atmosphere and how kinetic energy is converted into magnetic energy in a chromosphere with internetwork fields. The mean magnetic flux from the photosphere cannot maintain the amount of magnetic energy in the chromosphere. From previous studies, it is well known that the Poynting flux in the convection zone is negative (e.g., Nordlund, 2008). However, it has been less clear what happens at greater heights. The treatment of the radiative transfer in the lower atmosphere is crucial for this (compare our results with those of Amari et al., 2015, which do not include a full treatment of the radiative losses and do not have a self-consistently heated solar atmosphere). Our results reveal that the magnetic field generated by kinetic motions acting in the upper convection zone and photosphere does not, on average, reach the chromosphere. It is true that some magnetic flux reaches the chromosphere. However, due to the turnover of the convective motions, the super-adiabaticity, and the strong downflows in the photosphere, the amount of magnetic flux that is removed from the chromosphere into deeper layers is larger than the amount that reaches the chromosphere from below. Our numerical experiment shows a mechanism that converts kinetic energy into magnetic energy in the lower chromosphere. This process involves stretching, twisting, folding, and reconnecting the magnetic field; however, it is not a fast-dynamo process, since the growth of the magnetic energy is found to be linear rather than exponential. The growth is linear because the flow is not fully turbulent. Instead, the chromosphere is dominated by magnetized shocks. Another important property that separates this process from a local dynamo is that it is not a closed system: there is a strong exchange of energy and work between the different layers. 
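Since the linear-versus-exponential distinction carries part of the argument, it is worth noting that it can be made quantitative in a simple way: compare a straight-line fit to the energy with a straight-line fit to its logarithm. The sketch below is an illustration only (not the analysis pipeline used for the paper):

```python
import numpy as np

def growth_type(t, E):
    """Crude discriminator: is E(t) better described as linear or exponential?

    Compares the R^2 of a straight-line fit to E with that of a straight-line
    fit to log(E) (i.e., exponential growth). E must be positive.
    """
    def r2(x, y):
        p = np.polyfit(x, y, 1)
        resid = y - np.polyval(p, x)
        return 1.0 - resid.var() / y.var()
    return "linear" if r2(t, E) >= r2(t, np.log(E)) else "exponential"
```

A fast dynamo would show a clearly better fit in log space during the kinematic phase; a linear trend, as found here, favors the direct fit instead.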
The magnetic field strength reached in the lower chromosphere is of the order of a few tens of Gauss. With these values, one would expect that the magnetic features formed by shocks in the chromosphere could be observed with the new generation of telescopes, i.e., the Daniel K. Inouye Solar Telescope (DKIST, Warner et al., 2018) or the European Solar Telescope (EST, Matthews et al., 2016). The diagnostic potential must be fully tested with full synthetic Stokes profiles in chromospheric lines such as Ca ii H, 8542 Å, Na i D 5896 Å or Mg i 5173 Å. These features will be barely connected to deeper layers and highly confined in small regions. From these results, a follow-up question to address is whether the magnetic energy generated from the kinetic energy could be dissipated into thermal energy. In this case, ion-neutral interaction effects, i.e., ambipolar diffusion (Braginskii, 1965) and non-equilibrium ionization (Leenaarts et al., 2007; Golding et al., 2014), should be taken into account self-consistently (Martínez-Sykora et al., 2019). Khomenko et al. (2018) included ambipolar diffusion in local-dynamo simulations that extend into the low chromosphere and showed that ambipolar diffusion is capable of dissipating the magnetic energy generated by a local dynamo into thermal energy (see also Khomenko & Collados, 2012; Martínez-Sykora et al., 2017b, a). Missing components in the Khomenko et al. (2018) simulations are a proper treatment of the chromospheric radiative transfer, as well as non-equilibrium ionization. Consequently, their work focuses on the photosphere and upper photosphere. Note that ambipolar diffusion may also have a quantitative impact on our results, but we believe that the qualitative findings of this paper will remain valid. 5. Acknowledgments We gratefully acknowledge support by NASA grants NNX16AG90G, NNX17AD33G, and NNG09FA40C (IRIS), and NSF grant AST1714955. 
The simulations have been run on clusters from the Notur project, and on the Pleiades cluster through the computing projects s1061, s1630, s1980 and s2053 from the High End Computing (HEC) division of NASA. We thankfully acknowledge the support of the Research Council of Norway through grant 230938/F50, through its Center of Excellence scheme, project number 262622, and through grants of computing time from the Programme for Supercomputing. This work has benefited from discussions at the International Space Science Institute (ISSI) meetings on “Heating of the magnetized chromosphere”, where many aspects of this paper were discussed with other colleagues. To analyze the data we have used Python, Vapor (www.vapor.ucar.edu) and IDL.
References
Abbett W. P., 2007, ApJ, 665, 1469, The Magnetic Connection between the Convection Zone and Corona in the Quiet Sun
Abbett W. P., Fisher G. H., 2012, Sol. Phys., 277, 3, Radiative Cooling in MHD Models of the Quiet Sun Convection Zone and Corona
Acheson D. J., 1979, Sol. Phys., 62, 23, Instability by magnetic buoyancy
Alexakis A., 2011, Phys. Rev. E, 83, 036301, Nonlinear dynamos at infinite magnetic Prandtl number
Amari T., Luciani J.-F., Aly J.-J., 2015, Nature, 522, 188, Small-scale dynamo magnetism as the driver for heating the solar atmosphere
Archontis A., Moreno-Insertis F., Galsgaard K., Hood A., O’Shea E., 2004, A&A, 426, 1047, Emergence of magnetic flux from the convection zone into the corona
Archontis V., Dorch S. B. F., Nordlund Å., 2003, A&A, 397, 393, Numerical simulations of kinematic dynamo action
Archontis V., Dorch S. B. F., Nordlund Å., 2007, A&A, 472, 715, Nonlinear MHD dynamo operating at equipartition
Biermann L., 1950, Zeitschrift Naturforschung Teil A, 5, 65, Über den Ursprung der Magnetfelder auf Sternen und im interstellaren Raum (mit einem Anhang von A. Schlüter)
Braginskii S. I., 1965, Reviews of Plasma Physics, 1, 205, Transport Processes in a Plasma
Brandenburg A., 2011, ApJ, 741, 92, Nonlinear Small-scale Dynamos at Low Magnetic Prandtl Numbers
Brandenburg A., 2014, ApJ, 791, 12, Magnetic Prandtl Number Dependence of the Kinetic-to-magnetic Dissipation Ratio
Brandenburg A., Subramanian K., 2005, Phys. Rep., 417, 1, Astrophysical magnetic fields and nonlinear dynamo theory
Cameron R., Schüssler M., 2015, Science, 347, 1333, The crucial role of surface magnetic fields for the solar dynamo
Carlsson M., Leenaarts J., 2012, A&A, 539, A39, Approximations for radiative cooling and heating in the solar chromosphere
Cattaneo F., 1999, ApJ, 515, L39, On the Origin of Magnetic Fields in the Quiet Photosphere
Centeno R., Blanco Rodríguez J., Del Toro Iniesta J. C., et al., 2017, ApJS, 229, 3, A Tale of Two Emergences: Sunrise II Observations of Emergence Sites in a Solar Active Region
Childress S., Gilbert A. D., 1995, Stretch, Twist, Fold
de la Cruz Rodríguez J., De Pontieu B., Carlsson M., Rouppe van der Voort L. H. M., 2013, ApJ, 764, L11, Heating of the Magnetic Chromosphere: Observational Constraints from Ca II $\lambda$8542 Spectra
De Pontieu B., 2002, ApJ, 569, 474, High-Resolution Observations of Small-Scale Emerging Flux in the Photosphere
Finn J. M., Ott E., 1988, Physical Review Letters, 60, 760, Chaotic flows and magnetic dynamos
Golding T. P., Carlsson M., Leenaarts J., 2014, ApJ, 784, 30, Detailed and Simplified Nonequilibrium Helium Ionization in the Solar Atmosphere
Gudiksen B. V., Carlsson M., Hansteen V. H., et al., 2011, A&A, 531, A154, The stellar atmosphere simulation code Bifrost. Code description and validation
Gudiksen B. V., Nordlund Å., 2005, ApJ, 618, 1020, An Ab Initio Approach to the Solar Coronal Heating Problem
Hansteen V. H., Carlsson M., Gudiksen B., 2007, 3D Numerical Models of the Chromosphere, Transition Region, and Corona, in Astronomical Society of the Pacific Conference Series, Vol. 368, Heinzel P., Dorotovič I., Rutten R. J. (eds.), The Physics of Chromospheric Plasmas, p. 107
Hayek W., Asplund M., Carlsson M., et al., 2010, A&A, 517, A49, Radiative transfer with scattering for domain-decomposed 3D MHD simulations of cool stellar atmospheres. Numerical methods and application to the quiet, non-magnetic, surface of a solar-type star
Hotta H., Rempel M., Yokoyama T., 2015, ApJ, 803, 42, Efficient Small-scale Dynamo in the Solar Convection Zone
Khomenko E., Collados M., 2012, ApJ, 747, 87, Heating of the Magnetized Solar Chromosphere by Partial Ionization Effects
Khomenko E., Vitas N., Collados M., de Vicente A., 2017, A&A, 604, A66, Numerical simulations of quiet Sun magnetic fields seeded by the Biermann battery
Khomenko E., Vitas N., Collados M., de Vicente A., 2018, ArXiv e-prints, Three-dimensional simulations of solar magneto-convection including effects of partial ionization
Kitiashvili I. N., Kosovichev A. G., Mansour N. N., Wray A. A., 2015, ApJ, 809, 84, Realistic Modeling of Local Dynamo Processes on the Sun
Leenaarts J., Carlsson M., Hansteen V., Rutten R. J., 2007, A&A, 473, 625, Non-equilibrium hydrogen ionization in 2D simulations of the solar atmosphere
Martínez González M. J., Bellot Rubio L. R., 2009, ApJ, 700, 1391, Emergence of Small-scale Magnetic Loops Through the Quiet Solar Atmosphere
Martínez González M. J., Manso Sainz R., Asensio Ramos A., Bellot Rubio L. R., 2010, ApJ, 714, L94, Small Magnetic Loops Connecting the Quiet Surface and the Hot Outer Atmosphere of the Sun
Martínez-Sykora J., De Pontieu B., Carlsson M., et al., 2017a, ApJ, 847, 36, Two-dimensional Radiative Magnetohydrodynamic Simulations of Partial Ionization in the Chromosphere. II. Dynamics and Energetics of the Low Solar Atmosphere
Martínez-Sykora J., De Pontieu B., Hansteen V. H., et al., 2017b, Science, 356, 1269, On the generation of solar spicules and Alfvenic waves
Martínez-Sykora J., Leenaarts J., De Pontieu B., Carlsson M., et al., 2019, ApJ, 847, Modeling of ion-neutral interaction effects and non-equilibrium ionization in the solar chromosphere
Matthews S. A., Collados M., Mathioudakis M., Erdelyi R., 2016, The European Solar Telescope (EST), in Proc. SPIE, Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI, p. 990809
Moreno-Insertis F., Martinez-Sykora J., Hansteen V. H., Muñoz D., 2018, ApJ, 859, L26, Small-scale Magnetic Flux Emergence in the Quiet Sun
Nordlund Å., 2008, Physica Scripta Volume T, 133, 014002, Stellar (magneto-)convection
November L. J., Simon G. W., 1988, ApJ, 333, 427, Precise proper-motion measurement of solar granulation
Parker E. N., 1955, ApJ, 122, 293, Hydromagnetic Dynamo Models
Peter H., Gudiksen B. V., Nordlund Å., 2004, ApJ, 617, L85, Coronal Heating through Braiding of Magnetic Field Lines
Rempel M., 2014, ApJ, 789, 132, Numerical Simulations of Quiet Sun Magnetism: On the Contribution from a Small-scale Dynamo
Rempel M., 2017, ApJ, 834, 10, Extension of the MURaM Radiative MHD Code for Coronal Simulations
Sainz Dalda A., Martínez-Sykora J., Bellot Rubio L., Title A., 2012, ApJ, 748, 38, Study of Single-lobed Circular Polarization Profiles in the Quiet Sun
Schekochihin A. A., Cowley S. C., Taylor S. F., Maron J. L., McWilliams J. C., 2004, ApJ, 612, 276, Simulations of the Small-Scale Turbulent Dynamo
Skartlien R., 2000, ApJ, 536, 465, A Multigroup Method for Radiation with Scattering in Three-Dimensional Hydrodynamic Simulations
Vögler A., Schüssler M., 2007, A&A, 465, L43, A solar surface dynamo
Warner M., Rimmele T. R., Pillet V. M., et al., 2018, Construction update of the Daniel K. Inouye Solar Telescope project, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10700, p. 107000V
DTP/98/20 May 1998 The description of $F_{2}$ at low $Q^{2}$ A.D. Martin${}^{1}$, M.G. Ryskin${}^{1,2}$ and A.M. Stasto${}^{1,3}$ ${}^{1}$ Department of Physics, University of Durham, Durham, DH1 3LE, UK. ${}^{2}$ Petersburg Nuclear Physics Institute, 188350, Gatchina, St. Petersburg, Russia. ${}^{3}$ H. Niewodniczanski Institute of Nuclear Physics, 31-342 Krakow, Poland. We analyse the data for the proton structure function $F_{2}$ over the entire $Q^{2}$ domain, including especially low $Q^{2}$, in terms of perturbative and non-perturbative QCD contributions. The small distance configurations are given by perturbative QCD, while the large distance contributions are given by the vector dominance model and, for the higher mass $q\overline{q}$ states, by the additive quark approach. The interference between states of different $q\bar{q}$ mass (in the perturbative contribution) is found to play a crucial role in obtaining an excellent description of the data throughout the whole $Q^{2}$ region, including photoproduction. 1. Introduction There now exist high precision deep inelastic $ep$ scattering data [1, 2] covering both the low $Q^{2}$ and high $Q^{2}$ domains, as well as measurements of the photoproduction cross section. The interesting structure of these measurements, in particular the change in the behaviour of the cross section with $Q^{2}$ at $Q^{2}\sim 0.2~{\rm GeV}^{2}$, highlights the importance of obtaining a theoretical QCD description which smoothly links the non-perturbative and perturbative domains. In any QCD description of a $\gamma^{*}p$ collision, the first step is the conversion of the initial photon into a $q\overline{q}$ pair, which is then followed by the interaction of the pair with the target proton. Let $\sigma(s,Q^{2})$ be the total cross section for the process $\gamma^{*}p\rightarrow X$, where $Q^{2}$ is the virtuality of the photon and $\sqrt{s}$ is the $\gamma^{*}p$ centre-of-mass energy. 
It is related to the forward $\gamma^{*}p$ elastic amplitude $A$ by the optical theorem, ${\rm Im}\;A\>=\>s\sigma$. We may write a double dispersion relation [3] for $A$ and obtain, for fixed $s$, $$\sigma(s,Q^{2})\;=\;\sum_{q}\>\int\>\frac{dM^{2}}{M^{2}+Q^{2}}\>\int\>\frac{dM^{\prime 2}}{M^{\prime 2}+Q^{2}}\;\rho(s,M^{2},M^{\prime 2})\>\frac{1}{s}\;{\rm Im}\;A_{q\overline{q}+p}(s,M^{2},M^{\prime 2})$$ (1) where $M$ and $M^{\prime}$ are the invariant masses of the incoming and outgoing $q\bar{q}$ pair. The relation is shown schematically in Fig. 1. If we assume that forward $q\overline{q}+p$ scattering does not change the momentum of the quarks (in a more detailed treatment this assumption is no longer valid; see (S0.Ex15) and (S0.Ex17) below, and the discussion in section 4), then $A_{q\overline{q}+p}$ is proportional to $\delta(M^{2}-M^{\prime 2})$, and (1) becomes $$\sigma(s,Q^{2})\;=\;\sum_{q}\>\int_{0}^{\infty}\>\frac{dM^{2}}{(M^{2}+Q^{2})^{2}}\;\rho(s,M^{2})\>\sigma_{q\overline{q}+p}(s,M^{2})$$ (2) where the spectral function $\rho(s,M^{2})$ is the density of $q\overline{q}$ states. Following Badelek and Kwiecinski [4] we may divide the integral into two parts (although Badelek and Kwiecinski base their fit to the data on eq. (2), they also discuss the more general case in which $M\neq M^{\prime}$ contributions may be included in the spectral function $\rho$): the region $M^{2}<Q_{0}^{2}$ described by the vector meson dominance model (VDM) and the region $M^{2}>Q_{0}^{2}$ described by perturbative QCD. 
Suppose that we assume $\rho\sigma_{q\overline{q}+p}$ is a constant independent of $M^{2}$ (which should be true modulo logarithmic QCD corrections); then the perturbative component of the integral is $$\int_{Q_{0}^{2}}^{\infty}\frac{dM^{2}}{(M^{2}+Q^{2})^{2}}\>\rho\sigma\;=\;\int_{0}^{\infty}\frac{dM^{2}}{(M^{2}+Q^{2}+Q_{0}^{2})^{2}}\>\rho\sigma\;=\;\sigma(s,Q^{2}+Q_{0}^{2}).$$ (3) Thus (2) becomes $$\sigma(s,Q^{2})\;=\;\sigma({\rm VDM})+\sigma^{{\rm QCD}}\>(s,Q^{2}+Q_{0}^{2})$$ (4) where the QCD superscript indicates that the last contribution is to be calculated entirely from perturbative QCD. We may use $$\sigma(s,Q^{2})\;=\;\frac{4\pi^{2}\alpha}{Q^{2}}\>F_{2}(x,Q^{2})$$ (5) where $x=Q^{2}/(s+Q^{2}-M^{2})$ to rewrite (4) as $$F_{2}(x,Q^{2})\;=\;F_{2}({\rm VDM})+\frac{Q^{2}}{Q^{2}+Q_{0}^{2}}\>F_{2}^{{\rm QCD}}\>(\overline{x},Q^{2}+Q_{0}^{2})$$ (6) where $\overline{x}=(Q^{2}+Q_{0}^{2})/(s+Q^{2}+Q_{0}^{2}-M^{2})$. The vector meson dominance term has the form (strictly speaking, (7) is the formula for $F_{T}$; the small longitudinal component will be discussed later) $$F_{2}({\rm VDM})\;=\;\frac{Q^{2}}{4\pi}\;\sum_{V}\>\frac{M_{V}^{4}\>\sigma_{V}(s)}{\gamma_{V}^{2}(Q^{2}+M_{V}^{2})^{2}}$$ (7) where $M_{V}$ is the mass of vector meson $V$ and where the sum is over the vector mesons which fall in the region $M_{V}^{2}<Q_{0}^{2}$. The vector meson-proton cross sections $\sigma_{V}(s)$ can be determined from the $\pi p$ and $Kp$ total cross sections using the additive quark model, and $\gamma_{V}^{2}$ from the leptonic width of the vector meson $V$. The last term in (6) can be determined from perturbative QCD using the known parton distributions. This approach was first proposed by Badelek and Kwiecinski (BK) [4]. We see that the BK model, (4) and (6), makes a parameter free prediction of $F_{2}(x,Q^{2})$ which is expected to be valid, for $s\gg Q^{2}$, for all $Q^{2}$ including very low $Q^{2}$. 
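The first equality in (3) is simply the change of variable $M^{2}\to M^{2}+Q_{0}^{2}$. As a sanity check, with $\rho\sigma$ constant both integrals reduce to the same elementary result:

```latex
\int_{Q_{0}^{2}}^{\infty}\frac{dM^{2}}{(M^{2}+Q^{2})^{2}}
\;=\;\left[-\frac{1}{M^{2}+Q^{2}}\right]_{M^{2}=Q_{0}^{2}}^{\infty}
\;=\;\frac{1}{Q^{2}+Q_{0}^{2}}
\;=\;\int_{0}^{\infty}\frac{dM^{2}}{(M^{2}+Q^{2}+Q_{0}^{2})^{2}}\,,
```

so that, comparing with (2), the shifted integral has exactly the form of the purely perturbative cross section evaluated at $Q^{2}+Q_{0}^{2}$, which is the content of the last equality in (3).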
The BK predictions give an excellent description of the $F_{2}$ data for $Q^{2}\gtrsim 1$ GeV${}^{2}$, but overshoot the new measurements of $F_{2}$ for smaller values of $Q^{2}$. This deficiency of the model was removed in a fit to the $F_{2}$ data performed by the H1 collaboration [1], but at the expense of using an unreasonably low value of $Q_{0}^{2}=0.45~{\rm GeV}^{2}$ and of introducing an ad hoc factor of 0.77 to suppress the VDM term. The Badelek-Kwiecinski idea of separating perturbative and non-perturbative contributions is very attractive. To exploit it further we must achieve a better separation between the short and long distance contributions. To do this we take a two-dimensional integral over the longitudinal and transverse momentum components of the quark, rather than simply over the mass $M$ of the $q\overline{q}$ pair. The contribution coming from the small mass region is pure VDM and is given by (7). However, the behaviour of the cross section at large $M^{2}$ is a more delicate question. The part which comes from large $k_{T}$ of the quark can be calculated by perturbative QCD in terms of the known parton distributions, whereas for small $k_{T}$ we will use the additive quark model and the impulse approximation. That is, only one quark interacts with the target, and the quark-proton cross section is well approximated by one third of the proton-proton cross section. At this point it is interesting to note some recent excellent parametric fits of the data for $F_{2}$, or rather for $\sigma(\gamma^{*}p)$. The first is based on (2) and the generalised VDM [5]. To be more precise, it is based entirely on a parametrization of the vector meson + proton cross section and does not take advantage of our present knowledge of perturbative QCD. As a consequence some anomalies appear. 
For instance, the photoproduction cross section becomes negative for $\sqrt{s}<6~{\rm GeV}$ (or $\sigma(Vp)<0$ for $M_{V}>0.26\sqrt{s}$). Second, the model has anomalously large values of $R=\sigma_{L}/\sigma_{T}$ (where $F_{L}$ is obtained by including a factor $\xi Q^{2}/M_{V}^{2}$ on the right-hand side of formula (7) for $F_{T}$). In the well-known deep inelastic region the model predicts $R>1$ for $Q^{2}>35~{\rm GeV}^{2}$ and $x>0.01$ (and even $R>4$ for $x>0.1$), whereas the data indicate that $R\simeq 0.2-0.3$. This effect probably reflects, as the authors note, the omission of any $Q^{2}$ dependence of $\xi$; see (32) below. Rather, their model has $\xi=0.171$ for all $Q^{2}$. An earlier approach based on the generalised VDM can be found in ref. [6]. In addition to the VDM contributions, this work contains a contribution at small $x$ coming from “heavy” long-lived fluctuations of the incoming photon, which are parametrized in terms of a “hard” Pomeron whose intercept is found to be $\alpha_{P^{\prime}}=1.289$. Another fit [7] of the $F_{2}$ data is based on the Regge-motivated ALLM parametrization [8]. The description, with 23 parameters, describes the data well and may be used to interpolate the measurements. On the other hand, the physical basis of the parametrization is not clear. For example, a variable $x_{I\!\!P}$ is defined by $$\frac{1}{x_{I\!\!P}}\;=\;1\>+\>\frac{W^{2}-M^{2}}{Q^{2}+M_{I\!\!P}^{2}}$$ (8) where $W=\sqrt{s}$ is the $\gamma^{*}p$ centre-of-mass energy, $M$ is the proton mass and $M_{I\!\!P}$ reflects the energy scale of Pomeron exchange. This latter scale turns out to be extremely large, $M_{I\!\!P}^{2}=49.5$ GeV${}^{2}$, much larger than any hadron or glueball mass. Secondly, the intercept $\alpha_{R}(0)$ of the secondary trajectory decreases with $Q^{2}$, which is contrary to Regge theory (where $\alpha_{R}$ is independent of $Q^{2}$). 
The description of the $F_{2}$ or $\sigma(\gamma^{*}p)$ data presented in this paper is quite different. We use a physically motivated approach with very few free parameters, and we clearly separate the contributions to $F_{2}$ coming from the large (small quark $k_{T}$) and small (large $k_{T}$) distances. A recent study with a similar philosophy to ours can be found in ref. [9]. They achieve a qualitative description of the experimental data over a wide range of photon virtualities $(Q^{2})$ and energies $(W)$ in terms of short and long distance contributions. They emphasize that even in the very low $Q^{2}$ region the short distance contribution is not small, and also that at large $Q^{2}$ the long distance effects still contribute. Here we present a quantitative study which involves a more precise approximation for the $q\bar{q}+p$ cross section and includes consideration of the longitudinal structure function $F_{L}$. Other differences are that we compute the (small $k_{T}$) non-perturbative component using the VDM for small $q\bar{q}$ masses $M<Q_{0}$ and the additive quark model for $M>Q_{0}$; we do not need an artificial suppression444In ref. [9] an ad hoc suppression factor of 0.6 is used. of the VDM component. Moreover we make a detailed fit to the $F_{2}$ data in terms of an unintegrated gluon distribution, which we determine from a unified evolution equation embodying both DGLAP and BFKL evolution. 2. The $\gamma^{*}p$ cross section The spectral function $\rho$ occurring in (1) may be expressed in terms of the $\gamma^{*}\rightarrow q\overline{q}$ matrix element ${\cal M}$. 
We have $\rho\>\propto|{\cal M}|^{2}$ with, for transversely polarised photons, $${\cal M}_{T}\;=\;\frac{\sqrt{z(1-z)}}{\overline{Q}^{2}+k_{T}^{2}}\;\overline{u}_{\lambda}(\gamma\cdot\varepsilon_{\pm})u_{\lambda^{\prime}}\;=\;\frac{(\varepsilon_{\pm}\cdot k_{T})[(1-2z)\lambda\mp 1]\>\delta_{\lambda,-\lambda^{\prime}}\;+\;\lambda m_{q}\;\delta_{\lambda\lambda^{\prime}}}{\overline{Q}^{2}+k_{T}^{2}}.$$ We use the notation of ref. [10], which was based on the earlier work of ref. [11]. Namely the photon polarisation vectors are $$\varepsilon_{T}\;=\;\varepsilon_{\pm}\;=\;\frac{1}{\sqrt{2}}\;(0,\;0,\;1,\;\pm i),$$ (10) and $\lambda,\lambda^{\prime}=\pm\>1$ corresponding to $q,\overline{q}$ helicities of $\pm\>\frac{1}{2}$. Also we introduce $$\overline{Q}^{2}\;=\;z(1-z)Q^{2}+m_{q}^{2}.$$ (11) Note that (S0.Ex3) is written in terms of “old-fashioned” time-ordered or light cone perturbation theory where both the $q$ and $\bar{q}$ are on-mass-shell. This form is appropriate when discussing the dispersion relation (1) in the $q\bar{q}$ invariant mass. For high photon momentum $p_{\gamma}$ the two time-ordered diagrams have a very different energy mismatch $$\left(\Delta E\>\simeq\>\frac{Q^{2}+M^{2}}{2p_{\gamma}}\right)\;\ll\;\left(\Delta E^{\prime}\>\simeq\>p_{\gamma}\right),$$ (12) and so the contribution from the diagram $(\Delta E^{\prime})$ with the “wrong” time-ordering may be neglected. The remaining diagram, with energy denominator $1/\Delta E$, leads to the behaviour $1/(\bar{Q}^{2}+k_{T}^{2})$ contained in (S0.Ex3), as can be seen on using (14) below. In terms of the quark momentum variables $z,k_{T}^{2}$ of Fig. 
1, equations (1) and (2) become $$\sigma_{T}\;=\;\sum_{q}\>\alpha\frac{e_{q}^{2}}{4\pi^{2}}\;\sum_{\lambda\>=\>\pm\>1}\>\int dz\>d^{2}k_{T}\>({\cal M}_{T}{\cal M}_{T}^{*})N_{c}\;\frac{1}{s}\;{\rm Im}\>A_{q\overline{q}+p}\;=\;\sum_{q}\>\alpha\frac{e_{q}^{2}}{2\pi}\;\int dz\>dk_{T}^{2}\,\frac{[z^{2}+(1-z)^{2}]k_{T}^{2}+m_{q}^{2}}{(\overline{Q}^{2}+k_{T}^{2})^{2}}\;N_{c}\;\sigma_{q\overline{q}+p}(k_{T}^{2})$$ where the number of colours $N_{c}=3$, and $e_{q}$ is the charge of the quark in units of $e$. We shall give the corresponding cross section $\sigma_{L}$ for longitudinally polarised photons in section 2.1. The dispersion relation (2) in $M^{2}$ has become, in (S0.Ex6), a two dimensional integral. The relation between the variables is $$M^{2}\;=\;\frac{k_{T}^{2}+m_{q}^{2}}{z(1-z)}$$ (14) where $m_{q}$ is the mass of the quark. For massless quarks $z=\frac{1}{2}(1+\cos\theta)$, where $\theta$ is the angle of the outgoing quark with respect to the photon in the $q\bar{q}$ rest frame. The $dz$ integration is implicit in (2) as the integration over the quark angular distribution in the spectral function $\rho$. To determine $F_{2}(x,Q^{2})$ at low $Q^{2}$ we have to evaluate the contributions to $\sigma_{T}$ coming from the various kinematic domains. The first comes from the perturbative domain with $M^{2}>Q_{0}^{2}$ and large $k_{T}^{2}$, and the second from the non-perturbative or long-distance domains. 2.1. The $\gamma^{*}p$ cross section in the perturbative domain We may begin with the two gluon exchange contribution to quark–quark scattering $$\sigma_{q+q}\;=\;\frac{2}{9}\;4\pi\>\int\alpha_{S}^{2}(l_{T}^{2})\;\frac{dl_{T}^{2}}{l_{T}^{4}}$$ (15) where $\pm\>\mbox{\boldmath$l$}_{T}$ are the transverse momenta of the gluons. 
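The change of variables between the pair mass $M$ and the quark variables $(z,k_{T})$, eqs. (11) and (14), can be sketched numerically as follows (the function names are ours, chosen for illustration):

```python
import math

def pair_mass_sq(z, kT, mq=0.0):
    """Eq. (14): invariant mass squared of the q-qbar pair,
    M^2 = (kT^2 + mq^2) / (z(1-z))."""
    return (kT**2 + mq**2) / (z * (1.0 - z))

def qbar_sq(z, Q2, mq=0.0):
    """Eq. (11): Qbar^2 = z(1-z) Q^2 + mq^2."""
    return z * (1.0 - z) * Q2 + mq**2

def z_from_theta(theta):
    """Massless quarks: z = (1 + cos theta)/2, with theta the quark angle
    relative to the photon in the q-qbar rest frame."""
    return 0.5 * (1.0 + math.cos(theta))
```

For the symmetric configuration $z=\tfrac{1}{2}$ (i.e. $\theta=90^{\circ}$) and massless quarks this gives $M=2k_{T}$, so at fixed $z$ the cut $M>Q_{0}$ used below is simply a lower cut on $k_{T}$.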
Thus for $q$-proton scattering we obtain $$\sigma_{q+p}\;=\;\frac{2}{3}\;\pi^{2}\int\alpha_{S}(l_{T}^{2})\>f(x,l_{T}^{2})\;\frac{dl_{T}^{2}}{l_{T}^{4}}$$ (16) where $$f(x,l_{T}^{2})=x\partial g(x,l_{T}^{2})/\partial\ln l_{T}^{2}$$ (17) is the unintegrated gluon density. The process is shown in Fig. 2. Finally for $q\overline{q}+{\rm proton}$ scattering we have to include the graph for $\overline{q}+p$ scattering. For both the $q$ and $\overline{q}$ interactions we have two diagrams of the type shown in Fig. 3 with ${\cal M}^{*}(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})$ and ${\cal M}(k_{T})$. We obtain $$\sigma_{T}\;=\;\sum_{q}\>\frac{\alpha e_{q}^{2}}{\pi}\;\int\>d^{2}k_{1T}\>dz\>d^{2}l_{T}\>\frac{f(x,l_{T}^{2})}{l_{T}^{4}}\;\alpha_{S}(l_{T}^{2})\;\left\{\left[(1-z)^{2}+z^{2}\right]\>\left(\frac{\mbox{\boldmath$k$}_{1T}}{D_{1}}\>+\>\frac{\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T}}{D_{2}}\right)^{2}\>+\>m_{q}^{2}\left(\frac{1}{D_{1}}\>-\>\frac{1}{D_{2}}\right)^{2}\right\}$$ where $$x\;=\;(Q^{2}+M^{2})/s,$$ (19) $$D_{1}\;=\;k_{1T}^{2}\>+\>z(1-z)Q^{2}\>+\>m_{q}^{2},\qquad D_{2}\;=\;(\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T})^{2}\>+\>z(1-z)Q^{2}\>+\>m_{q}^{2}.$$ (20) Expression (S0.Ex9) is written as the square of the amplitude for quark-antiquark production, where we integrate over the quark momentum $k_{1T}$ in the inelastic intermediate state, see Fig. 2. The first term, proportional to $1/D_{1}$, corresponds to the amplitude where the gluon couples to the antiquark $k_{2}$, while in the second term, proportional to $1/D_{2}$, the gluon couples to the quark $k_{1}$. 
Of course form (S0.Ex9) can also be used to calculate the cross section for high $k_{T}$ dijet production $(\gamma^{*}p\rightarrow q\bar{q}p)$, where $k_{1T}$ and $k_{2T}$ refer to the transverse momenta of the outgoing quark jets. To separate the perturbative and non-perturbative contributions to the cross section (S0.Ex9) for our inclusive process we have to introduce a cut on the quark transverse momentum (as well as on the $q\bar{q}$ invariant mass $M$). At first sight it might appear that to obtain the perturbative contribution we simply require $k_{1T}>k_{0}$. However this implementation of the cut-off would not be correct. For instance if, as in Fig. 2, the two exchanged gluons couple to the $k_{1}$ line, then $\mbox{\boldmath$k$}_{2T}=\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T}$ may be small, and in the limit $m_{q}\rightarrow 0$ and small $Q^{2}$ we would have an unphysical infrared singularity in the region of large $k_{1T}$ and $l_{T}$, but small $k_{2T}$, coming from the $1/D_{2}$ term in (S0.Ex9). To see better the origin of the infrared singularities we perform the square and write the expression in curly brackets in (S0.Ex9) in the form $$\left\{\frac{[(1-z)^{2}+z^{2}]k_{1T}^{2}+m_{q}^{2}}{D_{1}^{2}}\;+\;\frac{[(1-z)^{2}+z^{2}](\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T})^{2}+m_{q}^{2}}{D_{2}^{2}}\;+\;2\,\frac{[(1-z)^{2}+z^{2}]\mbox{\boldmath$k$}_{1T}\cdot(\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T})-m_{q}^{2}}{D_{1}D_{2}}\right\}.$$ The danger comes from the second term, which corresponds to Fig. 2, whereas the last term, which describes interference, is infrared stable, as we will show later. Our aim is to separate off all the infrared contributions into the non-perturbative part. 
Therefore to evaluate the perturbative contribution coming from the second term we have to use the cut-off $|\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T}|>k_{0}$. This is equivalent to changing the variable of integration for the second term from $\mbox{\boldmath$k$}_{1T}$ to $\mbox{\boldmath$l$}_{T}-\mbox{\boldmath$k$}_{1T}$, and so its contribution is exactly equal to that of the first term. An alternative way to introduce the same cut-off is to separate off the incoming $q\bar{q}$ configurations with $k_{T}<k_{0}$ so that (S0.Ex9) becomes $$\sigma_{T}\;=\;\sum_{q}\>\frac{2\alpha e_{q}^{2}}{\pi}\>\int_{k_{0}^{2}}\>d^{2}k_{T}\,dz\,d^{2}l_{T}\>\frac{f(x,l_{T}^{2})}{l_{T}^{4}}\>\alpha_{S}(l_{T}^{2})\;\left\{\frac{[(1-z)^{2}+z^{2}]k_{T}^{2}+m_{q}^{2}}{(\bar{Q}^{2}+k_{T}^{2})^{2}}\>-\>\frac{[(1-z)^{2}+z^{2}]\mbox{\boldmath$k$}_{T}\cdot(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})+m_{q}^{2}}{(\bar{Q}^{2}+k_{T}^{2})(\bar{Q}^{2}+(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})^{2})}\right\}.$$ Note that the transverse momentum $\mbox{\boldmath$k$}_{T}$ of the incoming quark is equal to $\mbox{\boldmath$k$}_{1T}$ when the gluon couples to the antiquark (first term in (S0.Ex13)) and is equal to $\mbox{\boldmath$k$}_{1T}-\mbox{\boldmath$l$}_{T}$ when the gluon couples to the quark (second term in (S0.Ex13)). Working in terms of the variable $\mbox{\boldmath$k$}_{T}$ corresponding to the dispersion cut shown in Fig. 
1 has the advantage that it is then easy to introduce cut-offs with respect to the invariant $q\bar{q}$ masses $M$ and $M^{\prime}$, which we need to impose in order to separate off the non-perturbative VDM contribution555Of course the use of the Feynman rules would yield the same result, but the time-ordered or light cone approach with the incoming $q$ and $\bar{q}$ on-shell is more convenient when we come to separate off the non-perturbative component in terms of $k_{T}<k_{0}$ and $M,M^{\prime}<Q_{0}$.. Another argument in favour of the cut written in terms of the initial quark momenta $k_{T}$ comes from the impact parameter representation. Instead of $k_{T}$ we may use the transverse coordinate $b$ and write the cross section (S0.Ex15) in the form $$\sigma_{T}\;\propto\;\int dz\,d^{2}b\,|\Psi_{\gamma}(b)|^{2}f(x,b)\alpha_{S}(b)$$ (23) where the gluon distribution $$f(x,b)\;=\;\int\>\frac{d^{2}l_{T}}{(2\pi)^{2}}\,[1-e^{i\mbox{\boldmath$l$}_{T}\cdot\mbox{\boldmath$b$}}]\,\frac{f(x,l_{T}^{2})}{l_{T}^{4}}.$$ (24) The photon “wave function” is given by [12] $$|\Psi_{\gamma}(b)|^{2}\;=\;\sum_{q}\alpha e_{q}^{2}[z^{2}+(1-z)^{2}]\bar{Q}^{2}K_{1}^{2}(\bar{Q}b),$$ (25) where for simplicity we have set $m_{q}=0$. The photon wave function is simply the Fourier transform of the matrix element ${\cal M}$ given by (S0.Ex3). It is most natural to take the infrared cut-off in coordinate space, say $b<b_{0}$. The variable which is the Fourier conjugate of $b$ is the incoming quark momentum $k_{T}$ of Fig. 1 (rather than the intermediate transverse momentum $k_{1T}$ of Fig. 2). This is further justification to impose the infrared cut in the form $k_{T}>k_{0}$. Now let us consider the interference contribution, that is the last term in (S0.Ex15). 
It is infrared stable since in the limit $m_{q}^{2}\rightarrow 0$ and $Q^{2}\rightarrow 0$ it takes the form $$\int\>\frac{d^{2}k_{T}\;\mbox{\boldmath$k$}_{T}\cdot(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})}{k_{T}^{2}(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})^{2}}\;\sim\;\int\>\frac{d(|\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T}|)}{k_{T}}$$ (26) when $|\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T}|$ is small. We have used boundaries $k_{T}^{2}=k_{0}^{2}$ and $M^{2}=Q_{0}^{2}$ to separate the perturbative QCD (pQCD), additive quark model (AQM) and vector meson dominance (VDM) contributions. As a result the $\gamma^{*}p$ cross section formula, (S0.Ex15), is asymmetric between the ingoing and outgoing quarks. The origin of the asymmetry is the difference between the transverse momentum of the outgoing quark $(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})$ and that of the incoming quark $(\mbox{\boldmath$k$}_{T})$ in Fig. 3. Such a graph therefore represents the interference between $M$ and $M^{\prime}\neq M$ states. To obtain the pure pQCD contribution we require the incoming $q\overline{q}$ system to satisfy $M^{2}>Q_{0}^{2}$ and $k_{T}>k_{0}$. Ideally we would like to impose the same cuts on the outgoing $q\overline{q}$ system, namely $$M^{\prime 2}\;=\;\frac{(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})^{2}\>+\>m_{q}^{2}}{z(1-z)}\;>\;Q_{0}^{2}$$ (27) and $k_{T}^{\prime}=|\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T}|>k_{0}$. However in a small region of phase space, where $\mbox{\boldmath$l$}_{T}$ lies close to $-\mbox{\boldmath$k$}_{T}$, we may have $M^{\prime}<Q_{0}$ and/or $k_{T}^{\prime}<k_{0}$. For this region we therefore have interference between the pQCD and VDM (or AQM) contributions. 
There is no double counting since neither our VDM nor AQM666For the AQM contribution the interaction with the target proton is described by the forward elastic quark scattering amplitude and hence we have $z^{\prime}=z,\;k_{T}^{\prime}=k_{T}$ and $M=M^{\prime}$. components contain interference terms. This is fortunate because we cannot neglect the contribution from this small part of phase space of Fig. 3 without destroying gauge invariance, which is provided by the sum of the graphs in Figs. 2 and 3. We stress that the contribution coming from this limited region, $\mbox{\boldmath$l$}_{T}$ close to $-\mbox{\boldmath$k$}_{T}$, is infrared stable and hence it is small and has little impact on the overall fit to the data. So far we have only calculated $\sigma_{T}$. In the same way we may calculate the cross section for longitudinally polarised incident photons. In this case the relation analogous to (S0.Ex6) reads $$\sigma_{L}\;=\;\sum_{q}\>\frac{\alpha e_{q}^{2}}{2\pi}\;\int dz\,dk_{T}^{2}\;\frac{4Q^{2}\>z^{2}(1-z)^{2}N_{c}}{(\overline{Q}^{2}+k_{T}^{2})^{2}}\;\sigma_{q\overline{q}+p}(k_{T}^{2}),$$ (28) which on evaluating $\sigma_{q\overline{q}+p}$ gives $$\sigma_{L}\;=\;\sum_{q}\>\frac{2\alpha e_{q}^{2}}{\pi}\;Q^{2}\int_{k_{0}^{2}}\>d^{2}k_{T}\,dz\>d^{2}l_{T}\;\frac{f(x,l_{T}^{2})}{l_{T}^{4}}\>\alpha_{S}(l_{T}^{2})\,4z^{2}(1-z)^{2}\;\left\{\frac{1}{(\bar{Q}^{2}+k_{T}^{2})^{2}}\;-\;\frac{1}{(\bar{Q}^{2}+k_{T}^{2})(\bar{Q}^{2}+(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})^{2})}\right\}.$$ From the formal point of view the integrals over $l_{T}^{2}$ and $k_{T}^{2}$ cover the interval 0 to $\infty$. 
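At the quark level the transverse and longitudinal responses differ only in the numerator of the integrand. A minimal sketch of the two integrands of eqs. (13) and (28), with the charge, colour and flux factors stripped off and with names of our own choosing, is:

```python
def sigmaT_integrand(z, kT2, Q2, mq=0.0):
    """Integrand of eq. (13) without the overall alpha*e_q^2*N_c factors
    and without sigma_{qqbar+p}."""
    Qbar2 = z * (1.0 - z) * Q2 + mq**2
    return ((z**2 + (1.0 - z)**2) * kT2 + mq**2) / (Qbar2 + kT2) ** 2

def sigmaL_integrand(z, kT2, Q2, mq=0.0):
    """Integrand of eq. (28), same conventions; the explicit factor Q^2
    makes sigma_L vanish in the photoproduction limit Q^2 -> 0."""
    Qbar2 = z * (1.0 - z) * Q2 + mq**2
    return 4.0 * Q2 * z**2 * (1.0 - z) ** 2 / (Qbar2 + kT2) ** 2
```

The $z^{2}(1-z)^{2}$ weight concentrates $\sigma_{L}$ at symmetric configurations $z\sim\tfrac{1}{2}$, whereas the $[z^{2}+(1-z)^{2}]$ weight in $\sigma_{T}$ survives at the aligned-jet endpoints $z\rightarrow 0,1$.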
For the $l_{T}^{2}$ integration in the domain $l_{T}^{2}<l_{0}^{2}\sim 1{\rm GeV}^{2}$ we may use the approximation $$\alpha_{S}(l_{T}^{2})\>f(x,l_{T}^{2})\;=\;\frac{l_{T}^{2}}{l_{0}^{2}}\>\alpha_{S}(l_{0}^{2})\;f(x,l_{0}^{2}).$$ (30) For $k_{T}^{2}<k_{0}^{2}$ we enter the long distance domain which we discuss next. To be precise, we use the formulae (S0.Ex15) and (S0.Ex17) to evaluate the cross sections only in the perturbative domain $M^{2}>Q_{0}^{2}$ and $k_{T}^{2}>k_{0}^{2}$. We exclude the region $M^{2}<Q_{0}^{2}$ and $k_{T}^{2}>k_{0}^{2}$ from the perturbative domain as the point-like (short-distance) component of the vector meson wave function will be included in the VDM term. 2.2. The $\gamma^{*}p$ cross section in the non-perturbative domain There are two different non-perturbative contributions. First, for $M^{2}<Q_{0}^{2}$ we use the conventional vector meson dominance formula (7) for $F_{T}(x,Q^{2})$. We should also include the longitudinal structure function $F_{L}(x,Q^{2})$. $F_{L}$ is given by a formula just like (7) but with the introduction of an extra factor $\xi Q^{2}/M_{V}^{2}$ on the right-hand side. $\xi(Q^{2})$ is a phenomenological function which should decrease with increasing $Q^{2}$. The data for $\rho$ production indicate that $\xi(m_{\rho}^{2})\lesssim 0.7$ [13], whereas at large $Q^{2}$ the usual properties of deep inelastic scattering predict that $$\frac{F_{L}}{F_{T}}\;\sim\;\frac{4k_{T}^{2}}{Q^{2}}\;\lesssim\;\frac{M_{V}^{2}}{Q^{2}}.$$ (31) So throughout the whole $Q^{2}$ region the contribution of $F_{L}$ is less than that of $F_{T}$. 
In order to calculate $F_{L}$ (VDM) we insert the factor $\xi Q^{2}/M_{V}^{2}$ in (7) and use an interpolating formula for $\xi$ $$\xi\;=\;\xi_{0}\left(\frac{M_{V}^{2}}{M_{V}^{2}+Q^{2}}\right)^{2}$$ (32) with $\xi_{0}=0.7$, which accommodates both the $\rho$ meson results and the deep inelastic expectations of (31). However the recent $\rho$ electroproduction, $\gamma^{*}p\rightarrow\rho p$, measurements [14] indicate that $\sigma_{L}(\rho)/\sigma_{T}(\rho)$ may tend to a constant value for large $Q^{2}$. We therefore also show the effect of calculating $F_{L}$ (VDM) from (7) using $$\xi\;=\;\xi_{0}\>\left(\frac{M_{V}^{2}}{M_{V}^{2}+Q^{2}}\right),$$ (33) see Fig. 9 below. The second non-perturbative contribution covers the low $k_{T}$ part of the $M^{2}>Q_{0}^{2}$ domain, that is the region with $k_{T}^{2}<k_{0}^{2}$. Here we use the additive quark model and the impulse approximation to evaluate the $\sigma_{q\overline{q}+p}$ cross sections in formulae (S0.Ex6) and (28). 2.3. Final formulae For completeness we list below the formulae that we use for the non-pQCD contributions coming from the $k_{T}<k_{0}$ domain. When $M<Q_{0}$, with $Q_{0}^{2}\simeq 1-1.5{\rm GeV}^{2}$, we use the vector meson dominance model. We have $$\sigma_{T}({\rm VDM})\;=\;\pi\alpha\sum_{V=\rho,\omega,\phi}\;\frac{M_{V}^{4}\;\sigma_{V}(W^{2})}{\gamma_{V}^{2}(Q^{2}+M_{V}^{2})^{2}}$$ (34) $$\sigma_{L}({\rm VDM})\;=\;\pi\alpha\sum_{V=\rho,\omega,\phi}\;\frac{Q^{2}M_{V}^{2}\;\sigma_{V}(W^{2})}{\gamma_{V}^{2}(Q^{2}+M_{V}^{2})^{2}}\;\xi_{0}\;\left(\frac{M_{V}^{2}}{Q^{2}+M_{V}^{2}}\right)^{2}$$ (35) with $\xi_{0}=0.7$, see (32). 
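The $\xi$ interpolation and the VDM cross sections (34)-(35) are simple enough to tabulate directly. Below is a hedged sketch; the constant and function names are ours, and the per-meson parameters $M_{V}^{2}$, $\gamma_{V}^{2}$ and $\sigma_{V}(W^{2})$ must be supplied by the user:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def xi_interp(Q2, MV2, xi0=0.7, power=2):
    """Eq. (32) for power=2 and eq. (33) for power=1:
    xi = xi0 * (MV^2 / (MV^2 + Q^2))**power."""
    return xi0 * (MV2 / (MV2 + Q2)) ** power

def sigmaT_vdm(Q2, mesons):
    """Eq. (34); 'mesons' is an iterable of (MV2, gammaV2, sigmaV)
    tuples, one per vector meson rho, omega, phi."""
    return math.pi * ALPHA * sum(
        MV2**2 * sV / (gV2 * (Q2 + MV2) ** 2) for MV2, gV2, sV in mesons)

def sigmaL_vdm(Q2, mesons, xi0=0.7):
    """Eq. (35): eq. (34) rescaled per meson by
    xi0 * Q^2/MV^2 * (MV^2/(Q^2+MV^2))^2."""
    return math.pi * ALPHA * sum(
        Q2 * MV2 * sV * xi0 * MV2**2 / (gV2 * (Q2 + MV2) ** 4)
        for MV2, gV2, sV in mesons)
```

As it must, $\sigma_{L}({\rm VDM})$ vanishes at $Q^{2}=0$ (real photons carry no longitudinal polarisation), while $\sigma_{T}({\rm VDM})$ reduces to the familiar $\pi\alpha\sum_{V}\sigma_{V}/\gamma_{V}^{2}$ photoproduction combination.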
For the vector meson-proton cross sections, we take $$\sigma_{\rho}\;=\;\sigma_{\omega}\;=\;\textstyle{\frac{1}{2}}\left[\sigma(\pi^{+}p)+\sigma(\pi^{-}p)\right],\qquad\sigma_{\phi}\;=\;\sigma(K^{+}p)+\sigma(K^{-}p)-\textstyle{\frac{1}{2}}\left[\sigma(\pi^{+}p)+\sigma(\pi^{-}p)\right].$$ (36) Finally for $M>Q_{0}$ (and $k_{T}<k_{0}$) we use the additive quark model and impulse approximation $$\sigma_{T}({\rm AQM})\;=\;\alpha\sum_{q}\;\frac{e_{q}^{2}}{2\pi}\;\int\>dz\,dk_{T}^{2}\;\frac{[z^{2}+(1-z)^{2}]k_{T}^{2}+m_{q}^{2}}{(\tilde{Q}^{2}+k_{T}^{2})^{2}}\;N_{c}\>\sigma_{q\overline{q}+p}\;(W^{2})$$ (37) $$\sigma_{L}({\rm AQM})\;=\;\alpha\sum_{q}\;\frac{e_{q}^{2}}{2\pi}\;\int\>dz\,dk_{T}^{2}\;\frac{4Q^{2}\;z^{2}(1-z)^{2}}{(\tilde{Q}^{2}+k_{T}^{2})^{2}}\;N_{c}\>\sigma_{q\overline{q}+p}\;(W^{2})$$ (38) where for $\sigma_{q\overline{q}+p}$ we take, for the light quarks, $$\sigma_{q\overline{q}+p}\>(W^{2})\;=\;\frac{2}{3}\;\sigma_{p\overline{p}}\>(s=\textstyle{\frac{3}{2}}W^{2}).$$ (39) The “photon” wave function contains propagators like $1/(\overline{Q}^{2}+k_{T}^{2})$ and in impact parameter $b_{T}$ space it receives contributions from the whole of the $b_{T}$ plane extending out to infinity. On the other hand confinement restricts the quarks to have limited separation, say $b_{T}=|\mbox{\boldmath$b$}_{1T}-\mbox{\boldmath$b$}_{2T}|\lesssim 1$ fm. To allow for this effect we have replaced $\overline{Q}^{2}$ by $\tilde{Q}^{2}=\overline{Q}^{2}+\mu^{2}$ in (37) and (38), where $\mu$ is typically the inverse pion radius. We therefore take $\mu^{2}=0.1{\rm GeV}^{2}$. This change has no effect for $Q^{2}\gg\mu^{2}$ but for $Q^{2}\lesssim\mu^{2}$ it gives some suppression of the AQM contribution. 2.4. 
The quark mass In the perturbative QCD domain we use the (small) current quark mass $m_{{\rm curr}}$, while for the long distance contributions it is more natural to use the constituent quark mass $M_{0}$. To provide a smooth transition between these values (in both the AQM and perturbative QCD domains) we take the running mass obtained from a QCD-motivated model of the spontaneous chiral symmetry breaking in the instanton vacuum [15] $$m_{q}^{2}\;=\;M_{0}^{2}\>\left(\frac{\Lambda^{2}}{\Lambda^{2}+2\mu^{2}}\right)^{6}\>+\>m_{\rm curr}^{2}.$$ (40) The parameter $\Lambda=6^{1/3}/\rho=1.09$ GeV, where $\rho=1/(0.6~{}{\rm GeV})$ is the typical size of the instanton. $\mu$ is the natural scale of the problem, that is $\mu^{2}=z(1-z)Q^{2}+k_{T}^{2}$ or $\mu^{2}=z(1-z)Q^{2}+(\mbox{\boldmath$l$}_{T}+\mbox{\boldmath$k$}_{T})^{2}$ as appropriate. For the constituent and current quark masses we take $M_{0}=0.35$ GeV and $m_{\rm curr}=0$ for the $u$ and $d$ quarks, and $M_{0}=0.5$ GeV and $m_{\rm curr}=0.15$ GeV for the $s$ quarks. 3. The description of the data for $F_{2}$ Though in principle it would appear that we have a parameter-free777Apart of course from the form of the input gluon distribution, $g(x,l_{0}^{2})$. prediction of $F_{2}(x,Q^{2})$ at low $Q^{2}$, in practice we have to fix the values of the parameters $k_{0}^{2}$ and $Q_{0}^{2}$. Recall that $k_{T}^{2}=k_{0}^{2}$ specifies the boundary between the non-perturbative and perturbative QCD components, and that $M^{2}=Q_{0}^{2}$ specifies the boundary between the VDM and AQM contributions to the non-perturbative component. The results that we present correspond to the choice $Q_{0}^{2}=1.5$ GeV${}^{2}$, for which the VDM contribution is computed from the $\rho,\omega$ and $\phi$ meson contributions (with mass $M_{V}<Q_{0}$). The more sensitive parameter is $k_{0}^{2}$. We therefore present results for two choices, namely $k_{0}^{2}=0.2$ and 0.5 GeV${}^{2}$, which show some interesting and observable differences. 
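The running-mass interpolation of eq. (40) is straightforward to implement. A minimal sketch with the parameter values quoted above (the function name is ours):

```python
def running_mass_sq(mu2, M0=0.35, m_curr=0.0, Lam=1.09):
    """Eq. (40): m_q^2 = M0^2 * (Lam^2 / (Lam^2 + 2*mu2))**6 + m_curr^2.

    mu2 is the scale z(1-z)Q^2 + kT^2 in GeV^2; the defaults correspond
    to u, d quarks, while s quarks take M0 = 0.5, m_curr = 0.15 GeV."""
    return M0**2 * (Lam**2 / (Lam**2 + 2.0 * mu2)) ** 6 + m_curr**2
```

At $\mu^{2}=0$ this reproduces the constituent value $M_{0}^{2}$, while already at $\mu^{2}$ of a few GeV${}^{2}$ the sixth power suppresses the first term strongly and only the current mass survives, which is the smooth transition the text describes.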
The results are much more stable to the increase of $k_{0}^{2}$ from 0.5 to 1 GeV${}^{2}$. To calculate the perturbative contributions we need to know the unintegrated gluon distribution $f(x,l_{T}^{2})$, see (S0.Ex15) and (S0.Ex17). To determine $f(x,l_{T}^{2})$ we carry out the full programme described in detail in Ref. [16]. We solve a “unified” equation for $f(x,l_{T}^{2})$ which incorporates888Following Ref. [16] we appropriately constrain the transverse momenta of the emitted gluons along the BFKL ladder. There is an indication, from comparing the size of the next-to-leading $\ln(1/x)$ contribution [17] to the BFKL intercept with the effect due to the kinematic constraint [18], that the incorporation of the constraint into the evolution analysis gives a major part of the subleading $\ln(1/x)$ corrections. BFKL and DGLAP evolution on an equal footing, and allows the description of both small and large $x$ data. To be precise we solve a coupled pair of integral equations for the gluon and sea quark distributions, as well as allowing for the effects of valence quarks. As in Ref. [16] we take $l_{0}^{2}=1$ GeV${}^{2}$, but due to the large anomalous dimension of the gluon the results are quite insensitive to the choice of $l_{0}$ in the interval 0.8–1.5 GeV. The starting distributions for the evolution are specified in terms of three parameters $N,\lambda$ and $\beta$ of the gluon $$xg(x,l_{0}^{2})\;=\;Nx^{-\lambda}(1-x)^{\beta}$$ (41) where $l_{0}^{2}=1$ GeV${}^{2}$. At small $x$ the gluon drives the sea quark distribution. The $k_{T}$ factorization theorem gives $$S_{q}(x,Q^{2})\;=\;\int_{x}^{1}\>\frac{dz}{z}\>\int\>\frac{dk^{2}}{k^{2}}\>S_{\rm box}^{q}(z,k^{2},Q^{2})\>f\left(\frac{x}{z},Q^{2}\right)$$ (42) where $S_{\rm box}$ describes the quark box (and crossed box) contribution. The full expression for $S_{\rm box}$ is given in Ref. [16]. 
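The starting gluon (41), and the momentum integral that the sum rule below constrains, can be sketched as follows. The `gluon_momentum_fraction` helper is our own illustrative midpoint-rule estimate, not the evolution code of Ref. [16]:

```python
def xg_input(x, N, lam, beta):
    """Starting gluon, eq. (41): xg(x, l0^2) = N * x**(-lam) * (1 - x)**beta."""
    return N * x ** (-lam) * (1.0 - x) ** beta

def gluon_momentum_fraction(N, lam, beta, n=20000):
    """Midpoint-rule estimate of the integral of xg(x) over x in (0,1),
    finite for lam < 1; this momentum fraction enters the sum rule that
    fixes the sea normalization C of eq. (43)."""
    h = 1.0 / n
    return h * sum(xg_input((i + 0.5) * h, N, lam, beta) for i in range(n))
```

For example, the toy choice $N=1$, $\lambda=0$, $\beta=1$ gives a gluon momentum fraction of exactly $1/2$, a useful sanity check on the quadrature.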
Thus the sea $S_{q}$ is given in terms of the gluon $f$ except for the contribution from the non-perturbative region $k^{2}<k_{0}^{2}$, where we take $$S_{u}^{\rm np}\;=\;S_{d}^{\rm np}\;=\;2S_{s}^{\rm np}\;=\;C\>x^{-0.08}(1-x)^{8}.$$ (43) The parameter $C$ is fixed by the momentum sum rule in terms of the parameters $N,\lambda$ and $\beta$ specifying the gluon. The charm component of the sea is obtained entirely from perturbative QCD (see [16]) with the charm mass $m_{c}=1.4$ GeV. The valence quark contribution plays a very minor role in our analysis and so we take it from the GRV set999The GRV valence distributions were fitted to the MRS(A) distributions [20] at $Q^{2}=4$ GeV${}^{2}$. of partons [19]. Of course the sea quark distributions $S_{q}(x,Q^{2})$ of (42) (and (43)) are used only to get a more precise determination of $f(x,l_{T}^{2})$ through the coupled evolution equations. These forms for $S_{q}$ are not used in our fit to the $F_{2}$ data since the sea contribution is already embedded in (S0.Ex15) and (S0.Ex17). We determine the parameters $N,\lambda$ and $\beta$ by fitting to the available data for $F_{2}$ with $x<0.05$. We present two fits corresponding to a larger perturbative QCD contribution (Fit A with $k_{0}^{2}=0.2$ GeV${}^{2}$) and a smaller pQCD component (Fit B with $k_{0}^{2}=0.5$ GeV${}^{2}$). The values of the gluon parameters are given in Table I and the quality of the description of the $F_{2}$ data is shown in Fig. 4. Only a selection of the data fitted are shown in Fig. 4. Both descriptions are in general satisfactory, but Fit A is superior, mainly because Fit B lies below the data for $Q^{2}\simeq 1$ GeV${}^{2}$. This difference is better seen in Fig. 5, which shows the fit as a function of $Q^{2}$ for various fixed values of $x$. We see that Fit A, with the larger perturbative component, is better able to accommodate the change in slope going from high to low $Q^{2}$. It is informative to show the components of the cross section. 
The breakdown is shown in Figs. 6 and 7 for Fits A and B respectively for the maximum energy $W=245$ GeV for which data are available. It appears that the low $Q^{2}$ behaviour of the pQCD component with low $l_{T}$ plays the vital role. The description of the $F_{2}$ data by Fit A is better than that obtained by Badelek and Kwiecinski [4], which is to be expected since we perform a fit to the data, albeit with a very economical parametrization. Fig. 5 also shows the HERA photoproduction measurements at $W=170$ and 210 GeV. These data are not included in the fit. We see that our description overshoots the published H1 [21] and ZEUS [22] measurements, although by a smaller margin than that of ref. [4]. On the other hand our extrapolation is in excellent agreement with a subsequent analysis of ZEUS data performed in ref. [23]. We will return to the comparison with photoproduction data when we study the effects of a different choice of the quark mass. 4. Discussion We have made what appears to be in principle a prediction of $F_{2}$, or rather of $\sigma_{\gamma^{*}p}$, over the entire $Q^{2}$ range which relies only on the form of the initial gluon distribution, see (41) and the parameter values of Table 1. However a comparison of the results of Fits A and B show that in practice the results are dependent on the choice of the boundary $k_{T}^{2}=k_{0}^{2}$ between the perturbative and non-perturbative contributions, where $\pm\mbox{\boldmath$k$}_{T}$ are the transverse momenta of the incoming $q$ and $\bar{q}$ which result from the $\gamma^{*}\rightarrow q\bar{q}$ transition. There are compelling reasons to select Fit A with $k_{0}^{2}=0.2$ GeV${}^{2}$, which has the larger perturbative QCD contribution. Fit A is not only preferred by the data, but it also yields an input gluon with a more reasonable small $x$ behaviour. 
In fact for Fit A $(k_{0}^{2}=0.2$ GeV${}^{2}$) the AQM contribution is almost negligible and the fit produces a reasonable $\lambda$, namely $\lambda=0.16$. On the other hand Fit B (with $k_{0}^{2}=0.5$ GeV${}^{2}$) requires a larger $\lambda$, $\lambda=0.32$, in order to compensate for the much flatter $x^{-0.08}$ behaviour of the rather large AQM component. Further support for Fit A comes from the predictions for the longitudinal structure function, $F_{L}$. Fig. 8 shows that the prediction from Fit B is much larger than that of Fit A, due mainly to the large AQM contribution. Fig. 8 also shows the expectations for $F_{L}$ from the analysis of ref. [24] and from the MRST partons [25] of the most recent global parton analysis. We see that these independent determinations of $F_{L}$ favour the prediction of Fit A. For completeness we show by the dashed curve in Fig. 9 the predictions of $\sigma_{L}/\sigma_{T}$ versus $Q^{2}$ obtained from Fit A. This figure also shows the effect of replacing (32) by (33) in the formula for the VDM contribution to $F_{L}$. Recall that (33) was motivated by the possibility that the ratio $\sigma_{L}(\rho)/\sigma_{T}(\rho)$ for $\rho$ meson electroproduction tends to a constant value $A$ as $Q^{2}\rightarrow\infty$. We see from Fig. 9 that this change to the VDM contribution affects $F_{L}$, and hence $\sigma_{L}/\sigma_{T}$, mainly in the interval $0.2<Q^{2}<10$ GeV${}^{2}$. It is straightforward to deduce from Fig. 9 the effect of changing the value of the parameter $\xi_{0}$ of (33) to match the constant limit $A$ observed for the $\rho$ ratio. A remarkable feature of the recent measurements [1, 2] of $\sigma(\gamma^{*}p)\>=\>(4\pi^{2}\alpha/Q^{2})\>F_{2}(x,Q^{2})$ at fixed $W$ is the transition from a flat behaviour in the low $Q^{2}$ domain to the steep $\sim\>Q^{-2}$ fall off characteristic of perturbative QCD, see Fig. 5. The transition appears to occur at $Q^{2}\sim 0.2{\rm GeV}^{2}$. 
Such a break with decreasing $Q^{2}$ may reflect either saturation due to the onset of absorptive corrections or the fact that we are entering the confinement domain. The observed features of the data favour the latter possibility. First, there is no similar break in the behaviour of $F_{2}$ as a function of $x$ at low $x$, which would be expected if absorptive corrections were important. A related observation is that the break, as a function of $Q^{2}$, appears to occur at the same value $Q^{2}\sim 0.2{\rm GeV}^{2}$ for those $W$ values for which data are available. Moreover we directly estimated the effect of the absorptive corrections using the eikonal rescattering model and found that they have a negligibly small effect on the $Q^{2}$ behaviour of the cross section and of $F_{2}$. On the other hand, if the break is due to confinement then it is expected to occur at a value of $\overline{Q}^{2}$ which corresponds to distances of the order of 1 fm, that is $$z(1-z)\>Q^{2}\;\sim\;Q^{2}/5\;\sim\;(0.2{\rm GeV})^{2}$$ (44) which gives $Q^{2}\sim 0.2{\rm GeV}^{2}$, where the break is observed. In our calculations we have used a running quark mass which links the current $(m_{{\rm curr}})$ to the constituent $(M_{0})$ mass. The growth of $m_{q}$ in the transition region from perturbative QCD to the large distance domain is an important non-perturbative effect, which we find is required by the $F_{2}$ data. From the theoretical point of view such a behaviour of $m_{q}$ may be generated by the spontaneous breakdown of chiral symmetry in the instanton QCD vacuum [15]. The qualitative features are that $m_{q}\sim M_{0}$ if the virtuality $q^{2}$ of the quark is less than or of the order of the square of the inverse instanton size, but that $m_{q}$ decreases quickly as $q^{2}$ increases. In our analysis we have used a simplified power approximation for $m_{q}$, see (40). It is interesting to explore the effect of a different choice of quark mass. 
The dashed curves in Fig. 10 show the effect of using the constituent (fixed) mass $M_{0}$ of the quarks in all the contributions to $F_{2}$ or $\sigma(\gamma^{*}p)$. As expected, in the large $Q^{2}\gg M_{0}^{2}$ perturbative domain the change has little effect. For small $Q^{2}$ it reduces the predictions. For example, the photoproduction estimates for $W\sim 200$ GeV are reduced by more than 10% and would bring our analysis more into line with the published H1 and ZEUS photoproduction measurements. However our running quark mass predictions (continuous curves) are more physically motivated and should be more reliable. It will be interesting to see if their agreement with the experimental values extracted in ref. [23] is maintained when the new photoproduction measurements are available from the HERA experiments. A noteworthy point of our description of the $F_{2}$ data is the importance of the non-diagonal $(M\neq M^{\prime})$ perturbative QCD contribution to the double dispersion relation (1). The contribution, which comes from the interference terms in (S0.Ex15) (and (S0.Ex17)), corresponds to the diagram shown in Fig. 3. It clearly has a negative sign, and moreover $$\left\{M^{2}\;=\;\frac{k_{T}^{2}+m_{q}^{2}}{z(1-z)}\right\}\;\neq\;\left\{M^{\prime 2}\;=\;\frac{(\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T})^{2}+m_{q}^{2}}{z(1-z)}\right\}.$$ (45) After the integration over the azimuthal angle in (S0.Ex15), the interference term exactly cancels the diagonal first term for any $l_{T}<k_{T}$ in the limit $Q^{2}\rightarrow 0$ and $m_{q}=0$. As a result the perturbative component of the cross section coming from the region of small $l_{T}$ essentially vanishes as $Q^{2}\rightarrow 0$ (of course there is also a non-negligible contribution coming from the domain $l_{T}>k_{T}$, which does not vanish as $Q^{2}\rightarrow 0$). This property, seen in the $l_{T}<l_{0}$ components shown in Figs. 
6 and 7, helps to reproduce the very flat $Q^{2}$ behaviour of $\sigma(\gamma^{*}p)$ observed at low $Q^{2}$, $Q^{2}\lesssim 0.2~{\rm GeV}^{2}$. The fact that this low $l_{T}$ gluon contribution becomes very small as $Q^{2}$ decreases (and in fact vanishes for $l_{T}<k_{T}$ in the $Q^{2}\rightarrow 0$ limit) may be considered as a justification of the perturbative QCD contribution to $F_{2}$ for low $Q^{2}$. The VDM cross section (and other diagonal contributions as well) decreases as $1/(M_{V}^{2}+Q^{2})^{2}$, so we require just such a component which increases with $Q^{2}$ in order to compensate the decrease of the diagonal terms. The compensation is well illustrated by Figs. 6 and 7, which show the behaviour of the various components as a function of $Q^{2}$. Of course the compensation (that is, the effect of the vanishing of the low $l_{T}$ contribution as $Q^{2}\rightarrow 0$) is more manifest in Fit A, where a larger part of the phase space is described in terms of perturbative QCD. It is interesting to note that in this paper we have included two different types of interference effect. First we have the dominant interference between the large $M$ and $M^{\prime}$ states, which gives rise to the decrease of the pure perturbative small $l_{T}$ component of the cross section as $Q^{2}\rightarrow 0$, and which is responsible for the good description of the low $Q^{2}$ data. Then there is the interference between the perturbative and non-perturbative amplitudes, which we have modelled using the perturbative formula in the region of small $M^{\prime}$ and/or small $|\mbox{\boldmath$k$}_{T}+\mbox{\boldmath$l$}_{T}|$. We have noted that this contribution is small due to the infrared stability of the integral, as was shown in (26). 
In summary we obtain an excellent description of $F_{2}$, or rather of $\sigma_{\gamma^{*}p}$, over the entire $Q^{2}$ range (from very low to high values of $Q^{2}$) in terms of physically motivated perturbative and non-perturbative contributions. The choice of the boundary between the perturbative and non-perturbative domains which gives an excellent fit to the data, is also found to yield a sensible gluon distribution and reasonable predictions for $F_{L}$. Acknowledgements We thank Krzysztof Golec-Biernat, Jan Kwiecinski and Vladimir Shekelyan for valuable discussions and their interest in this work. MGR thanks the Royal Society, INTAS (95-311) and the Russian Fund of Fundamental Research (98 02 17629), for support. AMS thanks the Polish State Committee for Scientific Research (KBN) grants No. 2 P03B 089 13 and 2 P03B 137 14 for support. Also this work was supported in part by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12 - MIHT). References [1] H1 collaboration: C. Adloff et al., Nucl. Phys. B497 (1997) 3; H1 collaboration: S. Aid et al., Nucl. Phys. B470 (1996) 3. [2] ZEUS collaboration: J. Breitweg et al., Phys. Lett. B407 (1997) 432. [3] V.N. Gribov, Sov. Phys. JETP 30 (1970) 709; J.J. Sakurai and D. Schildknecht, Phys. Lett. 40B (1972) 121; B. Gorczyca and D. Schildknecht, Phys. Lett. 47B (1973) 71. [4] B. Badelek and J. Kwiecinski, Phys. Lett.  B295 (1992) 263. [5] D. Schildknecht and H. Spiesberger, Bielefeld preprint BI-TP 97/25, hep-ph/9707447; D. Schildknecht, Acta Phys. Pol. B28 (1997) 2453. [6] G. Kerley and G. Shaw, Phys. Rev. D56 (1997) 7291. [7] H. Abramowicz and A. Levy, DESY preprint-97-251 (1997). [8] H. Abramowicz, E.M. Levin, A. Levy and U. Maor, Phys. Lett. B269 (1991) 465. [9] E. Gotsman, E.M. Levin and U. Maor, DESY preprint 97-154, hep-ph/9708275. [10] E.M. Levin, A.D. Martin, M.G. Ryskin and T.  
Teubner, Z. Phys. C74 (1997) 671. [11] A.H. Mueller, Nucl. Phys. B335 (1990) 115; S. Brodsky and P. Lepage, Phys. Rev. D22 (1980) 2157. [12] N.N. Nikolaev and B.G. Zakharov, Z. Phys. C49 (1991) 607. [13] T.H. Bauer, R.D. Spital, D.R. Yennie and F.M.  Pipkin, Rev. Mod. Phys. 50 (1978) 261. [14] ZEUS collaboration: contribution to the 6th International Workshop on “Deep Inelastic Scattering and QCD” (DIS98), Brussels, April 1998. [15] V.Yu. Petrov et al., RUB-TPII-8/97, hep-ph/9710270. [16] J. Kwiecinski, A.D. Martin and A.M. Stasto, Phys. Rev. D56 (1997) 3991. J. Kwiecinski, A.D. Martin and A.M. Stasto, Acta Phys. Polon.   B28, No.12 (1997) 2577. [17] V.S. Fadin and L.N. Lipatov, hep-ph/9802290; see also V.S. Fadin and L.N. Lipatov, contribution to DIS 98, Brussels, April 1998; G. Camici and M. Ciafaloni, contribution to DIS 98, Brussels, April 1998. [18] J. Kwiecinski, A.D. Martin and P.J. Sutton, Z. Phys. C71 (1996) 585. [19] M. Glück, E. Reya and A. Vogt, Z. Phys. C67 (1995) 433. [20] A.D. Martin, R.G. Roberts and W.J. Stirling, Phys. Rev. D50 (1994) 6734. [21] H1 collaboration: S. Aid et al., Z. Phys. C69 (1995) 27. [22] ZEUS collaboration: M. Derrick et al., Z. Phys. C63 (1994) 408. [23] J. Mainusch, Measurement of the total photon-proton cross section at HERA energies, University of Hamburg, Ph.D. thesis (1995), DESY F35D-95- 14. [24] B. Badelek, J. Kwiecinski and A. Stasto, Z. Phys. C74 (1997) 297. A. Stasto, Acta Phys. Polon. B27, (1996) 1353. [25] A.D. Martin, R.G. Roberts, W.J. Stirling and R.S. Thorne, hep-ph/9803445, Eur. Phys. J. C (in press). Figure Captions Fig. 1 The schematic representation of the double dispersion (1) for the $\gamma^{*}p$ total cross section $\sigma(s,Q^{2})$ at fixed c.m. energy $\sqrt{s}$. The cut variables, $M$ and $M^{\prime}$, are the invariant masses of the incoming and outgoing $q\overline{q}$ states in the quasi-elastic forward amplitude, $A_{q\overline{q}+p}$. Fig. 2 The quark-proton interaction via two gluon exchange. 
The spectator (anti)quark is shown by the dashed line. $f(x,l_{T}^{2})$ is the unintegrated gluon distribution of the proton. Fig. 3 A “non-diagonal” $q\overline{q}-{\rm proton}$ interaction. Fig. 4 The description of the $F_{2}$ data obtained in Fits A and B. Only a subset of the data fitted is shown. Fig. 5 The curves are the values of the virtual photon-proton cross section $\sigma_{\gamma^{*}p}$ of (5) as a function of $Q^{2}$ for various values of the energy $W=\sqrt{s}$ corresponding to Fits A and B (multiplied by the factor shown in brackets). The data [1, 2] are assigned to the value of $W$ which is closest to the experimental $W$ bin. The upper and lower photoproduction (solid triangular) data points correspond to $W=210$ and 170 GeV and are from the H1 [21] and ZEUS [22] collaborations, respectively. The open triangular points are obtained from an analysis of ZEUS photoproduction data reported in a thesis by Mainusch [23]. Fig. 6 The various components of $\sigma_{\gamma^{*}p}$ (as defined in Section 2.3) shown as a function of $Q^{2}$ at $W=245$ GeV for Fit A (with $k_{0}^{2}=0.2$ GeV${}^{2}$). The bold curve shows their sum, $\sigma_{\gamma^{*}p}$, compared to the HERA measurements. Fig. 7 The same as Fig. 6 but for Fit B (with $k_{0}^{2}=0.5$ GeV${}^{2}$). The poorer description of the data in the region $Q^{2}\sim 1$ GeV${}^{2}$, as compared to Fit A, is clearly apparent and can be attributed to the smaller perturbative QCD component at low gluon $l_{T}$. Fig. 8 The predictions for $F_{L}$ versus $Q^{2}$ at $W=210$ GeV from Fits A, B (with $k_{0}^{2}=0.2$ and 0.5 GeV${}^{2}$ respectively), together with the values obtained by Badelek, Kwiecinski and Stasto [24] and from the MRST set of partons [25]. Fig. 9 The dashed curve is the prediction for $\sigma_{L}/\sigma_{T}$ versus $Q^{2}$ at $W=210$ GeV from Fit A. 
For comparison the continuous curve is the prediction obtained using a different choice of the VDM contribution to $F_{L}$; namely using (33) in the place of (32). Fig. 10 The dotted curves show the effect of using a (fixed) constituent mass, $M_{0}$, in all contributions. The running mass fit (continuous curves) and the data are those of Fig. 5.
Unveiling Physical Processes in Type Ia Supernovae with a Laue Lens Telescope, J. A. Tomsick, S. E. Boggs Space Sciences Laboratory, University of California Berkeley, 7 Gauss Way, Berkeley CA 94720-7450, USA E-mail:    P. von Ballmoos, J. Rousselle Centre d’Etude Spatiale des Rayonnements, UMR 5187, 9, av du Colonel Roche, 31028 Toulouse, France Abstract: We present in this paper a focusing gamma-ray telescope with the goal of addressing the true nature of Type Ia Supernovae (SNe Ia). This telescope is based on a Laue lens focusing a 100 keV wide energy band centered on 847 keV, which corresponds to a bright line emitted by the decay chain of ${}^{56}$Ni, a radioactive element produced in large quantities during SNe Ia events. Spectroscopy and light curve measurements of this gamma-ray line allow for a direct measurement of the underlying explosion physics and dynamics, and thus discriminate among the competing models. However, reaching this goal requires the observation of several events with high detection significance, which calls for more powerful telescopes. The telescope concept we present here is composed of a Laue lens held 30 m from the focal plane instrument (a compact Compton telescope) by an extendible mast. With a 3-$\sigma$ sensitivity of 1.8$\times$10${}^{-6}$ ph/s/cm${}^{2}$ in the 3%-broadened line at 847 keV (in 1 Ms observation time), dozens of SNe Ia could be detected per year out to $\sim$40 Mpc, enough to perform detailed time-evolved spectroscopy on several events each year. This study took place in the framework of the DUAL mission proposal, which was recently submitted to ESA for the third medium class mission of the Cosmic Vision program. 1 Introduction Type Ia supernovae (SNe Ia) are used as cosmological standard candles to determine extragalactic distances with ever increasing precision. 
Based on this, extensive efforts during the past two decades led to the astonishing result that the expansion of the Universe appears to be accelerating, which implies the existence of dark energy [1, 2]. Despite the fact that this result is apparently sound, it is well known that SNe Ia are actually not standard candles. Their intrinsic luminosity can vary by a factor of three, as was emphasized for instance in 1991 with the extremely faint SN 1991bg [3] and the extremely bright SN 1991T [4]. However, Phillips (1993) [5] showed that there is a correlation between the shape of a SN Ia light curve and its luminosity: supernovae with the steepest decline are the least luminous. This one-parameter empirical law proved to work extremely well, but does not provide any explanation for why this is the case. Why SNe Ia brightnesses change with light curve shape is a mystery, which is actually related to our lack of understanding of the supernova explosions themselves. Without even knowing the progenitor system of these objects, it is difficult to infer any observational feature. Whereas a deeper understanding of SNe Ia physics is not likely to threaten the outstanding cosmological results obtained during the last two decades, it would help guide the empirical work, and provide confidence that stellar evolution is not subtly undermining the cosmological inferences. More importantly, it would help reduce the systematics associated with their use as distance indicators. Aside from cosmology, SNe Ia are actually a full-fledged topic of study on their own, as they are laboratories for extreme physics. Understanding their underlying physical processes would also clarify their role in the nucleosynthesis of the heavy elements present in the Universe. Future ground-based observatories and space-borne missions promise to bring thousands of new supernova discoveries that will allow their classification into subcategories with significant statistics. 
Sorting SNe Ia by, for instance, host galaxy type, host galaxy metallicity, color and redshift will certainly uncover new correlations and allow us to infer some of their intrinsic properties. Observations in the gamma-ray domain cannot bring such huge statistics. Even the most sensitive instruments, such as those presented in this paper, can “only” detect a few tens of SNe Ia per year. However, the observation of the gamma-ray lines emitted by the decay of radioactive elements synthesized during the supernova explosion is a very powerful probe of the physics of these objects. All SN Ia models predict that a large quantity of radioactive ${}^{56}$Ni is synthesized, which decays following the ${}^{56}$Ni $\rightarrow$ ${}^{56}$Co $\rightarrow$ ${}^{56}$Fe chain, emitting many gamma-ray lines including the bright line at 847 keV. The observation of this line reveals the location of the radioactivity within the ejecta through the time-dependence of the photon escape, and the ejection velocities of various layers through the line Doppler profiles. Therefore, spectroscopy and light curve measurements of the gamma-ray lines allow for direct measurement of the underlying explosion physics and dynamics, and thus discriminate among the competing models [6]. There are currently three classes of competing models to describe SNe Ia, all of which feature a degenerate C/O white dwarf (WD) in a binary system. A C/O WD is the final state of a main sequence star whose mass was in the 0.5 - 6.5 M${}_{\odot}\,$range, which allows the fusion of He into C and O via the triple-alpha process, but not the fusion of C into Ne, thus producing an inert core of C and O. After the star has ejected its remaining He and H layers into a planetary nebula, the C/O WD remains and has a maximum mass of $\sim$ 1.1 - 1.2 M${}_{\odot}\,$[7]. The most commonly accepted scenario to produce a SN Ia involves a massive C/O WD accreting matter from a companion star in a close binary system. 
As the mass of the WD builds up and reaches the Chandrasekhar mass limit ($\sim$ 1.38 M${}_{\odot}\,$), its central density rises above 10${}^{9}$ g cm${}^{-3}$, resulting in the ignition of C fusion near its center. The outward-propagating thermonuclear flame quickly becomes turbulent, causing an energetic explosion. Although this is the “standard model”, this scenario is challenged by the fact that copious X-ray emission is expected during an accretion phase that should last about 10${}^{7}$ years, but it is not observed. This fact suggests that this scenario might not be the dominant one [8]. An alternative, or perhaps complementary, class of models involves a C/O WD of mass in the range 0.6-0.9 M${}_{\odot}\,$bound to a main sequence star. As in the first scenario, the companion star feeds the WD with its matter, mostly H and He. After a certain time, fusion reactions ignite at the base of the accreted layer, producing an inward propagating compression wave that drives the central density and temperature into a range that allows for carbon fusion under explosive conditions. In this sub-Chandrasekhar scenario, the amount of ${}^{56}$Ni produced increases with the mass of the WD. This could provide an explanation for the brightness-decline rate relationship described by Phillips (1993), as more ${}^{56}$Ni implies brighter SNe Ia and a longer diffusion time [6]. The third class of models involves the coalescence of two C/O WDs whose orbital distance decays by emission of gravitational waves [9]. While there are several possibilities for bringing the two WDs together (for instance, through an accretion disc formed by the less massive WD or through a direct collision), the result is a massive C/O WD with a mass exceeding the Chandrasekhar limit in most cases. At this point, the story is the same as in the first scenario. This model is supported by recent observational hints; for instance, Scalzo et al. 
(2010) [10] report that SN 2007if produced $\sim$1.6 M${}_{\odot}\,$of ${}^{56}$Ni, and Tanaka et al. (2010) [11] suggest that SN 2009dc produced $\gtrsim$ 1.2 M${}_{\odot}\,$of ${}^{56}$Ni. In both cases, the progenitor should have had a mass exceeding the Chandrasekhar limit. In order to discriminate between these models, detections with high significance ($\gtrsim 25\sigma$) are needed to have enough counts to obtain detailed spectra. The requirement is to detect at least one SN Ia per year with such photon statistics. However, past and present instruments were not sensitive enough to reach this goal, mainly because they were not using focusing optics. Both coded mask and Compton telescopes share the same problem: their instrumental background is (roughly) proportional to their volume, which means that their sensitivity scales only with the square root of their exposed area. Focusing instruments have two tremendous advantages: first, the volume of the focal plane detector can be made much smaller than for non-focusing instruments, and second, the residual background, often time-variable, can be measured simultaneously with the source, and can be reliably subtracted. The concept of a Laue gamma-ray lens holds the promise of extending focusing capabilities into the MeV range. In order to achieve the ultimate sensitivity for the gamma-ray lens mission, the focal plane detector must be designed to match the characteristics of the lens focal spot. A Compton telescope is a good solution because the focal plane is intrinsically finely pixelated, optimized for MeV gamma-ray detection, and because the direction of incident gamma-rays can be determined by the Compton reconstruction, enabling discrimination of gamma-rays coming from the lens (“electronic collimation”). In this paper, we describe a Laue lens gamma-ray telescope concept dedicated to the study of SNe Ia through the observation of the 847 keV line. 
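The background-scaling argument above (background roughly proportional to detector volume, hence sensitivity improving only as the square root of the exposed area) can be illustrated with a toy calculation; all numerical values below are illustrative assumptions, not taken from any real instrument:

```python
import math

def min_detectable_flux(area_cm2, bkg_per_cm2_s, t_s, n_sigma=3.0):
    """Minimum detectable flux of a background-limited counter (toy model).

    Signal counts S = F * A * t; background counts B = b * A * t (the
    background scales with the detector volume, i.e. with area for a fixed
    thickness). Requiring S / sqrt(B) >= n_sigma gives
    F_min = n_sigma * sqrt(b / (A * t)).
    """
    return n_sigma * math.sqrt(bkg_per_cm2_s / (area_cm2 * t_s))

T = 1e6      # 1 Ms observation
B = 1e-3     # assumed background rate per cm^2 per second (illustrative)

f_small = min_detectable_flux(1000.0, B, T)
f_large = min_detectable_flux(4000.0, B, T)
print(f_large / f_small)   # 0.5: quadrupling the area only halves F_min
```

A focusing optic breaks this scaling: the collecting area (the lens) can grow without growing the detector volume, so the sensitivity improves roughly linearly with collecting area instead of as its square root.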
This work has been done in the framework of the DUAL mission [12, 13], which has recently been proposed to the European Space Agency as a medium class mission. In the next section, we introduce the basics of Laue lenses. Then, the telescope concept is presented, with an emphasis on the Laue lens. Finally, a sensitivity estimate is given, and the scientific perspectives it offers are discussed. 2 Laue lens concept A Laue lens concentrates gamma-rays using Bragg diffraction in the volume of a large number of crystals arranged in concentric rings and accurately oriented in order to diffract radiation coming from infinity towards a common focal point (see e.g. Ref. [14]). In the simplest design, each ring is composed of identical crystals with their axis of symmetry defining the optical axis of the lens (c.f. Figure 1). Bragg’s law links the angle $\theta_{B}$ between the ray’s direction of incidence and the diffraction planes to the diffracted energy $E$ through the d-spacing $d_{hkl}$ of the diffracting planes: $$\displaystyle 2d_{hkl}\sin\theta_{B}={hc\over E}\>\>\Leftrightarrow\>\>2d_{HKL}\sin\theta_{B}=n\,{hc\over E}$$ (1) with $h$, $k$, $l$ being the Miller indices defining the set of diffracting planes at work (another notation uses the prime numbers $H$, $K$, $L$ and the order of diffraction $n$); $h$ is the Planck constant and $c$ is the velocity of light in vacuum. Considering a focal distance $f$, the mean energy diffracted by a ring depends only on its radius $r$ and the d-spacing of its constituent crystals [15]: $$\displaystyle E={hc\over 2d_{hkl}\sin\left({1\over 2}\arctan\left({r\over f}\right)\right)}\varpropto{f\over d_{hkl}\,r}~{}~{}~{}.$$ (2) As equation (2) shows, if the variation of radius is compensated by a change in the d-spacing of the crystals, several rings can diffract the same energy, adding up the effective area over a narrow energy band ($E_{1}=E_{2}$ in Fig. 1). However, if an identical crystal and reflection (i.e. 
the same d-spacing) is employed on many adjacent rings, a continuous energy band is obtained (and $E_{1}>E_{2}$ in Fig. 1). The Laue lens presented hereafter is a combination of these two principles. More generally, the principle is applicable from $\sim$100 keV up to $\sim$1.5 MeV, limited at the low end by the fact that the crystal thickness maximizing the reflectivity becomes too small to allow safe handling, and at the high end by the reflectivity, which becomes too low. Although Laue lenses are better adapted to cover relatively narrow energy bandpasses, it is possible to get a sizable continuous effective area over several hundreds of keV, as demonstrated for instance in Refs. [16] and [17]. 3 Proposed telescope design The proposed telescope consists of a Laue lens focusing in a $\sim$100-keV wide energy bandpass centered on 847 keV. At the focus is a compact Compton camera. A deployable mast maintains the two instruments 30 m apart, the Laue lens being attached to the spacecraft while the Compton camera is at the tip of the mast, far from the spacecraft mass. With the observatory placed at the second Sun-Earth Lagrangian point, this configuration would ensure a minimal instrumental background in the detector and cancel the differential gravitational forces that such a long structure would undergo in low Earth orbit; this is basically the DUAL mission concept. For the mission proposed to ESA, the compact Compton camera is left unshielded and thus sees nearly the whole sky all the time. In DUAL, the Compton camera acts simultaneously as focal plane for the lens and as all-sky monitor, continuously surveying the gamma-ray sky and accumulating more data on every source in the sky. 3.1 Laue lens optic The Laue lens optic proposed for the DUAL mission is composed of 5800 crystal slabs of 10$\times$10 mm${}^{2}$ arranged in 32 concentric rings, each populated by identical crystals. 
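Equations (1) and (2) can be checked numerically. The sketch below assumes Cu(111) diffracting planes, with $d=a/\sqrt{3}$ and lattice constant $a=3.615$ Å (our assumption; Cu is one of the crystal materials used), and the 30 m lens-to-detector distance given in the text:

```python
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV * Angstrom

def ring_energy_kev(r_m, f_m, d_angstrom):
    """Mean energy diffracted by a ring of radius r for focal distance f.

    Equation (2): the Bragg angle for rays from infinity focused at f is
    theta_B = 0.5 * arctan(r / f), and E = h*c / (2 * d * sin(theta_B)).
    """
    theta_b = 0.5 * math.atan2(r_m, f_m)
    return HC_KEV_ANGSTROM / (2.0 * d_angstrom * math.sin(theta_b))

# Assumed crystal: Cu(111), d = a / sqrt(3) with a = 3.615 Angstrom
d_cu111 = 3.615 / math.sqrt(3.0)

# A Cu(111) ring at r ~ 21 cm with a 30 m focal length diffracts ~ 850 keV
print(ring_energy_kev(0.21, 30.0, d_cu111))

# Doubling the radius roughly halves the energy, since E is proportional
# to f / (d * r) in the small-angle limit
print(ring_energy_kev(0.42, 30.0, d_cu111))
```

This also makes concrete why many adjacent rings with the same reflection span a continuous band: each small step in $r$ shifts the diffracted energy slightly.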
The crystals (Rh, Ag, Pb, Cu and Ge) are mosaic crystals made of pure material. Table 1 summarizes the parameters of the lens and details its composition, while Figure 2 shows the arrangement of the crystals in rings. Samples of every crystal material used in this design have been measured in a high energy photon beam (either using beamline ID15A at ESRF or instrument GAMS 5 at ILL, both in Grenoble, France), proving that they exist with the required mosaicity [18]. Indeed, there is no doubt that Ge and Cu are available for the realization of a Laue lens: 576 Ge crystals produced by IKZ (Berlin, Germany) were used for the CLAIRE Laue lens prototype [19], and Cu crystals recently benefitted from an ESA-funded study, which established their reproducibility with constant quality in large quantities. The “new” materials, Rh, Ag and Pb, are currently being studied at Space Sciences Laboratory in collaboration with Mateck GmbH (Juelich, Germany). The aim is to develop a method for producing a large series of crystals with constant quality, which encompasses growth of homogeneous ingots, orientation, quality check, cutting, removal of cut-induced damage (by acid etching) and final characterization. This activity is ongoing and should produce results around Fall 2011. The calculation of the lens effective area (left panel of Figure 2 and right panel of Figure 3) uses experimentally validated parameters to feed the mosaic crystal model (Darwin’s model using the dynamical theory of diffraction [20]). It takes into account 3 mm of equivalent aluminum for the substrate absorption and assumes crystals non-ideally oriented following a Gaussian distribution of $\sigma$=10 arcsec (c.f. next section). The peak effective area is 330 cm${}^{2}$ at 847 keV and averages 318 cm${}^{2}$ over a 26-keV wide band centered on 847 keV, which corresponds to the expected 3% broadening of the line. 
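A quick consistency check of the band quoted above: 3% of 847 keV is about 25.4 keV, i.e. the $\sim$26-keV averaging band. Reading the fractional width as a Doppler scale $v/c\approx 0.03$ is our gloss, not a statement from the text, but it corresponds to ejecta velocities of order 10${}^{4}$ km/s, typical of SNe Ia:

```python
# Consistency check of the 26-keV band around the 847 keV line
E0 = 847.0           # keV: line energy
frac = 0.03          # expected fractional broadening of the line

width = frac * E0    # ~ 25.4 keV, matching the ~26 keV averaging band
print(width)

# Interpretive gloss (assumption): the same fraction read as v/c gives
# the velocity scale of the expanding ejecta
C_KM_S = 2.998e5     # speed of light in km/s
v = frac * C_KM_S    # ~ 9000 km/s
print(v)
```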
As shown in the right panel of Figure 3, 60% of the effective area is concentrated onto 1.45 cm${}^{2}$, which represents a gain of 132. 3.2 Crystal orientation precision requirement The aim of a Laue lens is to concentrate the maximum signal onto the smallest surface. The sensitivity of a Laue lens telescope is actually proportional to $$\displaystyle f\propto{\sqrt{A_{focalspot}}\over A_{eff}\,\epsilon_{focalspot}}$$ (3) with $A_{focalspot}$ being the area on the detector that maximizes the detection efficiency, $\epsilon_{focalspot}$ the fraction of signal that is encompassed in the area $A_{focalspot}$, and $A_{eff}$ the average effective area over the energy band of interest (a 0.03 $\times$ 847 keV wide band centered on 847 keV in the present case). We use this figure of merit to evaluate the tolerance on the orientation of the crystals about their most critical axis, the axis tangent to each ring, i.e. the one that sets the Bragg angle. Table 2 shows the variation of the sensitivity with the misorientation of the crystals around their nominal angle. The distribution of misorientations is taken to be Gaussian. A misorientation of $\sigma_{misorientation}$ = 30 arcsec results in a factor of 2 loss in sensitivity. Thus, the goal for the assembly of the lens is 10 arcsec, and the requirement is 20 arcsec. 3.3 Field of view and pointing precision requirement Calculated according to the same principle as in the previous section, Figure 4 shows the variation of the sensitivity (normalized to the on-axis value) for a point source, as a function of its angular distance to the optical axis of the lens (i.e. the off-axis angle). As one can see, the sensitivity drops extremely quickly with the off-axis angle. An off-axis angle of 125 arcsec induces a sensitivity loss of an order of magnitude. Indeed, this lens is not designed to make an image or to survey the sky. 
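Plugging the focal-spot numbers quoted in this section into equation (3) reproduces the stated concentration gain; the figure-of-merit evaluation below is a sketch using the band-averaged effective area from the text:

```python
import math

# Numbers quoted in the text for the 847 keV band
A_eff = 318.0    # cm^2: band-averaged effective area
eps = 0.60       # fraction of the signal falling in the focal spot
A_spot = 1.45    # cm^2: focal-spot area containing that fraction

# Flux concentration on the detector relative to the unfocused flux
gain = eps * A_eff / A_spot
print(round(gain))   # 132, as quoted

# Figure of merit of equation (3): proportional to the minimum detectable
# flux, so smaller is better; meaningful only when comparing configurations
# (e.g. the misorientation cases of Table 2).
fom = math.sqrt(A_spot) / (A_eff * eps)
print(fom)
```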
It is designed to observe point sources of known position, as the peak of the 847 keV line light curve arises $\sim$ 50 days after the WD explosion, well after the optical light curve has reached its maximum intensity. From this curve, one can determine the required pointing accuracy. A decrease of sensitivity by a factor of two occurs when the source is 42 arcsec off-axis. Thus, the requirement is to keep the source no more than 30 arcsec from the on-axis position (sensitivity loss factor of 1.4), with a goal of 15 arcsec (sensitivity loss factor of 1.2). 3.4 Substrate and thermal control For such a small lens (external diameter lower than 1 m), a monolithic substrate is preferred, as it would save the weight of module-to-module attachments and prevent any loss of area at the interfaces, ensuring a maximum packing factor for the crystals. An ideal material for the substrate is silicon carbide, as it is extremely lightweight, allows for precise machining, and is a good thermal conductor. Based on existing devices, one can estimate the weight of the substrate to be lower than 10 kg for the proposed Laue lens [21]. The thermal stability requirement derives from the fact that the crystals should keep their orientation within $\pm$10 arcsec with respect to the lens optical axis. This means that any warping should not induce a slope of more than 10 arcsec with respect to the nominal plane. During the MAX [22] pre-phase A study at CNES (the French Space Agency), the thermal control of the Laue lens was investigated [23]. It showed that a cocoon of multilayer insulation surrounding the lens, combined with a few heaters and thermistors on the lens structure, is sufficient to limit the thermal gradient to 2${}^{\circ}$C across the lens for any sun exposure, which in the case of MAX would prevent any warping worse than 10 arcsec. 
The proposed Laue lens has a much simpler and smaller structure than MAX’s, so it is reasonable to believe that this result can be safely applied to the present case. Hence, a simple thermal control mainly based on passive insulation should allow the lens to maintain its assembly temperature within 2${}^{\circ}$C, which should be enough to ensure nominal performance. 3.5 Leads for performance improvement The replacement of the silver crystals by gold would result in an increase of the effective area of 5% and a decrease of the crystal mass of 6%. Additionally, two rings populated with InSb crystals could be added on the inside of the lens, adjacent to the Ge ring. This would add $\sim$1 kg to the total mass and would increase the effective area by $\sim$ 3.5% (i.e. 12 cm${}^{2}$). Additionally, smaller crystals with slightly less mosaicity would increase the sensitivity, as they would decrease the size of the focal spot on the detector. Figure 5 shows that crystals of 8$\times$8 mm${}^{2}$ with a mosaicity of 40 arcsec (instead of 10$\times$10 mm${}^{2}$ and 45 arcsec) would increase the sensitivity by $\sim$ 5%. It appears that although there are possibilities for sensitivity improvement with the present design, they are minor. If a major improvement is to be achieved, the only way, to our knowledge, is to increase the focal length. A longer focal length would allow more area for the crystals, leading to a sizable increase of the effective area at the cost of an increase of the lens mass. As an example, a similar Laue lens having a focal length of 40 m has been designed. It results in an increase of sensitivity of 33%, for a crystal mass of 67 kg (7000 crystal slabs of 10$\times$10 mm${}^{2}$) and an outer radius of 112 cm. 3.6 Focal plane The focal plane considered for the performance estimates (c.f. next section) is the All-Sky Compton Imager featured in the DUAL mission. 
This camera is made of 45 planar cross-stripped high purity Ge detectors, similar to those developed for the Nuclear Compton Telescope [24], arranged in 5 layers of 3$\times$3 elements. Each detector measures 10$\times$10$\times$1.5 cm${}^{3}$ with 2-mm pitch strips, allowing event positioning within a volume of 1.6 mm${}^{3}$. These detectors combine spatial resolution and excellent spectral resolution, making them perfect elements for a Compton camera. It is to be noted, however, that there is no need for such a big focal plane for the Laue lens alone. This design was driven by the need to have a powerful all-sky imager. Regarding the Laue lens, the same performance would be obtained by a single stack of 5 HPGe elements surrounded by 4 additional elements acting as side walls. 4 Performance - conclusions The sensitivity of the instrument presented in this paper has been calculated using the MEGAlib code [25] in order to account for Compton reconstruction of the events in the focal plane, thus applying “electronic collimation” by selecting only events whose cone of possible incidence directions intersects the lens and whose first interaction lies in the PSF of the lens. The resulting 3-$\sigma$ sensitivity is 1.8$\times$10${}^{-6}$ ph s${}^{-1}$ cm${}^{-2}$ for 10${}^{6}$ s observation time of a 3% broadened line. This is about 30 times more sensitive than INTEGRAL/SPI. With such tremendous sensitivity, SNe Ia can be detected out to $\sim$40 Mpc, which means dozens of detections per year, with at least one at high significance: enough to make a real breakthrough in our understanding of these events. The Laue lens concept presented in this paper weighs about 80 kg (including margin), and is based on existing crystals and existing substrate technology. Different mounting methods for the crystals are being developed in Toulouse (France), Ferrara (Italy) and more recently in Berkeley (USA), driving important progress in the technology readiness level. 
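The quoted detection horizon can be turned into a simple inverse-square scaling, assuming (our assumption) that for a fixed exposure and background the detection significance scales linearly with the source flux:

```python
import math

F_LIM_3SIG = 1.8e-6   # ph/s/cm^2: 3-sigma sensitivity in 1 Ms (from the text)
D_3SIG_MPC = 40.0     # Mpc: quoted 3-sigma detection horizon

def distance_for_sigma(n_sigma, d_ref_mpc=D_3SIG_MPC, sigma_ref=3.0):
    """Distance at which the same event reaches n_sigma significance.

    Flux falls as 1/d^2 and significance is taken proportional to flux,
    so d(n_sigma) = d_ref * sqrt(sigma_ref / n_sigma).
    """
    return d_ref_mpc * math.sqrt(sigma_ref / n_sigma)

# Distance inside which the >= 25 sigma spectroscopy goal would be met
print(distance_for_sigma(25.0))   # ~ 13.9 Mpc

# Peak line flux implied for a just-detectable event, rescaled to 10 Mpc
f_10mpc = F_LIM_3SIG * (D_3SIG_MPC / 10.0) ** 2
print(f_10mpc)                    # ~ 2.9e-5 ph/s/cm^2
```

Under this scaling, the requirement of at least one high-significance SN Ia per year translates into at least one event within roughly 14 Mpc per year.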
According to industrial partners, the deployable mast technology is not a critical point for this kind of mission, as the NuSTAR mission [26] will soon demonstrate with its 10-m long mast. All the ingredients are coming together to finally construct a telescope powerful enough to address the true nature of SNe Ia. Acknowledgments. The authors thank Dr. Andreas Zoglauer for his contribution in determining the sensitivity of the proposed telescope through an end-to-end calculation, including the Compton reconstruction of the events in the focal plane and the simulation of the instrumental background for an L2 orbit, assuming the DUAL all-sky Compton telescope as the focal plane. References [1] Riess, A. G., Filippenko, A. V., Challis, P., Clocchiatti, A., Diercks, A., Garnavich, P. M., Gilliland, R. L., Hogan, C. J., Jha, S., Kirshner, R. P., Leibundgut, B., Phillips, M. M., Reiss, D., Schmidt, B. P., Schommer, R. A., Smith, R. C., Spyromilio, J., Stubbs, C., Suntzeff, N. B., and Tonry, J., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, AJ 116, 1009-1038 (1998). [2] Perlmutter, S., Aldering, G., Goldhaber, G., Knop, R. A., Nugent, P., Castro, P. G., Deustua, S., Fabbro, S., Goobar, A., Groom, D. E., Hook, I. M., Kim, A. G., Kim, M. Y., Lee, J. C., Nunes, N. J., Pain, R., Pennypacker, C. R., Quimby, R., Lidman, C., Ellis, R. S., Irwin, M., McMahon, R. G., Ruiz-Lapuente, P., Walton, N., Schaefer, B., Boyle, B. J., Filippenko, A. V., Matheson, T., Fruchter, A. S., Panagia, N., Newberg, H. J. M., Couch, W. J., and The Supernova Cosmology Project, Measurements of Omega and Lambda from 42 High-Redshift Supernovae, ApJ 517, 565-586 (1999). [3] Filippenko, A. V., Richmond, M. W., Branch, D., Gaskell, M., Herbst, W., Ford, C. H., Treffers, R. R., Matheson, T., Ho, L. C., Dey, A., Sargent, W. L. W., Small, T. A., and van Breugel, W. J. 
M., The subluminous, spectroscopically peculiar type IA supernova 1991bg in the elliptical galaxy NGC 4374, AJ 104, 1543-1556 (1992). [4] Phillips, M. M., Wells, L. A., Suntzeff, N. B., Hamuy, M., Leibundgut, B., Kirshner, R. P., and Foltz, C. B., SN 1991T - Further evidence of the heterogeneous nature of type IA supernovae,’ AJ 103, 1632-1637 (1992). [5] Phillips, M. M., The absolute magnitudes of Type IA supernovae, ApJ 413, L105-L108 (1993). [6] Pinto, P. A., Eastman, R. G., and Rogers, T., A Test for the Nature of the Type IA Supernova Explosion Mechanism, ApJ 551, 231–243 (2001). [7] Weidemann, V., Revision of the initial-to-final mass relation, A&A 363, 647-656 (2000). [8] Gilfanov, M. and Bogdán, Á., An upper limit on the contribution of accreting white dwarfs to the typeIa supernova rate, Nature 463, 924-925 (2010). [9] Mochkovitch, R. and Livio, M., The coalescence of white dwarfs and type I supernovae - The merged configuration, A&A 236, 378-384 (1990). [10] Scalzo, R. A., Aldering, G., Antilogus, P., Aragon, C., Bailey, S., Baltay, C., Bongard, S., Buton, C., Childress, M., Chotard, N., Copin, Y., Fakhouri, H. K., Gal-Yam, A., Gangler, E., Hoyer, S., Kasliwal, M., Loken, S., Nugent, P., Pain, R., Pécontal, E., Pereira, R., Perlmutter, S., Rabinowitz, D., Rau, A., Rigaudier, G., Runge, K., Smadja, G., Tao, C., Thomas, R. C., Weaver, B., and Wu, C., Nearby Supernova Factory Observations of SN 2007if: First Total Mass Measurement of a Super-Chandrasekhar-Mass Progenitor, ApJ 713, 1073-1094 (2010). [11] Tanaka, M., Kawabata, K. S., Yamanaka, M., Maeda, K., Hattori, T., Aoki, K., Nomoto, K., Iye, M., Sasaki, T., Mazzali, P. A., and Pian, E., Spectropolarimetry of Extremely Luminous Type Ia Supernova 2009dc: Nearly Spherical Explosion of Super-Chandrasekhar Mass White Dwarf, ApJ 714, 1209-1216 (2010). [12] von Ballmoos, P., Takahashi, T., and Boggs, S. E., A DUAL mission for nuclear astrophysics, NIM A 623, 431-433 (2010). 
[13] Boggs, S., Wunderer, C., von Ballmoos, P., Takahashi, T., Gehrels, N., Tueller, J., Baring, M., Beacom, J., Diehl, R., Greiner, J., Grove, E., Hartmann, D., Hernanz, M., Jean, P., Johnson, N., Kanbach, G., Kippen, M., Knödlseder, J., Leising, M., Madejski, G., McConnell, M., Milne, P., Motohide, K., Nakazawa, K., Oberlack, U., Phlips, B., Ryan, J., Skinner, G., Starrfield, S., Tajima, H., Wulf, E., Zoglauer, A., and Zych, A., DUAL Gamma-Ray Mission, in NASA Decadal Survey ”Astro 2010” white paper (2010). [14] Lund, N., A study of focusing telescopes for soft gamma rays, Exp. A 2, 259-273 (1992). [15] Halloin, H. and Bastie, P., Laue diffraction lenses for astrophysics: Theoretical concepts, Exp. Astron. 20, 151-170 (2005). [16] Barrière, N. M., Natalucci, L., Abrosimov, N., von Ballmoos, P., Bastie, P., Courtois, P., Jentschel, M., Knödlseder, J., Rousselle, J., and Ubertini, III, P., Soft gamma-ray optics: new Laue lens design and performance estimates, Proc. SPIE 7437 (2009). [17] Barrière, N., von Ballmoos, P., Bastie, P., Courtois, P., Abrosimov, N. V., Andersen, K., Buslaps, T., Camus, T., Halloin, H., Jentschel, M., Knödlseder, J., Roudil, G., Serre, D., and Skinner, G. K., R&d progress on second-generation crystals for laue lens applications, Proc. SPIE 6688, 66880O (2007). [18] Barrière, N. M., Tomsick, J., Boggs, S. E., Rousselle, J., and von Ballmoos, P., Development of efficient laue lenses: experimental results and projects, Proc. SPIE 7732 (2010). [19] von Ballmoos, P., Halloin, H., Evrard, J., Skinner, G., Abrosimov, N., Alvarez, J., Bastie, P., Hamelin, B., Hernanz, M., Jean, P., Knödlseder, J., Lonjou, V., Smither, B., and Vedrenne, G., CLAIRE’s first light, NAR 48, 243-249 (2004). [20] Barrière, N., Rousselle, J., von Ballmoos, P., Abrosimov, N. V., Courtois, P., Bastie, P., Camus, T., Jentschel, M., Kurlov, V. 
N., Natalucci, L., Roudil, G., Frisch Brejnholt, N., and Serre, D., Experimental and theoretical study of the diffraction properties of various crystals for the realization of a soft gamma-ray Laue lens, J. Applied Cryst. 42, 834-845 (2009). [21] Devilliers, C., Cesic-A New Technology for Lightweight and Cost Effective Space Instruments Structures and Mirrors, ESA Special Publication 621 (2006). [22] Barrière, N., Ballmoos, P. V., Halloin, H., Abrosimov, N., Alvarez, J. M., Andersen, K., Bastie, P., Boggs, S., Courtois, P., Courvoisier, T., Harris, M., Hernanz, M., Isern, J., Jean, P., Knödlseder, J., Skinner, G., Smither, B., Ubertini, P., Vedrenne, G., Weidenspointner, G., and Wunderer, C., MAX, a Laue diffraction lens for nuclear astrophysics, ExpA 20, 269-278 (2005). [23] Hinglais, E., Distributed Space Segment for High Energy Astrophysics: Similarities And specificites, ExpA 20, 435-445 (2005). [24] Bellm, E. C., Chiu, J.-L., Boggs, S. E., Chang, H.-K., Yuan-Hann, C., Huang, M. A., Amman, M. S., Bandstra, M. S., Hung, W.-C., Jean, P., Liang, J.-S., Lin, C.-H., Liu, Z.-K., Luke, P. N., Perez-Becker, D., Run, R.-S., , and Zoglauer, A., The spring 2010 balloon campaign of the Nuclear Compton Telescope, Proc. SPIE 7732 (2010). [25] Zoglauer, A., Andritschke, R., and Schopper, F., MEGAlib The Medium Energy Gamma-ray Astronomy Library, NAR 50, 629–632 (2006). [26] Harrison, F. A., Boggs, S., Christensen, F., Craig, W., Hailey, C., Stern, D., Zhang, W., Angelini, L., An, H., Bhalereo, V., Brejnholt, N., Cominsky, L., Cook, W. R., Doll, M., Giommi, P., Grefenstette, B., Hornstrup, A., Kaspi, V. M., Kim, Y., Kitaguchi, T., Koglin, J., Liebe, C. C., Madejski, G., Kruse Madsen, K., Mao, P., Meier, D., Miyasaka, H., Mori, K., Perri, M., Pivovaroff, M., Puccetti, S., Rana, V., and Zoglauer, A., The Nuclear Spectroscopic Telescope Array (NuSTAR), Proc. SPIE 7732, 77320S (2010).
${}^{1}$Institute for Astronomy, University of Hawai‘i, 2680 Woodlawn Drive, Honolulu, HI 96822 USA e-mail: tdupuy@ifa.hawaii.edu ${}^{2}$School of Physics, University of Sydney, NSW 2006, Australia Abstract By definition, brown dwarfs never reach the main sequence, cooling and dimming over their entire lifetimes, thus making substellar models challenging to test because of the strong dependence of their properties on age. Currently, most brown dwarfs with independently determined ages are companions to nearby stars, so stellar ages are at the heart of the effort to test substellar models. However, these models are only fully constrained if both the mass and age are known. We have used the Keck adaptive optics system to monitor the orbit of HD 130948BC, a brown dwarf binary that is a companion to the young solar analog HD 130948A. The total dynamical mass of 0.109$\pm$0.003 M${}_{\odot}$ is the most precise mass measurement (3%) for any brown dwarf binary to date and shows that both components are substellar for any plausible mass ratio. The ensemble of available age indicators from the primary star suggests an age comparable to the Hyades, with the most precise age being 0.79${}^{+0.22}_{-0.15}$ Gyr based on gyrochronology. Therefore, HD 130948BC is unique among field L and T dwarfs as it possesses a well-determined mass, luminosity, and age. Our results indicate that substellar evolutionary models may underpredict the luminosity of brown dwarfs by as much as a factor of $\approx$2–3. The implications of such a systematic error in evolutionary models would be far-reaching, for example, affecting determinations of the initial mass function and predictions of the radii of extrasolar gas-giant planets. This result rests largely on the reliability of stellar age estimates, and the case study of HD 130948A highlights the difficulties in determining the age of an arbitrary field star, even with the most up-to-date chromospheric activity and gyrochronology relations. 
In order to better assess the potential systematic errors present in substellar models, more refined age estimates for HD 130948A and other stars with binary brown dwarf companions (e.g., $\epsilon$ Ind Bab) are critically needed. keywords: stars: brown dwarfs; techniques: high angular resolution; binaries: close, visual; infrared: stars Confronting Substellar Theoretical Models with Stellar Ages Trent J. Dupuy${}^{1}$, Michael C. Liu${}^{1}$, and Michael J. Ireland${}^{2}$ The Ages of Stars (IAU Symposium 258, 2008), E.E. Mamajek, D.R. Soderblom & R.F.G. Wyse, eds., pp. 1–7 1 Introduction Theoretical models of objects below the substellar limit ($M<0.075$ M${}_{\odot}$) are essential for characterizing the several hundred brown dwarfs and extrasolar gas-giant planets discovered to date. Thus, these models have become ubiquitous in the literature, even though empirical tests of their ability to accurately predict the properties of brown dwarfs have been limited to only a handful of relatively warm objects. To test substellar evolutionary models, the input parameters of mass and age must be determined. For young brown dwarfs, the M6.5 eclipsing binary 2MASS J05352184$-$0546085 in the Orion Nebula provides a unique benchmark (Stassun et al., 2006). Prior to this year, only three binaries provided dynamical mass measurements for field objects at or below the substellar limit: the M8.5+M9 binary LHS 1070BC (Leinert et al., 2001; Seifahrt et al., 2008); the M8.5+M9 binary Gl 569Bab (Zapatero Osorio et al., 2004; Simon et al., 2006); and the L0.5+L1 binary 2MASS J0746+2000AB (Bouy et al., 2004). Recent work has contributed several more dynamical masses for objects lower in both temperature and mass than previously studied: the mid-L dwarf GJ 802B (Ireland et al., 2008); the T5+T5.5 dwarf binary 2MASS J1534-2952AB (Liu et al., 2008); and the L4+L4 binary HD 130948BC (Dupuy et al., 2008). 
While mass measurements alone can provide very stringent tests of theoretical models (e.g., see Liu et al. 2008), substellar evolutionary models are only fully constrained when both the mass and age can be determined. In fact, precise ages are critical for such tests because brown dwarfs – unlike stars – never reach a main-sequence, so their properties depend very sensitively on their age. Of the substellar field dwarfs with measured masses, only HD 130948BC has a precisely determined age. These nearly-identical L dwarfs were discovered by Potter et al. (2002) as companions to the young solar analog HD 130948A (G2V, [Fe/H] = 0.05). Hipparcos measured a distance of 18.17$\pm$0.11 pc (van Leeuwen, 2007) for the primary star, which enables a very precise dynamical mass measurement when paired with our well-determined orbital solution. 2 The Mass of HD 130948BC We have used Keck adaptive optics (AO) imaging to monitor the relative orbit of the two components of HD 130948BC (Figure 1). Combined with archival Hubble Space Telescope (HST) imaging and a re-analysis of the Gemini discovery data, our data span $\approx$7 years ($\approx$70% of the orbital period). We fit a simple analytic PSF model to derive astrometry from the Keck and Gemini images, while TinyTim model PSFs were fit to the HST images. An individually tailored Monte Carlo simulation was used to determine the astrometric uncertainty for each observation epoch. The resulting astrometry is extremely precise with typical Keck errors of 300 $\mu$as, corresponding to $\approx$1 R${}_{\odot}$ at the distance of this system, while the orbit is roughly the size of the asteroid belt. We determined the binary’s orbital parameters and their confidence limits using a Markov Chain Monte Carlo (MCMC) technique. The best-fit orbit has a reduced $\chi^{2}$ of 1.06 (9 degrees-of-freedom), thus validating our astrometric error estimates. 
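Both the astrometric precision quoted above and the dynamical mass derived in this section can be checked with short back-of-the-envelope computations. The sketch below uses only central values quoted in the text (the actual analysis propagates full MCMC posteriors, so the mass comes out slightly different):

```python
# Back-of-the-envelope checks of two numbers in this section, using the
# central values quoted in the text and standard unit conversions.
D_PC = 18.17  # Hipparcos distance to HD 130948A (pc)

# (1) 300 micro-arcsec of Keck astrometry corresponds to ~1 R_sun at this
# distance (small-angle relation: separation [AU] = angle [arcsec] * d [pc]).
sigma_au = 300e-6 * D_PC
sigma_rsun = sigma_au * 215.032  # 1 AU is about 215 solar radii

# (2) Kepler's third law with P = 9.9 yr and a = 121 mas gives the total mass
# in solar masses (P in years, a in AU).
a_au = 0.121 * D_PC
m_total = a_au**3 / 9.9**2

print(f"sigma = {sigma_rsun:.2f} R_sun, M_tot = {m_total:.4f} M_sun")
```

The results land close to the quoted $\approx$1 R${}_{\odot}$ and 0.109 M${}_{\odot}$, as expected for central-value arithmetic.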
Applying Kepler’s Law to the MCMC-derived orbital period ($P=9.9^{+0.7}_{-0.6}$ yr) and semimajor axis ($a=121\pm 6$ mas) yields a dynamical mass of 0.1089${}^{+0.0020}_{-0.0017}$ M${}_{\odot}$. Accounting for the additional uncertainty in the Hipparcos distance results in a dynamical mass of 0.109$\pm$0.003 M${}_{\odot}$ (114$\pm$3 M${}_{\rm Jup}$). In the following analysis, we apportion the total mass between the two components by converting the measured luminosity ratio into a mass ratio using evolutionary models. The resulting individual masses are very insensitive to the models used because the flux ratio is so close to unity (the steepness of the mass–luminosity relation means that even small differences in mass result in large differences in luminosity). Regardless, we are careful to conduct our analysis in a self-consistent manner free of circular logic. 3 The Age of HD 130948A As a young solar analog, multiple indicators are available to assess the age of HD 130948A: $\bullet$ Rotation/Gyrochronology — Gaidos et al. (2000) measured two rotation periods of 7.69 and 7.99 days for HD 130948A. Thus, we adopt a rotation period of 7.84$\pm$0.21 days and a $B-V$ color of 0.576$\pm$0.016 mag from the Hipparcos catalog. We employ the Mamajek & Hillenbrand (2008) calibration of the “gyrochronology” relation originally introduced by Barnes (2007). The age we derive is 0.79${}^{+0.22}_{-0.15}$ Gyr, where the confidence limits are determined through a Monte Carlo approach in which the period, color, and empirical coefficients are drawn from normal distributions corresponding to their uncertainties. $\bullet$ Chromospheric Activity — Henry et al. (1996) and Wright et al. (2004) measure $\log(R^{\prime}_{HK})$ values of $-$4.45 and $-$4.50 for HD 130948A, respectively. Using the activity–age relation of Mamajek & Hillenbrand (2008), we derive ages of 0.4 and 0.6 Gyr from these $\log(R^{\prime}_{HK})$ values. 
The empirical relation is expected to give ages with an uncertainty of $\approx$0.25 dex, so we adopt a mean age of 0.5$\pm$0.3 Gyr from this method. $\bullet$ X-ray Activity — HD 130948A was detected by ROSAT, and Hünsch et al. (1999) measure $\log(L_{X})$ = 29.01 dex (cgs), which gives $\log(R_{X})$ = $-$4.70. Using the empirical relation of Gaidos (1998), this corresponds to an age of 0.1–0.3 Gyr, depending on whether we adopt $\alpha$ of 0.5 or $1/\exp$. The X-ray relation of Mamajek & Hillenbrand (2008), derived by combining their $\log(R_{X})$–$\log(R^{\prime}_{HK})$ and $\log(R^{\prime}_{HK})$–age relations, gives an age of 0.5 Gyr. The X-ray luminosity of HD 130948A is in agreement with single G stars in the Pleiades and Hyades ($\log(L_{X})$ = 28.9–29.0; Stern et al. 1995; Stelzer & Neuhäuser 2001). $\bullet$ Isochrones — Using high resolution spectroscopic data combined with a bolometric luminosity and model isochrones, Valenti & Fischer (2005) derived an age estimate of 1.8 Gyr, with a possible age range of 0.4–3.2 Gyr. From the same data and with more detailed analysis, Takeda et al. (2007) found a median age of 0.72 Gyr, with a 95% confidence range of 0.32–2.48 Gyr. $\bullet$ Lithium — Measurements by Duncan (1981), Hobbs (1985), and Chen et al. (2001) give lithium equivalent widths of 95$\pm$14, 96$\pm$3, and 103$\pm$3 mÅ, respectively, for HD 130948A. Compared to stars of similar color, these values are slightly lower than the mean for the Pleiades and slightly higher than for UMa and the Hyades, though consistent with the scatter in each cluster’s measurements (Soderblom et al. 1993a, b, c). In summary, the most precise age estimate available for HD 130948A comes from gyrochronology, which gives an age of 0.79${}^{+0.22}_{-0.15}$ Gyr. All other age indicators agree with this estimate, though this is due to their large uncertainties rather than a true consensus. 
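The gyrochronology and chromospheric ages above follow from the Mamajek & Hillenbrand (2008) calibrations. The sketch below reproduces them with coefficients quoted from memory of that paper (they should be verified against the original before serious use), and recovers the ages stated in the text:

```python
# Age-dating relations in the form calibrated by Mamajek & Hillenbrand (2008).
# Coefficient values are quoted from memory and should be double-checked.

def gyro_age_gyr(period_days, b_v, a=0.407, b=0.325, c=0.495, n=0.566):
    """Gyrochronology: P_rot = a * (B-V - c)^b * t^n, with t in Myr."""
    t_myr = (period_days / (a * (b_v - c) ** b)) ** (1.0 / n)
    return t_myr / 1e3

def activity_age_gyr(log_rhk):
    """Chromospheric relation: log10(t/yr) = -38.053 - 17.912 x - 1.6675 x^2."""
    log_t = -38.053 - 17.912 * log_rhk - 1.6675 * log_rhk ** 2
    return 10.0 ** log_t / 1e9

gyro = gyro_age_gyr(7.84, 0.576)                   # adopted P_rot and B-V
chromo = [activity_age_gyr(x) for x in (-4.45, -4.50)]
print(gyro, chromo)
```

With the adopted rotation period and color this returns $\approx$0.79 Gyr, and the two $\log(R^{\prime}_{HK})$ values give $\approx$0.4 and $\approx$0.6 Gyr, matching the numbers quoted above.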
4 Substellar Evolutionary Models Fully Constrained With a measured mass, luminosity, and age, HD 130948BC provides the first direct test of the luminosity evolution predicted by theoretical models for substellar field dwarfs. Both the Tucson models (Burrows et al., 1997) and Lyon models (DUSTY; Chabrier et al., 2000) underpredict the luminosities of HD 130948B and C given their masses and age. The discrepancy is quite large, about a factor of 2 for the Lyon models and a factor of 3 for the Tucson models (Figure 2). If the age and luminosities of HD 130948B and C had been used to infer their masses, the resulting estimates would have been too large by 20–30%. In order to explain this discrepancy entirely, model radii would have to be underpredicted by 30–40%. Alternatively, the age of HD 130948A would need to be $\approx$0.4 Gyr in order to resolve this discrepancy. Although such a young age is marginally consistent with the various age indicators, it is on the extreme young end of two independent, well-calibrated age estimates (gyrochronology and stellar isochrones). In order to better assess this discrepancy between models and data, a more refined age estimate for HD 130948A (e.g., from asteroseismology) is critically needed. 5 Lithium Depletion in HD 130948BC Since brown dwarfs are fully convective objects, they can rapidly deplete their initial lithium if their core temperature is ever high enough to do so. This threshold is reached around 0.065 M${}_{\odot}$, and since this is below the hydrogen-burning mass limit, this fact has been exploited to identify sufficiently old objects bearing lithium as substellar. In fact, the exact mass limit for lithium burning differs slightly depending on which set of theoretical models is used, and the masses of HD 130948B and C happen to be very close to these theoretically predicted mass limits (Figure 3). 
According to the Tucson models, neither component is massive enough to have ever depleted a significant amount of its initial lithium. The Lyon models, on the other hand, predict that HD 130948B is massive enough to have depleted most of its lithium, while HD 130948C is not. Thus, resolved optical spectroscopy designed to detect the lithium doublet at 6708 Å would provide a very discriminating test of substellar evolutionary models, which are otherwise nearly indistinguishable (e.g., see Figure 2). This experiment can currently only be conducted with HST/STIS given the very small binary separation ($<130$ mas). 6 Future Prospects Brown dwarfs hold the potential to address many astrophysical problems. For example, they are excellent laboratories in which to study ultracool atmospheres under a variety of conditions, and they may eventually be useful as Galactic chronometers given how sensitively their properties depend on their age (see contribution by A. Burgasser). However, the theoretical models we rely upon to characterize brown dwarfs have only begun to be rigorously tested by benchmark systems such as HD 130948BC. More results are expected to be forthcoming over the next several years for other brown dwarf binaries with stellar companions: $\epsilon$ Ind Bab (McCaughrean et al., 2004); Gl 417BC (Bouy et al., 2003); and GJ 1001BC (Golimowski et al., 2004). However, the utility of these systems as benchmarks critically depends on the confidence in the age estimates for their primary stars. Therefore, these stars deserve special attention so that state-of-the-art age-dating techniques (e.g., asteroseismology and gyrochronology) may be applied to them. Also, extending the empirical relations between age, stellar rotation, and chromospheric activity to include objects with as late a spectral type as possible will enable many more systems to be used as benchmarks for testing models. 
These relations are currently only calibrated for stars as late as early-K, but about half of the stars with brown dwarf companions have spectral types between early-K and early-M. References Barnes (2007) Barnes, S. A. 2007, ApJ, 669, 1167 Bouy et al. (2003) Bouy, H., Brandner, W., Martín, E. L., Delfosse, X., Allard, F., & Basri, G. 2003, AJ, 126, 1526 Bouy et al. (2004) Bouy, H., et al. 2004, A&A, 423, 341 Burrows et al. (1997) Burrows, A., Marley, M., Hubbard, W. B., Lunine, J. I., Guillot, T., Saumon, D., Freedman, R., Sudarsky, D., & Sharp, C. 1997, ApJ, 491, 856 Chabrier et al. (2000) Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, ApJ, 542, 464 Chen et al. (2001) Chen, Y. Q., Nissen, P. E., Benoni, T., & Zhao, G. 2001, A&A, 371, 943 Duncan (1981) Duncan, D. K. 1981, ApJ, 248, 651 Dupuy et al. (2008) Dupuy, T. J., Liu, M. C., & Ireland, M. J. 2008, ApJ, in press (astro-ph/0807.2450) Gaidos (1998) Gaidos, E. J. 1998, PASP, 110, 1259 Gaidos et al. (2000) Gaidos, E. J., Henry, G. W., & Henry, S. M. 2000, AJ, 120, 1006 Golimowski et al. (2004) Golimowski, D. A., et al. 2004, AJ, 128, 1733 Henry et al. (1996) Henry, T. J., Soderblom, D. R., Donahue, R. A., & Baliunas, S. L. 1996, AJ, 111, 439 Hobbs (1985) Hobbs, L. M. 1985, ApJ, 290, 284 Hünsch et al. (1999) Hünsch, M., Schmitt, J. H. M. M., Sterzik, M. F., & Voges, W. 1999, A&AS, 135, 319 Ireland et al. (2008) Ireland, M. J., Kraus, A., Martinache, F., Lloyd, J. P., & Tuthill, P. G. 2008, ApJ, 678, 463 Leinert et al. (2001) Leinert, C., Jahreiß, H., Woitas, J., Zucker, S., Mazeh, T., Eckart, A., & Köhler, R. 2001, A&A, 367, 183 Liu et al. (2008) Liu, M. C., Dupuy, T. J., & Ireland, M. J. 2008, ApJ, in press (astro-ph/0807.0238) Mamajek & Hillenbrand (2008) Mamajek, E., & Hillenbrand, L. 2008, ApJ, submitted McCaughrean et al. (2004) McCaughrean, M. J., Close, L. M., Scholz, R.-D., Lenzen, R., Biller, B., Brandner, W., Hartung, M., & Lodieu, N. 2004, A&A, 413, 1029 Potter et al. 
(2002) Potter, D., Martín, E. L., Cushing, M. C., Baudoz, P., Brandner, W., Guyon, O., & Neuhäuser, R. 2002, ApJ, 567, L133 Seifahrt et al. (2008) Seifahrt, A., Röll, T., Neuhäuser, R., Reiners, A., Kerber, F., Käufl, H. U., Siebenmorgen, R., & Smette, A. 2008, A&A, 484, 429 Simon et al. (2006) Simon, M., Bender, C., & Prato, L. 2006, ApJ, 644, 1183 Soderblom et al. (1993a) Soderblom, D. R., Fedele, S. B., Jones, B. F., Stauffer, J. R., & Prosser, C. F. 1993a, AJ, 106, 1080 Soderblom et al. (1993b) Soderblom, D. R., Jones, B. F., Balachandran, S., Stauffer, J. R., Duncan, D. K., Fedele, S. B., & Hudon, J. D. 1993b, AJ, 106, 1059 Soderblom et al. (1993c) Soderblom, D. R., Pilachowski, C. A., Fedele, S. B., & Jones, B. F. 1993c, AJ, 105, 2299 Stassun et al. (2006) Stassun, K. G., Mathieu, R. D., & Valenti, J. A. 2006, Nature, 440, 311 Stelzer & Neuhäuser (2001) Stelzer, B., & Neuhäuser, R. 2001, A&A, 377, 538 Stern et al. (1995) Stern, R. A., Schmitt, J. H. M. M., & Kahabka, P. T. 1995, ApJ, 448, 683 Takeda et al. (2007) Takeda, G., Ford, E. B., Sills, A., Rasio, F. A., Fischer, D. A., & Valenti, J. A. 2007, ApJS, 168, 297 Valenti & Fischer (2005) Valenti, J. A., & Fischer, D. A. 2005, ApJS, 159, 141 van Leeuwen (2007) van Leeuwen, F. 2007, Hipparcos, the New Reduction of the Raw Data Wright et al. (2004) Wright, J. T., Marcy, G. W., Butler, R. P., & Vogt, S. S. 2004, ApJS, 152, 261 Zapatero Osorio et al. (2004) Zapatero Osorio, M. R., Lane, B. F., Pavlenko, Y., Martín, E. L., Britton, M., & Kulkarni, S. R. 2004, ApJ, 615, 958 {discussion} F. Walter: It is always risky to attempt to pin down the age of a field star, even using multiple techniques that may agree. How much would the age have to be changed to place the L dwarfs on the proper evolutionary tracks? T. Dupuy: I agree and would really like to see another independent measurement of the age, for example, from asteroseismology. 
The age of the system would have to be about 0.4 Gyr to bring the models into agreement with the data. E. Jensen: Is there a measured metallicity for HD 130948A? The metallicity will affect the evolutionary models, both in the H-R diagram and the predicted Li depletion. T. Dupuy: That’s exactly right; a detail I didn’t go into. The metallicity of HD 130948A is basically solar, which means we can use the standard models. This is another reason why having brown dwarfs with stellar companions is great – because you can make sure you’re not being confused by metallicity effects like those Adam talked about. A. West: Does the fact that this system is a close binary affect the measured luminosity (because the radii are affected)? T. Dupuy: The binary separation is about 2.2 AU, so it’s unlikely that tidal effects are at work in this system. Also, it turns out that the two components receive about as much flux from each other as they do from the primary star, so irradiation shouldn’t be affecting them much. J. Fernandez: The next main source of benchmarks for brown dwarfs will be Kepler and Corot. The precise determination of ages for these primary stars will be crucial.
Towards the P-wave nucleon-pion scattering amplitude in the $\Delta(1232)$ channel Srijit Paul$\;{}^{ab}$, Giorgio Silvi$\;{}^{d}$, Constantia Alexandrou$\;{}^{ac}$, Giannis Koutsou$\;{}^{a}$, Stefan Krieg$\;{}^{bd}$, Luka Leskovec$\;{}^{e}$, Stefan Meinel$\;{}^{fg}$, John W. Negele$\;{}^{h}$, Marcus Petschlies$\;{}^{i}$, Andrew Pochinsky$\;{}^{h}$, Gumaro Rendon$\;{}^{f}$ and Sergey Syritsyn${}^{j}$ ${}^{a}$Computation-based Science and Technology Research Center, The Cyprus Institute, 20 Kavafi Str., Nicosia 2121, Cyprus ${}^{b}$Faculty of Mathematics and Natural Sciences, University of Wuppertal, Wuppertal 42119, Germany ${}^{c}$Department of Physics, University of Cyprus, POB 20537, 1678 Nicosia, Cyprus ${}^{d}$Institute for Advanced Simulation, Forschungszentrum Jülich GmbH, Jülich 52425, Germany ${}^{e}$Theory Center, Jefferson Lab, Newport News, VA 23606, USA ${}^{f}$Department of Physics, University of Arizona, Tucson, AZ 85721, USA ${}^{g}$RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, NY 11973, USA ${}^{h}$Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ${}^{i}$Helmholtz-Institut für Strahlen- und Kernphysik, University of Bonn, Bonn 53115, Germany ${}^{j}$Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794, USA Abstract: We use lattice QCD and the Lüscher method to study elastic pion-nucleon scattering in the isospin $I=3/2$ channel, which couples to the $\Delta(1232)$ resonance. Our $N_{f}=2+1$ flavor lattice setup features a pion mass of $m_{\pi}\approx 250$ MeV, such that the strong decay channel $\Delta\rightarrow\pi N$ is close to the threshold. We present our method for constructing the required lattice correlation functions from single- and two-hadron interpolating fields and their projection to irreducible representations of the relevant symmetry group of the lattice. 
We show preliminary results for the energy spectra in selected moving frames and irreducible representations, and extract the scattering phase shifts. Using a Breit-Wigner fit, we also determine the resonance mass $m_{\Delta}$ and the $g_{\Delta-\pi N}$ coupling. 1 Introduction The study of the scattering of strongly-interacting hadrons on the lattice has given us quantitative theoretical insight into unstable hadrons, which are otherwise difficult to analyze perturbatively [1]. Even though the theoretical foundations for studying $2\rightarrow 2$ scattering resonances on the lattice were laid in the 1990s by Lüscher [2], it took two decades for the generalizations [3, 4, 5, 6] and further extensions [7, 8] to appear, for these algorithms to be implemented in practice along with the development of (multigrid) solvers, and for lattice ensembles generated at the physical pion mass to become available. In the last decade, there have been extensive studies of low-lying meson resonances, starting with the $\rho$ meson [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27], which served as the first evidence for the practical applicability of the Lüscher methodology for extracting resonance parameters. The next step is to study more complex $2\rightarrow 2$ strongly interacting scattering systems. Here we explore nucleon-pion scattering in the $I=3/2$ and $J^{P}=3/2^{+}$ channel where the lowest-lying baryon resonance, the $\Delta(1232)$, is located. This resonance has a mass of $\approx 1210$ MeV and a decay width of $\Gamma_{\Delta\to N\pi}\approx 117$ MeV [28]. The $I=3/2$ $P$-wave $N\pi$ channel is the dominant decay mode, with a branching fraction of $99.4\%$. The PDG lists only one other decay mode, $N\gamma$, with a branching fraction of $0.6\%$. The process is almost completely elastic [29], but nearby inelastic resonances with similar quantum numbers could have a small effect on the phase shift that needs to be taken into account in the analysis. 
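For orientation, a common P-wave Breit-Wigner parametrization of the kind used for such phase-shift fits can be sketched as below. The masses are physical values and the coupling is an illustrative round number, not the fit results of this work; conventions for normalizing $g_{\Delta-\pi N}$ vary between papers:

```python
import math

M_N, M_PI = 939.0, 139.6   # physical nucleon and pion masses (MeV)
M_DELTA, G = 1232.0, 16.9  # illustrative resonance mass and coupling

def p_cm(sqrt_s):
    """Break-up momentum of the N pi pair in the centre-of-mass frame."""
    s = sqrt_s ** 2
    return math.sqrt((s - (M_N + M_PI) ** 2) * (s - (M_N - M_PI) ** 2)) / (2.0 * sqrt_s)

def width(sqrt_s):
    """Energy-dependent P-wave width, Gamma(s) = g^2/(6 pi) * p^3 / s."""
    return G ** 2 / (6.0 * math.pi) * p_cm(sqrt_s) ** 3 / sqrt_s ** 2

def delta_deg(sqrt_s):
    """Phase shift in degrees, from tan(delta) = sqrt(s) Gamma(s) / (m_Delta^2 - s)."""
    s = sqrt_s ** 2
    return math.degrees(math.atan2(sqrt_s * width(sqrt_s), M_DELTA ** 2 - s))

print(width(M_DELTA), delta_deg(M_DELTA))
```

With $g\approx 16.9$ the width at resonance comes out near the $\approx$117 MeV quoted above, and the phase shift passes through 90${}^{\circ}$ at $\sqrt{s}=m_{\Delta}$, which is the defining property exploited by the fit.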
The lattice computation involves the evaluation of two-point correlation functions of the QCD-stable hadron fields (pion and nucleon) and of interpolating fields with the quantum numbers of the resonance. The correlators are built from a combination of smeared forward, sequential and stochastic propagators. An important step in the proper identification of the spectra is the projection method, which is used to build interpolators that transform according to a given irrep $\Lambda$ and row $r$ and overlap with the angular-momentum eigenstates of interest. Previous studies of the $\Delta$ coupling to the $N\pi$ channel have used the Michael-McNeile method to determine the coupling [30] as well as the Lüscher method [31, 32, 33]. None of the previous studies have fully taken into account the partial-wave mixing. In this preliminary work, we also neglect this mixing, but we plan to include it in the full-fledged calculation. 2 Lattice setup Our calculation is based on gauge-field ensembles generated with the actions and algorithms described in Ref. [34]. The gluon action is the tree-level improved Symanzik action, and the same clover-improved Wilson action is used for the sea and valence quarks. The gauge links in the fermion action are smeared using two levels of HEX smearing [34]. Here we report on our preliminary results from $3072$ measurements on the A7 ensemble, whose parameters are given in Table 1. This constitutes the first part of our two-part study of pion-nucleon scattering at $m_{\pi}\approx 250$ MeV. The parameters of the ensemble A8 for the follow-up simulation with larger spatial volume are listed in the table as well. 3 Interpolators and correlation functions To extract the lattice spectrum in the nucleon and $\Delta$ channel, we construct correlation matrices from two-point functions of 1- and 2-hadron interpolating fields with quantum numbers of the nucleon ($N$) and Delta ($\Delta$), respectively. 
As building blocks we use the familiar single-hadron interpolating fields for the pion, nucleon and $\Delta$. In the following, $\vec{p}$ denotes the three-momentum, and we omit the time coordinate for brevity. The pion interpolator is given by $$\pi(\vec{p})=\sum_{\vec{x}}\bar{d}(\vec{x})\gamma_{5}u(\vec{x})e^{i\vec{p}\cdot\vec{x}}\,.$$ (1) For both the nucleon and the Delta, we include two choices of interpolators, $$\begin{split}\displaystyle N_{\mu}^{(1)}(\vec{p})=\sum_{\vec{x}}\epsilon_{abc}(u_{a}(\vec{x}))_{\mu}(u_{b}^{T}(\vec{x})C\gamma_{5}d_{c}(\vec{x}))\,e^{i\vec{p}\cdot\vec{x}},\\ \displaystyle N_{\mu}^{(2)}(\vec{p})=\sum_{\vec{x}}\epsilon_{abc}(u_{a}(\vec{x}))_{\mu}(u_{b}^{T}(\vec{x})C\gamma_{0}\gamma_{5}d_{c}(\vec{x}))\,e^{i\vec{p}\cdot\vec{x}}\,,\end{split}$$ (2) and $$\begin{split}\displaystyle\Delta_{\mu i}^{(1)}(\vec{p})=\sum_{\vec{x}}\epsilon_{abc}(u_{a}(\vec{x}))_{\mu}(u_{b}^{T}(\vec{x})C\gamma_{i}u_{c}(\vec{x}))\,e^{i\vec{p}\cdot\vec{x}},\\ \displaystyle\Delta_{\mu i}^{(2)}(\vec{p})=\sum_{\vec{x}}\epsilon_{abc}(u_{a}(\vec{x}))_{\mu}(u_{b}^{T}(\vec{x})C\gamma_{i}\gamma_{0}u_{c}(\vec{x}))\,e^{i\vec{p}\cdot\vec{x}}.\end{split}$$ (3) Finally, the two-hadron interpolators are obtained from products of the form $$N_{\mu}^{(1,2)}(\vec{p}_{N})\,\pi(\vec{p}_{\pi}).$$ (4) The two-point functions obtained from the interpolators above are evaluated by Wick contraction and factorization into products of diagram building blocks. The latter are calculated numerically from point-to-all, sequential and stochastic propagators. The left panel of Figure 1 shows the topology types for the diagrams obtained from the $\Delta-\Delta$ and $\Delta-\pi N$ two-point correlation functions; the right panel shows the same for the $\pi N-\pi N$ correlator. In both panels, each topology represents 2 to 4 actual diagrams. 4 Moving frames In order to apply the Lüscher method we have to locate our energy region of interest below the $N\pi\pi$ threshold. 
Due to the quantization of momenta, $\vec{p}=\frac{2\pi}{L}(n_{x},n_{y},n_{z})$, and the fairly small volume of ensemble A7, the energy spectrum in the $\pi N$ or $\Delta$ rest frame is quite sparse below $E_{\mathrm{thr}}=m_{N}+2m_{\pi}$, which leaves only a few energy values located in the region of interest. In order to better constrain the phase shift curve with additional points, we make use of moving frames. The Lorentz boost contracts the box, giving a different effective value of the spatial length $L$ along the boost direction [5]. Our choices of total momenta for the $\pi N$ system are listed in the first column of Table 2. 5 Angular momentum on the lattice In the continuum, states are classified according to their angular momentum $J$, corresponding to the irreducible representations of $SU(2)$, and parity $P$. On the lattice, the rotational symmetry is broken and the symmetry group is reduced to the double-cover of the cubic group [35, 36]. The 48 elements of the cubic group $O_{h}$ (rotation and inversion operations) leave the lattice grid unchanged. Since we characterize states of spin-1/2 and spin-3/2 fields in this work, we need to consider the double-cover group $O_{h}^{D}$, which includes the $2\pi$ rotations (negative identity), thus having twice the number of elements [37]. In moving frames, the symmetry is further reduced to smaller groups (little groups), which have a smaller set of irreps and a higher mixing of angular momenta in each one of them. Additionally, parity does not represent a useful quantum number in boosted frames, since both parities are featured in the same irrep. In Table 2 we list the symmetry groups associated with the moving frames, along with their irreps containing the angular momenta of interest. This many-to-one mapping of angular-momentum states leads to angular-momentum mixing on the lattice. 
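The sparseness of the rest-frame spectrum is easy to see by enumerating the non-interacting $\pi N$ levels below the $N\pi\pi$ threshold. The sketch below uses illustrative masses and box size in lattice units (made-up round numbers roughly in the $m_{\pi}\approx 250$ MeV, $24^{3}$ regime, not the actual A7 parameters):

```python
import numpy as np

# Illustrative (not the paper's) values in lattice units:
m_pi, m_N, L = 0.141, 0.59, 24.0

def E_free(n_pi, n_N):
    """Non-interacting pi-N energy for quantized momenta (2*pi/L)*n."""
    p_pi = (2 * np.pi / L) * np.linalg.norm(n_pi)
    p_N = (2 * np.pi / L) * np.linalg.norm(n_N)
    return np.sqrt(m_pi**2 + p_pi**2) + np.sqrt(m_N**2 + p_N**2)

# Enumerate back-to-back momenta in the rest frame (P_tot = 0) and keep
# levels below the N-pi-pi threshold E_thr = m_N + 2*m_pi.
E_thr = m_N + 2 * m_pi
levels = []
for n in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (2, 0, 0)]:
    E = E_free(np.array(n), -np.array(n))
    if E < E_thr:
        levels.append((n, E))
```

With these numbers only the zero-momentum level survives below threshold, which is precisely why the moving frames of Table 2 are needed to populate the phase-shift curve.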
In most irreps, we must determine not only the $(J=3/2,l=1)$ amplitude, in which the $\Delta$ appears as a resonance, but also the $(J=3/2,l=2)$, $(J=1/2,l=0)$ and $(J=1/2,l=1)$ amplitudes, which could receive small contributions from nearby resonances (listed in Table 3). Fortunately, these remain non-resonant in the energy region below approximately $1.6$ GeV. 6 Projection method We apply group-theoretical projections to obtain operators that transform irreducibly under rotations and reflections of the proper symmetry group. The master formula to project to a specific irrep $\Lambda$, row $r$ and occurrence $n$ is given by [38] $$O^{\Lambda,r,n}(p)=\frac{d_{\Lambda}}{g_{G^{D}}}\sum_{R\in G^{D}}\Gamma_{r,r}^{\Lambda}(R)U_{R}\phi(p)U^{\dagger}_{R}\;\;\;\;\;r\in\{1,\dots,d_{\Lambda}\},$$ (5) where $\Gamma^{\Lambda}$ are the representation matrices of the elements $R$ (rotations and reflections) of the double group $G^{D}$, $\phi(p)$ is a single- or multi-hadron operator (e.g., one of the operators presented in Sec. 3), and $U_{R}$ is the quantum field operator that implements the transformation $R$. Additionally, $d_{\Lambda}$ is the dimension of the irrep $\Lambda$, and $g_{G^{D}}$ is the number of elements in the group $G^{D}$. 
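A toy finite-dimensional analogue of the master formula may help: take the single-cover group $C_{4v}$ acting on ordinary three-vectors (the actual calculation uses the double-cover groups and field operators, so everything below is purely illustrative). Projectors built from the diagonal irrep entries $\Gamma^{\Lambda}_{rr}(R)$ isolate the $z$ component (trivial irrep $A_{1}$) and the $x$ component (row 1 of the two-dimensional irrep $E$):

```python
import numpy as np

def rotz(t):
    """Rotation about the z axis by angle t, acting on 3-vectors."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# The eight elements of C4v: identity, rotations by 90/270/180 degrees,
# two vertical mirror planes (xz and yz), two diagonal mirror planes.
G = [rotz(0.0), rotz(np.pi / 2), rotz(3 * np.pi / 2), rotz(np.pi),
     np.diag([1.0, -1.0, 1.0]), np.diag([-1.0, 1.0, 1.0]),
     np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
     np.array([[0.0, -1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])]

# Diagonal irrep entries Gamma_{rr}(R), in the same element order:
gamma_A1 = [1, 1, 1, 1, 1, 1, 1, 1]      # trivial irrep A1 (dimension 1)
gamma_E11 = [1, 0, 0, -1, 1, -1, 0, 0]   # row r=1 of the 2-dim irrep E

def project(gammas, dim):
    """Finite-dimensional version of the projection master formula."""
    return (dim / len(G)) * sum(g * M for g, M in zip(gammas, G))

P_A1 = project(gamma_A1, 1)   # keeps only the z part of a vector
P_E1 = project(gamma_E11, 2)  # keeps only the x part (row 1 of E)
```

The resulting matrices are projectors onto the $A_{1}$ and $E$ (row 1) subspaces, mirroring how Eq. (5) carves irrep/row components out of the hadron interpolators.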
The single-hadron operators transform under rotations as $$\begin{split}\displaystyle R\pi(t,\vec{x})R^{-1}&\displaystyle=\pi(t,R^{-1}\vec{x}),\\ \displaystyle RN(t,\vec{x})R^{-1}&\displaystyle=S(R)N(t,R^{-1}\vec{x}),\\ \displaystyle R\Delta(t,\vec{x})^{\alpha}_{k}R^{-1}&\displaystyle=A(R)_{kk^{\prime}}S(R)\Delta_{k^{\prime}}(t,R^{-1}\vec{x}),\\ \end{split}$$ (6) where $A(R)=U^{1}(\omega,\Theta,\Psi)$ denotes the 3-dimensional irrep of $SU(2)$, and $S(R)$ is the spinor representation built from the 2-dimensional irrep of $SU(2)$, $$\displaystyle S(R)=\begin{bmatrix}U^{1/2}(R)&0\\ 0&U^{1/2}(R)\end{bmatrix}.$$ (7) The inversions are given by $$\begin{split}\displaystyle I\pi(t,\vec{x})I^{-1}&\displaystyle=-\pi(t,-\vec{x}),\\ \displaystyle IN(t,\vec{x})I^{-1}&\displaystyle=\gamma_{t}N(t,-\vec{x}),\\ \displaystyle I\Delta(t,\vec{x})I^{-1}&\displaystyle=\gamma_{t}\Delta(t,-\vec{x}).\end{split}$$ (8) Our choice of the Euclidean $\gamma$-matrices is the DeGrand-Rossi basis. The choice of total momentum $\vec{P}$ determines the relevant symmetry group. Looking at Table 4, the $J$ value of the hadron tells us in which irrep (of the rest frame) it should be contained. In order to have a complete identification of the irreps, we deduce them from the characters $\chi$ (traces) of the transformation matrices. The occurrence $m$ of the irrep $\Gamma^{\Lambda}$ in the matrix $M(R)$ realizing the transformation (e.g., $S(R)$ for the single nucleon) can be found using the formula [39, 40] $$m=\frac{1}{g_{G^{D}}}\sum_{R\in G^{D}}\chi^{\Gamma^{\Lambda}}(R)\,\chi^{M}(R).$$ (9) The multiplicities $m$ give the numbers of occurrences of the irreps for the specific operator we want to project (see Table 4). This corresponds to the number of independent projected operators we can extract for a specific irrep $\Lambda$ and row $r$. In Figs. 
2 and 3, we show schematically the decomposition of the transformation matrices $S(R)$ for the nucleon and $A(R)\otimes S(R)$ for the Delta in all frames relevant for this study. Once we have correctly identified the tensor decomposition in each irrep, we use our code to project the $N$, $\Delta$ and $N\pi$ interpolators (the single $\pi$ does not need projection). Most of the projections lead to linearly dependent operators [38]; we use the Gram-Schmidt procedure to construct linear combinations of operators that are orthogonal to each other. As an example for the nucleon-pion system, we show one projected operator in the rest frame in irrep $H_{g}$ and row $r=1$: $$\frac{1}{2}\pi(1,0,0)N_{2}(-1,0,0)+\frac{1}{2}i\pi(0,1,0)N_{2}(0,-1,0)-\frac{1}{2}i\pi(0,-1,0)N_{2}(0,1,0)-\frac{1}{2}\pi(-1,0,0)N_{2}(1,0,0).$$ The momentum directions are given in brackets and the subscript of the nucleon operator labels the Dirac index. Taking into account all linearly independent operators, rows and momentum directions in the 8 irreps of the 4 frames considered, we reach a total of 1720 projected operators for the Delta and the nucleon-pion system. In order to maximize the statistics, we use all of them when building the correlation functions. 7 Spectrum results With a basis of projected interpolators we construct correlation matrices $C_{ij}^{\Lambda,r,m}$ and make use of the variational method (generalized eigenvalue problem) [41] to determine the energy spectrum in each irrep. Figure 4 shows the effective energies and the fit results for the chosen time ranges. We perform single-exponential fits directly on the principal correlators. To ensure that early-time excited-state contamination is negligible, we perform stability tests by looking for a plateau while varying $t_{min}$ in the fit time interval $\left[t_{min},\,t_{max}\right]$. This is illustrated in Fig. 5. 
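The variational step can be sketched on exact mock data: for a two-state correlation matrix, the generalized eigenvalues reproduce the input energies exactly (real correlators of course carry statistical noise and excited-state contamination; all numbers below are invented for illustration):

```python
import numpy as np
from scipy.linalg import eigh

# Exact two-state mock data: C_ij(t) = sum_n Z_in Z_jn exp(-E_n t).
E_true = np.array([0.55, 0.80])          # illustrative energies (lattice units)
Z = np.array([[1.0, 0.4], [0.3, 1.0]])   # illustrative overlap factors

def C(t):
    return Z @ np.diag(np.exp(-E_true * t)) @ Z.T

t0 = 2
# Generalized eigenvalue problem C(t) v = lambda(t, t0) C(t0) v;
# for exact data, lambda_n(t, t0) = exp(-E_n (t - t0)).
lam3 = eigh(C(3), C(t0), eigvals_only=True)
lam4 = eigh(C(4), C(t0), eigvals_only=True)
E_eff = np.sort(np.log(lam3 / lam4))     # effective energies from the ratio
```

In practice one solves the GEVP at every $t$, forms principal correlators from the eigenvalues, and fits them over the stability-tested window $[t_{min}, t_{max}]$ as described above.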
8 Scattering analysis The Lüscher quantization condition [42], which connects the finite-volume energy levels with the infinite-volume scattering phase shifts of two particles, is given by $${\rm det}\left(\mathbbm{1}+it_{\ell}(s)(\mathbbm{1}+i{M}^{\vec{P}})\right)=0,$$ (10) where $t_{\ell}(s)=\frac{1}{\cot{\delta_{\ell}(s)}-i}$. Using the $t$-matrix parametrization and the total-angular-momentum basis of the interacting particles, we obtain $$\det[M^{\vec{P}}_{Jlm,J^{\prime}l^{\prime}m^{\prime}}-\delta_{JJ^{\prime}}\delta_{ll^{\prime}}\delta_{mm^{\prime}}\cot\delta_{Jl}]=0,$$ (11) where $M^{\vec{P}}_{Jlm,J^{\prime}l^{\prime}m^{\prime}}$ encodes the finite-volume kinematics for the scattering of two particles with spins $\vec{s}_{1}$ and $\vec{s}_{2}$ and total linear momentum $\vec{P}$. The angular momentum $\vec{l}$ is the contribution from the $l$th partial wave, such that the total angular momentum $\vec{J}$ is equal to $\vec{l}+\vec{s}_{1}+\vec{s}_{2}$. For fixed $J$ and $l$, we have $-J\leq m\leq J$. Thus, the $M$ matrix represents the mixing of the different angular momenta in finite volume. In the $t$-matrix, $\delta_{Jl}$ denotes the infinite-volume phase shift for a given $J$ and $l$. It is evident from the addition of angular momenta that $M$ is an infinite-dimensional matrix, because of the infinitely many possible values of $l$. To enable a lattice calculation, we need to select a cut-off $l_{max}$ and ignore higher partial waves. This can be justified for small center-of-mass momenta $p^{*}$, because $\delta(p^{*})\propto(p^{*})^{2l+1}$. After neglecting higher partial waves, the finite-dimensional matrix $M$ can be further simplified into a block-diagonal form through a basis transformation into the irreps of the symmetry groups of the lattice. Given a lattice symmetry group $G$ with irrep $\Lambda$ (from Table 
2), the matrix element in the new basis can be written as $$\langle\Lambda rJln\,|\,\hat{M}^{\vec{P}}\,|\,\Lambda^{\prime}r^{\prime}J^{\prime}l^{\prime}n^{\prime}\rangle=\sum_{m,\,m^{\prime}}c_{Jlm}^{\,\Lambda rn}\,\,c^{\,\Lambda^{\prime}r^{\prime}n^{\prime}}_{J^{\prime}l^{\prime}m^{\prime}}\,\,M^{\vec{P}}_{Jlm,J^{\prime}l^{\prime}m^{\prime}},$$ (12) where the row $r$ runs from $1$ to the dimension of $\Lambda$, $n$ labels the occurrence of the irrep, and $c_{Jlm}^{\,\Lambda rn}$ and $c^{\,\Lambda^{\prime}r^{\prime}n^{\prime}}_{J^{\prime}l^{\prime}m^{\prime}}$ are the relevant Clebsch-Gordan coefficients as calculated in [7]. From Schur's lemma, we know that $\hat{M}^{\vec{P}}$ is block-diagonalized in the new basis, $$\langle\Lambda rJln\,|\,\hat{M}^{\vec{P}}\,|\,\Lambda^{\prime}r^{\prime}J^{\prime}l^{\prime}n^{\prime}\rangle=\delta_{\Lambda\Lambda^{\prime}}\delta_{rr^{\prime}}M^{\Lambda}_{Jln,J^{\prime}l^{\prime}n^{\prime}}.$$ (13) The advantage of the basis transformation can be observed, for example, in the moving frame $\vec{P}=\frac{2\pi}{L}(0,0,1)$ with symmetry group $C^{D}_{4v}$. With two irreps $G_{1}$ and $G_{2}$, Eq. (11) simplifies to one quantization condition per irrep: $${\rm det}(M^{G_{1}}_{Jln,J^{\prime}l^{\prime}n^{\prime}}-\delta_{JJ^{\prime}}\delta_{ll^{\prime}}\delta_{nn^{\prime}}\cot{\delta^{G_{1}}_{Jl}})=0,$$ (14) $${\rm det}(M^{G_{2}}_{Jln,J^{\prime}l^{\prime}n^{\prime}}-\delta_{JJ^{\prime}}\delta_{ll^{\prime}}\delta_{nn^{\prime}}\cot{\delta^{G_{2}}_{Jl}})=0.$$ (15) This basis transformation, along with the angular-momentum content, is also shown schematically in Fig. 6. The angular-momentum content depicts the mixing of the various partial waves that contribute to the $J=3/2$ channel. The quantization conditions can be written in terms of $w_{js}$ functions (where $|l-l^{\prime}|\leq j\leq l+l^{\prime}$ and $-j\leq s\leq j$) that include the generalized Zeta functions, as in Ref. [7]. 
For the case of $G_{2}$, considering all relevant $J,l$ values of the phase shift $\delta_{J,l}$ present in the energy region of interest, the quantization condition can be written as $$9\left(w_{1,0}-w_{3,0}\right)^{2}-25\left(-\cot\delta_{\frac{3}{2},1}^{G_{2}}+w_{0,0}-w_{2,0}\right)\left(-\cot\delta_{\frac{3}{2},2}^{G_{2}}+w_{0,0}-w_{2,0}\right)=0.$$ (16) The mixing between $l=1$ and $l=2$ is clearly visible in Eq. (16). If we neglect contributions from $l=2$, we arrive at the quantization condition as used in Ref. [7]. Due to the existence of eigenstates with the same total angular momentum $J$ but different orbital angular momenta $l$, the scattering amplitudes contain admixtures originating from nearby $\Delta$ resonances (see Table 3). In this preliminary study, we are only interested in the phase shift $\delta_{J=\frac{3}{2},l=1}$. Having fixed $l_{max}$, we neglect the contributions from higher partial waves, but we will need to take into account contributions from lower partial waves and other $J$'s for the complete analysis. 9 Phase-shift results Our preliminary results for the scattering phase shifts obtained from the Lüscher quantization condition are shown in Fig. 7. To extract the resonance parameters, we directly fit a model for the $t$-matrix to the energy spectra; this methodology is known as the inverse Lüscher analysis. The scattering amplitude for a narrow resonance in QCD can be characterized by the Breit-Wigner parametrization, $$t_{\ell}(s)=\frac{\sqrt{s}\,\Gamma(s)}{m_{R}^{2}-s-i\sqrt{s}\,\Gamma(s)},$$ (17) where $s$ is the center-of-mass energy squared, $m_{R}$ is the mass of the resonance, and $\Gamma{(s)}$ is the decay width of the resonance. 
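Returning briefly to the $G_{2}$ condition: a quick numerical check with made-up values of the $w_{j,s}$ functions (in reality these are generalized Zeta functions evaluated at the measured energies) confirms that when the $d$-wave phase shift is negligible, $\cot\delta_{3/2,2}\to\infty$, Eq. (16) collapses to the single-wave condition $\cot\delta_{3/2,1}=w_{0,0}-w_{2,0}$:

```python
import numpy as np

# Made-up values of the w_{j,s} functions, purely for illustration.
w00, w10, w20, w30 = 1.3, 0.2, -0.4, 0.05

def cot_d1(cot_d2):
    """Solve Eq. (16) for cot(delta_{3/2,1}) at given cot(delta_{3/2,2})."""
    return w00 - w20 - 9 * (w10 - w30) ** 2 / (25 * (-cot_d2 + w00 - w20))

# As the d-wave contribution vanishes, the coupled condition reduces to
# the single-wave result cot(delta_{3/2,1}) = w00 - w20.
approx = cot_d1(1e9)
single_wave = w00 - w20
```

This also illustrates the size of the shift that neglecting the $l=2$ wave induces when $\cot\delta_{3/2,2}$ is large but finite.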
For the $\Delta$ resonance, the decay width can be expressed in effective field theory [43] to lowest order as $$\Gamma^{LO}_{EFT}=\frac{g_{\Delta-\pi N}^{2}}{48\pi}\frac{E_{N}+m_{N}}{E_{N}+E_{\pi}}\frac{p^{*3}}{m_{N}^{2}},$$ (18) with the dimensionless coupling $g_{\Delta-\pi N}$ and center-of-mass momentum $p^{*}$. We denote the lattice result for the $n$th finite-volume energy level in the $\Lambda$ irrep of the moving frame with momentum $\vec{P}$ as $\sqrt{s_{n}^{\Lambda,\vec{P}}}^{[avg]}$. The corresponding model function $\sqrt{s_{n}^{\Lambda,\vec{P}}}^{[model]}$ is constructed by combining the Lüscher quantization condition (11) with the Breit-Wigner model given by Eqs. (17) and (18). We define the $\chi^{2}$ function to be minimized as $$\chi^{2}=\sum_{\vec{P},\Lambda,n}\sum_{\vec{P}^{\prime},\Lambda^{\prime},n^{\prime}}\left(\sqrt{s_{n}^{\Lambda,\vec{P}}}^{[avg]}-\sqrt{s_{n}^{\Lambda,\vec{P}}}^{[model]}\right)[C^{-1}]_{\vec{P},\Lambda,n;\vec{P}^{\prime},\Lambda^{\prime},n^{\prime}}\left(\sqrt{s_{n^{\prime}}^{\Lambda^{\prime},\vec{P}^{\prime}}}^{[avg]}-\sqrt{s_{n^{\prime}}^{\Lambda^{\prime},\vec{P}^{\prime}}}^{[model]}\right).$$ (19) We include all irreps of the two groups $O_{h}^{D}$ and $C_{2v}^{D}$ and one irrep of $C_{4v}^{D}$. The fit gives $$\displaystyle m_{\Delta}=1414\pm 36\>\>{\rm MeV},$$ (20) $$\displaystyle g_{\Delta-\pi N}=26\pm 7,$$ (21) $$\displaystyle\frac{\chi^{2}}{dof}=1.05,$$ (22) and the corresponding phase-shift curve is also shown in Fig. 7. 10 Discussion We compare our preliminary results for $m_{\Delta}$ and $g_{\Delta-\pi N}$ with other lattice calculations and with the values extracted from experiment in Table 5. Our results are consistent with previous lattice determinations using the Lüscher method at similar pion masses. 
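Combining Eqs. (17) and (18), the Breit-Wigner phase shift can be evaluated directly; the sketch below uses round illustrative numbers (not the fit results quoted above) and checks that $\delta$ passes through $90^{\circ}$ at the resonance mass:

```python
import numpy as np

# Illustrative masses and coupling in GeV (not the fit results).
m_pi, m_N, m_R, g = 0.25, 1.05, 1.40, 25.0

def p_star(s):
    """Center-of-mass momentum of the pi-N pair at invariant mass^2 s."""
    return np.sqrt((s - (m_N + m_pi)**2) * (s - (m_N - m_pi)**2) / (4 * s))

def width(s):
    """Leading-order EFT decay width, Eq. (18)."""
    p = p_star(s)
    E_N = np.sqrt(m_N**2 + p**2)
    E_pi = np.sqrt(m_pi**2 + p**2)
    return g**2 / (48 * np.pi) * (E_N + m_N) / (E_N + E_pi) * p**3 / m_N**2

def delta(s):
    """Breit-Wigner phase: cot(delta) = (m_R^2 - s) / (sqrt(s) Gamma(s))."""
    return np.arctan2(np.sqrt(s) * width(s), m_R**2 - s)
```

Below the resonance the phase stays under $\pi/2$ and above it rises towards $\pi$, which is the shape fitted to the lattice phase-shift points in Fig. 7.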
This preliminary analysis was done with only $1/3$ of the total number of configurations available for the A7 ensemble, and we are now increasing the statistics. Furthermore, we plan to add the A8 ensemble (Table 1) in the near future. The phase-shift curve will be constrained more precisely with the additional points calculated from the bigger $(32^{3}\times 48)$ lattice at the same pion mass. We also plan to include non-resonant contributions from other possible $J$’s and $l$’s, taking into account partial-wave mixing. Finally, we will investigate other decay-width models to better understand the model-dependence of the extracted resonance parameters. 11 Acknowledgements This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231. SM and GR are supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics under Award Number DE-SC0009913. SK is supported by the Deutsche Forschungsgemeinschaft grant SFB-TRR 55. SK and GS were partially funded by the IVF of the HGF. SM and SS also thank the RIKEN BNL Research Center for support. JN was supported in part by the DOE Office of Nuclear Physics under grant DE-SC-0011090. AP was supported in part by the U.S. Department of Energy Office of Nuclear Physics under grant DE-FC02-06ER41444. LL was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contract DE-AC05-06OR23177. SP is supported by the Horizon 2020 of the European Commission research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 642069. We acknowledge the use of the USQCD software QLUA for the calculation of the correlators. References [1] P. Stoler, Form-factors of excited baryons at high $Q^{2}$ and the transition to perturbative QCD, Phys. Rev. Lett. 66 (1991) 1003–1006. [2] M. Lüscher and U. 
Wolff, How to calculate the elastic scattering matrix in two-dimensional quantum field theories by numerical simulation, Nuclear Physics B 339 (jul, 1990) 222–252. [3] K. Rummukainen and S. A. Gottlieb, Resonance scattering phase shifts on a nonrest frame lattice, Nucl. Phys. B450 (1995) 397–436, [hep-lat/9503028]. [4] C. H. Kim, C. T. Sachrajda and S. R. Sharpe, Finite-volume effects for two-hadron states in moving frames, Nucl. Phys. B727 (2005) 218–243, [hep-lat/0507006]. [5] L. Leskovec and S. Prelovsek, Scattering phase shifts for two particles of different mass and non-zero total momentum in lattice QCD, Phys. Rev. D85 (2012) 114507, [1202.2145]. [6] R. A. Briceño, Two-particle multichannel systems in a finite volume with arbitrary spin, Phys. Rev. D89 (2014) 074507, [1401.3312]. [7] M. Gockeler, R. Horsley, M. Lage, U. G. Meissner, P. E. L. Rakow, A. Rusetsky, G. Schierholz and J. M. Zanotti, Scattering phases for meson and baryon resonances on general moving-frame lattices, Phys. Rev. D86 (2012) 094513, [1206.4141]. [8] R. A. Briceño, M. T. Hansen and S. R. Sharpe, Relating the finite-volume spectrum and the two-and-three-particle $S$ matrix for relativistic systems of identical scalar particles, Phys. Rev. D95 (2017) 074510, [1701.07465]. [9] C. Alexandrou, L. Leskovec, S. Meinel, J. Negele, S. Paul, M. Petschlies, A. Pochinsky, G. Rendon and S. Syritsyn, $P$-wave $\pi\pi$ scattering and the $\rho$ resonance from lattice QCD, Phys. Rev. D96 (2017) 034525, [1704.05439]. [10] S. A. Gottlieb, P. B. MacKenzie, H. B. Thacker and D. Weingarten, Hadronic Couplings Constants in Lattice Gauge Theory, Nucl. Phys. B263 (1986) 704. [11] UKQCD collaboration, C. McNeile and C. Michael, Hadronic decay of a vector meson from the lattice, Phys. Lett. B556 (2003) 177–184, [hep-lat/0212020]. [12] CP-PACS collaboration, S. Aoki et al., Lattice QCD Calculation of the $\rho$ Meson Decay Width, Phys. Rev. D76 (2007) 094506, [0708.3705]. [13] QCDSF collaboration, M. Göckeler, R. 
Horsley, Y. Nakamura, D. Pleiter, P. E. L. Rakow, G. Schierholz and J. Zanotti, Extracting the $\rho$ resonance from lattice QCD simulations at small quark masses, PoS LATTICE2008 (2008) 136, [0810.5337]. [14] ETM collaboration, K. Jansen, C. McNeile, C. Michael and C. Urbach, Meson masses and decay constants from unquenched lattice QCD, Phys. Rev. D80 (2009) 054510, [0906.4720]. [15] X. Feng, K. Jansen and D. B. Renner, Resonance parameters of the $\rho$ meson from lattice QCD, Phys. Rev. D83 (2011) 094505, [1011.5288]. [16] Budapest-Marseille-Wuppertal collaboration, J. Frison et al., Rho decay width from the lattice, PoS LATTICE2010 (2010) 139, [1011.3413]. [17] C. B. Lang, D. Mohler, S. Prelovsek and M. Vidmar, Coupled channel analysis of the $\rho$ meson decay in lattice QCD, Phys. Rev. D84 (2011) 054503, [1105.5636]. [18] CS collaboration, S. Aoki et al., $\rho$ Meson Decay in 2+1 Flavor Lattice QCD, Phys. Rev. D84 (2011) 094505, [1106.5365]. [19] C. Pelissier and A. Alexandru, Resonance parameters of the $\rho$-meson from asymmetrical lattices, Phys. Rev. D87 (2013) 014503, [1211.0092]. [20] Hadron Spectrum collaboration, J. J. Dudek, R. G. Edwards and C. E. Thomas, Energy dependence of the $\rho$ resonance in $\pi\pi$ elastic scattering from lattice QCD, Phys. Rev. D87 (2013) 034505, [1212.0830]. [21] D. J. Wilson, R. A. Briceno, J. J. Dudek, R. G. Edwards and C. E. Thomas, Coupled $\pi\pi,K\bar{K}$ scattering in $P$-wave and the $\rho$ resonance from lattice QCD, Phys. Rev. D92 (2015) 094502, [1507.02599]. [22] RQCD collaboration, G. S. Bali, S. Collins, A. Cox, G. Donald, M. Göckeler, C. B. Lang and A. Schäfer, $\rho$ and $K^{*}$ resonances on the lattice at nearly physical quark masses and $N_{f}=2$, Phys. Rev. D93 (2016) 054509, [1512.08678]. [23] J. Bulava, B. Fahy, B. Hörz, K. J. Juge, C. Morningstar and C. H. Wong, $I=1$ and $I=2$ $\pi-\pi$ scattering phase shifts from $N_{\mathrm{f}}=2+1$ lattice QCD, Nucl. Phys. B910 (2016) 842–867, [1604.05593]. 
[24] B. Hu, R. Molina, M. Döring and A. Alexandru, Two-flavor Simulations of the $\rho(770)$ and the Role of the $K\bar{K}$ Channel, Phys. Rev. Lett. 117 (2016) 122001, [1605.04823]. [25] D. Guo, A. Alexandru, R. Molina and M. Döring, Rho resonance parameters from lattice QCD, Phys. Rev. D94 (2016) 034501, [1605.03993]. [26] Z. Fu and L. Wang, Studying the $\rho$ resonance parameters with staggered fermions, Phys. Rev. D94 (2016) 034505, [1608.07478]. [27] C. Andersen, J. Bulava, B. Hörz and C. Morningstar, The $I=1$ pion-pion scattering amplitude and timelike pion form factor from $N_{\rm f}=2+1$ lattice QCD, 1808.05007. [28] Particle Data Group collaboration, C. Patrignani et al., Review of Particle Physics, Chin. Phys. C40 (2016) 100001. [29] M. Shrestha and D. M. Manley, Multichannel parametrization of $\pi N$ scattering amplitudes and extraction of resonance parameters, Phys. Rev. C86 (2012) 055203, [1208.2710]. [30] C. Alexandrou, J. W. Negele, M. Petschlies, A. Strelchenko and A. Tsapalis, Determination of $\Delta$ resonance parameters from lattice QCD, Phys. Rev. D88 (2013) 031501, [1305.6081]. [31] U.-G. Meissner, The Beauty of Spin, J. Phys. Conf. Ser. 295 (2011) 012001, [1012.0924]. [32] V. Verduci, Pion-nucleon scattering in lattice QCD. PhD thesis, Graz U., 2014. [33] C. W. Andersen, J. Bulava, B. Hörz and C. Morningstar, Elastic $I=3/2p$-wave nucleon-pion scattering amplitude and the $\Delta$(1232) resonance from N${}_{f}$=2+1 lattice QCD, Phys. Rev. D97 (2018) 014506, [1710.01557]. [34] S. Durr, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, T. Kurth, L. Lellouch, T. Lippert, K. K. Szabo et al., Lattice QCD at the physical point: Simulation and analysis details, JHEP 08 (2011) 148, [1011.2711]. [35] R. C. Johnson, Angular momentum on a lattice, Phys. Lett. 114B (1982) 147–151. [36] S. Basak, R. G. Edwards, G. T. Fleming, U. M. Heller, C. Morningstar, D. Richards, I. Sato and S. 
Wallace, Group-theoretical construction of extended baryon operators in lattice QCD, Phys. Rev. D72 (2005) 094506, [hep-lat/0506029]. [37] V. Bernard, M. Lage, U.-G. Meissner and A. Rusetsky, Resonance properties from the finite-volume energy spectrum, JHEP 08 (2008) 024, [0806.4495]. [38] C. Morningstar, J. Bulava, B. Fahy, J. Foley, Y. C. Jhang, K. J. Juge, D. Lenkner and C. H. Wong, Extended hadron and two-hadron operators of definite momentum for spectrum calculations in lattice QCD, Phys. Rev. D88 (2013) 014511, [1303.6816]. [39] D. C. Moore and G. T. Fleming, Multiparticle States and the Hadron Spectrum on the Lattice, Phys. Rev. D74 (2006) 054504, [hep-lat/0607004]. [40] F. A. Cotton, Chemical applications of group theory. John Wiley & Sons, 2003. [41] B. Blossier, M. Della Morte, G. von Hippel, T. Mendes and R. Sommer, On the generalized eigenvalue method for energies and matrix elements in lattice field theory, JHEP 04 (2009) 094, [0902.1265]. [42] R. A. Briceno, J. J. Dudek and R. D. Young, Scattering processes and resonances from lattice QCD, Rev. Mod. Phys. 90 (2018) 025001, [1706.06223]. [43] V. Pascalutsa and M. Vanderhaeghen, Chiral effective-field theory in the Delta(1232) region: I. Pion electroproduction on the nucleon, Phys. Rev. D73 (2006) 034003, [hep-ph/0512244]. [44] C. Alexandrou, J. W. Negele, M. Petschlies, A. V. Pochinsky and S. N. Syritsyn, Study of decuplet baryon resonances from lattice QCD, Phys. Rev. D93 (2016) 114515, [1507.02724].
Spectral Study of the HESS J1745-290 Gamma-Ray Source as Dark Matter Signal J. A. R. Cembranos (E-mail: cembra@fis.ucm.es), V. Gammaldi (E-mail: vivigamm@pas.ucm.es), and A. L. Maroto (E-mail: maroto@fis.ucm.es) Departamento de Física Teórica I, Universidad Complutense de Madrid, E-28040 Madrid, Spain (December 2, 2020) Abstract We study the main spectral features of the gamma-ray fluxes observed by the High Energy Stereoscopic System (HESS) from the J1745-290 Galactic Center source during the years 2004, 2005 and 2006. In particular, we show that these data are well fitted as the secondary gamma-ray photons generated by dark matter annihilating into Standard Model particles, in combination with a simple power-law background. We present explicit analyses for annihilation into a single Standard Model particle-antiparticle pair. In this case, the best fits are obtained for the $u\bar{u}$ and $d\bar{d}$ quark channels and for the $W^{+}W^{-}$ and $ZZ$ gauge bosons, with a background spectral index compatible with the Fermi Large Area Telescope (LAT) data from the same region. The fits return a heavy WIMP, with a mass above $\sim 10$ TeV, but well below the unitarity limit for thermal relic annihilation. pacs: 04.50.Kd, 95.36.+x, 98.80.-k I Introduction The numerous and diverse data collected during the last years have established the Standard Cosmological Model as a simple framework in remarkable agreement with observations. This model is based on Einstein's General Relativity and a homogeneous and isotropic ansatz for the metric. In addition to ordinary matter, two new elements need to be added: a cosmological constant or another form of dark energy to account for the late-time acceleration of the universe, and dark matter (DM) to explain the formation and dynamics of cosmic structures. However, despite the multiple efforts, the fundamental nature of DM remains an open problem. 
There is strong astrophysical evidence for DM from galactic to cosmological scales, but its interactions with ordinary matter have not been probed beyond gravitational effects. In this sense, both direct and indirect DM searches are fundamental to explore particle models of DM. In the framework of indirect searches, the observation of gamma-ray fluxes from astrophysical sources represents one of the main approaches to the DM puzzle. If DM annihilates into SM particles, the subsequent chains of decay and hadronization of unstable products generically produce gamma-ray photons. The observation of this signal is highly affected by astrophysical uncertainties in the gamma-ray backgrounds and in the DM densities and distribution. On the other hand, depending on both the astrophysical and the particle-physics models of DM, the signature could be distinguishable from the background. Appealing targets for gamma-ray observations of annihilating DM are mainly selected according to the abundance of DM in the source and their distance. Galaxy clusters, dwarf spheroidal galaxies (dSphs) and the Galactic Center (GC) of the Milky Way are traditional targets of interest. Galaxy clusters are very rich in DM, but they are very distant objects. dSphs are also characterized by high DM densities, but at much shorter distances. In any case, despite their proximity, observations of dSphs have not been able to detect any gamma-ray flux signature over the background so far SEGUE ; FerdSp . On the other hand, the GC represents a very close target, but the complexity of the region, due to the high concentration of sources, makes the analysis quite involved. In this work, we will focus on the analysis of very high energy (VHE) gamma-rays coming from the GC. Different observations of the GC have been reported by several collaborations such as CANGAROO CANG , VERITAS VER , HESS Aha ; HESS , MAGIC MAG and Fermi-LAT Vitale ; ferm . 
In particular, we will analyze the data collected by the HESS collaboration during the years 2004, 2005, and 2006 associated with the HESS J1745-290 GC source HESS . The interpretation of these fluxes as a DM signal has been widely discussed in the literature since the very early days of the publication of the observed data Horns:2004bk ; Bergstrom1 ; Profumo:2005xd ; DMint . It was concluded that the spectral features of these gamma-rays disfavored a DM origin AN ; DMint . However, in recent studies Cembranos:2012nj ; Belikov:2012ty , it has been pointed out that these fluxes are compatible with the spectral signal of DM annihilation provided it is complemented with a simple background. This extra source of gamma-rays can originate from radiative processes generated by particle acceleration in the vicinity of the Sgr A East supernova remnant and the supermassive black hole SgrA . In this work, we present a systematic study of this assumption: In Section II, we show an analysis of the source, while a brief review of the gamma-ray flux coming from annihilating DM in galactic sources is presented in Section III. The fit of the HESS data with a total fitting function given by the combination of the background power-law component with the annihilating DM signature is discussed in Section IV. In Section V, we include in the analysis the data collected by Fermi-LAT from the same region. We summarize the main results of our work and prospects for future analyses in Section VI. Finally, an appendix provides useful details for reproducing the statistical study performed in these analyses. II HESS J1745-290 data DM is expected to be clumped in the central region of standard galaxies. In this sense, the central part of the Milky Way could be an important source of gamma-rays produced in DM annihilation processes. 
Because of its closeness, the GC represents a very appealing target for the indirect search of DM, but the complex nature of this area makes the identification of the sources quite difficult. Several sources have been detected not only in the gamma-ray band, but also in the infrared and X-ray ranges of the spectrum. The absence of variability in the HESS J1745-290 data at the TeV scale, during the years 2004, 2005 and 2006, suggests that the emission mechanism and emission regions differ from those invoked for the variable infrared and X-ray emissions X . The significant emission is confined to a few tenths of a degree HESS , but the angular distribution of the very high energy gamma-ray emission shows the presence of an additional diffuse component. The fundamental nature of this source is still unclear. These gamma-rays could originate from particle propagation ferm ; SgrA in the neighborhood of the Sgr A East supernova remnant and the supermassive black hole Sgr A, both located at the central region of our galaxy Atoyan ; AN . If the emission originates from the DM distribution, the morphology of the source requires a very compressed DM structure. In fact, it has been claimed Blumenthal ; Prada:2004pi that baryonic gas falling into the central region could account for such effects by modifying the gravitational potential and increasing the DM density in the center (see however Romano ; Salucci:2011ee ). If this is the case, the DM annihilation fluxes are expected to be enhanced by an important factor with respect to DM-only simulations, such as the classical NFW profile Prada:2004pi . In previous works HESS ; Atoyan , important deviations from a power-law spectrum have already been reported, and a cut-off at several tens of TeV has been identified as a distinctive feature of the spectrum. 
For instance, the observed data were compared to a simple power law: $$\frac{d\Phi_{Bg}}{dE}=\Phi_{0}\cdot\left(\frac{E}{\text{GeV}}\right)^{-\Gamma}\;,$$ (1) and a power law with a high energy exponential cut-off: $$\frac{d\Phi_{Bg}}{dE}=\Phi_{0}\cdot\left(\frac{E}{\text{GeV}}\right)^{-\Gamma}\cdot e^{-\frac{E}{E_{cut}}}\;,$$ (2) where $\Phi_{0}$ is the flux normalization, $\Gamma$ is the spectral index and $E_{cut}$ is the cut-off energy. The measured spectrum for the three-year dataset ranges from 260 GeV to 60 TeV. We have reproduced these analyses, with the results shown in Figs. 1 and 2. They are consistent with previous works HESS ; Atoyan , since the spectrum is well described by a power law with an exponential cut-off (Fig. 2), while a simple power law is clearly inconsistent (Fig. 1). III Gamma-rays from dark matter annihilation If the signal observed from the GC is a combination of gamma-ray photons from annihilating DM and a background, the total fitting function for the observed differential gamma-ray flux will be: $$\frac{d\Phi_{Tot}}{dE}=\frac{d\Phi_{Bg}}{dE}+\frac{d\Phi_{DM}}{dE}.$$ (3) We will assume a simple power law background parameterized as: $$\frac{d\Phi_{Bg}}{dE}=B^{2}\cdot\left(\frac{E}{\text{GeV}}\right)^{-\Gamma}.$$ (4) On the other hand, the differential gamma-ray flux originating from DM annihilation in galactic structures and substructures can be written generically as: $$\frac{d\Phi_{\text{DM}}}{dE}=\sum^{\text{channels}}_{i}\frac{\langle\sigma_{i}v\rangle}{2}\cdot\frac{dN_{i}}{dE}\cdot\frac{\Delta\Omega\,\langle J\rangle_{\Delta\Omega}}{4\pi M^{2}}\,,$$ (5) where $\langle\sigma_{i}v\rangle$ are the thermally averaged annihilation cross-sections of two DM particles into SM particles (labeled by the subindex $i$). We are assuming that the DM particle is its own antiparticle. 
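The comparison between Eqs. (1) and (2) can be reproduced with a standard weighted least-squares fit. The sketch below uses synthetic data in place of the actual HESS points (the normalization, index and cut-off energy are illustrative assumptions), and shows how a pure power law accumulates a much larger $\chi^{2}$ than a power law with an exponential cut-off:

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(E, phi0, gamma):
    """Eq. (1): simple power law, E in GeV."""
    return phi0 * E ** (-gamma)

def power_law_cutoff(E, phi0, gamma, E_cut):
    """Eq. (2): power law with exponential cut-off."""
    return phi0 * E ** (-gamma) * np.exp(-E / E_cut)

# Hypothetical spectrum over the quoted 260 GeV - 60 TeV range
E = np.logspace(np.log10(260.0), np.log10(6.0e4), 20)
true = power_law_cutoff(E, 1e-8, 2.1, 1.5e4)
rng = np.random.default_rng(0)
flux = true * rng.normal(1.0, 0.05, size=E.size)  # 5% scatter
sigma = 0.05 * true

popt, _ = curve_fit(power_law_cutoff, E, flux, p0=[1e-8, 2.0, 1e4], sigma=sigma)
chi2_cut = np.sum(((flux - power_law_cutoff(E, *popt)) / sigma) ** 2)

popt_pl, _ = curve_fit(power_law, E, flux, p0=[1e-8, 2.0], sigma=sigma)
chi2_pl = np.sum(((flux - power_law(E, *popt_pl)) / sigma) ** 2)
```

The cut-off fit recovers the injected parameters with $\chi^{2}/dof\sim 1$, while the pure power law cannot accommodate the high-energy suppression.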
$M$ is the mass of the DM particle, and the number of photons produced in each annihilation channel, $dN_{i}/dE$, involves decays and/or hadronization of unstable products such as quarks and leptons. Because of the non-perturbative QCD effects, the calculation of $dN_{i}/dE$ requires Monte Carlo event generators such as PYTHIA pythia . In our analysis, we will focus on gamma-rays coming from external bremsstrahlung and fragmentation of SM particle-antiparticle pairs produced by DM annihilation. We will ignore DM decays, the possible production of monoenergetic photons, N-body annihilations (with $N>2$), and photons produced by internal bremsstrahlung, which are model dependent. In particular, in order to simplify the discussion and provide useful information for a general analysis, we will consider DM annihilation into each single channel of SM particle-antiparticle pairs, i.e. $$\frac{d\Phi^{i}_{\text{DM}}}{dE}=A_{i}^{2}\cdot\frac{dN_{i}}{dE}\;,$$ (6) where $$A_{i}^{2}=\frac{\langle\sigma_{i}v\rangle\,\Delta\Omega\,\langle J\rangle_{\Delta\Omega}}{8\pi M^{2}}$$ (7) is a new constant to be fitted together with the DM particle mass $M$ and the background parameters $B$ and $\Gamma$. The astrophysical factor $\langle J\rangle_{\Delta\Omega}$ of Eq. (5) will thus also be fitted indirectly by means of the parameter $A_{i}$. In its most general expression, it is computed in the direction $\Psi$ defined by the line of observation towards the GC: $$\langle J\rangle_{\Delta\Omega}=\frac{1}{\Delta\Omega}\int_{\Delta\Omega}\text{d}\Omega\int_{0}^{l_{max}(\Psi)}\rho^{2}[r(l)]\,dl(\Psi)\,,$$ (8) where $l$ is the distance from the Sun to any point in the halo. The radial distance $r$ is measured from the GC, and is related to $l$ by $r^{2}=l^{2}+D_{\odot}^{2}-2D_{\odot}l\cos\Psi$, where $D_{\odot}\simeq 8.5$ kpc is the distance from the Sun to the center of the Galaxy. 
The distance from the Sun to the edge of the halo in the direction $\Psi$ is $l_{max}=D_{\odot}\cos\Psi+\sqrt{r_{max}^{2}-D_{\odot}^{2}\sin^{2}\Psi}$, where $r_{max}$ is the halo radius. The photon flux must be averaged over the solid angle of the detector, which is typically of order $\Delta\Omega=2\pi(1-\cos\Psi)\simeq 10^{-5}$ for detectors with sensitivities in the TeV energy scale, such as the HESS Cherenkov telescope array. The dark halo in the GC is usually modeled by the NFW profile Navarro:1996gj : $$\rho(r)\equiv\frac{N}{r(r+r_{s})^{2}}\;,$$ (9) where $N$ is the overall normalization and $r_{s}$ the scale radius. This profile is in good agreement with non-baryonic cold DM simulations of the GC. In this case, and accounting for just annihilating DM, the astrophysical factor is $\langle J^{\text{NFW}}_{(2)}\rangle\simeq 280\cdot 10^{23}\;\text{GeV}^{2}\text{cm}^{-5}$, a value that we will use as the standard reference. IV Single-channel fits As commented before, the particle model part of the differential gamma-ray flux expected from the GC is simulated by means of Monte Carlo event generators, such as PYTHIA pythia . However, the fact that simulations have to be performed for fixed DM mass implies that we cannot obtain an explicit $M$ dependence for the photon spectra. In order to overcome this limitation, the simulated spectra of each annihilation channel have been reproduced with the analytic fitting functions provided in Ref. Ce10 in terms of the WIMP mass, by means of mass dependent parameters. The combination of such simulated spectra with a power law background (Eq. (3)) is finally fitted. We assume a typical experimental resolution of $15\%$ ($\Delta E/E\simeq 0.15$) and a perfect detector efficiency. 
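The line-of-sight integral of Eq. (8) with the NFW profile of Eq. (9) can be evaluated by direct numerical integration. The sketch below works in kpc with an arbitrary density normalization and an assumed halo radius, so it illustrates the procedure rather than reproducing the quoted $\langle J\rangle$ value:

```python
import numpy as np
from scipy.integrate import quad

D_SUN = 8.5     # Sun-GC distance in kpc
R_MAX = 100.0   # assumed halo radius in kpc

def rho_nfw(r, n0=1.0, r_s=20.0):
    """NFW profile rho(r) = n0 / ((r/r_s)(1 + r/r_s)^2), arbitrary density units."""
    x = r / r_s
    return n0 / (x * (1.0 + x) ** 2)

def los_integral(psi):
    """Integral of rho^2 along the line of sight at angle psi (rad) from the GC."""
    l_max = D_SUN * np.cos(psi) + np.sqrt(R_MAX**2 - D_SUN**2 * np.sin(psi)**2)
    def integrand(l):
        # r(l) from the law of cosines given in the text
        r = np.sqrt(l**2 + D_SUN**2 - 2.0 * D_SUN * l * np.cos(psi))
        return rho_nfw(r) ** 2
    # the integrand peaks where the line of sight passes closest to the GC
    val, _ = quad(integrand, 0.0, l_max, points=[D_SUN * np.cos(psi)], limit=200)
    return val
```

Because $\rho^{2}\sim r^{-2}$ near the center, the integral is strongly peaked for small $\Psi$, which is why the averaging over the detector solid angle matters.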
For quarks (except the top), electrons and $\tau$ leptons, the most general formula needed to reproduce the behaviour of the differential number of photons over an energy range may be written as: $$\frac{dN}{dE}=\Big{[}\,a_{1}\exp\left(-b_{1}\left(\frac{E}{M}\right)^{n_{1}}-b_{2}\left(\frac{E}{M}\right)^{n_{2}}-\frac{c_{1}}{\left(\frac{E}{M}\right)^{d_{1}}}+\frac{c_{2}}{\left(\frac{E}{M}\right)^{d_{2}}}\right)+q\,\left(\frac{E}{M}\right)^{1.5}\ln\left[p\left(1-\left(\frac{E}{M}\right)\right)\right]\frac{\left(\frac{E}{M}\right)^{2}-2\left(\frac{E}{M}\right)+2}{\left(\frac{E}{M}\right)}\Big{]}E^{-1.5}M^{0.5}\;.$$ (10) The values of the different parameters change with the SM annihilation channel and, in some cases, with the range of the WIMP mass. The cases of interest are described below and the values of the parameters are reported in Appendix A. In the case of the electron-positron channel, the only contribution to the gamma-ray flux comes from the bremsstrahlung of the final particles. Therefore, in the previous expression (10) the exponential contribution is absent, $q=\alpha_{\text{QED}}/\pi$, and $p=\left(M/m_{e^{-}}\right)^{2}$. This choice of the parameters corresponds to the well-known Weizsäcker-Williams approximation (Fig. 3). In the case of the $\mu^{+}\mu^{-}$ channel, the exponential contribution in the expression above (10) is also absent. A proper fitting function for this channel can be written as: $$\frac{\text{d}N}{\text{d}E}\,=\,q\,\left(\frac{E}{M}\right)^{1.5}\ln\left[p\left(1-\left(\frac{E}{M}\right)^{l}\right)\right]\frac{\left(\frac{E}{M}\right)^{2}-2\left(\frac{E}{M}\right)+2}{\left(\frac{E}{M}\right)}E^{-1.5}M^{0.5}\;.$$ (11) All the parameters are here mass dependent, and their expressions for the mass range $10^{3}\,\text{GeV}\lesssim M\lesssim 5\times 10^{4}$ GeV are reported in Tab. 1. 
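Eq. (10) translates directly into code. The sketch below implements the generic spectrum and specializes it to the electron-positron channel, where the exponential term is absent, $q=\alpha_{\rm QED}/\pi$ and $p=(M/m_{e})^{2}$ as stated above; parameter values for the other channels would come from the tables of Appendix A and are not reproduced here:

```python
import numpy as np

ALPHA_QED = 1.0 / 137.036
M_ELECTRON = 0.511e-3  # GeV

def dNdE(E, M, a1, b1, n1, b2, n2, c1, d1, c2, d2, q, p):
    """Generic photon spectrum of Eq. (10); E and M in GeV, valid for E/M < 1."""
    x = E / M
    exp_part = a1 * np.exp(-b1 * x**n1 - b2 * x**n2
                           - c1 / x**d1 + c2 / x**d2)
    brem = q * x**1.5 * np.log(p * (1.0 - x)) * (x**2 - 2.0 * x + 2.0) / x
    return (exp_part + brem) * E**-1.5 * M**0.5

def dNdE_electron(E, M):
    """e+e- channel: exponential term switched off (a1 = 0), Weizsacker-Williams form."""
    return dNdE(E, M, a1=0.0, b1=0.0, n1=1.0, b2=0.0, n2=1.0,
                c1=0.0, d1=1.0, c2=0.0, d2=1.0,
                q=ALPHA_QED / np.pi, p=(M / M_ELECTRON) ** 2)
```

For a heavy WIMP the resulting bremsstrahlung spectrum falls roughly as $1/E$ up to the kinematic endpoint $E=M$.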
The best fit for the $\mu$ lepton is shown in Fig. 4. The $\tau^{+}\tau^{-}$ spectrum needs the entire Eq. (10) for an accurate analysis. The values of the mass independent parameters and the expressions of the mass dependent ones used in this work are reported in Tab. 2. The best fit for the $\tau$ lepton is shown in Fig. 5. The values of the mass independent parameters and the expressions of the mass dependent ones in (10) for the $u\bar{u}$ channel are reported in Tab. 3. The analytic fitting function is given for mass values up to $8000$ GeV, because of limitations in the Monte Carlo event generator PYTHIA. An extrapolation up to larger values of the mass has been performed. In this case the best fit is shown in Fig. 6. The parameters for the $d\bar{d}$ channel are given in Tab. 4. This channel provides the best fit (see Fig. 7) of all those considered in the paper and is used as a reference for comparison with other channels. The parameters for the $s\bar{s}$, $c\bar{c}$ and $b\bar{b}$ channels are reported in Tabs. 5, 6 and 7. The best fits of these hadronic channels are shown in Figs. 8, 9 and 10. The $t\bar{t}$ channel needs a different fitting function: $$\frac{dN}{dE}=\,a_{1}\,\exp\left(-b_{1}\,\left(\frac{E}{M}\right)^{n_{1}}-\frac{c_{1}}{\left(\frac{E}{M}\right)^{d_{1}}}-\frac{c_{2}}{\left(\frac{E}{M}\right)^{d_{2}}}\right)\left\{\frac{\ln\left[p\left(1-\left(\frac{E}{M}\right)^{l}\right)\right]}{\ln\,p}\right\}^{q}E^{-1.5}M^{0.5}\;.$$ (12) The values of the mass independent parameters and the expressions of the mass dependent ones (in the selected range of mass) are reported in Tab. 8. The best fit for the $t\bar{t}$ channel by using Eq. (12) is shown in Fig. 11. 
For the $W$ and $Z$ gauge bosons the parametrization is: $$\frac{dN}{dE}=\,a_{1}\,\exp\left(-b_{1}\,\left(\frac{E}{M}\right)^{n_{1}}-\frac{c_{1}}{\left(\frac{E}{M}\right)^{d_{1}}}\right)\left\{\frac{\ln\left[p\left(j-\frac{E}{M}\right)\right]}{\ln\,p}\right\}^{q}E^{-1.5}M^{0.5}\;,$$ (13) where the values of the parameters used in this study are reported in Tab. 9. The best fits for the $W^{+}W^{-}$ and $ZZ$ channels are shown in Figs. 12 and 13. The best fit is provided by the $d\bar{d}$ channel with $\chi^{2}/dof=0.73$ for a total of 24 degrees of freedom (dof). In any case, other hadronic channels such as $u\bar{u}$ (see Fig. 6) or $s\bar{s}$ also provide very good fits within 1$\sigma$. In the same way, softer spectra such as those provided by the $t\bar{t}$ (see Fig. 11), $W^{+}W^{-}$ (Fig. 12) or $ZZ$ (Fig. 13) channels are consistent with the data without a statistically significant difference. On the contrary, the leptonic channels (not only $e^{+}e^{-}$ and $\mu^{+}\mu^{-}$ but also $\tau^{+}\tau^{-}$, Figs. 3, 4 and 5), as well as the $c\bar{c}$ and $b\bar{b}$ channels (Figs. 9 and 10), are ruled out at more than 99% confidence level when compared to the best channel. It is interesting to note that, taking into account all the channels that provide a good fit, the DM mass is constrained to $15\;\text{TeV}\lesssim M\lesssim 110\;\text{TeV}$ within 2$\sigma$. The lighter values are consistent with hadronic annihilations ($u\bar{u}$) and the heavier ones with annihilation into $t\bar{t}$, which is more similar to the electroweak channels. V FERMI 1FGL J1745.6-2900 data It has been argued that the Fermi-LAT source 1FGL J1745.6-2900 and the previously considered HESS source J1745-290 are spatially coincident Cohen . In ferm , data from the first 25 months of observations of this Fermi-LAT source have been analyzed. 
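The quoted goodness-of-fit values can be turned into confidence statements through the $\chi^{2}$ survival function. A minimal sketch, where the second $\chi^{2}/dof$ value is an illustrative stand-in for a disfavored channel rather than a number from the text:

```python
from scipy.stats import chi2

DOF = 24  # degrees of freedom quoted for the single-channel fits

def gof_pvalue(chi2_per_dof, dof=DOF):
    """Probability of exceeding the observed chi^2 given the degrees of freedom."""
    return chi2.sf(chi2_per_dof * dof, dof)

p_best = gof_pvalue(0.73)  # d-dbar best fit quoted in the text
p_poor = gof_pvalue(3.0)   # hypothetical strongly disfavored channel
```

A channel with $\chi^{2}/dof\simeq 0.73$ is fully acceptable, while one with $\chi^{2}/dof\simeq 3$ would be excluded at well above the 99% level.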
It has been shown that the observed spectrum in the range 100 MeV to 300 GeV can be very well described by a broken power law with a break energy of $E_{br}=2.0^{+0.8}_{-1.0}$ GeV, and slopes $\Gamma_{1}=2.20\pm 0.04$ and $\Gamma_{2}=2.68\pm 0.05$ for energies lower and higher than $E_{br}$ respectively. Notice that the fitted value of $\Gamma_{2}$ from Fermi-LAT data is in very good agreement with the spectral index of the diffuse background obtained from HESS data in our previous analyses. Indeed, at the 95% confidence level, the allowed range for the spectral index of the diffuse background is $2.6\lesssim\Gamma\lesssim 3.7$. In this case, the lower values are consistent with all the allowed channels, but the higher values are only accessible to the light quark channels. In any case, all the channels that provide a good fit to HESS data are also consistent with the spectral index observed by Fermi-LAT. In Fig. 14, we show the case of the $W^{+}W^{-}$ channel to illustrate this consistency. Both the signal and background parameters are compatible with the $W^{+}W^{-}$ channel fit without the Fermi-LAT data. With the new data, the analysis even improves to $\chi^{2}/dof=0.75$. This interpretation implies that the Fermi-LAT telescope is able to detect just the background component of the total energy spectrum of the gamma-ray emission associated with DM. With the fit of the parameter $A$ (Eq. (7)) and by assuming a standard thermal cross section of $\langle\sigma v\rangle=3\cdot 10^{-26}\;\text{cm}^{3}\text{s}^{-1}$, we can estimate the astrophysical factor $\langle J\rangle$. We find $10^{25}\,\text{GeV}^{2}\,\text{cm}^{-5}\lesssim\langle J\rangle\lesssim 10^{26}\,\text{GeV}^{2}\,\text{cm}^{-5}$. This implies that the boost factors $b\equiv\langle J\rangle/\langle J^{\text{NFW}}\rangle$ spread over a range between two and three orders of magnitude. 
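The estimate of $\langle J\rangle$ from the fitted normalization follows from inverting Eq. (7). A sketch, where the value of $A^{2}$ passed in is a hypothetical fit output, not a number quoted in the text:

```python
import math

SIGMA_V = 3.0e-26      # assumed thermal cross section, cm^3 s^-1
DELTA_OMEGA = 1.0e-5   # detector solid angle, sr

def j_factor(A2, M, sigma_v=SIGMA_V, delta_omega=DELTA_OMEGA):
    """Invert Eq. (7): <J> = 8 pi M^2 A^2 / (sigma_v * Delta Omega), M in GeV."""
    return 8.0 * math.pi * M**2 * A2 / (sigma_v * delta_omega)

def boost(j, j_nfw=280e23):
    """Boost factor b = <J>/<J_NFW> relative to the reference NFW value (GeV^2 cm^-5)."""
    return j / j_nfw
```

Note the $M^{2}$ dependence: for a fixed fitted $A^{2}$, a heavier WIMP implies a quadratically larger inferred $\langle J\rangle$ and hence a larger boost.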
It is interesting to note that the enhancement of the DM distribution required to fit the data is compatible with the expectation of N-body simulations when the effect of the baryonic matter in the inner part of the GC is taken into account Prada:2004pi . VI Conclusions In this work, we have analyzed the possibility that the gamma-ray data observed by HESS from the central part of our galaxy are partially produced by DM annihilation. The complexity of the region and the ambiguous localisation of the numerous emitting sources inside it support the hypothesis of a background component. We have shown that even DM annihilations into single channels of the SM provide good fits, provided that the DM signal is complemented with a diffuse background compatible with Fermi-LAT observations. The fits return a DM mass in the range $15\;\text{TeV}\lesssim M\lesssim 110\;\text{TeV}$ and a background spectral index in the range $2.6\lesssim\Gamma\lesssim 3.7$ within 2$\sigma$. Some channels are clearly preferred with respect to others, such as the $d\bar{d}$ among the quark channels with $\chi^{2}/dof=0.73$, or the $W^{+}W^{-}$ and $ZZ$ channels with $\chi^{2}/dof=0.84$ and $\chi^{2}/dof=0.85$. On the contrary, the leptonic channels are seriously disfavored. The morphology of the signal is consistent with dark halos compressed by baryonic dissipation Blumenthal ; Prada:2004pi with boost factors of $b\sim 10^{3}$ for typical thermal cross sections. The DM particle that may have originated these data needs to be heavier than $\sim 10$ TeV. The large DM masses required for fitting the HESS data are not in contradiction with the unitarity limit for thermal relic annihilation. For example, this limit is around $110$ TeV for scalar DM particles annihilating in s-wave (for $\Omega_{\text{DM}}h^{2}=0.11$) Griest:1989wd ; Cembranos:2012nj . 
On the contrary, these heavy DM particles are practically unconstrained by direct detection experiments or particle colliders lab . An interesting example of this type of DM candidate, which could have a high enough mass and account for the right amount of DM in the form of a thermal relic, is the branon branons ; branonsgamma . Branons are new degrees of freedom corresponding to brane fluctuations in brane-world models. In general, they are natural candidates for DM because they are massive fields interacting weakly with the SM particles, since their interaction is suppressed by the fourth power of the brane tension scale $f$. For masses over $100$ GeV, the main contribution to the photon spectra comes from branons annihilating into the gauge bosons $ZZ$ and $W^{+}W^{-}$. In Cembranos:2012nj it was shown that a branon DM particle with mass $M\simeq 50.6$ TeV provides an excellent fit to HESS data, with the corresponding background compatible with Fermi-LAT data. The compatibility of its thermal abundance with the WMAP constraints WMAP demands a cross section of $\langle\sigma v\rangle=(1.14\pm 0.19)\cdot 10^{-26}\;\text{cm}^{3}\text{s}^{-1}$, which is equivalent to a brane tension of $f\simeq 27.5$ TeV. The analysis of other cosmic rays cosmics from the GC and from other astrophysical objects is fundamental to cross check the hypotheses considered in this work. Updated analyses of this kind of signal for heavy DM, combined with simple background components, would be of great interest in order to constrain the possible DM origin of the studied gamma-ray fluxes. Acknowledgements We thank Alvaro de la Cruz-Dombriz for useful comments. This work has been supported by MICINN (Spain) project numbers FIS 2008-01323, FIS2011-23000, FPA2011-27853-01 and Consolider-Ingenio MULTIDARK CSD2009-00064. References (1) MAGIC collaboration, [arXiv:astro-ph/1103.0477v1] (2011). (2) A. A. Abdo et al. [arXiv:astro-ph.CO/1001.4531v1] (2010). (3) K. 
Tsuchiya, R. Enomoto, L. T. Ksenofontov et al., ApJ, 606, L115 (2004). (4) K. Kosack, H. M. Badran, I. H. Bond et al., ApJ, 608, L97 (2004). (5) F. Aharonian, A. G. Akhperjanian, K. M. Aye et al., A&A, 425, L13 (2004b). (6) F. Aharonian, A. G. Akhperjanian, K. M. Aye et al., A&A, 503, 817 (2009). (7) J. Albert, E. Aliu, H. Anderhub et al., ApJ, 638, L101 (2006). (8) V. Vitale and A. Morselli, for the Fermi-LAT Collaboration, arXiv:0912.3828 [astro-ph.HE]. (9) M. Chernyakova et al., ApJ 726, 60 (2011); T. Linden, E. Lovegrove and S. Profumo, arXiv:1203.3539 [astro-ph.HE]. (10) D. Horns, Phys. Lett. B 607, 225 (2005) [Erratum-ibid. B 611, 297 (2005)]. (11) L. Bergstrom, T. Bringmann, M. Eriksson and M. Gustafsson, Phys. Rev. Lett. 94, 131301 (2005); Phys. Rev. Lett. 95, 241301 (2005). (12) S. Profumo, Phys. Rev. D 72, 103521 (2005). (13) F. Aharonian et al., Phys. Rev. Lett. 97, 221102 (2006). (14) F. Aharonian and A. Neronov, ApJ 619, 306 (2005). (15) J. A. R. Cembranos, V. Gammaldi and A. L. Maroto, Phys. Rev. D 86, 103506 (2012). (16) A. V. Belikov, G. Zaharijas and J. Silk, Phys. Rev. D 86, 083516 (2012). (17) R. M. Crocker, M. Fatuzzo, J. R. Jokipii et al., ApJ, 622, 892 (2005). (18) Q. Wang, F. Lu and E. Gotthelf, MNRAS, 367, 937 (2006); B. Aschenbach, N. Grosso, D. Porquet et al., A&A 417, 71 (2004). (19) A. Atoyan and C. D. Dermer, ApJ 617, L123 (2004). (20) G. R. Blumenthal, S. M. Faber, R. Flores, J. R. Primack, ApJ 301, 27 (1986); O. Y. Gnedin, A. V. Kravtsov, A. A. Klypin and D. Nagai, ApJ 616, 16 (2004). (21) F. Prada, A. Klypin, J. Flix Molina, M. Martínez, E. Simonneau, Phys. Rev. Lett. 93, 241301 (2004). (22) E. Romano-Díaz, I. Shlosman, Y. Hoffman, and C. Heller, ApJ 685, L105 (2008); ApJ 702, 1250 (2009); A. V. Maccio’ et al., arXiv:1111.5620 [astro-ph.CO]. (23) P. Salucci, M. I. Wilkinson, M. G. Walker, G. F. Gilmore, E. K. Grebel, A. Koch, C. F. Martins and R. F. G. Wyse, arXiv:1111.1165 [astro-ph.CO]. (24) T. Sjostrand, S. Mrenna and P. 
Skands, JHEP05 (2006) 026 (LU TP 06-13, FERMILAB-PUB-06-052-CD-T) [hep-ph/0603175]. (25) J. F. Navarro, C. S. Frenk, and S. D. White, ApJ 490, 493 (1997). (26) J. A. R. Cembranos, A. de la Cruz-Dombriz, A. Dobado, R. Lineros and A. L. Maroto, Phys. Rev.  D 83, 083507 (2011); AIP Conf. Proc.  1343, 595-597 (2011); J. Phys. Conf. Ser.  314, 012063 (2011); A. de la Cruz-Dombriz and V. Gammaldi, arXiv:1109.5027 [hep-ph]; http://teorica.fis.ucm.es/PaginaWeb/photon_spectra.html (27) Cohen-Tanugi, J., Pohl, M., Tibolla, O. and Nuss, E. 2009, in Proc. 31st ICRC (Lodz), 645 (http://icrc2009.uni.lodz.pl/proc/pdf/icrc0645.pdf) (28) K. Griest and M. Kamionkowski, Phys. Rev. Lett.  64, 615 (1990). (29) J. Alcaraz et al., Phys. Rev. D 67, 075010 (2003); P. Achard et al., Phys. Lett. B597, 145 (2004); J. A. R. Cembranos, A. Dobado and A. L. Maroto, Phys. Rev. D65 026005 (2002); Phys. Rev. D70, 096001 (2004); Phys. Rev. D 73, 035008 (2006); Phys. Rev. D 73, 057303 (2006); J. Phys. A 40, 6631 (2007); J. A. R. Cembranos, J. L. Diaz-Cruz and L. Prado, Phys. Rev. D 84, 083522 (2011). (30) A. Dobado and A. L. Maroto, Nucl. Phys. B 592, 203 (2001); J. A. R. Cembranos, A. Dobado and A. L. Maroto, Phys. Rev. Lett.  90, 241301 (2003); Phys. Rev. D 68, 103505 (2003); A. L. Maroto, Phys. Rev. D 69, 043509 (2004); Phys. Rev. D 69, 101304 (2004); Int. J. Mod. Phys. D13, 2275 (2004). J. A. R. Cembranos et al., JCAP 0810, 039 (2008). (31) J. A. R. Cembranos, A. de la Cruz-Dombriz, V. Gammaldi, A.L. Maroto, Phys. Rev. D 85, 043505 (2012). (32) E. Komatsu et al. [WMAP Collaboration], ApJ. Suppl. 192 18 (2011). (33) S. Rudaz and F. W. Stecker, Astrophys. J.  325, 16 (1988); J. A. R. Cembranos, J. L. Feng, A. Rajaraman and F. Takayama, Phys. Rev. Lett.  95, 181301 (2005); J. A. R. Cembranos, J. L. Feng and L. E. Strigari, Phys. Rev. Lett.  99, 191301 (2007); Phys. Rev.  D 75, 036004 (2007); J. A. R. Cembranos and L. E. Strigari, Phys. Rev.  D 77, 123519 (2008); J. A. R. Cembranos, Phys. Rev. Lett.  
102, 141301 (2009); Phys. Rev. D 73, 064029 (2006); T. Bringmann and C. Weniger, Phys. Dark Univ. 1, 194 (2012). APPENDIX A: Fitting function parameters In the following tables, we show explicitly the parameters used in the fitting functions (see Ce10 for further details).
Hall Viscosity of Composite Fermions Songyang Pu${}^{1}$, Mikael Fremling${}^{2}$, J. K. Jain${}^{1}$ ${}^{1}$Department of Physics, 104 Davey Lab, Pennsylvania State University, University Park, Pennsylvania 16802, USA. ${}^{2}$Institute for Theoretical Physics, Center for Extreme Matter and Emergent Phenomena, Utrecht University, Princetonplein 5, 3584 CC Utrecht, the Netherlands (November 25, 2020) Abstract Hall viscosity, also known as the Lorentz shear modulus, has been proposed as a topological property of a quantum Hall fluid. Using a recent formulation of the composite fermion theory on the torus, we evaluate the Hall viscosities for a large number of fractional quantum Hall states at filling factors of the form $\nu=n/(2pn\pm 1)$, where $n$ and $p$ are integers, from the explicit wave functions for these states. The calculated Hall viscosities $\eta^{A}$ agree with the expression $\eta^{A}=(\hbar/4){\cal S}\rho$, where $\rho$ is the density and ${\cal S}=2p\pm n$ is the “shift” in the spherical geometry. We discuss the role of modular invariance of the wave functions, of the center-of-mass momentum, and also of the lowest-Landau-level projection. Finally, we show that the Hall viscosity for $\nu={n\over 2pn+1}$ may be derived analytically from the microscopic wave functions, provided that the overall normalization factor satisfies a certain behavior in the thermodynamic limit. This derivation should be applicable to a class of states in the parton construction, which are products of integer quantum Hall states with magnetic fields pointing in the same direction. I Introduction The extreme precision of the quantization of the Hall resistance in integer quantum Hall effect Klitzing et al. (1980) led to a topological interpretation in terms of Chern numbers Thouless et al. (1982). The fractional quantum Hall effect Tsui et al. (1982), which emerges as a result of strong interactions, also motivated quantities that have a topological origin. 
These include the fractional charge of the excitations Laughlin (1983) and the vorticity of composite fermions, which manifests itself through an effective magnetic field Jain (1989a, 1990, 2007). According to a general topological classification based on Chern-Simons theory Wen and Zee (1992a, b); Wen (1995), the fractional quantum Hall ground states are characterized not only by their electromagnetic response, e.g. the Hall conductance, but also by their geometrical response, which can likewise be topological. An example of this is the Hall viscosity, the topic of this article. To understand the Hall viscosity as a geometrical response, let us consider applying a small deformation to a fluid. The small deformation is represented by the strain tensor $u_{ij}$ and the strain-rate tensor $\dot{u}_{ij}={du_{ij}\over dt}$, with $u_{ij}=(\partial_{i}u_{j}+\partial_{j}u_{i})/2$, where $u_{i}(\mbox{\boldmath$r$})$ is the displacement at $\mbox{\boldmath$r$}$ in the $i$th direction. The stress tensor $\sigma_{ij}$ induced by this small deformation is given by: $$\sigma_{ij}=\sum_{k,l}\lambda_{ijkl}u_{kl}+\sum_{k,l}\eta_{ijkl}\dot{u}_{kl}.$$ (1) Of the two rank-four tensors that describe the response to the deformation, the first, $\lambda_{ijkl}$, is called the elastic modulus tensor, which measures the fluid’s resistance to elastic deformation. The viscosity tensor $\eta_{ijkl}$ describes the fluid’s resistance to being deformed at a given rate. Because both the stress tensor and the strain-rate tensor are symmetric under the exchange $i\leftrightarrow j$ or $k\leftrightarrow l$, the viscosity tensor also has the symmetry property $\eta_{ijkl}=\eta_{jikl}=\eta_{ijlk}$. Considering the exchange $ij\leftrightarrow kl$, the viscosity tensor can be further decomposed into a symmetric component $\eta^{S}_{ijkl}=(\eta_{ijkl}+\eta_{klij})/2$ and an antisymmetric component $\eta^{A}_{ijkl}=(\eta_{ijkl}-\eta_{klij})/2$. 
The symmetric component is associated with energy dissipation under the deformation, while the antisymmetric component is nondissipative. As a consequence of the Onsager relation, the antisymmetric component survives only when time-reversal symmetry is broken, as in a quantum Hall system. In a two-dimensional isotropic system, the antisymmetric component is given by Avron et al. (1995) $$\eta^{A}=\eta^{A}_{1112}=\eta^{A}_{1222}.$$ (2) It is called the Hall viscosity in a quantum Hall system, and is also referred to as the Lorentz shear modulus Tokatly and Vignale (2007, 2009). The Hall viscosity is a local bulk property of a fluid, and should therefore be present (and measurable) independent of the geometry. However, for its theoretical evaluation it has proven particularly fruitful to consider a system with periodic boundary conditions, i.e. a torus. One may see this by noting that by adiabatically changing the torus geometry, one effectively simulates a uniform infinitesimal strain rate in the fluid. Avron, Seiler and Zograf Avron et al. (1995) (ASZ) considered the fluid on a torus defined by two sides $L_{1}$ and $L_{2}=L_{1}\tau$ in the complex plane. Here $\tau=\tau_{1}+i\tau_{2}$ is the modular parameter that specifies the geometry of the torus. In what follows, we choose $L_{1}$ to be real and $\tau_{2}>0$, in which case the area of the torus is $V=L_{1}^{2}\tau_{2}$. The number of flux quanta passing through the torus, $N_{\phi}$, is quantized to be an integer. (Here the flux quantum is defined as $\phi_{0}=h/e$.) The filling factor is defined as $\nu=N/N_{\phi}$, where $N$ is the number of electrons. It was shown in Ref. Avron et al., 1995 that the antisymmetric Hall viscosity $\eta^{A}$ is related to the Berry curvature $\mathcal{F}_{\tau_{1},\tau_{2}}$ in $\tau$ space (i.e. 
the space of torus geometries) as $$\eta^{A}=-{\hbar\tau_{2}^{2}\over V}\mathcal{F}_{\tau_{1},\tau_{2}},$$ (3) where $$\mathcal{F}_{\tau_{1},\tau_{2}}=-2{\rm Im}\bigg{\langle}{\partial\Psi\over\partial\tau_{1}}\bigg{|}{\partial\Psi\over\partial\tau_{2}}\bigg{\rangle}.$$ (4) The wave function $\Psi$ is a function of $\tau_{1}$, $\tau_{2}$ and the particle coordinates $z_{i}=x_{i}+iy_{i}=L_{1}\theta_{i1}+L_{2}\theta_{i2}$, where $\theta_{i1},\theta_{i2}\in[0,1)$ are called the reduced coordinates. The particles’ physical coordinates $x_{i}$ and $y_{i}$ change with the deformation of the geometry but the reduced coordinates do not; the partial derivatives with respect to $\tau_{1}$ and $\tau_{2}$ in Eq. 4 are evaluated while keeping the reduced coordinates $\theta_{i1}$ and $\theta_{i2}$ fixed. Ref. Avron et al., 1995 further showed that in the lowest Landau level (LLL), the Berry curvature for a single particle in any orbital is ${1\over 4\tau_{2}^{2}}$, implying that the Hall viscosity for a filled LLL is ${\hbar\over 4}{N\over V}$. Note that ${1\over\tau_{2}^{2}}d\tau_{1}\wedge d\tau_{2}$ is the volume form in $\tau$ space, implying that the Berry curvature is a constant (${1\over 4}$) times the volume form. ASZ further showed Avron et al. (1995) that the integral of $\mathcal{F}_{\tau_{1},\tau_{2}}$ in Eq. 4 over the fundamental domain of the $\tau$ plane gives the Hall viscosity, just as the integration of the Berry curvature over the first Brillouin zone of $k$ space returns the Hall conductivity. (Roughly speaking, the fundamental domain is the subset of all points in the $\tau$ plane which are not related by modular transformations Gunning and Brumer (1962). More details on the modular transformations are given in Sec. II and Appendix A.) 
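The Berry-curvature formula of Eq. (4) is straightforward to evaluate numerically by finite differences once the parameter dependence of a normalized state is known. As an illustration of the mechanics (a two-level toy model, not a quantum Hall computation), the sketch below applies it to a Bloch state parameterized by $(\theta,\phi)$ in place of $(\tau_{1},\tau_{2})$, for which the exact Berry curvature $-\sin\theta/2$ is known:

```python
import numpy as np

def state(theta, phi):
    """Normalized two-level Bloch state |psi(theta, phi)>."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def berry_curvature(theta, phi, h=1e-5):
    """F = -2 Im <d_theta psi | d_phi psi>, via central finite differences."""
    dth = (state(theta + h, phi) - state(theta - h, phi)) / (2 * h)
    dph = (state(theta, phi + h) - state(theta, phi - h)) / (2 * h)
    # np.vdot conjugates its first argument, giving the inner product of Eq. (4)
    return -2.0 * np.imag(np.vdot(dth, dph))
```

The same finite-difference strategy, with Monte Carlo estimates of the overlaps, underlies the numerical evaluation of $\mathcal{F}_{\tau_{1},\tau_{2}}$ for many-body wave functions on the torus.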
Lévay Lévay (1995) showed that the Berry curvature for a single particle in the $n$th Landau level (with $n=0$ for the LLL) is given by $$\mathcal{F}_{\tau_{1},\tau_{2}}=-{1\over 2}\left(n+{1\over 2}\right){1\over\tau_{2}^{2}}.$$ (5) The Hall viscosity for an integer number of filled Landau levels (LLs) can then be straightforwardly derived by adding the single-particle contributions (which is valid for a Slater determinant state) to give $$\eta^{A}_{n}=n{\hbar\over 4}{N\over V}.$$ (6) The calculation of the Hall viscosity becomes more complicated for fractional quantum Hall (FQH) states, which do not have a single Slater determinant form. For the Laughlin state at filling ${1\over m}$ Laughlin (1983), the Hall viscosity was shown to be $\eta^{A}=m{\hbar\over 4}{N\over V}$ by Tokatly and Vignale Tokatly and Vignale (2007, 2009), and by Read Read (2009) by exploiting the plasma analogy. Read Read (2009) also showed that for FQH states that can be expressed as conformal blocks in a conformal field theory, for which a generalized plasma analogy exists, the Hall viscosity satisfies $$\eta^{A}=\mathcal{S}{\hbar\over 4}{N\over V}.$$ (7) Here $\mathcal{S}$ is the so-called “shift”, i.e. the offset of flux quanta needed to form a ground state on a sphere: $$\mathcal{S}={N\over\nu}-N_{\phi}.$$ (8) The shift in the spherical geometry is a manifestation of the orbital spin, introduced by Wen and Zee Wen and Zee (1992a) to describe the coupling between the orbital motion and the curvature of space. This topological quantum number is quantized and robust within a topological phase, and can thus distinguish between different topological phases. The Hall viscosity is by extension believed to be a topological quantum number for a given FQH state. In some sense, it is a manifestation of the orbital spin through a transport coefficient. Read and Rezayi Read and Rezayi (2011) discussed the connection between Hall viscosity and the orbital spin, i.e. Eq. 
7, along with the robustness of the Hall viscosity within a topological phase, by noting that the commutator of distinct shear operations is a rotation. The Hall viscosity of the Laughlin and the paired Hall states has also been extracted by Lapa et al. Lapa and Hughes (2018); Lapa et al. (2018) from the matrix models for these states. Cho, You and Fradkin Cho et al. (2014) showed that Eq. 7 can be derived for the Jain states from the effective Chern-Simons field theory. In this paper, our aim is to obtain the Hall viscosity of the Jain states at filling fractions $$\nu={n\over 2pn\pm 1},$$ (9) directly from the explicit microscopic wave functions. According to Eq. 7, we expect $$\eta^{A}={\hbar N\mathcal{S}\over 4V}={\hbar N(2p\pm n)\over 4V}\;.$$ (10) An important step in this direction was taken by Fremling, Hansson and Suorsa Fremling et al. (2014), who obtained the Hall viscosity for the 2/5 state using both the exact Coulomb wave functions and a trial wave function obtained from conformal field theory (CFT) correlation functions. While the results approached Eq. 7 with increasing system size, they showed significant variation with the aspect ratio $\tau_{2}$, and also with the number of particles, presumably because the calculations were limited to small systems (10 particles or fewer). The present work has two objectives. First, a recent work Pu et al. (2017) has demonstrated how to construct the Jain wave functions on the torus for large systems, which should allow a treatment of many fractions of the form $\nu=n/(2pn+1)$ and also an estimation of the thermodynamic limit. Our explicit numerical evaluation of the Hall viscosity for many FQH states produces values consistent with Eq. 10 in the thermodynamic limit. The second objective of our work is to seek an analytical evaluation of the Hall viscosity for general FQH states, starting from the microscopic wave function. 
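The expectation of Eq. 10 is simple to tabulate for the Jain sequences. A minimal sketch, working in units of $\hbar$ and taking the density $\rho=N/V$ as an input:

```python
from fractions import Fraction

def jain_filling(n, p, sign=+1):
    """Filling factor nu = n/(2pn ± 1) of the Jain sequences (sign = ±1)."""
    return Fraction(n, 2 * p * n + sign)

def shift(n, p, sign=+1):
    """Shift S = 2p ± n entering eta^A = (hbar/4) S rho, Eq. 10."""
    return 2 * p + sign * n

def hall_viscosity(n, p, rho, sign=+1, hbar=1.0):
    """Expected Hall viscosity eta^A for a Jain state of density rho = N/V."""
    return 0.25 * hbar * shift(n, p, sign) * rho
```

For example, $\nu=1/3$ ($n=1$, $p=1$) has $\mathcal{S}=3$ and $\eta^{A}=(3/4)\hbar\rho$, reproducing the Laughlin result quoted above.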
The unprojected wave function for the state at $\nu=n/(2pn+1)$ is given by $Z\Psi_{n}\Psi_{1}^{2p}$, where $\Psi_{n}$ is the normalized wave function of $n$ filled Landau levels and $Z$ is an overall normalization factor. We prove that the Hall viscosity of this wave function is equal to the sum of the Hall viscosities of the individual factors, producing Eq. 10, provided that we make an assumption regarding the behavior of $Z$ in the thermodynamic limit. We further show that the Hall viscosity of the LLL projected wave function $Z^{\prime}{P_{\rm LLL}}\Psi_{n}\Psi_{1}^{2p}$ (where ${P_{\rm LLL}}$ is the LLL projection operator) is also given by the sum of the Hall viscosities of the individual factors, provided that we make an analogous assumption regarding the behavior of $Z^{\prime}$ in the thermodynamic limit. Numerical calculations provide support for the assumption regarding the normalization factors. We discuss the modular covariance of the Jain states, which is an important requirement for legitimate wave functions in the torus geometry as well as crucial for the evaluation of the Hall viscosity. We also find that projection of the wave function onto a well defined center-of-mass momentum sector produces a more sharply quantized Hall viscosity for finite $N$, although in the thermodynamic limit the quantized value is obtained independently of the center-of-mass momentum projection. In Appendices B and C we discuss this issue and show that momentum projection is a sub-leading effect in the thermodynamic limit. We briefly mention proposals for measuring the Hall viscosity experimentally. Haldane Haldane (2009) suggested that it is directly related to the stress caused by an inhomogeneous electric field. Following this direction, Hoyos and Son Hoyos and Son (2012) and Bradlyn, Goldstein and Read Bradlyn et al. (2012) have shown that the Hall viscosity can be extracted from the wave-vector dependent contribution to the Hall conductance. Refs.
Delacrétaz and Gromov (2017); Pellegrino et al. (2017) proposed that the Hall viscosity can be extracted in a pipe-flow setup by measuring the electrostatic potential close to the point contacts where current is injected. A similar observable effect caused by the Hall viscosity has also been proposed in Ref. Scaffidi et al. (2017), using hydrodynamic electronic transport in mesoscopic samples under magnetic fields. These proposals have been applied to graphene in non-quantizing magnetic fields, where the Hall viscosity was measured by probing local electrostatic potentials in the region where the electron current is non-uniform Berdyugin et al. (2019). The results showed good agreement with the semi-classical theory of hydrodynamic electron fluids Alekseev (2016); Pellegrino et al. (2017); Steinberg (1958); the negative magnetoresistance and the suppression of the Hall resistivity were observed as manifestations of the Hall viscosity. However, these experiments were not in the quantum Hall regime. A similar experiment was performed on GaAs quantum wells in the classical regime, where a pronounced negative magnetoresistance at low magnetic field was found as a signature of the Hall viscosity Gusev et al. (2018). Starting from hydrodynamics, Ref. Ganeshan and Abanov (2017) proposed another measurement protocol based on taking the difference of the torques acting on the Hall fluid under two opposite magnetic fields. The plan for the remainder of the article is as follows: In Sec. II we introduce the wave functions we use to evaluate Hall viscosities. These were originally constructed in Ref. Pu et al., 2017, but here we use a slightly different (though equivalent) form, written in the so-called $\tau$-gauge, which is crucial for our analytical proof. (Both the symmetric and the $\tau$ gauges are equally good for Monte Carlo evaluations.) In Sec. III, we prove that the Hall viscosity at $\nu={n\over 2pn+1}$ is given by Eq.
10, provided that we assume that the overall normalization factor of the product wave functions has a certain behavior in the thermodynamic limit; this assumption is tested numerically. In Sec. IV, we numerically evaluate the Hall viscosities at fillings $\nu=2/5$, $3/7$, $2/9$, $2/3$, $3/5$ and $2/7$ and find that the thermodynamic limit of the result is consistent with Eq. 10. We also find that both the projected and the unprojected wave functions produce the same Hall viscosity, supporting the notion that it is a topological quantity. II Composite Fermions on the Torus II.1 Modular covariance of wave functions As mentioned in the previous section, the torus is topologically equivalent to a periodic lattice spanned by a parallelogram. There are an infinite number of choices of basis vectors that span the same lattice. New basis vectors can be obtained from old basis vectors by the transformation $\bigl{(}\begin{smallmatrix}L_{2}^{\prime}\\ L_{1}^{\prime}\end{smallmatrix}\bigr{)}=\bigl{(}\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\bigr{)}\bigl{(}\begin{smallmatrix}L_{2}\\ L_{1}\end{smallmatrix}\bigr{)}$ where $a,b,c,d\in\mathbb{Z}$, $ad-bc=1$. In terms of the modular parameter $\tau=L_{2}/L_{1}$, this corresponds to the transformation $\tau\to\tau^{\prime}=(a\tau+b)/(c\tau+d)$. Since changing the signs of all elements does not produce a new transformation, the matrices $\bigl{(}\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\bigr{)}$ form the group $SL(2,\mathbb{Z})/\mathbb{Z}_{2}$. This group is generated by the two modular transformations $T:\tau\rightarrow\tau+1$ and $S:\tau\rightarrow-{1/\tau}$, which, in the matrix representation, correspond to $T=\bigl{(}\begin{smallmatrix}1&1\\ 0&1\end{smallmatrix}\bigr{)}$ and $S=\bigl{(}\begin{smallmatrix}0&1\\ -1&0\end{smallmatrix}\bigr{)}$. We will define the “physical” coordinates $(x,y)$ (also expressed as the complex number $z=x+iy$) with reference to the Cartesian axes.
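The group structure can be made concrete with a few lines of code (an illustration added here; the specific $\tau$ and the word in the generators are arbitrary choices): the generators act on $\tau$ through the Möbius rule $\tau\to(a\tau+b)/(c\tau+d)$, any word in $T$ and $S$ acts as the corresponding matrix product, and every such matrix has unit determinant, so the lattice is preserved.

```python
import numpy as np

T = np.array([[1, 1], [0, 1]])   # tau -> tau + 1
S = np.array([[0, 1], [-1, 0]])  # tau -> -1/tau

def mobius(m, tau):
    """Action of an SL(2,Z) matrix ((a,b),(c,d)) on the modular parameter tau."""
    (a, b), (c, d) = m
    return (a * tau + b) / (c * tau + d)

tau = 0.3 + 1.7j
assert np.isclose(mobius(T, tau), tau + 1)
assert np.isclose(mobius(S, tau), -1 / tau)

# A word in the generators acts as the corresponding matrix product,
# and the matrix has unit determinant (the lattice is preserved).
word = T @ S @ T @ T @ S
assert round(np.linalg.det(word)) == 1
step_by_step = mobius(T, mobius(S, mobius(T, mobius(T, mobius(S, tau)))))
assert np.isclose(mobius(word, tau), step_by_step)
print("generators act consistently:", mobius(word, tau))
```

Note that the document's convention, with the matrix acting on $(L_{2},L_{1})$ and $\tau=L_{2}/L_{1}$, reproduces exactly this Möbius action.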
These coordinates span the whole complex plane and thus do not depend on the choice of the lattice vectors, i.e. they remain unaltered upon a modular transformation. The wave functions in general depend on the modular parameter $\tau$, and thus are not necessarily invariant under the modular transformations $\tau\rightarrow\tau+1$ and $\tau\rightarrow-1/\tau$. However, because a modular transformation commutes with the Hamiltonian (which is invariant under a modular transformation), all sets of degenerate states, and by extension their observables, must be closed under modular transformations. That is, a modular transformation may only mix states that are degenerate in energy. Degenerate sets of wave functions that satisfy this property are said to be modular covariant. There are $2pn\pm 1$ degenerate ground state wave functions at $\nu=n/(2pn\pm 1)$. The set of exact ground state wave functions is of course closed under modular transformation. However, a given trial wave function does not necessarily satisfy the property of modular covariance. Explicit calculations in Ref. Fremling et al. (2014) have demonstrated that the Hall viscosity is sensitive to the modular covariance of the wave function, and a well defined value is obtained only for wave functions that are modular covariant. It is therefore important to ensure that the wave function we use to calculate the Hall viscosity is modular covariant. The modular covariance of the Jain composite-fermion (CF) wave functions can be demonstrated quite straightforwardly in the following fashion Fremling (2019): First of all, since the wave function $\Psi_{n}$ of $n$ filled LLs is non-degenerate, it is modular invariant, i.e. it remains unaltered under modular transformations. (For one filled LL, this can be proved by construction of single particle coherent wave functions, using Haldane’s modified sigma functions, that are modular invariant Fremling (2019).)
It therefore follows that the unprojected product wave function $\Psi^{\rm unproj}_{n\over 2pn\pm 1}\sim\Psi_{n}\Psi_{1}^{2p}$ is also modular invariant. It further follows that the LLL projected wave function $\Psi_{n\over 2pn\pm 1}=P_{\rm LLL}\Psi^{\rm unproj}_{n\over 2pn\pm 1}$ is modular invariant as well. This is easiest to see when $P_{\rm LLL}$ is defined as what is known as the direct projection. This $P_{\rm LLL}$ is explicitly modular invariant, as evident from its definition ${{P_{\rm LLL}}}=\prod_{i=1}^{N}\prod_{n=1}^{\infty}(1-a_{i}^{\dagger}a_{i}/n)$. Here $a_{i}$ and $a_{i}^{\dagger}$ are the LL lowering and raising operators (defined below), with $a_{i}^{\dagger}a_{i}$ measuring the LL index of the $i$th particle. The situation is more subtle for a different projection, introduced by Pu, Wu and Jain (PWJ) Pu et al. (2017) as the torus generalization of the Jain-Kamilla projection Jain and Kamilla (1997a, b). The PWJ projection will be used in our calculations below, as it allows treatment of large systems in the torus geometry. The PWJ projection cannot be represented by an operator, but it is possible to show that the PWJ projected wave functions are also modular covariant; we refer to the work by Fremling Fremling (2019) for a detailed proof. Interestingly, the condition for modular covariance of the wave function is the same as that for the validity of the PWJ projection, namely that the states are proper states, where proper states correspond to configurations for which there are no unoccupied CF-orbitals directly beneath an occupied CF-orbital Pu et al. (2017). (The PWJ projection does not preserve the quasi-periodic boundary conditions for non-proper states.) Let us next address the fact that the ground state is not unique but there are $2pn\pm 1$ degenerate ground states at $\nu=n/(2pn\pm 1)$, which can be chosen as eigenstates of the center of mass momentum $M$.
As shown in Appendix B, these can be obtained as $\Psi_{n\over 2pn\pm 1}^{(M)}=\mathcal{P}_{M}\Psi_{n\over 2pn\pm 1}$ where $\mathcal{P}_{M}$ is the momentum projection operator. Under a modular transformation, we get Fremling (2019) $$\displaystyle\Psi_{n\over 2pn\pm 1}^{(M)}=\mathcal{P}_{M}\Psi_{n\over 2pn\pm 1}$$ $$\displaystyle\quad\quad\rightarrow\mathcal{P}^{\prime}_{M}\Psi_{n\over 2pn\pm 1}=\sum_{M^{\prime}}K_{M,M^{\prime}}\Psi_{n\over 2pn\pm 1}^{(M^{\prime})},$$ (11) where $K_{M,M^{\prime}}$ is a unitary matrix. In other words, the set of $2pn\pm 1$ degenerate ground states $\{\Psi_{n\over 2pn\pm 1}^{(M)}\}$ is closed under modular transformation. This proves the modular covariance of the Jain CF wave functions. Finally, let us consider a translationally invariant physical operator $\hat{O}$, which must be independent of the center of mass momentum, i.e. must satisfy $\langle\Psi^{(M)}|\hat{O}|\Psi^{(M^{\prime})}\rangle=O\delta_{M,M^{\prime}}$. It follows that the expectation value $O$ remains invariant under a modular transformation provided that $\Psi^{(M)}$ transforms according to Eq. 11. The Hall viscosity, given by $\hat{\eta}\propto\overleftarrow{\partial}_{\tau_{1}}\tau_{2}^{2}\overrightarrow{\partial}_{\tau_{2}}$, is modular invariant; this is demonstrated in Appendix A. We note that the modular covariance of the wave functions obtained in the parton construction Jain (1989b) also follows as above, because these wave functions are also products of integer quantum Hall states. II.2 Wave functions in the “$\tau$ gauge” A proper gauge choice can be important. The wave functions for general FQH states at $\nu=n/(2pn\pm 1)$ in Ref. Pu et al. (2017) and for the CF Fermi sea in Refs. Shao et al., 2015; Wang et al., 2019; Geraedts et al., 2018; Pu et al., 2018 were first constructed in the symmetric gauge. This was crucial for implementing the LLL projection using the PWJ method.
Fremling Fremling (2019) demonstrated that these wave functions are modular covariant, i.e. satisfy Eq. 11. Here we use the Jacobi theta functions with rational characteristics, given by Mumford (2007) $$\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z\middle|\tau\right)=\sum_{n=-\infty}^{\infty}e^{i\pi\left(n+a\right)^{2}\tau}e^{i2\pi\left(n+a\right)\left(z+b\right)}.$$ (12) This function has the periodicity properties $$\displaystyle\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z+1\middle|\tau\right)=e^{i2\pi a}\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z\middle|\tau\right)$$ $$\displaystyle\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z+\tau\middle|\tau\right)=e^{-i\pi[\tau+2(z+b)]}\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z\middle|\tau\right)$$ (13) and the only zero of $\vartheta\left[\begin{array}[]{c}{{\displaystyle a}}\\ {\displaystyle b}\end{array}\right]\left(z\middle|\tau\right)$ inside the principal region lies at $z_{0}=(a+{1\over 2})\tau+b+{1\over 2}$. Recently, Haldane has proposed Haldane (2018) another building block called the “modified Weierstrass sigma function,” whose advantage is that it depends only on the lattice $\Lambda=\left\{nL_{1}+mL_{2}|n,m\in\mathbb{Z}\right\}$, rather than on a specific $\tau$, and is thus explicitly modular covariant. We show in Appendix D how to reformulate the Jain wave functions in terms of the modified Weierstrass sigma function. It turns out that for the analytic derivation of the Hall viscosity, it is most useful to use the “$\tau$ gauge” and to continue using the Jacobi theta functions as the building blocks. We now describe the details of the $\tau$ gauge.
The vector potential for the $\tau$-gauge is defined as $$(A_{x},A_{y})=B\left(y,-{\tau_{1}\over\tau_{2}}y\right),$$ (14) in physical coordinates, corresponding to a magnetic field $\mbox{\boldmath$B$}=-B\hat{\mbox{\boldmath$z$}}$. The vector potential in the reduced coordinates is given by $(A_{1},A_{2})=(L_{1}A_{x},L_{1}(\tau_{1}A_{x}+\tau_{2}A_{y}))=(2\pi N_{\phi}\theta_{2},0)$. In Eq. 4, when deforming $\tau$, one must also move the $x$ and $y$ coordinates to account for the new geometry, which in turn triggers a gauge transformation relating the new coordinates to the old ones. One of the advantages of the $\tau$ gauge is that, in terms of reduced coordinates, it is $\tau$ independent, and thus no additional gauge transformation is needed when $\tau$ is varied. This feature is also shared by the symmetric gauge, but, as will be demonstrated in Sec. III, the $\tau$ gauge is still more advantageous for analytic purposes. The quasi-periodic boundary conditions are defined as: $$t(L_{i})\psi(z,\bar{z})=e^{i\phi_{i}}\psi(z,\bar{z})\quad i=1,2.$$ (15) The magnetic translation operator $t(\xi)$ is given by $$t\left(\alpha L_{1}+\beta L_{2}\right)=e^{\alpha\partial_{1}+\beta\partial_{2}+i2\pi\beta N_{\phi}\theta_{1}},$$ (16) where $\partial_{j}\equiv{\partial\over{\partial\theta_{j}}}$. The number of flux quanta through the torus is fixed to be an integer $N_{\phi}={V\over 2\pi l^{2}}$, to ensure the commutation relation $\left[t(L_{1}),t(L_{2})\right]=0$. We now reformulate the wave functions of Ref. Pu et al. (2017) in the $\tau$ gauge. We write the single-particle orbital in the LLL with periodic boundary conditions according to Eq.
15 as: $$\displaystyle\psi^{(k)}_{0}(z,\bar{z})$$ $$\displaystyle=$$ $$\displaystyle\mathcal{N}e^{i\pi\tau N_{\phi}\theta_{2}^{2}}f^{(k)}_{0}(z),$$ (17) $$\displaystyle f^{(k)}_{0}(z)$$ $$\displaystyle=$$ $$\displaystyle\vartheta\left[\begin{array}[]{c}{{\scriptstyle{k\over N_{\phi}}+{\phi_{1}\over 2\pi}}}\\ {\scriptstyle-{\phi_{2}\over 2\pi}}\end{array}\right]\left(N_{\phi}z\over L_{1}\middle|N_{\phi}\tau\right),$$ (18) where the subscript “0” refers to the LLL and the normalization factor $\mathcal{N}=1/\sqrt{\ell_{B}L_{1}\sqrt{\pi}}=\left(2N_{\phi}/\tau_{2}\right)^{1\over 4}/L_{1}$ is with respect to the physical coordinates. The normalization with respect to the reduced coordinates carries an extra volume factor and is $\mathcal{N}=\left(2\tau_{2}N_{\phi}\right)^{1\over 4}$. Notably, $f_{0}^{(k)}$ is holomorphic in $z$. This form of the single-particle orbital has been used by Lévay in Ref. Lévay (1995). Eq. 18 has its $N_{\phi}$ zeros at $z_{m}=L_{1}\tau\left({k\over N_{\phi}}+{\phi_{1}\over 2\pi}+{1\over 2}\right)+{L_{1}\over N_{\phi}}\left(m+{1\over 2}-{\phi_{2}\over 2\pi}\right)$, with $m=0,1,\dots,N_{\phi}-1$. The superscript $k=0,1,2,\dots,N_{\phi}-1$ represents the momentum under magnetic translations by $t(L_{1}/N_{\phi})$: $$t\left(L_{1}/N_{\phi}\right)\psi_{0}^{(k)}(z,\bar{z})=e^{i{\phi_{1}+2\pi k\over N_{\phi}}}\psi_{0}^{(k)}(z,\bar{z}),$$ (19) while $t\left(L_{2}/N_{\phi}\right)$ changes $\psi_{0}^{(k)}(z,\bar{z})$ to $\psi_{0}^{(k+1)}(z,\bar{z})$ as $$t\left(L_{2}/N_{\phi}\right)\psi_{0}^{(k)}(z,\bar{z})\propto\psi_{0}^{(k+1)}(z,\bar{z}).$$ (20) From here on we will assume $\phi_{1,2}=0$ unless otherwise specified. (We note that the single particle orbital in Eq. 18 is slightly different from and more compact than the form used in Ref. Pu et al. (2017), where it was necessary to assign zeros artificially.)
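As an illustrative numerical check added here (with arbitrarily chosen flux, $\tau$, and evaluation point), one can verify the magnetic-translation eigenvalue in Eq. 19 for $\phi_{1,2}=0$: in reduced coordinates, $t(L_{1}/N_{\phi})$ simply shifts $\theta_{1}\to\theta_{1}+1/N_{\phi}$ (no extra phase, since $\beta=0$ in Eq. 16), and the orbital of Eqs. 17–18 picks up the phase $e^{2\pi ik/N_{\phi}}$.

```python
import numpy as np

def theta_fn(z, tau, a, b=0.0, nmax=40):
    """Truncated Jacobi theta with characteristics, Eq. (12)."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j * np.pi * (n + a) ** 2 * tau)
                  * np.exp(2j * np.pi * (n + a) * (z + b)))

def psi0(theta1, theta2, k, Nphi, tau):
    """LLL orbital of Eqs. (17)-(18) with phi_1 = phi_2 = 0, up to normalization."""
    z_over_L1 = theta1 + tau * theta2          # z = L1 * (theta1 + tau * theta2)
    gauss = np.exp(1j * np.pi * tau * Nphi * theta2 ** 2)
    return gauss * theta_fn(Nphi * z_over_L1, Nphi * tau, k / Nphi)

Nphi, tau = 4, 0.3 + 1.1j
th1, th2 = 0.23, 0.61
for k in range(Nphi):
    # t(L1/Nphi): theta1 -> theta1 + 1/Nphi; expected eigenvalue exp(2*pi*i*k/Nphi)
    lhs = psi0(th1 + 1.0 / Nphi, th2, k, Nphi, tau)
    rhs = np.exp(2j * np.pi * k / Nphi) * psi0(th1, th2, k, Nphi, tau)
    assert abs(lhs - rhs) < 1e-10 * abs(rhs)
print("Eq. 19 eigenvalues verified for all k")
```

The check works because shifting $\theta_{1}$ by $1/N_{\phi}$ shifts the theta-function argument $N_{\phi}z/L_{1}$ by exactly 1, producing the phase $e^{2\pi ia}$ of Eq. 13 with $a=k/N_{\phi}$, while the Gaussian factor is unchanged.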
The single particle wave function in the $n$th LL is given by $$\displaystyle\psi_{n}^{(k)}(z,\bar{z})$$ (21) $$\displaystyle=$$ $$\displaystyle\frac{\left(a^{\dagger}\right)^{n}}{\sqrt{n!}}\psi_{0}^{(k)}$$ $$\displaystyle=$$ $$\displaystyle{1\over\sqrt{n!}}\left[-\sqrt{2}\left(\ell_{B}\partial_{z}-{\bar{\tau}L_{1}\theta_{2}\over 2\ell_{B}}\right)\right]^{n}\psi_{0}^{(k)}$$ $$\displaystyle=$$ $$\displaystyle{1\over\sqrt{n!}}\mathcal{N}e^{i\pi\tau N_{\phi}\theta_{2}^{2}}\left[-\sqrt{2}\ell_{B}\partial_{z}-{i\sqrt{2}L_{1}\theta_{2}\tau_{2}\over\ell_{B}}\right]^{n}f^{(k)}_{0}(z)$$ $$\displaystyle=$$ $$\displaystyle{1\over\sqrt{n!}}\mathcal{N}e^{i\pi\tau N_{\phi}\theta_{2}^{2}}\sum_{t\in\mathbb{Z}+\frac{k}{N_{\phi}}}e^{i\pi N_{\phi}\tau t^{2}}e^{i2\pi N_{\phi}t\frac{z}{L_{1}}}\left[-{i\over\sqrt{2}}\left(2\lambda-{\partial_{\lambda}}\right)\right]^{n}\cdot 1$$ $$\displaystyle=$$ $$\displaystyle e^{i\pi N_{\phi}\tau\theta_{2}^{2}}f_{n}^{(k)}(z,\bar{z}),$$ where we define $\lambda\equiv{\tau_{2}L_{1}\over\ell_{B}}(\theta_{2}+t)$, and $f_{n}^{(k)}(z,\bar{z})$ is given by $$f_{n}^{(k)}(z,\bar{z})=\mathcal{N}_{n}\sum_{t\in\mathbb{Z}+\frac{k}{N_{\phi}}}e^{i\pi N_{\phi}\tau t^{2}}e^{i2\pi N_{\phi}t\frac{z}{L_{1}}}H_{n}\left(\frac{\tau_{2}L_{1}}{\ell_{B}}\left(\theta_{2}+t\right)\right),$$ (22) with $\mathcal{N}_{n}=\frac{1}{\sqrt{2^{n}n!\ell_{B}L_{1}\sqrt{\pi}}}$.
The wave function of $m$ filled LLs is just the Slater determinant of the occupied single particle orbitals: $$\Psi_{m}[z_{i},\bar{z}_{i}]=e^{i\pi\tau N_{\phi}^{\star}\sum_{j=1}^{N}\theta_{2,j}^{2}}{1\over\sqrt{N}}\chi_{m}[f_{i}(z_{j},\bar{z}_{j})],$$ (23) where $N_{\phi}^{\star}=N/m$ is the corresponding number of flux quanta through the torus, and $\chi_{m}[{f}_{i}(z_{j})]$ is the determinant for $m$ filled LLs: $$\chi_{m}[f_{i}(z_{j},\bar{z}_{j})]=\begin{vmatrix}f_{0}^{(0)}(z_{1})&f_{0}^{(0)}(z_{2})&\ldots&f_{0}^{(0)}(z_{N})\\ f_{0}^{(1)}(z_{1})&f_{0}^{(1)}(z_{2})&\ldots&f_{0}^{(1)}(z_{N})\\ \vdots&\vdots&\vdots\\ f_{0}^{(N_{\phi}^{\star}-1)}(z_{1})&f_{0}^{(N_{\phi}^{\star}-1)}(z_{2})&\ldots&f_{0}^{(N_{\phi}^{\star}-1)}(z_{N})\\ f_{1}^{(0)}(z_{1},\bar{z}_{1})&f_{1}^{(0)}(z_{2},\bar{z}_{2})&\ldots&f_{1}^{(0)}(z_{N},\bar{z}_{N})\\ f_{1}^{(1)}(z_{1},\bar{z}_{1})&f_{1}^{(1)}(z_{2},\bar{z}_{2})&\ldots&f_{1}^{(1)}(z_{N},\bar{z}_{N})\\ \vdots&\vdots&\vdots\\ f_{m-1}^{(N_{\phi}^{\star}-1)}(z_{1},\bar{z}_{1})&f_{m-1}^{(N_{\phi}^{\star}-1)}(z_{2},\bar{z}_{2})&\ldots&f_{m-1}^{(N_{\phi}^{\star}-1)}(z_{N},\bar{z}_{N})\\ \end{vmatrix}.$$ (24) Here, $f_{i}$ represents $f_{n_{i}}^{(k_{i})}$, i.e. the subscript $i$ of $f_{i}$ collectively denotes the quantum numbers $\{n_{i},k_{i}\}$, where $n_{i}$ is the LL index and $k_{i}$ is the momentum quantum number. (Note that the subscript of the many particle wave function $\Psi_{\nu}$ or $\chi_{n}$ refers to the filling factor, whereas the subscript of a single particle wave function, such as $\psi^{(k)}_{n}$ or $f^{(k)}_{n}$, refers to the LL index.) We use the convention that the LL index takes values $n=0,1,\cdots$, with $n=0$ corresponding to the LLL, while the filling factor has $m=1,2,\cdots$; we hope that the meaning is clear from the context.
Specifically, the LLL wave function can also be written in a Laughlin form: $$\displaystyle\Psi_{1}[z_{i},\bar{z}_{i}]$$ $$\displaystyle=$$ $$\displaystyle\mathcal{N}_{1}e^{i\pi\tau N\sum_{i}\theta_{2,i}^{2}}\vartheta\left[\begin{array}[]{c}{{\scriptstyle{N-1\over 2}}}\\ {\scriptstyle{N-1\over 2}}\end{array}\right]\left(Z\over L_{1}\middle|\tau\right)$$ (25) $$\displaystyle\quad\times\prod_{i<j}\vartheta\left[\begin{array}[]{c}{{\scriptstyle{\frac{1}{2}}}}\\ {\scriptstyle{\frac{1}{2}}}\end{array}\right]\left(z_{i}-z_{j}\over L_{1}\middle|\tau\right),$$ where $Z=\sum_{i}z_{i}$, and $\mathcal{N}_{1}=\frac{\tau_{2}^{\frac{N}{4}}\eta\left(\tau\right)^{-\frac{N\left(N-3\right)}{2}-1}}{\sqrt{N!}\left(2N\pi^{2}\right)^{\frac{N}{4}}}$ Fremling (2016). The general composite fermion wave functions before projection to the LLL are written as $$\Psi^{\rm unproj}_{n\over 2pn+1}=\Psi_{n}\Psi_{1}^{2p},\;\Psi^{\rm unproj}_{n\over 2pn-1}=\Psi_{n}^{\star}\Psi_{1}^{2p},$$ (26) where we omit the normalization factors. A particularly nice feature of working with the reduced coordinates $\theta_{1}$ and $\theta_{2}$ is that the magnetic length never enters any expression other than the normalization factor. This means that when wave functions are multiplied, no explicit rescaling of the magnetic length is needed to preserve the boundary conditions.
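This multiplicative structure can be illustrated numerically (the following check is an addition for illustration, with arbitrarily chosen parameters): a single-particle LLL orbital at $N_{\phi}=1$, written in reduced coordinates as in Eqs. 17–18, satisfies the $t(L_{2})$ boundary condition of Eqs. 15–16 with $\phi_{2}=0$, and its square satisfies the $N_{\phi}=2$ boundary condition, with no rescaling of any length.

```python
import numpy as np

def theta00(z, tau, nmax=40):
    """Jacobi theta with a = b = 0, Eq. (12), truncated."""
    n = np.arange(-nmax, nmax + 1)
    return np.sum(np.exp(1j * np.pi * n ** 2 * tau) * np.exp(2j * np.pi * n * z))

def orbital(th1, th2, Nphi, tau):
    """k = 0 LLL orbital at flux Nphi in reduced coordinates (Eqs. 17-18)."""
    return (np.exp(1j * np.pi * tau * Nphi * th2 ** 2)
            * theta00(Nphi * (th1 + tau * th2), Nphi * tau))

tau = 0.25 + 1.2j
th1, th2 = 0.31, 0.17

# t(L2) of Eq. 16: theta2 -> theta2 + 1, plus the phase exp(i 2 pi Nphi theta1).
# For Nphi = 1 the orbital is invariant (phi_2 = 0) ...
v = np.exp(2j * np.pi * 1 * th1) * orbital(th1, th2 + 1.0, 1, tau)
assert abs(v - orbital(th1, th2, 1, tau)) < 1e-9 * abs(v)

# ... and the product of two Nphi = 1 factors obeys the Nphi = 2 boundary
# condition, with the translation phase of the doubled flux.
prod = lambda t1, t2: orbital(t1, t2, 1, tau) ** 2
v = np.exp(2j * np.pi * 2 * th1) * prod(th1, th2 + 1.0)
assert abs(v - prod(th1, th2)) < 1e-9 * abs(v)
print("boundary conditions preserved under multiplication")
```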
Displaying explicitly the exponential factors, we have $$\Psi^{\rm unproj}_{n\over 2pn+1}=e^{i\pi\tau N_{\phi}\sum_{i}\theta_{2,i}^{2}}\chi_{n}[{f}_{i}(z_{j})]\left(\vartheta\left[\begin{array}[]{c}{{\scriptstyle{N-1\over 2}}}\\ {\scriptstyle{N-1\over 2}}\end{array}\right]\left(Z/L_{1}\middle|\tau\right)\prod_{i<j}\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{1}{2}}}\\ {\scriptstyle\frac{1}{2}}\end{array}\right]\left((z_{i}-z_{j})/L_{1}\middle|\tau\right)\right)^{2p},$$ (27) $$\Psi^{\rm unproj}_{n\over 2pn-1}=e^{\left(i\pi\tau N_{\phi}-2\pi|N_{\phi}^{\star}|\tau_{2}\right)\sum_{i}\theta_{2,i}^{2}}(\chi_{n}[{f}_{i}(z_{j})])^{\star}\left(\vartheta\left[\begin{array}[]{c}{{\scriptstyle{N-1\over 2}}}\\ {\scriptstyle{N-1\over 2}}\end{array}\right]\left(Z/L_{1}\middle|\tau\right)\prod_{i<j}\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{1}{2}}}\\ {\scriptstyle\frac{1}{2}}\end{array}\right]\left((z_{i}-z_{j})/L_{1}\middle|\tau\right)\right)^{2p},$$ (28) where $\chi_{n}[{f}_{i}(z_{j})]$ is the determinant of $n$ filled LLs at the effective flux $N^{*}_{\phi}=\pm N/n$ defined in Eq. 24. Note that in Eq. 27, in the exponential factor the terms from the different factors combine as $\tau N_{\phi}^{*}+\tau 2pN=\tau N_{\phi}$, where we have used that for $\nu=1$ we have $N_{\phi}=N$. On the other hand, in Eq. 28, which refers to the reverse flux states at $\nu=n/(2pn-1)$, these terms combine as $-\bar{\tau}|N_{\phi}^{*}|+\tau 2pN=\tau N_{\phi}+2i\tau_{2}|N_{\phi}^{*}|$, with $N_{\phi}=2pN-|N_{\phi}^{*}|$. The LLL projection of these wave functions requires the LLL projection of products of single particle wave functions of the type $\psi_{n}\psi$, where $\psi$ is some LLL wave function. Following Refs.
Pu et al., 2017 and Fremling, 2019, the LLL projection is given by $${P_{\rm LLL}}\psi_{n}\psi_{0}=\left(2i\ell_{B}\over N_{\phi}^{\star}\right)^{n}\frac{\mathcal{N}_{n}}{\mathcal{N}_{0}}e^{-i\pi\tau N_{\phi}\theta_{2}^{2}}\hat{f}_{n}f,$$ (29) where $\hat{f}_{n}$ is now an operator acting on $f$, which is the holomorphic part of $\psi$. In general this operator has the form $\hat{f}_{n}=\hat{D}^{n}f_{0}$, where $$\hat{D}=N_{\phi}^{\star}\hat{\partial}_{z}-\left(N_{\phi}-N_{\phi}^{\star}\right)\tilde{\partial}_{z},$$ (30) contains two different types of derivatives: $\tilde{\partial}_{z}$ and $\hat{\partial}_{z}$. The first, $\tilde{\partial}_{z}$, is understood to act only on $f_{0}$ and thus has the property $\tilde{\partial}_{z}f_{0}f=f\tilde{\partial}_{z}f_{0}$. The second, $\hat{\partial}_{z}$, does not act on $f_{0}$ at all and can be defined by $\hat{\partial}_{z}f_{0}f=f_{0}\hat{\partial}_{z}f$. Written out explicitly, for the $n=1$ Landau level we have $$\hat{f}_{1}^{(k)}(z)\propto(N_{\phi}^{*}-N_{\phi})\frac{\partial f_{0}^{(k)}(z)}{\partial z}+N_{\phi}^{*}f_{0}^{(k)}(z)\frac{\partial}{\partial z},$$ (31) which has exactly the same form as Eq. 54 in Ref. Pu et al. (2017). It is now straightforward to apply the modified PWJ projection Jain and Kamilla (1997a, b) as shown in Ref. Pu et al. (2017).
For the $\nu={n\over 2pn+1}$ states, the full LLL wave function is written as: $$\Psi_{n\over 2pn+1}[z_{i},\bar{z_{i}}]=e^{i\pi\tau N_{\phi}\sum_{i}\theta_{2,i}^{2}}\mathcal{N}_{1}^{2p}F_{1}^{2p}(Z)\chi_{n}[\hat{g}_{i}(z_{j})J_{j}^{p}],$$ (32) $${\chi_{n}}[\hat{g}_{i}(z_{j})J_{j}^{p}]=\begin{vmatrix}\hat{g}_{0}^{(0)}(z_{1})J_{1}^{p}&\ldots&\hat{g}_{0}^{(0)}(z_{N})J_{N}^{p}\\ \vdots&\vdots&\vdots\\ \hat{g}_{1}^{(0)}(z_{1})J_{1}^{p}&\ldots&\hat{g}_{1}^{(0)}(z_{N})J_{N}^{p}\\ \vdots&\vdots&\vdots\\ \end{vmatrix},$$ (33) where $F_{1}(Z)=\vartheta\left[\begin{array}[]{c}{{\scriptstyle{N-1\over 2}}}\\ {\scriptstyle{N-1\over 2}}\end{array}\right]\left(Z/L_{1}\middle|\tau\right)$, $J_{i}=\prod_{j(j\neq i)}\vartheta(z_{ij})$ and $\vartheta(z_{ij})=\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{1}{2}}}\\ {\scriptstyle\frac{1}{2}}\end{array}\right]\left((z_{i}-z_{j})/L_{1}\middle|\tau\right)$. The operator $\hat{g}_{n}^{(k)}(z_{i})$ is obtained from $\hat{f}_{n}^{(k)}(z_{i})$ by making the replacement $\partial/\partial z_{i}\rightarrow 2\partial/\partial z_{i}$ for all derivatives acting on $J^{p}_{i}$. This amounts to changing $\hat{\partial}_{z}\to 2\hat{\partial}_{z}$ in Eq. 30. For the LLL, $\hat{g}_{0}^{(k)}(z_{i})=f_{0}^{(k)}(z_{i})$. For the first and second Landau levels, we explicitly have $$\hat{g}_{1}^{(k)}(z)\propto(N_{\phi}^{*}-N_{\phi})\frac{\partial f_{0}^{(k)}(z)}{\partial z}+N_{\phi}^{*}f_{0}^{(k)}(z)\cdot 2\frac{\partial}{\partial z},$$ (34) $$\hat{g}_{2}^{(k)}(z)\propto(N_{\phi}-N_{\phi}^{*})^{2}{\partial^{2}f_{0}^{(k)}(z)\over\partial z^{2}}-2N_{\phi}^{*}(N_{\phi}-N_{\phi}^{*}){\partial f_{0}^{(k)}(z)\over\partial z}\cdot 2{\partial\over\partial z}+N_{\phi}^{*2}f_{0}^{(k)}\left(2{\partial\over\partial z}\right)^{2},$$ (35) and projections involving yet higher LLs can be derived analogously.
We note that it is crucial for the projection that the composite fermion state be a proper state, i.e. that there are no vacant $\Lambda$-level orbitals directly underneath any occupied $\Lambda$-level orbital. If this condition is not met, the periodic boundary conditions are not necessarily preserved. III Hall viscosity from microscopic wave functions: analytical approach The Hall viscosity is conjectured to be related to the shift in the spherical geometry. For the Jain wave functions, the shift can be derived straightforwardly in two steps. First, one can show that the shift for the unprojected wave function $\Psi_{n}\Psi_{1}^{2p}$ is equal to the sum of the shifts of the individual factors. From the result that the shift for $\Psi_{n}$ is $\mathcal{S}=n$, we obtain the result that the shift for the product is $\mathcal{S}=n+2p$. The second step is to show that the shift is preserved when the wave function is projected into the LLL. This is obviously the case, because the LLL projection in the spherical geometry keeps the system in the same Hilbert space. This suggests a possible route to deriving the Hall viscosity for the FQH states at $\nu=n/(2pn+1)$, following the same two steps. We show in this section that we can accomplish the two steps provided that we assume a certain property of the normalization factor, which is justified by numerical calculation. While the shift on the sphere is an exact property even for finite systems, the Hall viscosity of the FQH states varies with $N$, becoming equal to the expression in Eq. 10 only in the thermodynamic limit.
III.1 Hall viscosity for the unprojected wave functions The $\tau$ gauge is most favorable for an analytical proof due to the following result: Theorem: The normalized integer quantum Hall state $\Psi_{n}$ satisfies, in the $\tau$ gauge, the following property: $$(\partial_{\tau_{2}})_{\tau}\Psi_{n}={Nn\over 4\tau_{2}}\Psi_{n},$$ (36) where we treat, formally, $\tau$ and $\tau_{2}$ as the two independent variables (which amounts to replacing $\bar{\tau}\rightarrow\tau-2i\tau_{2}$). Proof: The single particle orbital in the $\tau$ gauge, given by Eq. 22, may be written as $$f_{n}^{(i)}=\mathcal{N}_{n}\sum_{k\in\mathbb{Z}+{i\over N_{\phi}}}h_{n,k}(\tau_{2})\zeta_{k}(\tau),$$ (37) where $i$ is the momentum index, $n$ is the LL index, $\mathcal{N}_{n}={1\over\sqrt{2^{n}n!\sqrt{\pi}L_{1}\ell_{B}}}$, $\zeta_{k}(\tau)=e^{i\pi N_{\phi}k(\tau k+2(\theta_{1}+\theta_{2}\tau))}$ and we define $h_{n,k}(\tau_{2})=H_{n}({\tau_{2}L_{1}\over\ell_{B}}(\theta_{2}+k))$. With some algebra, where we remember that $\tau_{2}L_{1}/\ell_{B}\propto\sqrt{\tau_{2}}$ and $\partial_{\tau_{2}}\mathcal{N}_{n}=\frac{1}{4\tau_{2}}\mathcal{N}_{n}$, together with the relation $H^{\prime}_{n}(x)=2nH_{n-1}(x)$, we get: $$(\partial_{\tau_{2}})_{\tau}f_{n}^{(i)}={1+2n\over 4\tau_{2}}f_{n}^{(i)}+{n(n-1)\over\tau_{2}}{\mathcal{N}_{n}\over\mathcal{N}_{n-2}}f_{n-2}^{(i)}.$$ (38) To obtain this result, it is important to remember that the area $V=\tau_{2}L_{1}^{2}$ is held fixed under the variation of $\tau_{2}$, such that $\partial_{\tau_{2}}\frac{\tau_{2}L_{1}}{\ell_{B}}\left(\theta_{2}+k\right)=\partial_{\tau_{2}}\sqrt{\tau_{2}V/\ell_{B}^{2}}\left(\theta_{2}+k\right)=\frac{1}{2\tau_{2}}\sqrt{\tau_{2}V/\ell_{B}^{2}}\left(\theta_{2}+k\right)=\frac{L_{1}}{2\ell_{B}}\left(\theta_{2}+k\right)$.
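The Hermite identity quoted above, $H^{\prime}_{n}(x)=2nH_{n-1}(x)$, together with the standard recurrence $xH_{n-1}(x)=\tfrac{1}{2}H_{n}(x)+(n-1)H_{n-2}(x)$ that converts the resulting $\lambda H_{n-1}$ terms back into $H_{n}$ and $H_{n-2}$, underlies Eq. 38. Both can be checked directly; the snippet below is an illustrative verification added here, using numpy's physicists' Hermite module.

```python
import numpy as np
from numpy.polynomial import hermite as H  # physicists' Hermite polynomials H_n

x = np.linspace(-3.0, 3.0, 11)

def Hn(n, x):
    """Evaluate H_n(x); the coefficient list selects the n-th basis polynomial."""
    return H.hermval(x, [0] * n + [1])

for n in range(2, 8):
    # H'_n(x) = 2 n H_{n-1}(x)
    deriv = H.hermval(x, H.hermder([0] * n + [1]))
    assert np.allclose(deriv, 2 * n * Hn(n - 1, x))

    # x H_{n-1}(x) = (1/2) H_n(x) + (n-1) H_{n-2}(x)
    assert np.allclose(x * Hn(n - 1, x), 0.5 * Hn(n, x) + (n - 1) * Hn(n - 2, x))
print("Hermite identities verified for n = 2..7")
```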
Therefore, when $(\partial_{\tau_{2}})_{\tau}$ acts on the wave function for $n$ filled LLs $\Psi_{n}$, we have $$(\partial_{\tau_{2}})_{\tau}\Psi_{n}=e^{i\pi\tau N_{\phi}^{\star}\sum_{i}\theta_{2,i}^{2}}{1\over\sqrt{N}}(\partial_{\tau_{2}})_{\tau}\chi_{n},$$ (39) $$(\partial_{\tau_{2}})_{\tau}\chi_{n}=(\partial_{\tau_{2}})_{\tau}\mathcal{A}\prod_{j=1}^{N}f_{n_{j}}^{(i_{j})}=\mathcal{A}\sum_{j=1}^{N}\left[{1+2n_{j}\over 4\tau_{2}}f_{n_{j}}^{(i_{j})}+{n_{j}(n_{j}-1)\over\tau_{2}}{\mathcal{N}_{n_{j}}\over\mathcal{N}_{n_{j}-2}}f_{n_{j}-2}^{(i_{j})}\right]\prod_{j^{\prime}\neq j}^{N}f_{n_{j^{\prime}}}^{(i_{j^{\prime}})}={Nn\over 4\tau_{2}}\chi_{n}.$$ (40) The last equality follows because the second term in the square brackets vanishes due to antisymmetrization (the orbital $f_{n_{j}-2}^{(i_{j})}$ is already occupied), while the first term gives $\sum_{j}(1+2n_{j})=Nn$ for $n$ filled LLs. Q.E.D. This result can be used to derive the Hall viscosity straightforwardly, as follows. Theorem: If a normalized wave function $\Psi$ satisfies $$(\partial_{\tau_{2}})_{\tau}\Psi={N\mathcal{S}\over 4\tau_{2}}\Psi,$$ (41) then its Hall viscosity is given by Eq. 7. For the special case of $\Psi_{n}$, Eq. 36 implies $\eta^{A}={\hbar Nn\over 4V}$. Proof: The Hall viscosity is related to the Berry curvature in Eq. 4, which can be expressed as $$\mathcal{F}_{\tau_{1},\tau_{2}}=\partial_{\tau_{1}}A_{2}-\partial_{\tau_{2}}A_{1},$$ (42) where the Berry connections are defined as $$A_{j}=i\langle\Psi|\partial_{\tau_{j}}|\Psi\rangle,$$ (43) with $\tau_{1}$ and $\tau_{2}$ chosen as independent variables. These can be expressed as $$\displaystyle A_{1}$$ $$\displaystyle=$$ $$\displaystyle A_{\tau}+A_{\bar{\tau}},$$ $$\displaystyle A_{2}$$ $$\displaystyle=$$ $$\displaystyle i(A_{\tau}-A_{\bar{\tau}}),$$ (44) where $A_{\tau}=i\langle\Psi|(\partial_{\tau})_{\bar{\tau}}|\Psi\rangle$ and $A_{\bar{\tau}}=i\langle\Psi|(\partial_{\bar{\tau}})_{\tau}|\Psi\rangle$, with $\tau$ and $\bar{\tau}$ chosen as independent variables. With the help of Eq.
41, it is straightforward to derive the Berry connections as: $$\displaystyle A_{\tau}$$ $$\displaystyle=$$ $$\displaystyle i\langle\Psi|\partial_{\tau}|\Psi\rangle=-{1\over 2}\langle\left({\partial\over\partial\tau_{2}}\right)_{\tau}\Psi|\Psi\rangle=-{N\mathcal{S}\over 8\tau_{2}},$$ $$\displaystyle A_{\bar{\tau}}$$ $$\displaystyle=$$ $$\displaystyle i\langle\Psi|\partial_{\bar{\tau}}|\Psi\rangle=-{1\over 2}\langle\Psi|\left({\partial\over\partial\tau_{2}}\right)_{\tau}\Psi\rangle=-{N\mathcal{S}\over 8\tau_{2}}.$$ (45) Eqs. 44 and 45 give $A_{1}=-{N\mathcal{S}\over 4\tau_{2}}$ and $A_{2}=0$, which implies $$\displaystyle\mathcal{F}_{\tau_{1},\tau_{2}}=\partial_{\tau_{1}}A_{2}-\partial_{\tau_{2}}A_{1}=-{N\mathcal{S}\over 4\tau_{2}^{2}},$$ (46) finally producing $\eta^{A}={\hbar N\mathcal{S}\over 4V}$. Q.E.D. We next prove the following theorem. Theorem: The Hall viscosity for the unprojected wave function $$\Psi_{n\over 2pn+1}^{\rm unproj}=Z\mathcal{P}_{M}\Psi_{n}\Psi_{1}^{2p}$$ (47) is given, in the thermodynamic limit, by $$\lim_{N\rightarrow\infty}\eta^{A}={\hbar N(n+2p)\over 4V},$$ (48) provided we assume that the normalization factor $Z$ satisfies the condition $$\lim_{N\rightarrow\infty}{1\over N}\left({\partial\over\partial\tau_{2}}\right)_{\tau}\ln Z=0.$$ (49) Here, we note that $\Psi_{n}$ and $\Psi_{1}$ in Eq. 47 are already taken as normalized, and $Z$ is the additional factor needed for the normalization of $\Psi_{n\over 2pn+1}^{\rm unproj}$. $\mathcal{P}_{M}$ projects the wave function into a definite momentum sector. Proof: First of all, it is a straightforward exercise to show (see Appendix E) that in the $\tau$-gauge we have $$[(\partial_{\tau_{2}})_{\tau},\mathcal{P}_{M}]=0.$$ (50) With Eq.
36 it then follows: $$(\partial_{\tau_{2}})_{\tau}\Psi_{n\over 2pn+1}^{\rm unproj}=\left[\left({% \partial\over\partial\tau_{2}}\right)_{\tau}\ln Z\right]\Psi_{n\over 2pn+1}^{% \rm unproj}+Z\mathcal{P}_{M}\big{[}\big{(}(\partial_{\tau_{2}})_{\tau}\Psi_{n}% \big{)}\Psi_{1}^{2p}+\Psi_{n}\big{(}(\partial_{\tau_{2}})_{\tau}\Psi_{1}^{2p}% \big{)}\big{]}\Rightarrow{(n+2p)N\over 4\tau_{2}}\Psi_{n\over 2pn+1}^{\rm unproj}$$ (51) where in the last step we have taken the limit $N\rightarrow\infty$ and retained the dominant term. The Hall viscosity in Eq. 48 follows according to our previous theorem. Q.E.D. The derivation depends on the assumption given by Eq. 49. The following considerations indicate that Eq. 49 is valid: $\bullet$ We evaluate the $\tau$ derivative of $Z$ numerically for $\nu=2/5$ and show that it satisfies Eq. 49; the results are shown in Appendix F. $\bullet$ In the next section we evaluate the Hall viscosity directly for many unprojected Jain wave functions and find that they satisfy Eq. 48 in the thermodynamic limit. $\bullet$ Eq. 49 is known to be true for the Laughlin states through the use of the plasma analogy Read (2009); Tokatly and Vignale (2009). However, an analogous plasma analogy for Jain states is likely to be much more complicated and remains an open question. $\bullet$ In Appendix F we explicitly evaluate the contribution of the $\left({\partial\over\partial\tau_{2}}\right)_{\tau}\ln Z$ term in Eq. 51 to the Hall viscosity. We find that while it provides a correction for finite $N$, the correction vanishes with increasing $N$. A comment regarding an interplay between Hall viscosity and momentum projection is in order. The commutator $[(\partial_{\tau_{2}})_{\tau},\mathcal{P}_{M}]=0$ does not guarantee that the wave functions with and without momentum projection have the same Hall viscosity, since the normalization factors can be different. 
In fact, for finite systems, they do have different Hall viscosities, since the decomposition into the various momentum states is itself $\tau_{2}$ dependent (see e.g. Appendix B). This $\tau_{2}$ dependence may introduce extra Berry phases and thus alter the Hall viscosity. In Appendix C, however, we show that this change in the viscosity is sub-leading in $N$ and therefore does not contribute in the thermodynamic limit. We finally note that the above considerations apply to wave functions obtained from the parton construction Jain (1989b), which are products of integer quantum Hall wave functions. We expect that the Hall viscosity of the wave function $\Psi_{\nu}=\prod_{\lambda=1}^{m}\Psi_{n_{\lambda}}$, with $\nu^{-1}=\sum_{\lambda=1}^{m}n^{-1}_{\lambda}$, is given by Eq. 7 with $\mathcal{S}=\sum_{\lambda=1}^{m}n_{\lambda}$. We note that the above proof only applies to product states in which each factor has the magnetic field pointing in the same direction. Generalization to reverse-flux attached states at $\nu=n/(2pn-1)$ or to states of parton theory with negative values of $n_{\lambda}$ remains an open problem. III.2 Hall viscosity for the LLL projected wave functions Next we project the wave functions to the LLL. In this case we have: Theorem: The Hall viscosity of the normalized projected wave function $$\Psi_{n\over 2pn+1}=Z^{\prime}{P_{\rm LLL}}\mathcal{P}_{M}\Psi_{n}\Psi_{1}^{2p},$$ (52) is given by Eq. 48, provided the normalization factor $Z^{\prime}$ ($Z^{\prime}\neq Z$) satisfies the condition $$\lim_{N\rightarrow\infty}{1\over N}\left({\partial\over\partial\tau_{2}}\right)_{\tau}\ln Z^{\prime}=0.$$ (53) Again, $\Psi_{n}$ and $\Psi_{1}$ are taken as normalized, and ${P_{\rm LLL}}$ denotes LLL projection. Proof: As explained in Sec. II.2, the “direct” projection is accomplished by the replacement $\chi_{n}\equiv\chi_{n}\big{(}f_{n}^{(i)}\big{)}\to\hat{\chi}_{n}\equiv\chi_{n}\big{(}\hat{D}^{n}f_{0}^{(i)}\big{)}$ in the unprojected wave function.
We now prove: $$[\big{(}\partial_{\tau_{2}}\big{)}_{\tau},\hat{\chi}_{n}]={Nn\over 4\tau_{2}}\hat{\chi}_{n}.$$ (54) With this result, the Hall viscosity of the LLL-projected wave functions in Eq. 52 can be obtained in the same fashion as for the unprojected wave function. Let us first consider $(\partial_{\tau_{2}})_{\tau}\hat{D}^{n}f_{0}^{(i)}$. We first note that $\left[\left(\partial_{\tau_{2}}\right)_{\tau},\partial_{z}\right]={1\over\tau_{2}}\big{(}{1\over 2}\partial_{z}+\partial_{\bar{z}}\big{)}$ since $z$ depends on $\tau_{1}$ and $\tau_{2}$. We then get: $$\big{(}\partial_{\tau_{2}}\big{)}_{\tau}\hat{D}=\hat{D}\big{(}\partial_{\tau_{2}}\big{)}_{\tau}+{1\over 2\tau_{2}}\hat{D}+{1\over\tau_{2}}\big{[}N_{\phi}^{\star}\hat{\partial}_{\bar{z}}-(N_{\phi}-N_{\phi}^{\star})\tilde{\partial}_{\bar{z}}\big{]}.$$ (55) As $\hat{D}$ only acts on LLL wave functions, which are consequently analytic in $z$, the last term on the right-hand side of Eq. 55 may be omitted. We thus effectively have: $$(\partial_{\tau_{2}})_{\tau}\hat{D}^{n}f_{0}^{(i)}={1+2n\over 4\tau_{2}}\hat{D}^{n}f_{0}^{(i)}+\hat{D}^{n}f_{0}^{(i)}(\partial_{\tau_{2}})_{\tau}.$$ (56) Eq. 54 then follows because: $$\big{(}\partial_{\tau_{2}}\big{)}_{\tau}\hat{\chi}_{n}=\left(\partial_{\tau_{2}}\right)_{\tau}\chi_{n}\big{(}\hat{D}^{n}f_{0}^{(i)}\big{)}=(\partial_{\tau_{2}})_{\tau}\mathcal{A}\prod_{j=1}^{N}\hat{D}^{n_{j}}f_{0}^{(i_{j})}=\mathcal{A}\Big{\{}\sum_{j}^{N}{1+2n_{j}\over 4\tau_{2}}\prod_{j^{\prime}}^{N}\hat{D}^{n_{j^{\prime}}}f_{0}^{(i_{j^{\prime}})}+\prod_{j^{\prime}}^{N}\hat{D}^{n_{j^{\prime}}}f_{0}^{(i_{j^{\prime}})}\big{(}\partial_{\tau_{2}}\big{)}_{\tau}\Big{\}}={Nn\over 4\tau_{2}}\hat{\chi}_{n}+\hat{\chi}_{n}\big{(}\partial_{\tau_{2}}\big{)}_{\tau}.$$ (57) Assuming Eq.
53, we now have, in the limit $N\rightarrow\infty$, $$(\partial_{\tau_{2}})_{\tau}\Psi_{n\over 2pn+1}=Z^{\prime}\mathcal{P}_{M}e^{i\pi\tau N_{\phi}\sum_{i}\theta_{2,i}^{2}}(\partial_{\tau_{2}})_{\tau}\big{\{}\hat{\chi}_{n}\chi_{1}^{2p}\big{\}}=Z^{\prime}\mathcal{P}_{M}e^{i\pi\tau N_{\phi}\sum_{i}\theta_{2,i}^{2}}\big{\{}{Nn\over 4\tau_{2}}\hat{\chi}_{n}\chi_{1}^{2p}+2p\hat{\chi}_{n}\chi_{1}^{2p-1}\big{[}(\partial_{\tau_{2}})_{\tau}\chi_{1}\big{]}\big{\}}={(n+2p)N\over 4\tau_{2}}\Psi_{n\over 2pn+1}.$$ (58) Q.E.D. The above proof also proceeds analogously for the PWJ projected wave functions. In the PWJ projection in Eq. 32, $\chi_{1}$ is decomposed into two parts: the center-of-mass part $F_{1}(Z)$ (which includes the normalization factor $\mathcal{N}_{1}$) and the Jastrow factors. Noting that $\left(\partial_{\tau_{2}}\right)_{\tau}\chi_{1}={N\over 4\tau_{2}}\chi_{1}$ and that the Jastrow factors $J_{i}$ are analytic functions of $\tau$, the center-of-mass part must be an eigenfunction of $\left(\partial_{\tau_{2}}\right)_{\tau}$ with eigenvalue ${N\over 4\tau_{2}}$. The Jastrow factors can be incorporated into $\hat{\chi}_{n}$ with the change $\hat{D}\rightarrow N_{\phi}^{\star}\hat{\partial}_{z}-2\left(N_{\phi}-N_{\phi}^{\star}\right)\tilde{\partial}_{z}$. Since, mutatis mutandis, Eq. 55 still holds for the new $\hat{D}$, we have $\left(\partial_{\tau_{2}}\right)_{\tau}\chi_{n}\left[\hat{g}_{i}(z_{j})J_{j}^{p}\right]={Nn\over 4\tau_{2}}\chi_{n}\left[\hat{g}_{i}(z_{j})J_{j}^{p}\right]$. As a final result, Eq. 58 is still valid after PWJ projection. The proof relies on the assumption in Eq. 53. For the LLL projected wave functions it is non-trivial to calculate $Z^{\prime}$ numerically.
However, we evaluate in the next section the Hall viscosity for the projected wave functions for several fractions and find that it converges to Eq. 48 in the thermodynamic limit, which suggests that Eq. 53 is valid. IV Numerical evaluations of the Hall viscosity We now use the Monte Carlo method to numerically evaluate the Hall viscosity defined in Eq. 3 for the wave functions constructed in Sec. II. To calculate the inner product $\big{\langle}{\partial\Psi\over\partial\tau_{1}}\big{|}{\partial\Psi\over\partial\tau_{2}}\big{\rangle}$, we make the approximations ${\partial\Psi\over\partial\tau_{1}}\approx{\Psi(\tau_{1}+\delta,\tau_{2})-\Psi(\tau_{1}-\delta,\tau_{2})\over 2\delta}$ and ${\partial\Psi\over\partial\tau_{2}}\approx{\Psi(\tau_{1},\tau_{2}+\tau_{2}\delta)-\Psi(\tau_{1},\tau_{2}-\tau_{2}\delta)\over 2\tau_{2}\delta}$, with $\delta\sim 10^{-3}$. The function $|\Psi(\tau_{1},\tau_{2})|^{2}$ is chosen as the weight function for the calculation of all inner products. We also make use of the “lattice Monte Carlo” trick introduced by Wang et al. Wang et al. (2019). In Fig. 1 we show the Hall viscosities for various fillings calculated using Eq. 3. We use both LLL-projected and unprojected wave functions to evaluate the Hall viscosities at fillings $2/5$, $3/7$ and $2/9$ for the states with positive flux attachment. For fillings $2/3$, $3/5$ and $2/7$, we only use the unprojected wave functions, since the PWJ projection for reverse-flux states on the torus is much more cumbersome to implement. Several noteworthy conclusions can be drawn from our study. $\bullet$ The most important conclusion is that the Hall viscosities converge to the expected value in Eq. 7 for sufficiently large systems. We find that the convergence is achieved already in systems that contain on the order of ten composite fermions.
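The finite-difference scheme described above can be sanity-checked on a toy example. The sketch below (hypothetical, not the CF wave functions of the paper) applies the same central differences, with step $\delta$ in $\tau_{1}$ and relative step $\tau_{2}\delta$ in $\tau_{2}$, to a two-component normalized family $\Psi(\tau)=(1,\tau)/\sqrt{1+|\tau|^{2}}$, for which ${\rm Im}\big{\langle}{\partial\Psi\over\partial\tau_{1}}\big{|}{\partial\Psi\over\partial\tau_{2}}\big{\rangle}=1/(1+\tau_{1}^{2}+\tau_{2}^{2})^{2}$ can be computed in closed form:

```python
def psi(t1, t2):
    """Toy normalized two-component state Psi = (1, tau)/sqrt(1+|tau|^2),
    a hypothetical stand-in for the (much larger) CF state vector."""
    tau = complex(t1, t2)
    norm = (1 + abs(tau) ** 2) ** 0.5
    return [1 / norm, tau / norm]

def inner(a, b):
    # Hermitian inner product <a|b>.
    return sum(x.conjugate() * y for x, y in zip(a, b))

def im_cross_term(t1, t2, delta=1e-3):
    """Central differences as in the text: step delta in tau_1, relative
    step tau_2*delta in tau_2; returns Im <dPsi/dtau_1 | dPsi/dtau_2>."""
    d1 = [(p - m) / (2 * delta)
          for p, m in zip(psi(t1 + delta, t2), psi(t1 - delta, t2))]
    d2 = [(p - m) / (2 * t2 * delta)
          for p, m in zip(psi(t1, t2 + t2 * delta), psi(t1, t2 - t2 * delta))]
    return inner(d1, d2).imag

# For this toy family the exact value is 1/(1+t1^2+t2^2)^2 (Fubini-Study form).
t1, t2 = 0.3, 0.9
exact = 1 / (1 + t1 ** 2 + t2 ** 2) ** 2
approx = im_cross_term(t1, t2)
assert abs(approx - exact) < 1e-6
```

With $\delta\sim 10^{-3}$ the truncation error of the central difference is $O(\delta^{2})$, far below the Monte Carlo statistical error in a realistic many-body calculation.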
Furthermore, the Hall viscosities have the expected values for both the projected and the unprojected wave functions, supporting the notion that they are a topological property of the state, independent of microscopic details. $\bullet$ The Hall viscosities have significant finite size corrections for small $N$. As shown in the previous section, a quantized Hall viscosity would be obtained for the unprojected wave functions if the overall normalization factor satisfied Eq. 49 for arbitrary $N$. Our numerical results show that this equation is satisfied only in the thermodynamic limit. $\bullet$ The Hall viscosities of the LLL projected and unprojected wave functions are surprisingly close (though not exactly equal), indicating that the LLL projection does not significantly alter the Hall viscosity even for small $N$. The small deviation between them vanishes as the system size increases, as shown in the inset of Fig. 1. $\bullet$ The convergence of the viscosity is achieved faster for $\nu=2/5$ than for $3/7$ and $2/9$. We attribute this to a longer correlation length for the latter two, as measured, for example, by the size of the quasiparticle or quasihole excitations. As the correlation length increases, the system needs to be larger in order for a particle to “forget” that it resides on a finite geometry. Indeed, comparison with earlier works Read and Rezayi (2011); Fremling (2016) shows that the Hall viscosity for $1/3$ converges even faster than that for $2/5$. $\bullet$ For all of the above calculations we have assumed a square torus. We have also studied the dependence on the corner angle as well as the aspect ratio. Fig. 2 shows the Hall viscosities for different values of $\tau$ along the unit circle $\tau=e^{i\phi}$ for $N=21$ particles.
The variations with $\phi$ are very small, showing again that the Hall viscosity of the projected and unprojected wave functions is independent of the geometry of the torus provided that the system is large enough. $\bullet$ To further test its robustness, we scan the Hall viscosity in the $\tau$ plane for the 16 particle system at $\nu=2/5$. The result is shown in Fig. 3. The Hall viscosities at different $\tau_{1}$ are represented by different symbols, and the horizontal axis represents $\ln(\tau_{2})$. The Hall viscosities are given by Eq. 10 in a fairly large region around $\tau=i$, but begin to show corrections when $\tau$ deviates too far from $\tau=i$. As expected from the modular covariance of the wave functions, the points connected by the modular transformation $\tau\to\tau+1$ (whose symbols have the same color) give exactly the same Hall viscosities. The curve at $\tau_{1}=0$ is symmetric under the transformation $\tau_{2}\to{1\over\tau_{2}}$, as also expected from modular covariance. $\bullet$ In Fig. 4, we compute the Hall viscosity in the thin torus limit $\tau=i\tau_{2}$ for $\tau_{2}>1$. We consider the 2/5 Jain state for two systems with $N=10$ and $N=20$ particles. We plot the Hall viscosity as a function of $L_{1}$ rather than $\tau_{2}$ to highlight the fact that the $L_{1}$-dependence of the viscosity is independent of the number of particles. This independence was noted earlier in Ref. Fremling, 2016 for the Laughlin state, and it also coincides with earlier work on the cylinder geometry Zaletel et al. (2013). One can understand this size independence by considering that the Hall viscosity should be a local characteristic of the Hall fluid. As such, whether one places the fluid on an infinite cylinder or a torus makes little difference, provided that the torus is long enough.
Thus the fluid will only be sensitive to the shorter circumference $L_{1}$ (the shorter edge) of the torus/cylinder, and deviations of the viscosity are expected to occur only when $L_{1}$ is comparable to or smaller than the correlation length of the fluid. It is also noteworthy that in the thin-torus limit, the Hall viscosity is seen to approach the value corresponding to $\mathcal{S}=1$. This is expected because in the thin torus limit, the 2/5 wave function reduces to a single Slater determinant, and all LLL Slater determinants have the same viscosity, $\eta^{A}={\hbar\over 4}{N\over V}$. This is true of all FQH states that reduce to a single Slater determinant in the thin torus limit. $\bullet$ We have noted above that our results are likely also applicable to the wave functions in the parton construction, which are products of integer quantum Hall states. It is possible to further generalize the class of wave functions that produce the quantized Hall viscosity. We consider the wave function $\Psi_{1}|\Psi_{1}|^{\alpha}$ as a trial wave function for $\nu=1$. This wave function has the same topological structure as $\Psi_{1}$, occurs at the same shift in the spherical geometry, and may represent the physics of LL mixing due to interactions. We find that the Hall viscosity approaches the value $\eta^{A}={\hbar\over 4}{N\over V}$ in the thermodynamic limit independent of the value of $\alpha$, as shown in Fig. 5. V Conclusion In this work, we have evaluated the Hall viscosities of the Jain states at many filling factors of the form $\nu=n/(2pn\pm 1)$, specifically for $\nu=2/5$, $3/7$, $2/9$, $2/3$, $3/5$ and $2/7$. For this purpose we use the LLL wave functions constructed in Ref. Pu et al. (2017), but appropriately re-expressed in $\tau$-gauge. The numerical results agree with Eq. 7 proposed by Read Read (2009), which relates the Hall viscosity to the orbital spin or the shift in the spherical geometry.
We also find that the Hall viscosities for the unprojected and projected wave functions are the same in the thermodynamic limit, supporting the notion that the Hall viscosity is a topological property of a FQH state. Additionally, the product form of our wave functions suggests a possible analytical derivation for the Hall viscosity of a large class of FQH states, analogous to the derivation of the shift in the spherical geometry. We show that the Hall viscosity of the unprojected and projected wave functions can be derived analytically provided we make an assumption regarding the thermodynamic behavior of an overall normalization factor $Z$. This assumption has been confirmed by detailed computer calculations. Acknowledgements. The work at Penn State (S.P. and J.K.J.) was supported in part by the U. S. Department of Energy, Office of Basic Energy Sciences, under Grant No. DE-SC0005042. The work at Utrecht (M.F.) is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). The numerical calculations were performed using Advanced CyberInfrastructure computational resources provided by The Institute for CyberScience at The Pennsylvania State University. We are grateful to F. D. M. Haldane, Péter Lévay, Johnathan Schirmer, and D. T. Son for their help and advice. Appendix A Proof of the modular invariance of Hall viscosity for CF wave functions In this appendix, we prove that the Hall viscosity defined in Eq. 3 is modular invariant provided that Eq. 11 is satisfied. The proof for the modular transformation $\tau\rightarrow\tau+1$ is trivial, since the wave function is invariant Fremling (2019). Below we show the statement is also true for $\tau\rightarrow-{1\over\tau}$. In Ref. Fremling (2019) it is shown that under this transformation, Eq. 11 holds with $K_{M,M^{\prime}}={1\over\sqrt{2pn\pm 1}}e^{i2\pi{nMM^{\prime}\over 2pn\pm 1}}$. 
We then note that the Hall viscosity is independent of the center-of-mass momentum, since: $$\big{\langle}{\partial\Psi^{(M+1)}\over\partial\tau_{1}}\big{|}{\partial\Psi^{(M+1)}\over\partial\tau_{2}}\big{\rangle}=\big{\langle}{\partial t_{CM}({L_{2}\over N_{\phi}})^{\dagger}\Psi^{(M)}\over\partial\tau_{1}}\big{|}{\partial t_{CM}({L_{2}\over N_{\phi}})\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}=\big{\langle}{\partial\Psi^{(M)}\over\partial\tau_{1}}\big{|}t_{CM}({L_{2}\over N_{\phi}})^{\dagger}t_{CM}({L_{2}\over N_{\phi}})\big{|}{\partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}=\big{\langle}{\partial\Psi^{(M)}\over\partial\tau_{1}}\big{|}{\partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}.$$ (59) From the first line to the second line, we used the fact that $t_{CM}({L_{2}\over N_{\phi}})$ is independent of $\tau_{1}$ and $\tau_{2}$, as shown in Appendix E. Similarly, because $t_{CM}({L_{1}\over N_{\phi}})$ is also independent of $\tau_{1}$ and $\tau_{2}$, it is straightforward to see that $\big{\langle}{\partial\Psi^{(M^{\prime})}\over\partial\tau_{1}}\big{|}{\partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}=\delta_{M,M^{\prime}}\big{\langle}{\partial\Psi^{(M)}\over\partial\tau_{1}}\big{|}{\partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}$. Under the transformation $\tau\rightarrow-{1\over\tau}$, we have $\tau_{1}^{\prime}=-{\tau_{1}\over|\tau|^{2}}$ and $\tau_{2}^{\prime}={\tau_{2}\over|\tau|^{2}}$. The Hall viscosity in Eq.
3 thus transforms like $$\displaystyle\eta^{A}$$ $$\displaystyle\rightarrow$$ $$\displaystyle{2\hbar\tau_{2}^{\prime 2}\over V}\sum_{n}|K_{mn}|^{2}{\rm Im}% \big{\langle}{\partial\Psi^{(M)}\over\partial\tau^{\prime}_{1}}\big{|}{% \partial\Psi^{(M)}\over\partial\tau^{\prime}_{2}}\big{\rangle}$$ (60) $$\displaystyle=$$ $$\displaystyle{2\hbar\tau_{2}^{\prime 2}\over V}{\rm Im}\big{\langle}{\partial% \Psi^{(M)}\over\partial\tau^{\prime}_{1}}\big{|}{\partial\Psi^{(M)}\over% \partial\tau^{\prime}_{2}}\big{\rangle}.$$ Given that $$\displaystyle{\partial\Psi^{(M)}\over\partial\tau^{\prime}_{1}}={\tau^{\prime 2% }_{1}-\tau^{\prime 2}_{2}\over(\tau^{\prime 2}_{1}+\tau^{\prime 2}_{2})^{2}}{% \partial\Psi^{(M)}\over\partial\tau_{1}}-{2\tau^{\prime}_{1}\tau^{\prime}_{2}% \over(\tau^{\prime 2}_{1}+\tau^{\prime 2}_{2})^{2}}{\partial\Psi^{(M)}\over% \partial\tau_{2}}$$ $$\displaystyle{\partial\Psi^{(M)}\over\partial\tau^{\prime}_{2}}={\tau^{\prime 2% }_{1}-\tau^{\prime 2}_{2}\over(\tau^{\prime 2}_{1}+\tau^{\prime 2}_{2})^{2}}{% \partial\Psi^{(M)}\over\partial\tau_{2}}+{2\tau^{\prime}_{1}\tau^{\prime}_{2}% \over(\tau^{\prime 2}_{1}+\tau^{\prime 2}_{2})^{2}}{\partial\Psi^{(M)}\over% \partial\tau_{1}},$$ it follows that $$\displaystyle\eta^{A}$$ $$\displaystyle\rightarrow$$ $$\displaystyle{2\hbar\tau_{2}^{\prime 2}\over V}{1\over(\tau^{\prime 2}_{1}+% \tau^{\prime 2}_{2})^{2}}{\rm Im}\big{\langle}{\partial\Psi^{(M)}\over\partial% \tau_{1}}\big{|}{\partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}$$ (62) $$\displaystyle=$$ $$\displaystyle{2\hbar\tau_{2}^{\prime 2}\over V}{\tau_{2}^{2}\over\tau^{\prime 2% }_{2}}{\rm Im}\big{\langle}{\partial\Psi^{(M)}\over\partial\tau_{1}}\big{|}{% \partial\Psi^{(M)}\over\partial\tau_{2}}\big{\rangle}$$ $$\displaystyle=$$ $$\displaystyle\eta^{A}.$$ This shows explicitly that the Hall viscosity is modular invariant for CF wave functions. 
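The coordinate-change algebra above is generic: because ${\rm Im}\langle\partial_{j}\Psi|\partial_{j}\Psi\rangle=0$, the combination $\tau_{2}^{2}\,{\rm Im}\big{\langle}{\partial\Psi\over\partial\tau_{1}}\big{|}{\partial\Psi\over\partial\tau_{2}}\big{\rangle}$ is unchanged when any smooth normalized family $\Psi(\tau)$ is re-expressed in the variables of $\tau^{\prime}=-1/\tau$. A finite-difference check on a toy two-component family (a hypothetical stand-in, not the CF wave functions):

```python
def psi(tau):
    """Toy normalized family Psi(tau) = (1, tau)/sqrt(1+|tau|^2)."""
    n = (1 + abs(tau) ** 2) ** 0.5
    return [1 / n, tau / n]

def eta_like(state, t1, t2, d=1e-4):
    """tau_2^2 * Im <d state/d tau_1 | d state/d tau_2> by central differences."""
    d1 = [(p - m) / (2 * d) for p, m in zip(state(t1 + d, t2), state(t1 - d, t2))]
    d2 = [(p - m) / (2 * d) for p, m in zip(state(t1, t2 + d), state(t1, t2 - d))]
    cross = sum(a.conjugate() * b for a, b in zip(d1, d2))
    return t2 ** 2 * cross.imag

def psi_unprimed(t1, t2):
    return psi(complex(t1, t2))

def psi_primed(t1p, t2p):
    # The same state expressed in the primed variables: tau = -1/tau'.
    return psi(-1 / complex(t1p, t2p))

tau = complex(0.4, 1.2)
taup = -1 / tau                     # corresponding primed point
lhs = eta_like(psi_unprimed, tau.real, tau.imag)
rhs = eta_like(psi_primed, taup.real, taup.imag)
assert abs(lhs - rhs) < 1e-6        # invariant under tau -> -1/tau
```

The invariance follows from the Jacobian of the Möbius map exactly cancelling the $\tau_{2}^{2}$ prefactor, mirroring Eq. 62.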
Appendix B The momentum components of the CF state It is noted in the main text that $\Psi_{n\over 2pn+1}$ is not a momentum eigenstate, but a superposition of $2pn+1$ different momentum eigenstates, i.e. $\Psi_{n\over 2pn+1}=\sum_{m=0}^{2p}\alpha_{m}\Psi_{n\over 2pn+1}^{(m)}$, in which $\Psi_{n\over 2pn+1}^{(m)}$ are momentum eigenstates. It turns out that the coefficients $\alpha_{m}$ depend on both $\tau$ and $\tau_{2}$, which, in turn, gives rise to an extra Berry curvature. In this appendix and the next, using the CF formulation of the Laughlin state as an explicit example, we show how this extra Berry curvature comes about and also that it is sub-leading in the thermodynamic limit. We thus take as a starting point the two formulations of the Laughlin state at $\nu=1/q$: The first is the momentum projected state $$\Psi_{1/q}^{\left(p\right)}\left(z\right)=\mathcal{N}\left(\tau\right)e^{i\pi\tau N_{\phi}\sum_{i}y_{i}^{2}}\prod_{i<j}\vartheta\left[\begin{array}[]{c}{{\scriptstyle 1/2}}\\ {\scriptstyle 1/2}\end{array}\right]\left(\frac{z_{ij}}{L_{1}}\middle|\tau\right)^{q}{F}_{1/q}^{(p)}\left(\frac{Z}{L_{1}},\tau\right),$$ (63) where the center-of-mass part is $${F}_{1/q}^{(p)}\left(Z,\tau\right)=\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{p}{q}+q\frac{N-1}{2}}}\\ {\scriptstyle q\frac{N-1}{2}}\end{array}\right]\left(qZ\middle|q\tau\right),$$ and the normalization is $\mathcal{N}\left(\tau\right)\propto\tau_{2}^{\frac{qN}{4}}$. The second version is obtained by multiplying $q$ copies of the $\nu=1$ wave functions, $\Phi_{1/q}\left(z\right)=\left[\Psi_{1}\left(z\right)\right]^{q}$.
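Both formulations are built from the theta function with characteristics, $\vartheta\left[{a\atop b}\right](z|\tau)=\sum_{n\in\mathbb{Z}}\exp\left[i\pi\tau(n+a)^{2}+2\pi i(n+a)(z+b)\right]$. A truncated version of this series is easy to evaluate and to sanity-check against two standard properties: the odd theta $\vartheta\left[{1/2\atop 1/2}\right]$ vanishes at $z=0$ (which is why the $i<j$ factor in Eq. 63 vanishes at coincident particle positions), and $z\to z+1$ produces the phase $e^{2\pi ia}$. A minimal sketch (the truncation cutoff is an illustrative choice, not from the paper):

```python
import cmath

def theta(a, b, z, tau, cutoff=30):
    """Truncated series for the theta function with characteristics:
    theta[a;b](z|tau) = sum_n exp(i*pi*tau*(n+a)^2 + 2*pi*i*(n+a)*(z+b)).
    cutoff=30 is ample for Im(tau) >~ 0.5."""
    s = 0j
    for n in range(-cutoff, cutoff + 1):
        s += cmath.exp(1j * cmath.pi * tau * (n + a) ** 2
                       + 2j * cmath.pi * (n + a) * (z + b))
    return s

tau = complex(0.3, 1.1)

# The odd theta vanishes at the origin (coincident particles in Eq. 63).
assert abs(theta(0.5, 0.5, 0.0, tau)) < 1e-12

# Quasi-periodicity under z -> z+1: picks up the phase e^{2*pi*i*a}.
z, a, b = complex(0.17, 0.05), 0.25, 0.4
lhs = theta(a, b, z + 1, tau)
rhs = cmath.exp(2j * cmath.pi * a) * theta(a, b, z, tau)
assert abs(lhs - rhs) < 1e-10
```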
The quotient between these two functions is $$\frac{\Psi_{1/q}^{\left(p\right)}\left(z\right)}{\Phi_{1/q}\left(z\right)}% \propto\frac{\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{p}{q}+q\frac% {N-1}{2}}}\\ {\scriptstyle q\frac{N-1}{2}}\end{array}\right]\left(qZ\middle|q\tau\right)}{% \left[\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{N-1}{2}}}\\ {\scriptstyle\frac{N-1}{2}}\end{array}\right]\left(Z\middle|\tau\right)\right]% ^{q}}.$$ (64) Thus, the only difference lies in the center of mass representations. For simplicity we will assume that $N$ is odd such that $(N-1)/2$ is an integer (which avoids carrying around cumbersome factors of one half). We may now expand $\left[\vartheta\left[\begin{array}[]{c}{{\scriptstyle 0}}\\ {\scriptstyle 0}\end{array}\right]\left(Z\middle|\tau\right)\right]^{q}$ in terms of $\sum_{p}\alpha_{p}\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{p}{q}}}% \\ {\scriptstyle 0}\end{array}\right]\left(qZ\middle|q\tau\right)$, where we seek $\alpha_{p}$. We may extract $\alpha_{p}$ by expanding the two wave functions mode by mode. After some algebra one finds $$\displaystyle\alpha_{K}$$ $$\displaystyle=$$ $$\displaystyle e^{-i\pi\frac{1}{q}\tau K^{2}}\sum_{\underset{k_{1},\ldots,k_{q}% }{{}_{\sum_{j}k_{j}=K}}}e^{i\pi\tau\sum_{j}k_{j}^{2}}$$ (65) $$\displaystyle=$$ $$\displaystyle\sum_{\underset{\tilde{k}_{1},\ldots,\tilde{k}_{q}}{{}_{\sum_{j}k% _{j}=K}}}e^{i\pi\tau\sum_{j}\tilde{k}_{j}^{2}}.$$ The sums are over the $k_{1},\ldots,k_{q}\in\mathbb{Z}$, constrained such that $\sum_{j}k_{j}=K$. We further use the abbreviation $\tilde{k}_{j}=k_{j}-\frac{K}{q}$, to show explicitly that $\alpha_{K}=\alpha_{K+q}$, and that $\alpha_{K}=\alpha_{-K}$. From Ref. 
Fremling (2016) we may identify $\alpha_{K}=Z_{K}^{\left(q\right)}$ where $$Z_{k}^{\left(N\right)}=\sum_{t=1}^{N-1}Z_{t}^{\left(N-1\right)}\vartheta\left[\begin{array}[]{c}{{\scriptstyle\frac{t}{N-1}-\frac{k}{N}}}\\ {\scriptstyle 0}\end{array}\right]\left(0\middle|N\left(N-1\right)\tau\right),$$ (66) is defined recursively with $Z_{t}^{\left(1\right)}=1$. The properly normalized $\alpha_{s}$, such that $\sum_{s}\left|\alpha_{s}\right|^{2}=1$, is thus $\alpha_{s}=\frac{Z_{s}^{\left(q\right)}}{\sqrt{W^{\left(q\right)}}}$ where $W^{\left(q\right)}=\sum_{s=1}^{q}\left|Z_{s}^{\left(q\right)}\right|^{2}$. The absolute value of $\alpha_{s}$ is shown in Fig. 6, for $\tau=i\tau_{2}$ and in the entire $\tau$-plane, for $q=2,3$. From this we see that in the thin torus limit ($\tau_{2}\to\infty$) the CF state is almost a momentum eigenstate, since $|\alpha_{s}^{\left(q\right)}|^{2}=\delta_{s,0}$. However, in the opposite thin torus limit ($\tau_{2}\to 0$), the state is in an equal superposition, with $|\alpha_{s}^{\left(q\right)}|^{2}=\frac{1}{q}$. Appendix C The Berry curvature from Momentum Mixing In this Appendix, we compute the correction to the Hall viscosity due to the mixing of different momentum eigenstates, and show that it vanishes in the thermodynamic limit. We write the wave function at $\nu={n\over 2pn+1}$ as $\Psi=\sum_{s}\alpha_{s}\Psi^{(s)}$ (in this section we omit the filling factor as a subscript, and use $s$ for the momentum index), where $\sum_{s}\left|\alpha_{s}\right|^{2}=1$, and $\Psi^{(s)}$ are the momentum projected components. We further assume that Eq. 51 holds for the momentum projected states, namely that $$\partial_{\bar{\tau}}\Psi^{(s)}=i{P\over 2\tau_{2}}\Psi^{(s)},\quad\quad\partial_{\tau}\Psi^{(s)\star}=-i{P\over 2\tau_{2}}\Psi^{(s)\star},$$ (67) where $P={(n+2p)N\over 4}$.
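The mixing coefficients $\alpha_{K}$ of Eq. 65 can also be evaluated directly, without the recursion of Eq. 66, by truncating the constrained sum over $k_{1},\ldots,k_{q}\in\mathbb{Z}$. A small sketch (cutoff and the sample values of $q$ and $\tau$ are illustrative choices, not from the paper), which also checks the symmetries $\alpha_{K}=\alpha_{K+q}=\alpha_{-K}$ stated in Appendix B:

```python
import cmath
from itertools import product

def alpha(K, q, tau, cutoff=6):
    """Truncated constrained sum of Eq. 65:
    alpha_K = sum over (k_1..k_q) with sum_j k_j = K of
              exp(i*pi*tau * sum_j (k_j - K/q)^2)."""
    total = 0j
    for ks in product(range(-cutoff, cutoff + 1), repeat=q):
        if sum(ks) != K:
            continue
        total += cmath.exp(1j * cmath.pi * tau * sum((k - K / q) ** 2 for k in ks))
    return total

q, tau = 3, complex(0.2, 1.0)

# Periodicity and reflection symmetry: alpha_K = alpha_{K+q} = alpha_{-K}.
assert abs(alpha(1, q, tau) - alpha(1 + q, q, tau)) < 1e-9
assert abs(alpha(1, q, tau) - alpha(-1, q, tau)) < 1e-9

# Thin-torus trend (large tau_2): the K=0 component dominates,
# consistent with |alpha_s|^2 -> delta_{s,0}.
assert abs(alpha(1, q, complex(0, 4))) < abs(alpha(0, q, complex(0, 4)))
```

For $\mathrm{Im}\,\tau\gtrsim 1$ the terms decay like $e^{-\pi\,\mathrm{Im}\,\tau\sum_{j}\tilde{k}_{j}^{2}}$, so a small cutoff already reproduces the symmetries to high accuracy.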
Computing the Berry connection for the Fourier expanded states gives: $$A_{\bar{\tau}}=i\big{\langle}\Psi\big{|}\partial_{\bar{\tau}}\Psi\big{\rangle}=i\big{\langle}\Psi\big{|}\partial_{\bar{\tau}}\sum_{s}\alpha_{s}\Psi^{(s)}\big{\rangle}=i\big{\langle}\Psi\big{|}\sum_{s}\left(\partial_{\bar{\tau}}\alpha_{s}\right)\Psi^{(s)}\big{\rangle}+i\overbrace{\big{\langle}\Psi\big{|}\sum_{s}\alpha_{s}\left(\partial_{\bar{\tau}}\Psi^{(s)}\right)\big{\rangle}}^{\frac{iP}{2\tau_{2}}},$$ where the overbraced matrix element is simply $\frac{iP}{2\tau_{2}}$ since $\partial_{\bar{\tau}}$ preserves the momentum label. Expanding the bra of the first term yields $$A_{\bar{\tau}}=i\sum_{s^{\prime}}\alpha_{s^{\prime}}^{\star}\sum_{s}\left(\partial_{\bar{\tau}}\alpha_{s}\right)\overbrace{\big{\langle}\Psi^{(s^{\prime})}\big{|}\Psi^{(s)}\big{\rangle}}^{\delta_{s,s^{\prime}}}-\frac{P}{2\tau_{2}}=i\sum_{s}\alpha_{s}^{\star}\left(\partial_{\bar{\tau}}\alpha_{s}\right)-\frac{P}{2\tau_{2}}.$$ Similarly, $$A_{\tau}=i\sum_{s}\alpha_{s}\left(\partial_{\tau}\alpha_{s}^{\star}\right)-\frac{P}{2\tau_{2}},$$ (68) directly follows.
The Berry curvature is thus $$\mathcal{F}_{\tau,\bar{\tau}}=\partial_{\tau}A_{\bar{\tau}}-\partial_{\bar{\tau}}A_{\tau}=\partial_{\tau}\left(i\sum_{s}\alpha_{s}^{\star}\left(\partial_{\bar{\tau}}\alpha_{s}\right)-\frac{P}{2\tau_{2}}\right)-\partial_{\bar{\tau}}\left(i\sum_{s}\alpha_{s}\left(\partial_{\tau}\alpha_{s}^{\star}\right)-\frac{P}{2\tau_{2}}\right)=i\sum_{s}\left[\alpha_{s}^{\star}\left(\partial_{\tau}\partial_{\bar{\tau}}\alpha_{s}\right)-\alpha_{s}\left(\partial_{\bar{\tau}}\partial_{\tau}\alpha_{s}^{\star}\right)\right]-\frac{iP}{2\tau_{2}^{2}}.$$ Using $$\partial_{\tau}=\left(\partial_{\tau}\tau_{1}\right)\partial_{\tau_{1}}+\left(\partial_{\tau}\tau_{2}\right)\partial_{\tau_{2}}=\frac{1}{2}\partial_{\tau_{1}}+\frac{1}{2i}\partial_{\tau_{2}},\qquad\partial_{\bar{\tau}}=\left(\partial_{\bar{\tau}}\tau_{1}\right)\partial_{\tau_{1}}+\left(\partial_{\bar{\tau}}\tau_{2}\right)\partial_{\tau_{2}}=\frac{1}{2}\partial_{\tau_{1}}-\frac{1}{2i}\partial_{\tau_{2}},$$ we can write $\partial_{\tau}\partial_{\bar{\tau}}=\frac{1}{4}\partial_{\tau_{1}}^{2}+\frac{1}{4}\partial_{\tau_{2}}^{2}=\frac{1}{4}\nabla^{2}$, so that $$\mathcal{F}_{\tau,\bar{\tau}}=i\frac{1}{4}\sum_{s}\left[\alpha_{s}^{\star}\left(\nabla^{2}\alpha_{s}\right)-\alpha_{s}\left(\nabla^{2}\alpha_{s}^{\star}\right)\right]-\frac{iP}{2\tau_{2}^{2}}=\frac{1}{8}\sum_{s}{\rm Im}\left[\alpha_{s}^{\star}\left(\nabla^{2}\alpha_{s}\right)\right]-\frac{iP}{2\tau_{2}^{2}},$$ (69) $$\mathcal{F}_{\tau_{1},{\tau_{2}}}=-2i\mathcal{F}_{\tau,\bar{\tau}}=-\frac{i}{4}\sum_{s}{\rm Im}\left[\alpha_{s}^{\star}\left(\nabla^{2}\alpha_{s}\right)\right]-\frac{P}{\tau_{2}^{2}}.$$ (70) We
see here that the first term in Eq. 70 is of order one, while the second term is proportional to $N$. This means that in the thermodynamic limit $N\to\infty$, the first term is sub-leading and can be dropped. This is why, for large systems, it does not really matter whether the state is a momentum eigenstate or not. Appendix D Formulation of the wave functions with Haldane’s modified Weierstrass Sigma function In the current article as well as our previous work Pu et al. (2017), we express the wave functions for the $\nu=n/(2pn\pm 1)$ states in terms of $\vartheta$ functions. For completeness, we show how we can formulate the wave functions using Haldane’s modified Weierstrass sigma functions Haldane (2018), which are defined as: $$\tilde{\sigma}(z,\Lambda)={\rm{exp}}\left(-{1\over 2}\gamma_{2}(\Lambda)z^{2}% \right)z\prod_{L_{m,n}\neq 0}\left(1-{z\over L_{m,n}}\right){\rm exp}\left({z% \over L_{m,n}}+{z^{2}\over 2L_{m,n}^{2}}\right),$$ (71) where $$\displaystyle\gamma_{2}(\Lambda)$$ $$\displaystyle=$$ $$\displaystyle\Gamma_{2}(\Lambda)-{\pi\bar{L}_{1}\over A(\Lambda)L_{1}}$$ $$\displaystyle=$$ $$\displaystyle\sum_{L_{m,n}\neq 0}{1\over L_{m,n}^{2}}-{\pi\bar{L}_{1}\over A(% \Lambda)L_{1}}.$$ As the definition contains all lattice vectors $L_{m,n}=mL_{1}+nL_{2}$, $\tilde{\sigma}(z,\Lambda)$ is independent of the basis or the modular parameter $\tau$. The modified Weierstrass sigma function has the following periodic property: $${\tilde{\sigma}(z+L_{i})\over\tilde{\sigma}(z)}=-{\rm{exp}}\left({\pi\over V}% \bar{L}_{i}(z+L_{i}/2)\right)\quad i=1,2,$$ (73) where $V$ is the volume of the unit cell. Given the symmetry between the $L_{1}$ and $L_{2}$ directions, this building block is especially useful for the symmetric gauge. 
If we choose $L_{1}$ to be real, the modified Weierstrass sigma function can be converted into the theta function as $$\tilde{\sigma}(z,\Lambda)\propto{\rm exp}\left({z^{2}\over 4N_{\phi}\ell_{B}^{2}}\right)\vartheta\left[\begin{array}[]{c}{{\scriptstyle{1\over 2}}}\\ {\scriptstyle{1\over 2}}\end{array}\right]\left(z\over L_{1}\middle|\tau\right).$$ (74) We use $\propto$ rather than $=$ because we have omitted a constant factor. In this section we choose the symmetric gauge $\mbox{\boldmath$A$}=\frac{1}{2}B\mbox{\boldmath$r$}\times\hat{\mbox{\boldmath$z$}}$, which generates a magnetic field $\mbox{\boldmath$B$}=-B\hat{\mbox{\boldmath$z$}}$. The quasi-periodic boundary conditions are still given by Eq. 15. The magnetic translation operator $t(\xi)$ in the symmetric gauge is given by $$t(\xi)=e^{-\frac{i}{2\ell_{B}^{2}}\hat{\mbox{\boldmath$z$}}\cdot(\mbox{\boldmath$\xi$}\times\mbox{\boldmath$r$})}T(\xi),$$ (75) where $T(\xi)$ is the normal translation operator: $$T(\xi)h(z,\bar{z})=h(z+\xi,\bar{z}+\bar{\xi}).$$ (76) We now reformulate the PWJ projected wave functions of Ref. Pu et al. (2017) in terms of $\tilde{\sigma}$. We write the single-particle orbital in the LLL as Haldane (2018): $$\psi^{(n)}_{0}(z,\bar{z})=e^{-{|z|^{2}\over 4\ell_{B}^{2}}}f^{(n)}_{0}(z),\quad n=0,1,2\dots N_{\phi}-1.$$ (77) The subscript is the LL index and the superscript is the magnetic momentum index, just as in the main text. The holomorphic part $$f^{(n)}_{0}(z)=e^{ik^{(n)}z}\prod_{\mu=1}^{N_{\phi}}\tilde{\sigma}(z-w_{\mu}^{(n)}),$$ (78) depends in turn on the momentum vector $k^{(n)}$ and the positions $w_{\mu}^{(n)}$ of the $N_{\phi}$ zeroes of the sigma function.
These are given by: $$k^{(n)}=-{iL_{1}\over\ell_{B}^{2}}\left({\phi_{2}-\phi_{1}\bar{\tau}\over 4\pi N_{\phi}}-{1-\bar{\tau}\over 4}-{n\bar{\tau}\over 2N_{\phi}}\right),$$ (79) $$w_{\mu}^{(n)}=L_{1}\left({\phi_{2}-\phi_{1}\tau\over 2\pi N_{\phi}}+{\tau\over 2}+{\mu-n\tau\over N_{\phi}}-1-{1\over 2N_{\phi}}\right).$$ (80) The $\psi_{0}^{(n)}(z,\bar{z})$’s also satisfy Eq. 19 and Eq. 20. Note that in the above definition we have introduced an explicit $\tau$-dependence for both $k^{(n)}$ and $w_{\mu}^{(n)}$. Thus, in the Haldane formulation we trade the theta function, which depends explicitly on $\tau$, for a sigma function that still depends on $\tau$ implicitly through the positions of its zeros. The single particle wave function in the second LL is given by: $$\psi_{1}^{(n)}(z,\bar{z})=a^{\dagger}\psi_{0}^{(n)}(z,\bar{z})=\left({\bar{z}\over 2\sqrt{2}\ell_{B}}-\sqrt{2}\ell_{B}{\partial\over\partial z}\right)e^{-{|z|^{2}\over 4\ell_{B}^{2}}}f_{0}^{(n)}(z)=e^{-{|z|^{2}\over 4\ell_{B}^{2}}}\left({\bar{z}\over\sqrt{2}\ell_{B}}-\sqrt{2}\ell_{B}{\partial\over\partial z}\right)f_{0}^{(n)}(z)\equiv e^{-{|z|^{2}\over 4\ell_{B}^{2}}}a_{f}^{\dagger}f_{0}^{(n)}(z),$$ (81) where $a_{f}^{\dagger}=\left({\bar{z}\over\sqrt{2}\ell_{B}}-\sqrt{2}\ell_{B}{\partial\over\partial z}\right)$ only acts on the holomorphic part $f_{0}^{(n)}$. Wave functions in higher LLs can similarly be constructed. The wave functions of integer filled LLs are just the Slater determinants of occupied single particle orbitals: $$\Psi_{n}[z_{i},\bar{z}_{i}]=e^{-{\sum_{i}|z_{i}|^{2}\over 4\ell_{B}^{2}}}{\rm det}[f_{m}^{(n)}(z_{i},\bar{z}_{i})],$$ (82) analogous to Eq. 23 in the main text. 
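As a numerical sanity check of the theta-function representation in Eq. 74, one can verify that $\exp\!\big(z^{2}/(4N_{\phi}\ell_{B}^{2})\big)\,\vartheta[{\scriptstyle\frac12\atop\scriptstyle\frac12}](z/L_{1}|\tau)$ obeys the quasi-periodicity of Eq. 73, using $V=2\pi N_{\phi}\ell_{B}^{2}$. The sketch below is our own illustration, not code from the paper; the theta convention assumed is written out explicitly in the comment:

```python
import cmath
import math

def theta_half_half(u, tau, nmax=40):
    # Theta function with characteristics [1/2, 1/2] (assumed convention):
    #   sum_n exp(i*pi*tau*(n+1/2)^2 + 2*pi*i*(n+1/2)*(u+1/2))
    return sum(cmath.exp(1j * math.pi * tau * (n + 0.5) ** 2
                         + 2j * math.pi * (n + 0.5) * (u + 0.5))
               for n in range(-nmax, nmax + 1))

L1 = 1.0 + 0j           # choose L1 real, as assumed in Eq. 74
tau = 0.5 + 1.0j        # arbitrary modular parameter with tau_2 > 0
L2 = tau * L1
V = (L1.conjugate() * L2).imag   # unit-cell volume, V = 2*pi*N_phi*l_B^2

def sigma(z):
    # Eq. 74 up to the omitted constant: exp(z^2/(4 N_phi l_B^2)) = exp(pi z^2/(2V))
    return cmath.exp(math.pi * z * z / (2 * V)) * theta_half_half(z / L1, tau)

def max_deviation(z):
    # Check Eq. 73: sigma(z+L_i)/sigma(z) = -exp((pi/V) * conj(L_i) * (z + L_i/2))
    devs = []
    for L in (L1, L2):
        lhs = sigma(z + L) / sigma(z)
        rhs = -cmath.exp(math.pi / V * L.conjugate() * (z + L / 2))
        devs.append(abs(lhs - rhs) / abs(rhs))
    return max(devs)

print(max_deviation(0.31 + 0.27j))  # residual at machine-precision level
```

The truncated sum converges extremely fast for $\tau_{2}$ of order one, so both periodicity relations hold to machine precision.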
The Laughlin form for the filled LLL is: $$\Psi_{1}[z_{i},\bar{z}_{i}]=e^{-{\sum_{i}|z_{i}|^{2}\over 4\ell_{B}^{2}}}F_{1}(Z)\prod_{i<j}^{N}\tilde{\sigma}(z_{i}-z_{j}),$$ (83) with $$F_{1}(Z)=e^{iKZ}\tilde{\sigma}(Z-W),$$ (84) $$K=-{iL_{1}\over 4\pi N\ell_{B}^{2}}\left(\pi N+\phi_{2}+(\pi N-\phi_{1})\bar{\tau}\right),$$ $$W={L_{1}\over 2\pi}\left(\pi N+\phi_{2}+(\pi N-\phi_{1})\tau\right).$$ Again, notice the $\tau$ dependence of the zeroes of the wave function. The wave functions for $\nu=n/(2pn\pm 1)$ before projection to the LLL are written as: $$\Psi^{\rm unproj}_{n\over 2pn+1}=e^{-\frac{\sum_{i}|z_{i}|^{2}}{4\ell_{B}^{2}}}\chi_{n}[{f}_{i}(z_{j})]\left(F_{1}(Z)\prod_{i<j}^{N}\tilde{\sigma}(z_{i}-z_{j})\right)^{2p},$$ (85) $$\Psi^{\rm unproj}_{n\over 2pn-1}=e^{-(1+{2\over 2pn-1})\frac{\sum_{i}|z_{i}|^{2}}{4\ell_{B}^{2}}}(\chi_{n}[{f}_{i}(z_{j})])^{*}\left(F_{1}(Z)\prod_{i<j}^{N}\tilde{\sigma}(z_{i}-z_{j})\right)^{2p},$$ (86) where $\chi_{n}[{f}_{i}(z_{j})]$ has the same form as in Eq. 24. Just as in the main text there is an unusual factor in Eq. 86, originating from the relation $N_{\phi}=N_{\phi}^{*}+2pN=-|N_{\phi}^{*}|+2pN$ for $\nu={n\over 2pn-1}$. The LLL projection of these wave functions requires the LLL projection of products of single particle wave functions of the type $\psi_{n}\psi^{\prime}_{0}$. Following Ref. Pu et al. (2017), the LLL projection is given by $$P_{\rm LLL}\psi_{n}\psi^{\prime}_{0}=e^{-{|z|^{2}\over 4\ell_{B}^{2}}}\hat{f}_{n}f_{0}^{\prime},$$ (87) where e.g. 
$$\hat{f}_{1}(z_{i})=\sqrt{2}\ell_{B}^{*}\left[{\ell_{B}^{2}-\ell_{B}^{*2}\over\ell_{B}^{*2}}{\partial f_{0}\over\partial z_{i}}+{\ell_{B}^{2}\over\ell_{B}^{*2}}f_{0}{\partial\over\partial z_{i}}\right],$$ (88) which has exactly the same form as Eq. 54 in Ref. Pu et al. (2017). It is now straightforward to apply the PWJ projection Jain and Kamilla (1997a, b) as shown in Ref. Pu et al. (2017). For the Jain $\nu={n\over 2pn+1}$ states, the LLL wave function is: $$\Psi_{n\over 2pn+1}[z_{i},\bar{z_{i}}]=e^{-\frac{\sum_{i}|z_{i}|^{2}}{4\ell_{B}^{2}}}F_{1}^{2p}(Z){\chi}[\hat{g}_{i}(z_{j})J_{j}^{p}],$$ (89) where $J_{i}=\prod_{j(j\neq i)}\tilde{\sigma}\left(z_{i}-z_{j}\right)$ and ${\chi}[\hat{g}_{i}(z_{j})J_{j}^{p}]$ is defined as in Eq. 33. The operator $\hat{g}_{m}^{(n)}(z_{i})$ is obtained from $\hat{f}_{m}^{(n)}(z_{i})$ by making the replacement $\partial/\partial z_{i}\rightarrow 2\partial/\partial z_{i}$ for all derivatives acting on $J^{p}_{i}$. For the LLL, $\hat{g}_{0}^{(n)}(z_{i})=f_{0}^{(n)}(z_{i})$. For the 1st and 2nd Landau levels, we have $$\hat{g}_{1}^{(n)}(z)=-\frac{N_{\phi}-N_{\phi}^{*}}{N_{\phi}}\frac{\partial f_{0}^{(n)}(z)}{\partial z}+\frac{N_{\phi}^{*}}{N_{\phi}}f_{0}^{(n)}(z)2\frac{\partial}{\partial z},$$ (90) and $$\hat{g}_{2}^{(n)}(z)={(N_{\phi}-N_{\phi}^{*})^{2}\over N_{\phi}^{2}}{\partial^{2}f_{0}^{(n)}(z)\over\partial z^{2}}-{2N_{\phi}^{*}(N_{\phi}-N_{\phi}^{*})\over N_{\phi}^{2}}{\partial f_{0}^{(n)}(z)\over\partial z}2{\partial\over\partial z}+{N_{\phi}^{*2}\over N_{\phi}^{2}}\big{(}2{\partial\over\partial z}\big{)}^{2}.$$ (91) Appendix E Proof of Eq. 50 In this appendix we prove Eq. 50. The momentum projection operator is given by Pu et al. (2017) $$\mathcal{P}_{M}={1\over\sqrt{q}}\sum_{k=1}^{q}\left[e^{-i2\pi{M\over q}}t_{CM}\big{(}{L_{1}\over N_{\phi}}\big{)}\right]^{k},$$ (92) where $q=2pn+1$ for the CF states. 
It projects a $\Psi_{n\over 2pn+1}$ to eigenstates of $t_{CM}\big{(}{L_{1}\over N_{\phi}}\big{)}$: $$t_{CM}\big{(}{L_{1}\over N_{\phi}}\big{)}\mathcal{P}_{M}\Psi_{n\over 2pn+1}=e^{i2\pi{M\over 2pn+1}}\mathcal{P}_{M}\Psi_{n\over 2pn+1}.$$ (93) $\mathcal{P}_{M}$ commutes with $(\partial_{\tau_{2}})_{\tau}$ because $\big{[}(\partial_{\tau_{2}})_{\tau},t_{CM}\big{(}{L_{1}\over N_{\phi}}\big{)}\big{]}=0$. This can be seen by noting that in the current gauge we have $$t_{CM}\big{(}{L_{1}\over N_{\phi}}\big{)}=\prod_{i=1}^{N}e^{{1\over N_{\phi}}{\partial\over\partial\theta_{i1}}},$$ (94) $$t_{CM}\big{(}{L_{2}\over N_{\phi}}\big{)}=\prod_{i=1}^{N}e^{{1\over N_{\phi}}\big{(}{\partial\over\partial\theta_{i2}}+i2\pi N_{\phi}\theta_{i1}\big{)}}.$$ (95) These are explicitly independent of $\tau_{1}$ and $\tau_{2}$, and hence commute with $\partial_{\tau_{2}}$. Therefore, Eq. 50 does hold. Here we emphasize that $\mathcal{P}_{M}$ is not modular invariant, as noted in the main text, even though there is no $\tau$ in its definition. The reason is that the reduced coordinates used in the definition of $\mathcal{P}_{M}$ do change under modular transformations. For instance, under $\tau\to-{1\over\tau}$, to keep the physical coordinates $(x,y)$ fixed, the reduced coordinates transform as $(\theta_{1},\theta_{2})\to(-\theta_{2},\theta_{1})$. Appendix F Direct evaluation of the correction term Our analytical derivation of the Hall viscosity of the unprojected Jain wave functions is based on the assumption that the overall normalization factor satisfies Eq. 49. 
For a numerical test, we first write $${1\over N}\left({\partial\ln Z\over\partial\tau_{2}}\right)_{\tau}=-i{1\over N}\left({\partial\ln Z\over\partial\tau_{1}}\right)_{\tau_{2}}+{1\over N}\left({\partial\ln Z\over\partial\tau_{2}}\right)_{\tau_{1}}.$$ (96) We numerically evaluate the two terms on the right hand side for $\nu=2/5$. The results, displayed in Fig. 7, show that both ${1\over N}\left({\partial\ln Z\over\partial\tau_{1}}\right)_{\tau_{2}}$ and ${1\over N}\left({\partial\ln Z\over\partial\tau_{2}}\right)_{\tau_{1}}$ vanish in the thermodynamic limit. We can explicitly evaluate the contribution of this term to the Hall viscosity. Without assuming the condition in Eq. 49, Eq. 51 becomes: $$(\partial_{\tau_{2}})_{\tau}\Psi_{n\over 2pn+1}^{\rm unproj}=\left[(\partial_{\tau_{2}})_{\tau}\ln Z+{(n+2p)N\over 4\tau_{2}}\right]\Psi_{n\over 2pn+1}^{\rm unproj}.$$ (97) Let us denote $$\gamma\equiv(\partial_{\tau_{2}})_{\tau}\ln Z=\left({\partial\ln Z\over\partial\tau_{2}}\right)_{\tau_{1}}-i\left({\partial\ln Z\over\partial\tau_{1}}\right)_{\tau_{2}}.$$ (98) Following the calculation in Sec. 
III, we get: $$A_{\tau}=-{1\over 2}\gamma^{*}-{1\over 2}{N(n+2p)\over 4\tau_{2}},$$ (99) $$A_{\bar{\tau}}=-{1\over 2}\gamma-{1\over 2}{N(n+2p)\over 4\tau_{2}},$$ (100) $$A_{1}=A_{\tau}+A_{\bar{\tau}}=-{\rm Re}\,\gamma-{N(n+2p)\over 4\tau_{2}},$$ (101) and $$A_{2}=i(A_{\tau}-A_{\bar{\tau}})=-{\rm Im}\,\gamma.$$ (102) Finally, the Berry curvature is: $$\mathcal{F}_{\tau_{1},\tau_{2}}=N\left[-{n+2p\over 4\tau_{2}^{2}}+{1\over N}\left(-\partial_{\tau_{1}}{\rm Im}\,\gamma+\partial_{\tau_{2}}{\rm Re}\,\gamma\right)\right]=N\left[-{n+2p\over 4\tau_{2}^{2}}+{1\over N}\left((\partial_{\tau_{1}})^{2}\ln Z+(\partial_{\tau_{2}})^{2}\ln Z\right)\right]=-N{n+2p+\Delta\over 4\tau_{2}^{2}},$$ (103) where $$\Delta\equiv-{4\tau_{2}^{2}\over N}\left((\partial_{\tau_{1}})^{2}\ln Z+(\partial_{\tau_{2}})^{2}\ln Z\right)={4\eta\over\hbar n}-(n+2p).$$ (104) In Fig. 8, we show the evaluation of $\Delta$ for wave functions both before and after momentum projection. (They have different overall normalization factors $Z$.) We also show, as a sanity check, the actual difference ${4\eta\over\hbar n}-4$ using the results from Fig. 1. As shown in Fig. 8, the correction term converges to zero quickly with increasing system size, especially for the normalization factor after momentum projection. The correction $\Delta$ also agrees with the difference between ${4\eta\over\hbar n}$, obtained in Fig. 1, and the quantized shift $\mathcal{S}=4$ for $\nu={2\over 5}$. References Klitzing et al. (1980) K. v. Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. 45, 494 (1980), URL http://link.aps.org/doi/10.1103/PhysRevLett.45.494. Thouless et al. (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 
49, 405 (1982), URL http://link.aps.org/doi/10.1103/PhysRevLett.49.405. Tsui et al. (1982) D. C. Tsui, H. L. Stormer, and A. C. Gossard, Phys. Rev. Lett. 48, 1559 (1982), URL http://link.aps.org/doi/10.1103/PhysRevLett.48.1559. Laughlin (1983) R. B. Laughlin, Phys. Rev. Lett. 50, 1395 (1983), URL http://link.aps.org/doi/10.1103/PhysRevLett.50.1395. Jain (1989a) J. K. Jain, Phys. Rev. Lett. 63, 199 (1989a), URL http://link.aps.org/doi/10.1103/PhysRevLett.63.199. Jain (1990) J. K. Jain, Phys. Rev. B 41, 7653 (1990). Jain (2007) J. K. Jain, Composite Fermions (Cambridge University Press, New York, US, 2007). Wen and Zee (1992a) X. G. Wen and A. Zee, Phys. Rev. Lett. 69, 953 (1992a), URL http://link.aps.org/doi/10.1103/PhysRevLett.69.953. Wen and Zee (1992b) X. G. Wen and A. Zee, Phys. Rev. B 46, 2290 (1992b), URL https://link.aps.org/doi/10.1103/PhysRevB.46.2290. Wen (1995) X.-G. Wen, Advances in Physics 44, 405 (1995), eprint http://www.tandfonline.com/doi/pdf/10.1080/00018739500101566, URL http://www.tandfonline.com/doi/abs/10.1080/00018739500101566. Avron et al. (1995) J. E. Avron, R. Seiler, and P. G. Zograf, Phys. Rev. Lett. 75, 697 (1995), URL https://link.aps.org/doi/10.1103/PhysRevLett.75.697. Tokatly and Vignale (2007) I. V. Tokatly and G. Vignale, Phys. Rev. B 76, 161305(R) (2007), URL https://link.aps.org/doi/10.1103/PhysRevB.76.161305. Tokatly and Vignale (2009) I. V. Tokatly and G. Vignale, Journal of Physics: Condensed Matter 21, 275603 (2009), URL https://doi.org/10.1088/0953-8984/21/27/275603. Gunning and Brumer (1962) R. C. Gunning and A. Brumer, Lectures on Modular Forms. (AM-48) (Princeton University Press, 1962), ISBN 9780691079950, URL http://www.jstor.org/stable/j.ctt1b7x81f. Lévay (1995) P. Lévay, Journal of Mathematical Physics 36, 2792 (1995), eprint https://doi.org/10.1063/1.531066, URL https://doi.org/10.1063/1.531066. Read (2009) N. Read, Phys. Rev. B 79, 045308 (2009), URL http://link.aps.org/doi/10.1103/PhysRevB.79.045308. 
Read and Rezayi (2011) N. Read and E. H. Rezayi, Phys. Rev. B 84, 085316 (2011), URL https://link.aps.org/doi/10.1103/PhysRevB.84.085316. Lapa and Hughes (2018) M. F. Lapa and T. L. Hughes, Physical Review B 97, 205122 (2018), eprint 1802.10100. Lapa et al. (2018) M. F. Lapa, C. Turner, T. L. Hughes, and D. Tong, Physical Review B 98, 075133 (2018), eprint 1805.05319. Cho et al. (2014) G. Y. Cho, Y. You, and E. Fradkin, Phys. Rev. B 90, 115139 (2014), URL https://link.aps.org/doi/10.1103/PhysRevB.90.115139. Fremling et al. (2014) M. Fremling, T. H. Hansson, and J. Suorsa, Phys. Rev. B 89, 125303 (2014), URL https://link.aps.org/doi/10.1103/PhysRevB.89.125303. Pu et al. (2017) S. Pu, Y.-H. Wu, and J. K. Jain, Phys. Rev. B 96, 195302 (2017), URL https://link.aps.org/doi/10.1103/PhysRevB.96.195302. Haldane (2009) F. D. M. Haldane, ArXiv e-prints (2009), eprint 0906.1854. Hoyos and Son (2012) C. Hoyos and D. T. Son, Phys. Rev. Lett. 108, 066805 (2012), URL https://link.aps.org/doi/10.1103/PhysRevLett.108.066805. Bradlyn et al. (2012) B. Bradlyn, M. Goldstein, and N. Read, Phys. Rev. B 86, 245309 (2012), URL https://link.aps.org/doi/10.1103/PhysRevB.86.245309. Delacrétaz and Gromov (2017) L. V. Delacrétaz and A. Gromov, Phys. Rev. Lett. 119, 226602 (2017), URL https://link.aps.org/doi/10.1103/PhysRevLett.119.226602. Pellegrino et al. (2017) F. M. D. Pellegrino, I. Torre, and M. Polini, Phys. Rev. B 96, 195401 (2017), URL https://link.aps.org/doi/10.1103/PhysRevB.96.195401. Scaffidi et al. (2017) T. Scaffidi, N. Nandi, B. Schmidt, A. P. Mackenzie, and J. E. Moore, Phys. Rev. Lett. 118, 226601 (2017), URL https://link.aps.org/doi/10.1103/PhysRevLett.118.226601. Berdyugin et al. (2019) A. I. Berdyugin, S. G. Xu, F. M. D. Pellegrino, R. Krishna Kumar, A. Principi, I. Torre, M. Ben Shalom, T. Taniguchi, K. Watanabe, I. V. 
Grigorieva, et al., Science 364, 162 (2019), ISSN 0036-8075, eprint https://science.sciencemag.org/content/364/6436/162.full.pdf, URL https://science.sciencemag.org/content/364/6436/162. Alekseev (2016) P. S. Alekseev, Phys. Rev. Lett. 117, 166601 (2016), URL https://link.aps.org/doi/10.1103/PhysRevLett.117.166601. Steinberg (1958) M. S. Steinberg, Phys. Rev. 109, 1486 (1958), URL https://link.aps.org/doi/10.1103/PhysRev.109.1486. Gusev et al. (2018) G. M. Gusev, A. D. Levin, E. V. Levinson, and A. K. Bakarov, Phys. Rev. B 98, 161303(R) (2018), URL https://link.aps.org/doi/10.1103/PhysRevB.98.161303. Ganeshan and Abanov (2017) S. Ganeshan and A. G. Abanov, Phys. Rev. Fluids 2, 094101 (2017), URL https://link.aps.org/doi/10.1103/PhysRevFluids.2.094101. Fremling (2019) M. Fremling, Phys. Rev. B 99, 075126 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.99.075126. Jain and Kamilla (1997a) J. K. Jain and R. K. Kamilla, Int. J. Mod. Phys. B 11, 2621 (1997a). Jain and Kamilla (1997b) J. K. Jain and R. K. Kamilla, Phys. Rev. B 55, R4895 (1997b), URL http://link.aps.org/doi/10.1103/PhysRevB.55.R4895. Jain (1989b) J. K. Jain, Phys. Rev. B 40, 8079 (1989b), URL http://link.aps.org/doi/10.1103/PhysRevB.40.8079. Shao et al. (2015) J. Shao, E.-A. Kim, F. D. M. Haldane, and E. H. Rezayi, Phys. Rev. Lett. 114, 206402 (2015), URL http://link.aps.org/doi/10.1103/PhysRevLett.114.206402. Wang et al. (2019) J. Wang, S. D. Geraedts, E. H. Rezayi, and F. D. M. Haldane, Phys. Rev. B 99, 125123 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.99.125123. Geraedts et al. (2018) S. D. Geraedts, J. Wang, E. H. Rezayi, and F. D. M. Haldane, Phys. Rev. Lett. 121, 147202 (2018), URL https://link.aps.org/doi/10.1103/PhysRevLett.121.147202. Pu et al. (2018) S. Pu, M. Fremling, and J. K. Jain, Phys. Rev. B 98, 075304 (2018), URL https://link.aps.org/doi/10.1103/PhysRevB.98.075304. Mumford (2007) D. Mumford, Tata Lectures on Theta Vols. 
I and II (Birkhäuser Boston, 2007), ISBN 9780817645779, URL https://link.springer.com/book/10.1007/978-0-8176-4577-9. Haldane (2018) F. D. M. Haldane, Journal of Mathematical Physics 59, 071901 (2018), eprint https://doi.org/10.1063/1.5042618, URL https://doi.org/10.1063/1.5042618. Fremling (2016) M. Fremling, Journal of Physics A: Mathematical and Theoretical 50, 015201 (2016), URL https://doi.org/10.1088/1751-8113/50/1/015201. Zaletel et al. (2013) M. P. Zaletel, R. S. K. Mong, and F. Pollmann, Phys. Rev. Lett. 110, 236801 (2013), URL https://link.aps.org/doi/10.1103/PhysRevLett.110.236801.
FINITE SIZE SCALING OF THE CHALKER-CODDINGTON MODEL KEITH SLEVIN Department of Physics, Graduate School of Science, Osaka University Machikaneyama 1-1, Toyonaka, Osaka 560-0043, Japan    TOMI OHTSUKI Department of Physics, Sophia University Kioi-cho 7-1, Chiyoda-ku, Tokyo 102-8554, Japan Abstract In Ref. \refciteslevin09, we reported an estimate of the critical exponent for the divergence of the localization length at the quantum Hall transition that is significantly larger than those reported in the previously published work of other authors. In this paper, we update our finite size scaling analysis of the Chalker-Coddington model and suggest the origin of the previous underestimate by other authors. We also compare our results with the predictions of Lütken and Ross.[2] keywords: quantum Hall effect; Chalker-Coddington model; critical exponent. PACS numbers: 73.43.$-$f, 71.30.+h 1 Introduction When a strong magnetic field is applied perpendicular to an ideal two dimensional electron gas, the kinetic energy of the electrons is quantized according to the formula $E_{n}=(n+1/2)\hbar\omega$. Here, $n$ is a non-negative integer and $\omega$ is the cyclotron frequency $\omega=eB/m$. These Landau levels are highly degenerate and the density of states becomes a series of equally spaced delta functions. This degeneracy is broken by disorder and the Landau levels are broadened into Landau bands. Most of the electron states are Anderson localized, with the exception of the states at the center of the Landau level where the localization length $\xi$ has a power law divergence described by a critical exponent $\nu$: $$\xi\sim\left|E-E_{c}\right|^{-\nu}\;.$$ (1) When the Fermi level is in a region of localized states, the Hall conductance is quantized in integer multiples of $e^{2}/h$. 
This effect is known as the quantum Hall effect.[3, 4] Transitions between consecutive quantized values occur when the Fermi level passes through the center of a Landau band. This is a quantum phase transition. It is characterized by two critical exponents. One is the critical exponent $\nu$ mentioned above and the other is the dynamic exponent $z$, which describes the temperature dependence. The quantum Hall transition has been the subject of careful experimental study. The inverse of the product of the critical and dynamic exponents, $\kappa=1/(\nu z)$, has been measured very precisely (see Table 1). The value of this product appears to be quite universal; the same value has been obtained in measurements in an Al${}_{x}$Ga${}_{1-x}$As heterostructure[5] and in graphene.[6] The main problem is that an independent measurement of the dynamic exponent is needed to disentangle the values of the two exponents and, unfortunately, this has been measured much less precisely. The quantum Hall transition has also been the subject of numerous numerical studies in models of non-interacting electrons. In earlier work, a consensus was reached (see Table 1) that $\nu\approx 2.4$, in apparent agreement with experiment. However, in 2009, we published[1] a numerical analysis of the Chalker-Coddington model in which we found a value of the exponent that was about 10% larger; a result which has since been confirmed by other authors (see Table 1). There now seems to be a consensus that the previous numerical work underestimated the exponent and the apparent agreement with experiment was a coincidence of errors. Below we discuss the reason for the previous underestimate of the exponent. Another issue concerns the value of the dynamic exponent. For models of non-interacting electrons the dynamic exponent is known exactly, $z=2$. However, this value cannot be compared directly with the experiment. 
Burmistrov et al.[17] have emphasized the distinction between the different dynamical exponents that occur in the problem. In the experiment of Li et al.[5] it seems clear that the dynamic exponent that is being measured describes the divergence of the phase coherence length on approaching zero temperature: $$\ell_{\varphi}\sim T^{-1/z}\;.$$ (2) It also seems reasonably safe to suppose that electron-electron interactions are the source of the electron dephasing. This does not mean, however, that electron-electron interactions are relevant in the renormalization group (RG) sense and that the quantum Hall transition is described by a fixed point in a theory of interacting electrons. A clear discussion of this can be found in Ref. \refciteburmistrov11. Also, as described in Ref. \refcitepruisken10, it is thought that short range interactions are irrelevant in the RG sense and that only the long range Coulomb interaction is relevant and would drive the system to a different interacting fixed point. It has been pointed out to us by Alexei Tsvelik that the value of the exponent we have found for the Chalker-Coddington model is very close to that predicted by Lütken and Ross. In a series of papers (Ref. \refcitelutken07 and references therein) these authors have argued that modular symmetry strongly constrains the possible critical theories of the quantum Hall transition. While we cannot claim to understand the details of the theory of Lütken and Ross, we attempt below to compare some of their key predictions with the results of our finite size scaling analysis. 2 Method We calculated the Lyapunov exponents of the product of the transfer matrices for the Chalker-Coddington model.[7] This model describes electron localization in a two dimensional electron gas subject to a very strong perpendicular magnetic field. The basic assumption is that the random potential is smooth on the scale of the magnetic length. 
The electron wavefunctions are then concentrated on equipotentials of the random potential with tunneling between equipotentials at saddle points of the potential. In the Chalker-Coddington model this system is modeled by a network of nodes and links. A parameter $x$, which is essentially the energy of the electrons measured in units of the Landau band width relative to the center of the Landau band, fixes the tunneling probability at the nodes. A random phase distributed uniformly on $[0,2\pi)$ is attached to each link to reflect the random length of the contours of the potential. For further details we refer the reader to the original article of Chalker and Coddington[7] and to the more recent review by Kramer et al.[19] We considered the transfer matrix product associated with a quasi-one dimensional geometry with $N$ nodes in the transverse direction and $L$ nodes in the longitudinal direction. (For the detailed formulae see Ref. \refciteslevin09.) The Lyapunov exponents of this random matrix product were estimated using the standard method.[20, 21] The Lyapunov exponents are defined by taking the limit $L\rightarrow\infty$. By truncating the matrix product at a finite $L$, an estimate of the Lyapunov exponents was obtained. The sample to sample fluctuations of this estimate decrease with the inverse of the square root of $L$. We performed a single simulation for each pair of $x$ and $N$ and truncated the transfer matrix product at a value of $L$ that allowed estimation of the smallest positive Lyapunov exponent $\gamma$ with a precision of $0.03\%$, except for the largest values of $N=192$ and $256$ where the precision was relaxed to either $0.05\%$ or $0.1\%$. To ensure that simulations for different pairs of $x$ and $N$ were independent, we used the Mersenne Twister pseudo-random number generator MT2203 of Matsumoto et al.[22] provided in the Intel Math Kernel Library. All the simulations used a common seed. 
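The standard method[20, 21] for estimating Lyapunov exponents from a matrix product accumulates the logarithms of the diagonal of $R$ during periodic QR re-orthogonalizations of the evolving product. The sketch below is our own generic illustration of that technique, applied to a hypothetical fixed test matrix whose exponents are known analytically, not to the actual Chalker-Coddington transfer matrices:

```python
import numpy as np

def lyapunov_exponents(next_matrix, dim, L, reortho=10):
    """Estimate the Lyapunov exponents of a product of L matrices by
    periodic QR re-orthogonalization (Shimada-Nagashima / MacKinnon-Kramer)."""
    Q = np.eye(dim)
    log_sums = np.zeros(dim)
    for step in range(1, L + 1):
        Q = next_matrix(step) @ Q
        if step % reortho == 0 or step == L:
            # Re-orthogonalize and accumulate log |diag(R)|.
            Q, R = np.linalg.qr(Q)
            log_sums += np.log(np.abs(np.diag(R)))
    return np.sort(log_sums / L)[::-1]   # sorted, largest first

# Hypothetical test matrix: for a fixed triangular matrix the Lyapunov
# exponents are the logs of the absolute eigenvalues, ln 2 and ln(1/2).
A = np.array([[2.0, 1.0], [0.0, 0.5]])
lam = lyapunov_exponents(lambda step: A, 2, 1000)
print(lam)  # -> approximately [ 0.6931, -0.6931]
```

In the paper's setting, `next_matrix` would return the random Chalker-Coddington transfer matrices, $\gamma$ is the smallest positive exponent of the spectrum, and the finite-$L$ truncation error decreases as $1/\sqrt{L}$, as stated above.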
Independence was ensured by the use of a unique stream number for each simulation. We imposed periodic boundary conditions in the transverse direction; with this choice the Lyapunov exponents are even functions of $x$. It is known that there is a critical point at the center of the Landau band, $x=0$, and that, when the Fermi energy is driven through this point, the transition between Hall plateaux occurs. To extract estimates of the critical exponent and other quantities we used finite size scaling. (This method was first applied to Anderson localization at about the same time by Pichard and Sarma[23, 24] and by MacKinnon and Kramer[21, 25]; see Ref. \refciteamit05 for a pedagogical discussion.) In this method, the behavior of the dimensionless quantity $$\Gamma\left(x,N\right)=\gamma N\;,$$ (3) is analyzed as a function of both $x$ and $N$. In the absence of any corrections to scaling we would expect this behavior to be described by the following finite size scaling law $$\Gamma=F\left(N^{2\alpha}x^{2}\right)\;,$$ (4) where $F$ is an a priori unknown but universal scaling function and $$\alpha=1/\nu\;.$$ (5) Note that we have imposed the condition that $\Gamma$ must be an even function of $x$. The actual behavior of $\Gamma$ as a function of $x$ for different $N$ is shown in Fig. 1 and, in more detail around $x=0$, in Fig. 2. (The lines in the figures are polynomial fits. They will be discussed below.) According to Eq. (4) curves for different $N$ should have a common crossing point at $x=0$. However, it is clear from Fig. 2 that this is only approximately correct and that $\Gamma$ is not exactly independent of $N$ at $x=0$ but varies by several percent over the range of $N$ studied. These corrections to scaling arise because of the presence of irrelevant scaling variables. These are variables with negative scaling exponents. Their effect is negligible for large $N$ but their presence may lead to significant corrections at small $N$. 
This is consistent with what we see in Fig. 2. To take account of corrections to scaling, we follow Ref. \refcitehuckestein94b and Ref. \refciteslevin99a and generalize the finite size scaling law $$\Gamma=F\left(N^{2\alpha}v_{0}(x),N^{y_{1}}v_{1}(x),N^{y_{2}}v_{2}(x),\cdots\right)\;.$$ (6) In Eq. (6), $v_{0}$ is the relevant scaling variable and $v_{1},v_{2},\cdots$ are the irrelevant scaling variables. The associated exponents $y_{1},y_{2},\cdots$ are negative. The inclusion of irrelevant corrections permits the residual $N$ dependence at $x=0$ seen in Fig. 2 to be modeled. This form also allows for additional corrections due to non-linearities of the scaling variables as functions of $x$. To impose the condition that $\Gamma$ must be an even function of $x$ we restrict all the scaling variables to be even functions of $x$. In addition, since the critical point is at $x=0$, we impose the condition that $v_{0}(0)=0$. To fit the data, the function $F$ is expanded as a Taylor series in all its arguments, and similarly for the scaling variables. The coefficients in the Taylor series, together with the various exponents, play the role of fitting parameters. (Some extra conditions on the coefficients must be imposed to ensure that the model to be fitted to the data is unambiguous.) The orders of truncation of the Taylor series are chosen sufficiently large to obtain an acceptable fit of the data (as measured using the $\chi^{2}$-statistic and the goodness of fit probability). We have attempted this procedure with both one and two irrelevant corrections. Unfortunately, a stable fit of the data has eluded us. We have found that several fits of the data are possible. However, these fits do not yield mutually consistent estimates of the critical exponent. 3 Rudimentary finite size scaling To circumvent the difficulties described in the previous section we resorted to a less sophisticated approach in which we abandoned the attempt to fit all the data in a single step. 
Instead, we fitted the data for each $N$ independently to an even polynomial of $x$. For each $N$ the order of the polynomial was chosen just large enough to give an acceptable goodness of fit. For $N=4$, a quadratic was sufficient, while for $N=256$ a sixth order polynomial was required. From these polynomials we estimated the curvature $C$ of $\Gamma$ at $x=0$. The results are plotted in Fig. 3. The precision of the estimation of the curvature varies between $0.07\%$ and $0.28\%$. According to (4) the curvature at $x=0$ should vary with $N$ as a power law, $$C\equiv\left.\frac{d^{2}\Gamma}{dx^{2}}\right|_{x=0}\propto N^{2\alpha}\;.$$ (7) This, of course, neglects corrections to scaling due to irrelevant variables and, indeed, a straight line fit to all the data does not yield an acceptable goodness of fit. However, if data for $N\leq 48$ are excluded, an acceptable goodness of fit is obtained. The estimates of the critical exponent obtained in this way are tabulated in Table 3. In Fig. 3 we have also plotted two lines. One is a solid line that corresponds to the straight line fit for $64\leq N\leq 256$. The second is a dashed line with a slope corresponding to the estimate of Huckestein[27] of the critical exponent and passing through the datum for the curvature at $N=4$. While the main effect of the irrelevant corrections is the $N$-dependent shift in the ordinate that is clearly visible in Fig. 2, a smaller but not negligible effect on the curvature is also apparent in Fig. 3. In our opinion, this is the reason why the critical exponent was underestimated in previous work. The precision of the numerical data was insufficient, and the range of $N$ considered too small, for the irrelevant correction to be properly taken into account. 4 Comparison with the predictions of Lütken and Ross The predictions of the theory of Lütken and Ross that can be compared with the present work are for the critical exponent and the leading irrelevant exponent. 
Their theory contains a single unknown parameter, the central charge $c$. In terms of this parameter they predict that the critical exponent is (correcting an error, a factor of 2, in Ref. \refcitelutken07; we thank Graham Ross for confirming this) $$\nu=\frac{10.42050633345819\cdots}{c}\;.$$ (8) In addition they predict that the leading irrelevant exponent and the critical exponent are related by $$y=-\alpha=-1/\nu\;.$$ (9) While we are not aware of any justification for this, assuming $c=4$ gives $$\nu=2.60512\cdots$$ (10) The most precise estimate in Table 3 is $$\nu=2.607\pm 0.004\;.$$ (11) In fact, all of the estimates in Table 3 are consistent with the Lütken and Ross value. Turning to the irrelevant exponent, the situation is, unfortunately, much less clear. In Table 4 we tabulate some previous estimates of the irrelevant exponent. The Lütken and Ross prediction is $$y=-0.383859\cdots$$ (12) The estimate of Huckestein[27] is consistent with this but subsequent estimates by Wang et al.[29] are somewhat ambiguous. In our opinion, further confirmation is needed before reaching a conclusion. As mentioned above, an unambiguous fit using Eq. (6) has not proved possible. The best we can do at present is to check the consistency of the Lütken and Ross values with our data by fixing both $\nu$ and $y$ to these values when fitting. A series of such fits are tabulated in Table 4. In the fits only a single irrelevant variable is assumed and non-linearities in the scaling variables are ignored, i.e. $$v_{0}=v_{02}x^{2}\;\;,\;\;v_{1}=v_{10}\;.$$ (13) The scaling function is expanded as a Taylor series to order $n_{0}$ in the relevant field and order $n_{1}$ in the irrelevant field. 
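The rudimentary estimate of Section 3 via Eq. (7) amounts to a straight-line fit of $\ln C$ against $\ln N$, whose slope is $2\alpha=2/\nu$. The sketch below illustrates this with synthetic, hypothetical curvature data (an arbitrarily chosen $\nu$, not the values measured in this work):

```python
import numpy as np

# Synthetic curvatures following Eq. (7), C ~ A * N^(2/nu),
# with nu chosen arbitrarily for illustration only.
nu_true = 2.6
N = np.array([64.0, 96.0, 128.0, 192.0, 256.0])
C = 0.8 * N ** (2.0 / nu_true)

# Straight-line fit of ln C vs ln N; the slope estimates 2*alpha = 2/nu.
slope, intercept = np.polyfit(np.log(N), np.log(C), 1)
nu_est = 2.0 / slope
print(nu_est)  # recovers nu_true, since the synthetic data are noiseless
```

With real data the fit would be weighted by the curvature uncertainties and, as discussed above, restricted to $N$ large enough that the irrelevant correction is negligible.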
One of the important quantities that can be estimated in this way is $\Gamma_{c}$, which is defined by $$\Gamma_{c}=\lim_{N\rightarrow\infty}\Gamma\left(x=0,N\right)=F\left(0,0,\ldots\right)\;.$$ (14) This quantity is significant because, if the quantum Hall critical theory has conformal symmetry, it is related to the multi-fractal exponent $\alpha_{0}$ that occurs in the multi-fractal analysis of the wavefunction distribution at the critical point by $$\Gamma_{c}=\pi\left(\alpha_{0}-2\right)\;.$$ (15) Some estimates of $\Gamma_{c}$ obtained using published estimates of $\alpha_{0}$ and Eq. (15) are tabulated in Table 4. Our numerical estimate of $\Gamma_{c}$ is not completely consistent with those in the table. The reason for this is not yet clear. 5 Discussion The agreement between our estimate for the critical exponent and the prediction of Lütken and Ross is tantalizing, but is it accidental, or does it have a deeper significance? A more precise numerical estimate of the irrelevant exponent is clearly highly desirable. In addition, the prediction of Lütken and Ross for the flow diagram in the $\left(\sigma_{xy},\sigma_{xx}\right)$ plane should also be amenable to numerical verification. We also need to know if a central charge $c=4$ is physically justified. Quite apart from whether or not the Lütken and Ross theory is exact for non-interacting electrons, the important question remains of clarifying the role of the electron-electron interactions in the observed critical behavior at the quantum Hall transition. More work along the lines of Ref. \refcitehuckestein99 might be very helpful in this regard. Acknowledgments We would like to thank Alexei Tsvelik for bringing the work of Lütken and Ross to our attention. This work was supported by Grant-in-Aid 23540376 and Korean WCU program Project No. R31-2008-000-10059-0. References [1] K. Slevin and T. Ohtsuki, Physical Review B 80, 041304(R) (2009). [2] C. A. Lutken and G. G. Ross, Physics Letters B 653, 363 (2007). 
[3] K. v. Klitzing, G. Dorda, and M. Pepper, Physical Review Letters 45, 494 (1980). [4] D. Yoshioka, The quantum Hall effect, Springer series in solid-state sciences, (Springer, Berlin ; New York, 2002). [5] W. Li et al., Physical Review Letters 102, 216801 (2009). [6] A. J. M. Giesbers et al., Physical Review B (Condensed Matter and Materials Physics) 80, 241411 (2009). [7] J. T. Chalker and P. D. Coddington, Journal of Physics C: Solid State Physics 21, 2665 (1988). [8] B. Huckestein and B. Kramer, Physical Review Letters 64, 1437 (1990). [9] B. Mieck, EPL (Europhysics Letters) 13, 453 (1990). [10] B. Huckestein, EPL (Europhysics Letters) 20, 451 (1992). [11] Y. Huo and R. N. Bhatt, Physical Review Letters 68, 1375 (1992). [12] D.-H. Lee and Z. Wang, Philosophical Magazine Letters 73, 145 (1996). [13] P. Cain, R. A. Romer, and M. E. Raikh, Physical Review B 67, 075307 (2003). [14] H. Obuse et al., Physical Review B 82, 035309 (2010). [15] J. P. Dahlhaus, J. M. Edge, J. Tworzydlo, and C. W. J. Beenakker, Physical Review B 84, 115133 (2011). [16] M. Amado et al., Physical Review Letters 107, 066402 (2011). [17] I. S. Burmistrov et al., Annals of Physics 326, 1457 (2011). [18] A. M. M. Pruisken, International Journal of Modern Physics B 24, 1895 (2010). [19] B. Kramer, T. Ohtsuki, and S. Kettemann, Physics Reports 417, 211 (2005). [20] I. Shimada and T. Nagashima, Progress of Theoretical Physics 61, 1605 (1979). [21] A. MacKinnon and B. Kramer, Zeitschrift für Physik B Condensed Matter 53, 1 (1983). [22] M. Matsumoto and T. Nishimura, in Monte Carlo and Quasi-Monte Carlo Methods 1998, edited by H. Niederreiter and J. Spanier (Springer, Berlin ; New York, 1999), pp. 56–69. [23] J. L. Pichard and G. Sarma, Journal of Physics C: Solid State Physics L127 (1981). [24] J. L. Pichard and G. Sarma, Journal of Physics C: Solid State Physics L617 (1981). [25] A. MacKinnon and B. Kramer, Physical Review Letters 47, 1546 (1981). [26] D. J. Amit and V. 
Martin-Mayor, Field theory, the renormalization group, and critical phenomena, 3rd ed. (World Scientific, New Jersey ; London, 2005). [27] B. Huckestein, Physical Review Letters 72, 1080 (1994). [28] K. Slevin and T. Ohtsuki, Physical Review Letters 82, 382 (1999). [29] X. Wang, Q. Li, and C. M. Soukoulis, Physical Review B 58, 3576 (1998). [30] H. Obuse et al., Physical Review Letters 101, 116802 (2008). [31] F. Evers, A. Mildenberger, and A. D. Mirlin, Physical Review Letters 101, 116803 (2008). [32] B. Huckestein and M. Backhaus, Physical Review Letters 82, 5100 (1999).
Thermoelectric signatures of order-parameter symmetries in iron-based superconducting tunnel junctions Claudio Guarcello 0000-0002-3683-2509 Corresponding author: cguarcello@unisa.it Dipartimento di Fisica “E.R. Caianiello”, Università di Salerno, Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy INFN, Sezione di Napoli Gruppo Collegato di Salerno, Complesso Universitario di Monte S. Angelo, I-80126 Napoli, Italy    Alessandro Braggio 0000-0003-2119-1160 alessandro.braggio@nano.cnr.it NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, Piazza San Silvestro 12, I-56127 Pisa, Italy    Francesco Giazotto 0000-0002-1571-137X francesco.giazotto@nano.cnr.it NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, Piazza San Silvestro 12, I-56127 Pisa, Italy    Roberta Citro 0000-0002-3896-4759 rocitro@unisa.it Dipartimento di Fisica “E.R. Caianiello”, Università di Salerno, Via Giovanni Paolo II, 132, I-84084 Fisciano (SA), Italy INFN, Sezione di Napoli Gruppo Collegato di Salerno, Complesso Universitario di Monte S. Angelo, I-80126 Napoli, Italy CNR-SPIN c/o Università degli Studi di Salerno, I-84084 Fisciano (Sa), Italy Abstract Thermoelectric properties are frequently used to characterize materials and to harvest free energy from waste heat for useful purposes. Here, we show that linear thermoelectric effects in tunnel junctions (TJs) with Fe-based superconductors not only reveal whether particle or hole states dominate, but also provide information about the superconducting order-parameter symmetry. In particular, we observe that nodal order parameters present a maximal thermoelectric effect at lower temperatures than nodeless ones. Finally, we show that superconducting TJs between iron-based and BCS superconductors could provide a thermoelectric efficiency ZT exceeding 6 with a linear Seebeck coefficient around $S\approx 800\;\mu\text{V/K}$ at a few Kelvin. 
These results pave the way to novel thermoelectric machines based on multi-band superconductors. Introduction $-$ Physical systems based on hybrid superconducting junctions have demonstrated great potential for energy management [1, 2, 3, 4]. Recently, they have also attracted interest for their unexpectedly good thermoelectric (TE) performance [5, 6, 7, 8, 9, 10, 11, 12], and have found a role in different quantum technology applications [13, 14, 15]. In a two-terminal system, a necessary condition for thermoelectricity in the linear regime, i.e., for a small voltage $\delta V$ and a small temperature bias $\delta T$, is the breaking of particle-hole (PH) symmetry. A sizable linear TE effect, much larger than that commonly found in metallic structures, has recently been reported in superconductor (SC)-ferromagnet tunnel junctions (TJs) [16, 5, 6], due to the spin-dependent effective breaking of PH symmetry. Here, we show a robust linear TE effect in hybrid superconducting junctions with a multiband SC, namely, an iron-based SC (FeSC). Our results, beyond proving a linear TE effect with unconventional SCs, establish TE phenomena as a potential probe of superconducting order-parameter symmetries. In fact, one of the central problems for unconventional SCs is the nature of the pairing mechanism, which is tightly connected to the order-parameter symmetries [17, 18]. The pairing symmetry is known to be $d$-wave in cuprates, but is still unresolved for the FeSCs. They are unique among unconventional SCs, since different ordering phenomena are present in a multi-orbital scenario [19, 20, 21]. It was first theoretically predicted that FeAs-based high-temperature SCs have a sign change of the order parameter on the Fermi surface [19]. Experimental evidence for FeSCs seems to favour $s_{\pm}$ pairing, which implies that the electron- and hole-like bands both develop an $s$-wave superconducting state, with opposite signs of the order parameter [19, 22]. 
Point-contact Andreev reflection spectroscopy has also been applied to FeSCs to probe the order-parameter symmetry [23, 24]. However, the results of these studies are not completely conclusive, owing to the complexity of the Andreev reflection spectra of a normal-metal/multiband-SC interface. Notably, bulk thermoelectrical measurements of FeSCs have been reported for the normal state [25], although the analysis is quite intricate due to the competition of different mechanisms, such as phonon- [26] and magnon-drag phenomena [27], in a multiband setting. The linear TE properties of a TJ between an FeSC and a normal metal allow the strong intrinsic PH asymmetry in the FeSC density of states (DoS) to be identified in a rather direct way, even in the superconducting phase. We will also show that they can potentially discriminate between different order-parameter symmetries. Furthermore, if the normal metal is replaced by a Bardeen–Cooper–Schrieffer (BCS) SC, we also observe remarkable TE figures of merit, which may be relevant for energy-harvesting applications and quantum technologies [14, 28, 29, 15]. Thermoelectrical transport $-$ In order to address the physics of the FeSC, we focus on the linear TE properties of a TJ between an FeSC and a normal metal (or a SC), see Fig. 1(a). The linear-response coefficients of the charge ($I$) and heat ($\dot{Q}$) currents can be expressed in terms of the Onsager matrix [30] $$\begin{pmatrix}I\\ \dot{Q}\end{pmatrix}=\begin{pmatrix}\sigma&\alpha\\ \alpha&\kappa T\\ \end{pmatrix}\begin{pmatrix}\delta V\\ \delta T/T\end{pmatrix},$$ (1) where we assume time-reversal symmetry and a small voltage (temperature) bias $\delta V$ ($\delta T$). Here, $\alpha$ is the TE coefficient, while $\sigma$ and $\kappa$ are the electric and thermal conductances, respectively. 
At the lowest order in tunneling, it is easy to express these linear coefficients as $$\begin{pmatrix}\sigma\\ \alpha\\ \kappa\end{pmatrix}=\frac{G_{T}}{e}\!\int_{-\infty}^{\infty}\begin{pmatrix}e\\ \varepsilon\\ \varepsilon^{2}\big{/}eT\end{pmatrix}\frac{\mathcal{N}_{L}\left(\varepsilon\right)\mathcal{N}_{R}\left(\varepsilon\right)d\varepsilon}{4k_{\text{B}}T\cosh^{2}\!\left(\varepsilon/2k_{\text{B}}T\right)}$$ (2) in terms of the lead DoS, $\mathcal{N}_{j}\left(\epsilon\right)$ with $j=L,R$, where $G_{T}$ is the normal-state conductance of the junction, $-e$ is the electron charge, and $k_{\text{B}}$ is the Boltzmann constant. In our analysis, we take into account two different cases, i.e., a junction formed by an FeSC tunnel coupled with a normal lead, i.e., with $\mathcal{N}_{R}(\epsilon)=1$ being the energy-independent normalized DoS in Eq. (2), or alternatively with an $s$-wave SC, with $\mathcal{N}_{R}\left(\epsilon\right)=\left|\text{Re}\left[\frac{\epsilon+i\Gamma}{\sqrt{(\epsilon+i\Gamma)^{2}-\Delta^{2}\left(T\right)}}\right]\right|$, where $\Gamma=\gamma\Delta_{0}$ is the phenomenological Dynes parameter [31], $\Delta_{0}=1.764k_{\text{B}}T_{c}$, and $T_{c}$ is the critical temperature of the BCS SC. In this model, it is also implicitly assumed that the Josephson coupling [32, 33] between the two superconducting leads is strongly suppressed [34]. To calculate the DoS of the FeSC we rely on the two-orbital, four-band tight-binding approach by Raghu et al. [1], which is the minimal model [36]. The diagonalization of this tight-binding Hamiltonian leads to eigenvalues that can be written in the compact form $$\varepsilon_{\mathbf{k}\pm}^{d}=\sqrt{\left(\varepsilon_{\mathbf{k}\pm}^{0}\right)^{2}+\left|\vec{d}_{\mathbf{k},g}\right|^{2}}.$$ (3) Here $\varepsilon_{\mathbf{k}\pm}^{0}=\xi_{\mathbf{k}+}-\mu\pm\sqrt{\xi^{2}_{\mathbf{k}xy}+\xi^{2}_{\mathbf{k}-}}$, $\mu$ is the chemical potential, and $\vec{d}_{\mathbf{k},g}=\big{(}0,0,s_{x^{2}y^{2}}\big{)}$, where $s_{x^{2}y^{2}}\equiv s_{\pm}=\Delta_{0}^{\text{FeSC}}\cos k_{x}\cos k_{y}$, $\Delta_{0}^{\text{FeSC}}$ is the gap size, and $\left|\vec{d}_{\mathbf{k}}\right|^{2}$ represents the effective amplitude of the pairing interactions [2]. For the explicit expressions of $\xi_{\mathbf{k}\pm}$ and $\xi_{\mathbf{k}xy}$ we refer the reader to the Supplemental Material (SM). We adopt the $s_{\pm}$-wave state, which is the most widely accepted FeSC pairing state [38, 39]. However, the multiband character of FeSCs also offers the possibility of more exotic pairing states [2, 39], and thus we will also discuss TE properties with other order-parameter symmetries in the following. Hereafter, we take the interorbital hopping parameter $\left|t_{1}\right|$ as the unit of energy, and temperatures are measured in units of $\left|t_{1}\right|/k_{B}$ [40]. Finally, the FeSC total DoS turns out to be the sum of an electron-like [$+$, blue curve in Fig. 1(b)] and a hole-like [$-$, red curve in Fig. 
1(b)] band contribution [41], $\mathcal{N}_{\rm{FeSC}}\left(\varepsilon\right)=\mathcal{N}_{\rm{FeSC}}^{+}\left(\varepsilon\right)+\mathcal{N}_{\rm{FeSC}}^{-}\left(\varepsilon\right)$, where $$\displaystyle\mathcal{N}_{\rm{FeSC}}^{\pm}\left(\varepsilon\right)=\sum_{\mathbf{k}}\frac{\varepsilon+\varepsilon_{\mathbf{k}\pm}^{0}}{2\varepsilon_{\mathbf{k}\pm}^{d}}\left\{\delta\left[\varepsilon_{\mathbf{k}\pm}^{d}-\varepsilon\right]-\delta\left[\varepsilon_{\mathbf{k}\pm}^{d}+\varepsilon\right]\right\}.$$ (4) Note also that the superconducting instability opens the gap symmetrically around the chemical potential, as illustrated in the inset of Fig. 1(b) (see also SM). Figures of merit $-$ In order to quantify the TE performance, it is usual to consider the Seebeck coefficient $S=-\alpha/(\sigma T)$ and the thermodynamic efficiency through the dimensionless figure of merit $\text{ZT}=S^{2}\sigma T/\left[\kappa-\alpha^{2}/(\sigma T)\right]$. A larger value of ZT means a better thermodynamic efficiency; as ZT tends to infinity, the efficiency of the device tends to the Carnot limit. In the following, we show how $S$ and ZT of an FeSC-insulator-normal metal (FeSC-I-N) junction depend on the temperature, considering different possible values of $\Delta_{0}^{\text{FeSC}}$ [42] and different doping levels $\mu$ [43]. Figures 2(a) and (b) collect the $S(T,\Delta_{0}^{\text{FeSC}})$ and ZT$(T,\Delta_{0}^{\text{FeSC}})$ maps at the half-filling condition $\mu=1.54$. For a given $\Delta_{0}^{\text{FeSC}}$, both the Seebeck coefficient and the TE efficiency behave non-monotonically, with a clear maximum that shifts towards gradually higher temperatures as $\Delta_{0}^{\text{FeSC}}$ increases. Indeed, $\Delta_{0}^{\text{FeSC}}$ is the energy scale that mainly determines the optimal temperature maximizing the TE effect. Furthermore, as $\Delta_{0}^{\text{FeSC}}$ increases the subgap states are reduced, correspondingly requiring higher energies to achieve the same TE effect. 
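The role of PH asymmetry in Eq. (2) and in the figures of merit just defined can be made concrete with a toy numerical sketch (illustrative DoS choices in natural units $e=k_{\text{B}}=G_{T}=1$, not the FeSC model of the text): a PH-symmetric Dynes-broadened BCS DoS yields $\alpha\approx 0$ and hence $S\approx 0$, while a hypothetical tilted DoS mimicking the dominance of one carrier type yields a finite Seebeck coefficient.

```python
import numpy as np

# Toy evaluation of the linear-response coefficients of Eq. (2) by direct
# quadrature, in natural units e = k_B = G_T = 1 (energies in units of the
# BCS gap Delta). The DoS choices are illustrative assumptions.

def onsager(dos_L, dos_R, T, emax=60.0, npts=200001):
    e = np.linspace(-emax * T, emax * T, npts)   # symmetric energy grid
    de = e[1] - e[0]
    w = dos_L(e) * dos_R(e) / (4 * T * np.cosh(e / (2 * T))**2)
    sigma = np.sum(w) * de                       # electric conductance
    alpha = np.sum(e * w) * de                   # thermoelectric coefficient
    kappa = np.sum(e**2 * w) * de / T            # thermal conductance
    return sigma, alpha, kappa

T = 0.1
normal = lambda e: np.ones_like(e)               # normalized normal-metal DoS

# PH-symmetric Dynes-broadened BCS DoS (Delta = 1, Dynes parameter 1e-3):
Delta, Gam = 1.0, 1e-3
bcs = lambda e: np.abs(np.real((e + 1j * Gam)
                               / np.sqrt((e + 1j * Gam)**2 - Delta**2)))

# Hypothetical PH-asymmetric DoS: a smooth tilt standing in for the
# dominance of one carrier type.
tilted = lambda e: 1.0 + 0.3 * np.tanh(e)

results = {}
for name, dos in [("symmetric", bcs), ("tilted", tilted)]:
    sig, alp, kap = onsager(dos, normal, T)
    S = -alp / (sig * T)                          # Seebeck coefficient
    ZT = S**2 * sig * T / (kap - alp**2 / (sig * T))
    results[name] = (S, ZT)
    print(name, S, ZT)
```

In these units the symmetric BCS case returns $S\approx 0$, while the tilted DoS gives a finite negative $S$, mirroring the sign analysis discussed in the text.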
The use of FeSCs is a further advantage over conventional SC configurations, since it allows operation at the higher temperatures that provide a larger Seebeck coefficient. Figures 2(c) and (d) show what happens when changing the doping level, keeping $\Delta_{0}^{\text{FeSC}}=0.1$ fixed [44]. The FeSC DoS dependence on $\mu$ is illustrated in the SM. The $S(T,\mu)$ map still reveals single-peaked profiles, but upon modifying the doping we observe an inversion of the Seebeck coefficient sign around $\mu\sim 0.75$, below (above) which $S>0$ ($S<0$) since the hole (electron) DoS contribution dominates. We note that the point at which the Seebeck coefficient changes sign differs from the half-filling condition. The reason for this is the lack of symmetry between the particle-like and hole-like bands of an FeSC in the energy window determined by the working temperatures. In the cases discussed so far, we achieve Seebeck coefficients up to $\left|S\right|\sim 150\;\mu\text{V/K}$, also reaching TE efficiencies of $\text{ZT}\sim 0.5$. In the case of an undoped FeSC, $\mu=1.54$, with $\Delta_{0}^{\text{FeSC}}=0.1$, i.e., the red dashed lines in Fig. 2(a-d), the highest TE efficiency is reached at $T\simeq 1.6\times 10^{-3}$, which corresponds to a temperature of about $2.8\;\text{K}$. We remark that the maximum Seebeck coefficient we obtain is several orders of magnitude larger than that usually found in metallic structures at the same temperatures. Moreover, the thermoelectricity of the FeSC-I-N junction outperforms that of magnetic TJs [45] and is quite comparable with that of hybrid SC-ferromagnet TJs [46, 47]. Notably, if we replace the normal metal with a BCS SC with a gap $\Delta_{0}$, the linear thermoelectricity can be further enhanced. This effect can be ascribed to the additional contribution of the conventional DoS peaks intertwined with the multi-band character of the FeSC. Thus, in Fig. 
2(e) and (f) we present the $S(T,\Delta_{0})$ and ZT$(T,\Delta_{0})$ maps of a FeSC-insulator-SC (FeSC-I-S) TJ. In this case, we assume a specific FeSC with a given gap $\Delta_{0}^{\text{FeSC}}=0.1$ and we explore the TE response at different values of the BCS superconducting gap $\Delta_{0}$. A region of the $(T,\Delta_{0})$ parameter space emerges in which both the Seebeck coefficient and the TE efficiency increase significantly, even reaching the values $\left|S\right|\sim 870\;\mu\text{V/K}$ and ZT$\sim 6.5$ at $(T,\Delta_{0})_{max}\simeq(0.63,5.3)\times 10^{-3}$. In natural units, these quantities correspond to $T\simeq 1.8\;\text{K}$ for a BCS SC with $T_{c}\simeq 5.2\;\text{K}$ [48]. Order parameter symmetry detection $-$ We show here that the TE figures of merit are also a powerful tool for addressing the order-parameter symmetry (OPS). We compare the temperature dependence of $S$ and ZT of an FeSC-I-N TJ, with $\mu=1.54$ and $\Delta_{0}^{\text{FeSC}}=0.1$, taking into account different OPSs: we cover the three possible $s$-wave symmetries, i.e., the constant-gap case $s_{0}$, $s_{x^{2}y^{2}}=\Delta_{0}^{\text{FeSC}}\cos k_{x}\cos k_{y}$, and $s_{x^{2}+y^{2}}=\Delta_{0}^{\text{FeSC}}(\cos k_{x}+\cos k_{y})/2$, and the two $d$-wave symmetries, i.e., $d_{xy}=\Delta_{0}^{\text{FeSC}}\sin k_{x}\sin k_{y}$ and $d_{x^{2}-y^{2}}=\Delta_{0}^{\text{FeSC}}(\cos k_{x}-\cos k_{y})/2$ [49]. Since it will be useful later on, we recall that the $d_{x^{2}-y^{2}}$, $d_{xy}$, and $s_{x^{2}+y^{2}}$ OPSs are nodal, while the others are nodeless [2, 3]. Figure 3 demonstrates that the TE figures of merit can provide valuable clues for determining the OPS of the system. Starting from the top of this figure, panel (a) takes a closer look at the different DoSs in play, with the inset serving to highlight the low-energy region. 
Figures 3(b)-(c) illustrate the Seebeck coefficient $S(T)$ and the TE efficiency ZT$(T)$, both showing, on a semilog scale, bell-shaped, single-peaked profiles for each symmetry considered. It immediately stands out that the “position” of these peaks depends strongly on the OPS: indeed, for the $s_{0}$ and $s_{x^{2}y^{2}}$ cases, both $S$ and ZT are peaked roughly around $T^{\rm peak}\sim 10^{-3}$, while the other symmetries give $S$ and ZT peaks centred on temperatures an order of magnitude lower. To give realistic numbers (here the subscript distinguishes the symmetries), the $S$ peak for the nodeless OPSs is located at $T_{x^{2}y^{2}}^{\rm peak}\simeq 3.3\;\text{K}$ (blue) and $T_{0}^{\rm peak}\simeq 1.4\;\text{K}$ (violet), whereas for the nodal cases one finds $T_{xy}^{\rm peak}\simeq 0.13\;\text{K}$ (yellow), $T_{x^{2}+y^{2}}^{\rm peak}\simeq 0.12\;\text{K}$ (green), and $T_{x^{2}-y^{2}}^{\rm peak}\simeq 0.10\;\text{K}$ (red). To grasp this result, we recall that the energy window relevant for calculating the TE coefficients scales with temperature, i.e., $\left|\varepsilon\right|\sim T$ (see SM for more details). For instance, the energies considered in the inset of Fig. 3(a) are essentially those on which one should focus if $T\sim 10^{-4}$. Here, it is evident that only the $d_{xy}$, $s_{x^{2}+y^{2}}$, and $d_{x^{2}-y^{2}}$ DoSs (i.e., those showing ZTs peaked at these temperatures) are clearly non-zero. We see that nodeless FeSC pairings present maximal thermoelectricity (absolute value of the Seebeck coefficient) at relatively high temperatures. Instead, for the nodal cases, due to the presence of low-energy states in the gap, the maximal thermoelectricity is observed at much lower temperatures. This result is quite robust against variations of the gap amplitude and of the hopping parameter values, as discussed in the SM. 
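The connection between nodal structure and low-energy states can be checked directly on the model. Counting the quasiparticle energies $\varepsilon_{\mathbf{k}\pm}^{d}$ of Eq. (3) that fall below a small energy window (each $\mathbf{k}$ point of each band carries unit total spectral weight in Eq. (4)) gives the integrated low-energy DoS. A minimal sketch with the pnictide hopping set of the SM, $\mu=1.54$, and $\Delta_{0}^{\text{FeSC}}=0.1$; the grid size and window are arbitrary choices, and the nodal $d_{xy}$ case is used here since $d_{x^{2}-y^{2}}$ requires the generalized treatment mentioned in the SM:

```python
import numpy as np

# Integrated low-energy DoS of the two-band model (Eqs. (3)-(4)) for a
# nodeless (s_{x^2 y^2}) vs a nodal (d_{xy}) order parameter.
# Pnictide hoppings (t1, t2, t3, t4) = (-1, 1.3, -0.85, -0.85), mu = 1.54.
t1, t2, t3, t4, mu, D0 = -1.0, 1.3, -0.85, -0.85, 1.54, 0.1
nk = 400
k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
kx, ky = np.meshgrid(k, k)

xixx = -2*t1*np.cos(kx) - 2*t2*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)
xiyy = -2*t2*np.cos(kx) - 2*t1*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)
xip, xim = (xixx + xiyy) / 2, (xixx - xiyy) / 2
xixy = -4*t4*np.sin(kx)*np.sin(ky)

def low_energy_weight(gap, window=0.02):
    """Fraction of quasiparticle states with energy eps^d below the window.

    Each k point of each band carries total spectral weight u^2 + v^2 = 1
    in Eq. (4), so counting energies below the window integrates the DoS.
    """
    count = 0
    for sign in (+1.0, -1.0):
        e0 = xip - mu + sign * np.sqrt(xixy**2 + xim**2)
        ed = np.sqrt(e0**2 + gap**2)
        count += np.count_nonzero(ed < window)
    return count / kx.size

w_nodeless = low_energy_weight(D0 * np.cos(kx) * np.cos(ky))   # s_{x^2 y^2}
w_nodal = low_energy_weight(D0 * np.sin(kx) * np.sin(ky))      # d_{xy}
print(w_nodeless, w_nodal)
```

On this grid the nodeless $s_{\pm}$ gap leaves no states below the window, while the nodal $d_{xy}$ gap leaves finite low-energy weight, consistent with the much lower peak temperatures of the nodal cases discussed above.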
Furthermore, we also see that the first two DoSs, i.e., the yellow and green curves, are unbalanced toward the hole side ($\varepsilon<0$), unlike the third, i.e., the red curve, which appears to be slightly unbalanced toward the particle side ($\varepsilon>0$). This is directly reflected in the sign of $S$, which immediately tells us the PH asymmetry of the FeSC DoS; therefore, $S>0$ ($S<0$) in the former (latter) case. This is also clearly confirmed by looking at the PH asymmetry of the DoSs on the larger energy scale considered in Fig. 3(a). It is evident, for instance, that the $s_{0}$ and $s_{x^{2}y^{2}}$ OPSs, i.e., the violet and blue curves, respectively, are unbalanced to the right, i.e., the particle contribution dominates, thereby making $S$ negative. We emphasize that, in principle, the PH asymmetry could be addressed by directly measuring the tunneling differential conductance. However, a systematic experimental asymmetry in the bias polarization of the junction cannot easily be excluded, which reduces the sensitivity to small PH asymmetries. Linear thermoelectricity returns this information much more safely and in an independent way. Furthermore, we stress that the thermoelectrical signatures discussed in this Letter, being associated with quasiparticle tunneling in the junction, are not affected by any phonon- or magnon-drag effects, which instead usually influence the bulk TE properties in the normal phase [25]. Conclusions $-$ To summarize, we have demonstrated that an FeSC tunnel junction can show a sizable TE efficiency, and that both the TE figure of merit and the Seebeck coefficient are non-monotonic, single-peaked functions of temperature. Moreover, they can provide details on the underlying symmetry of the order parameter by addressing the PH asymmetry of the DoS. In particular, the position of both the ZT and $S$ peaks allows us to clearly distinguish nodal from nodeless symmetries. 
Furthermore, the sign of $S$ provides further information on the PH asymmetry, distinguishing cases where the TE efficiency is not discriminating, such as the two $d$-wave symmetries. Our results also establish the relevance of multiband SCs for a novel generation of TE devices. As a closing remark, we observe that the proposed approach may be used for studying other quantum materials [39]. Multi-orbital pairing approaches have also been widely used to shed light on other multiband SCs, such as ruthenates and nickelates, and to gain insights into Hund-metal physics. Therefore, the TE-based investigation of tunnel junctions presented in this Letter complements existing experimental techniques, finding fertile ground for the study of novel quantum materials. Acknowledgements. F.G. acknowledges PNRR MUR project PE0000023-NQSTI for partial financial support. F.G. and A.B. acknowledge the EU’s Horizon 2020 research and innovation program under Grant Agreement No. 964398 (SUPERGATE) and No. 101057977 (SPECTRUM) for partial financial support. A.B. acknowledges the Royal Society through the International Exchanges between the UK and Italy (Grant No. IEC R2 212041). References Fornieri and Giazotto [2017] A. Fornieri and F. Giazotto, “Towards phase-coherent caloritronics in superconducting circuits,” Nat. Nanotechnol. 12, 944 (2017). Hwang and Sothmann [2020] S.-Y. Hwang and B. Sothmann, “Phase-coherent caloritronics with ordinary and topological Josephson junctions,” Eur. Phys. J. Spec. Top. 229, 683 (2020). Giazotto et al. [2006] F. Giazotto, T. T. Heikkilä, A. Luukanen, A. M. Savin,  and J. P. Pekola, “Opportunities for mesoscopics in thermometry and refrigeration: Physics and applications,” Rev. Mod. Phys. 78, 217 (2006). Muhonen et al. [2012] J. T. Muhonen, M. Meschke,  and J. P. Pekola, “Micrometre-scale refrigerators,” Rep. Prog. Phys. 75, 046501 (2012). Ozaeta et al. [2014] A. Ozaeta, P. Virtanen, F. S. Bergeret,  and T. T. 
Heikkilä, “Predicted very large thermoelectric effect in ferromagnet-superconductor junctions in the presence of a spin-splitting magnetic field,” Phys. Rev. Lett. 112, 057001 (2014). Kolenda et al. [2016a] S. Kolenda, M. J. Wolf,  and D. Beckmann, “Observation of thermoelectric currents in high-field superconductor-ferromagnet tunnel junctions,” Phys. Rev. Lett. 116, 097001 (2016a). Bergeret et al. [2018] F. S. Bergeret, M. Silaev, P. Virtanen,  and T. T. Heikkilä, “Colloquium: Nonequilibrium effects in superconductors with a spin-splitting field,” Rev. Mod. Phys. 90, 041001 (2018). Hussein et al. [2019] R. Hussein, M. Governale, S. Kohler, W. Belzig, F. Giazotto,  and A. Braggio, “Nonlocal thermoelectricity in a Cooper-pair splitter,” Phys. Rev. B 99, 075429 (2019). Marchegiani et al. [2020] G. Marchegiani, A. Braggio,  and F. Giazotto, “Nonlinear thermoelectricity with electron-hole symmetric systems,” Phys. Rev. Lett. 124, 106801 (2020). Blasi et al. [2020a] G. Blasi, F. Taddei, L. Arrachea, M. Carrega,  and A. Braggio, “Nonlocal thermoelectricity in a superconductor–topological-insulator–superconductor junction in contact with a normal-metal probe: Evidence for helical edge states,” Phys. Rev. Lett. 124, 227701 (2020a). Blasi et al. [2020b] G. Blasi, F. Taddei, L. Arrachea, M. Carrega,  and A. Braggio, “Nonlocal thermoelectricity in a topological Andreev interferometer,” Phys. Rev. B 102, 241302 (2020b). Germanese et al. [2022] G. Germanese, F. Paolucci, G. Marchegiani, A. Braggio,  and F. Giazotto, “Bipolar thermoelectric Josephson engine,” Nat. Nanotechnol. 17, 1084 (2022). Giazotto et al. [2015] F. Giazotto, P. Solinas, A. Braggio,  and F. S. Bergeret, “Ferromagnetic-insulator-based superconducting junctions as sensitive electron thermometers,” Phys. Rev. Appl. 4, 044016 (2015). Heikkilä et al. [2018] T. T. Heikkilä, R. Ojajärvi, I. J. Maasilta, E. Strambini, F. Giazotto,  and F. S. 
Bergeret, “Thermoelectric radiation detector based on superconductor-ferromagnet systems,” Phys. Rev. Appl. 10, 034053 (2018). Paolucci et al. [2023] F. Paolucci, G. Germanese, A. Braggio,  and F. Giazotto, “A highly-sensitive broadband superconducting thermoelectric single-photon detector,” arXiv preprint arXiv:2302.02933  (2023). Machon et al. [2014] P. Machon, M. Eschrig,  and W. Belzig, ‘‘Giant thermoelectric effects in a proximity-coupled superconductor–ferromagnet device,” New J. Phys. 16, 073002 (2014). Goll [2006] G. Goll, Unconventional Superconductors: Experimental Investigation of the Order-Parameter Symmetry of Unconventional Superconductors (Springer Berlin Heidelberg, Berlin, Heidelberg, 2006). Citro and Mancini [2017] R. Citro and F. Mancini, The Iron Pnictide Superconductors: An Introduction and Overview, edited by F. Mancini and R. Citro (Springer International Publishing, Cham, 2017). Mazin et al. [2008] I. I. Mazin, D. J. Singh, M. D. Johannes,  and M. H. Du, “Unconventional superconductivity with a sign reversal in the order parameter of LaFeAsO${}_{1-x}$F${}_{x}$,” Phys. Rev. Lett. 101, 057003 (2008). Wang et al. [2009] F. Wang, H. Zhai, Y. Ran, A. Vishwanath,  and D.-H. Lee, “Functional renormalization-group study of the pairing symmetry and pairing mechanism of the FeAs-based high-temperature superconductor,” Phys. Rev. Lett. 102, 047005 (2009). Guarcello and Citro [2021] C. Guarcello and R. Citro, “Progresses on topological phenomena, time-driven phase transitions, and unconventional superconductivity,” Europhys. Lett. 132, 60003 (2021). Kuroki et al. [2008] K. Kuroki, S. Onari, R. Arita, H. Usui, Y. Tanaka, H. Kontani,  and H. Aoki, ‘‘Unconventional pairing originating from the disconnected Fermi surfaces of superconducting LaFeAsO${}_{1-x}$F${}_{x}$,”  Phys. Rev. Lett. 101, 087004 (2008). Chen et al. [2008] T. Chen, Z. Tesanovic, R. Liu, X. Chen,  and C. 
Chien, “A BCS-like gap in the superconductor SmFeAsO${}_{0.85}$F${}_{0.15}$,” Nature 453, 1224 (2008). Daghero et al. [2012] D. Daghero, M. Tortello, G. Ummarino, V. Stepanov, F. Bernardini, M. Tropeano, M. Putti,  and R. Gonnelli, “Effects of isoelectronic Ru substitution at the Fe site on the energy gaps of optimally F-doped SmFeAsO,” Supercond. Sci. Technol. 25, 084012 (2012). Pallecchi et al. [2016] I. Pallecchi, F. Caglieris,  and M. Putti, “Thermoelectric properties of iron-based superconductors and parent compounds,” Supercond. Sci. Technol. 29, 073002 (2016). Ziman [2001] J. M. Ziman, Electrons and phonons: the theory of transport phenomena in solids (Oxford University Press, 2001). Caglieris et al. [2014] F. Caglieris, A. Braggio, I. Pallecchi, A. Provino, M. Pani, G. Lamura, A. Jost, U. Zeitler, E. Galleani D’Agliano, P. Manfrinetti,  and M. Putti, “Magneto-Seebeck effect in $r\mathrm{FeAsO}$ ($r=\mathrm{rare}$ earth) compounds: Probing the magnon drag scenario,” Phys. Rev. B 90, 134421 (2014). Germanese et al. [a] G. Germanese, F. Paolucci, G. Marchegiani, A. Braggio,  and F. Giazotto, “Superconducting bipolar thermoelectric memory and method for writing a superconducting bipolar thermo-electric memory,”  I.T. Patent, 102021000032042 (2021). Germanese et al. [b] G. Germanese, F. Paolucci, A. Braggio,  and F. Giazotto, “Broadband passive superconducting thermoelectric single photon-detector,”  I.T. Patent, 102023000001854 (2023). Benenti et al. [2017] G. Benenti, G. Casati, K. Saito,  and R. Whitney, “Fundamental aspects of steady-state conversion of heat to work at the nanoscale,” Phys. Rep. 694, 1 (2017). Dynes et al. [1978] R. C. Dynes, V. Narayanamurti,  and J. P. Garno, “Direct measurement of quasiparticle-lifetime broadening in a strong-coupled superconductor,” Phys. Rev. Lett. 41, 1509 (1978). Guarcello et al. [2019a] C. Guarcello, A. Braggio, P. Solinas,  and F. 
Giazotto, “Nonlinear critical-current thermal response of an asymmetric Josephson tunnel junction,” Phys. Rev. Applied 11, 024002 (2019a). Guarcello et al. [2019b] C. Guarcello, A. Braggio, P. Solinas, G. P. Pepe,  and F. Giazotto, “Josephson-threshold calorimeter,” Phys. Rev. Applied 11, 054074 (2019b). Note [1] This can be done using different strategies, such as increasing the barrier opacity, using SQUID-like interference, or even Fraunhofer-like suppression [9, 12]. Raghu et al. [2008] S. Raghu, X.-L. Qi, C.-X. Liu, D. J. Scalapino,  and S.-C. Zhang, “Minimal two-band model of the superconducting iron oxypnictides,” Phys. Rev. B 77, 220503 (2008). Note [2] Our goal is to understand the influence of multi-band superconductivity on TE properties, thus Raghu’s approach is sufficient, although we are aware that models with more orbitals have also been developed [51, 7], often used to investigate multi-band effects on superconductivity and magnetism in FeSC materials [53, 54]. Parish et al. [2008] M. M. Parish, J. Hu,  and B. A. Bernevig, “Experimental consequences of the $s$-wave $\text{cos}({k}_{x})\text{cos}({k}_{y})$ superconductivity in the iron pnictides,” Phys. Rev. B 78, 144514 (2008). Bang and Stewart [2017] Y. Bang and G. R. Stewart, “Superconducting properties of the $s^{\pm}$-wave state: Fe-based superconductors,” J. Phys.: Condens. Matter 29, 123003 (2017). Fernandes et al. [2022] R. M. Fernandes, A. I. Coldea, H. Ding, I. R. Fisher, P. J. Hirschfeld,  and G. Kotliar, “Iron pnictides and chalcogenides: a new paradigm for superconductivity,” Nature 601, 35 (2022). Note [3] Hereafter, we assume an interorbital hopping parameter $\left|t_{1}\right|=0.15\;\text{eV}$, in line with the value often used in the literature, e.g., Ref. [55]. Ptok [2014] A. Ptok, “Influence of s${}_{\pm}$ symmetry on unconventional superconductivity in pnictides above the Pauli limit – two-band model study,” Eur. Phys. J. B 87, 2 (2014). 
Note [4] FeSCs have been found to exhibit a wide range of superconducting transition temperatures [39], although the $2\Delta_{max}/(k_{B}T_{c})$ ratio (with $\Delta_{max}$ being the zero-temperature value of the largest gap) often falls within $6.0-8.5$, in contrast to the $\sim 3.5$ value of conventional BCS SCs. Note [5] In this work we assume a temperature-independent gap because we typically consider temperatures $T<0.4T_{c}$, since the FeSC superconducting gap shows a BCS-like temperature dependence [56]. Note [6] The value $\Delta_{0}^{\text{FeSC}}=0.1$, hereafter used in the manuscript, corresponds to $T_{c}\in[41,58]\;\text{K}$. Walter et al. [2011] M. Walter, J. Walowski, V. Zbarsky, M. Münzenberg, M. Schäfers, D. Ebke, G. Reiss, A. Thomas, P. Peretzki, M. Seibt, J. S. Moodera, M. Czerner, M. Bachmann,  and C. Heiliger, “Seebeck effect in magnetic tunnel junctions,” Nat. Mater. 10, 742 (2011). Kolenda et al. [2016b] S. Kolenda, M. J. Wolf,  and D. Beckmann, “Observation of thermoelectric currents in high-field superconductor-ferromagnet tunnel junctions,” Phys. Rev. Lett. 116, 097001 (2016b). González-Ruano et al. [2023] C. González-Ruano, D. Caso, J. A. Ouassou, C. Tiusan, Y. Lu, J. Linder,  and F. G. Aliev, “Observation of magnetic state dependent thermoelectricity in superconducting spin valves,” arXiv: 2301.0326  (2023). Note [7] We included in Fig. 2(e-f) a black dashed line to mark the condition $T=T_{c}$: in the left-bottom corner the non-FeSC lead ceases to be a SC and the system behaves just like the FeSC-I-N junction of Fig. 2(a-d) at the red dashed line. Note [8] The Raghu approach entails two pairing gaps, one for each orbital, $\Delta_{1,2}$, that satisfy the condition $\Delta_{1}(k_{x},k_{y})=\Delta_{2}(k_{y},k_{x})$ for all the pairing symmetries described above, except for $d_{x^{2}-y^{2}}$, which gives $\Delta_{1}(k_{x},k_{y})=-\Delta_{2}(k_{x},k_{y})$ [2, 3]. 
In the latter case, the eigenvalues and the DoS do not reduce to the simple expressions given in Eqs. (S1)$-$(S3); instead, we use the general expression of the spectral function given in Ref. [2] with the eigenvalues presented in Ref. [3]. Seo et al. [2008] K. Seo, B. A. Bernevig, and J. Hu, “Pairing symmetry in a two-orbital exchange coupling model of oxypnictides,” Phys. Rev. Lett. 101, 206404 (2008). Eschrig and Koepernik [2009] H. Eschrig and K. Koepernik, “Tight-binding models for the iron-based superconductors,” Phys. Rev. B 80, 104503 (2009). Nica et al. [2017] E. M. Nica, R. Yu, and Q. Si, “Orbital-selective pairing and superconductivity in iron selenides,” npj Quantum Mater. 2, 24 (2017). Querales Flores et al. [2016] J. Querales Flores, C. Ventura, R. Citro, and J. Rodríguez-Núñez, “Temperature and doping dependence of normal state spectral properties in a two-orbital model for ferropnictides,” Phys. Lett. A 380, 1648 (2016). Cavanagh and Brydon [2021] D. C. Cavanagh and P. M. R. Brydon, “General theory of robustness against disorder in multiband superconductors,” Phys. Rev. B 104, 014503 (2021). Wang and Nevidomskyy [2015] Z. Wang and A. H. Nevidomskyy, “Orbital nematic order and interplay with magnetism in the two-orbital Hubbard model,” J. Phys.: Condens. Matter 27, 225602 (2015). Jin et al. [2010] R. Jin, M. H. Pan, X. B. He, G. Li, D. Li, R. wen Peng, J. R. Thompson, B. C. Sales, A. S. Sefat, M. A. McGuire, D. Mandrus, J. F. Wendelken, V. Keppens, and E. W. Plummer, “Electronic, magnetic and optical properties of two Fe-based superconductors and related parent compounds,” Supercond. Sci. Technol. 23, 054005 (2010). Supplemental Material: Thermoelectric signatures of order-parameter symmetries in iron-based superconducting tunnel junctions S-.1 Model for an FeSC We use the two-orbital, four-band tight-binding model developed by Raghu et al. [1] (see also Refs. [2, 3] for more details). 
The diagonalization of this tight-binding Hamiltonian model leads to eigenvalues that can be written in a compact form as $$\varepsilon_{\mathbf{k}\pm}^{d}=\sqrt{\left(\varepsilon_{\mathbf{k}\pm}^{0}\right)^{2}+\left|\vec{d}_{\mathbf{k},g}\right|^{2}}\qquad\qquad\text{where}\qquad\qquad\varepsilon_{\mathbf{k}\pm}^{0}=\xi_{\mathbf{k}+}-\mu\pm\sqrt{\xi^{2}_{\mathbf{k}xy}+\xi^{2}_{\mathbf{k}-}}.$$ (S1) Here $\mu$ is the chemical potential and $\vec{d}_{\mathbf{k},g}=\big{(}0,0,s_{x^{2}y^{2}}\big{)}$, where $s_{x^{2}y^{2}}=\Delta_{0}^{\text{FeSC}}\cos k_{x}\cos k_{y}$, with $\Delta_{0}^{\text{FeSC}}$ the gap size. In Eq. (S1), we take into account the kinetic energy terms $\xi_{\mathbf{k}\alpha\beta}$ of a particle with momentum $\mathbf{k}$ changing the orbital from $\beta$ to $\alpha$, given by $$\displaystyle\xi_{\mathbf{k}\pm}$$ $$\displaystyle\!=\!$$ $$\displaystyle(\xi_{\mathbf{k}xx}\pm\xi_{\mathbf{k}yy})/2$$ (S2) $$\displaystyle\xi_{\mathbf{k}xx}$$ $$\displaystyle\!=\!$$ $$\displaystyle-2t_{1}\cos(k_{x})\!-\!2t_{2}\cos(k_{y})\!-\!4t_{3}\cos(k_{x})\cos(k_{y}),$$ $$\displaystyle\xi_{\mathbf{k}yy}$$ $$\displaystyle\!=\!$$ $$\displaystyle-2t_{2}\cos(k_{x})\!-\!2t_{1}\cos(k_{y})\!-\!4t_{3}\cos(k_{x})\cos(k_{y}),$$ $$\displaystyle\xi_{\mathbf{k}xy}$$ $$\displaystyle\!=\!$$ $$\displaystyle-4t_{4}\sin(k_{x})\sin(k_{y}).$$ The coefficients $t_{1}$ and $t_{2}$ are the intraorbital nearest-neighbor hopping amplitudes, while $t_{3}$ and $t_{4}$ are the intraorbital and interorbital next-nearest-neighbor hopping amplitudes, respectively [4]. Different sets of hopping parameters correspond to different Fe-based superconductors (FeSC) (as in the manuscript, energies and temperatures are in units of $\left|t_{1}\right|$ and $\left|t_{1}\right|/k_{B}$, respectively), i.e., $(t_{1},t_{2},t_{3},t_{4})=(-1,1.3,-0.85,-0.85)$ for iron pnictides [1], or $(t_{1},t_{2},t_{3},t_{4})=(-1,1.5,-1.2,-0.95)$ for iron selenides [5, 6]. 
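As an illustration, Eqs. (S1)–(S2) are straightforward to evaluate numerically. The following minimal sketch (not part of the original work) computes the normal-state bands and the quasiparticle energies on a $k$-grid, using the iron-pnictide hopping set and the $\mu=1.54$, $\Delta_{0}^{\text{FeSC}}=0.1$ values quoted in the text; the grid size is an arbitrary choice.

```python
import numpy as np

# Illustrative evaluation of the two-orbital tight-binding bands of
# Eqs. (S1)-(S2). Energies are in units of |t1|; the parameter values
# below are the iron-pnictide set quoted in the text.
t1, t2, t3, t4 = -1.0, 1.3, -0.85, -0.85
mu = 1.54        # half-filling chemical potential (value from the text)
gap0 = 0.1       # Delta_0^FeSC, s+- gap amplitude (value from the text)

def bands(kx, ky):
    """Return (eps0_plus, eps0_minus, epsd_plus, epsd_minus) at (kx, ky)."""
    xi_xx = -2*t1*np.cos(kx) - 2*t2*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)
    xi_yy = -2*t2*np.cos(kx) - 2*t1*np.cos(ky) - 4*t3*np.cos(kx)*np.cos(ky)
    xi_xy = -4*t4*np.sin(kx)*np.sin(ky)
    xi_p, xi_m = (xi_xx + xi_yy)/2, (xi_xx - xi_yy)/2
    eps0_p = xi_p - mu + np.sqrt(xi_xy**2 + xi_m**2)
    eps0_m = xi_p - mu - np.sqrt(xi_xy**2 + xi_m**2)
    d = gap0*np.cos(kx)*np.cos(ky)          # s_{x^2 y^2} pairing function
    return eps0_p, eps0_m, np.sqrt(eps0_p**2 + d**2), np.sqrt(eps0_m**2 + d**2)

kx, ky = np.meshgrid(np.linspace(0, np.pi, 64), np.linspace(0, np.pi, 64))
e0p, e0m, edp, edm = bands(kx, ky)
print(edp.min(), edm.min())   # minimal quasiparticle energies of the two bands
```

The quasiparticle energy $\varepsilon^{d}_{\mathbf{k}\pm}$ is, by construction, never smaller than $|\varepsilon^{0}_{\mathbf{k}\pm}|$, and the bands are symmetric under $k_{x}\leftrightarrow k_{y}$, reflecting the orbital symmetry of the model.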
In this article, we focus primarily on iron pnictides, although, as we show in the following, the main claims about the capability of thermoelectric (TE) measurements to resolve the pairing symmetry remain valid even when using the hopping parameter values for selenides (even if for this kind of FeSC more sophisticated modelling has proven to be more effective [7]). S-.2 Density of states of a FeSC For the sake of convenience, we rewrite the FeSC density of states (DoS) presented in the main text as $$\mathcal{N}_{\rm{FeSC}}^{\pm}\left(\varepsilon\right)=\frac{\eta}{\pi^{3}}\iint_{0}^{\pi}dk_{x}dk_{y}\frac{\varepsilon+\varepsilon_{\mathbf{k}\pm}^{0}}{2\varepsilon_{\mathbf{k}\pm}^{d}}\left[\frac{1}{\left(\varepsilon-\varepsilon_{\mathbf{k}\pm}^{d}\right)^{\!2}+\eta^{2}}-\frac{1}{\left(\varepsilon+\varepsilon_{\mathbf{k}\pm}^{d}\right)^{\!2}+\eta^{2}}\right],$$ (S3) assuming a Lorentzian energy broadening, i.e., $\delta\left(\varepsilon-\varepsilon_{\mathbf{k}\pm}^{d}\right)=\eta\Big{/}\left\{\pi\left[\left(\varepsilon-\varepsilon_{\mathbf{k}\pm}^{d}\right)^{2}+\eta^{2}\right]\right\}$ [2] (we set $\eta=0.01$, in line with Ref. [8]). The behavior of the DoS, $\mathcal{N}_{\rm{FeSC}}\left(\varepsilon\right)$, at different values of $\mu\in[0-2.5]$ and $\Delta_{0}^{\text{FeSC}}=0.1$ is shown in Fig. S1. We clearly observe that by changing the doping level the DoS shifts, while the gap around $\varepsilon=0$ remains open. Moreover, as the doping increases, the DoSs gradually change from fully gapped to gapless behaviour, as expected according to Ref. [2] (see the zoomed insets in Fig. S1); in fact, for a large enough doping one finds a fully gapped hole-like Fermi surface, giving a gap in the DoS at low energies, and a partially gapped electron-like Fermi surface, giving a DoS growing quasilinearly around $\varepsilon=0$. In order to illustrate this behavior, we include in the insets of Fig. 
S1 the DoSs $\mathcal{N}_{\rm{FeSC}}\left(\varepsilon\right)$ (solid line), $\mathcal{N}^{+}_{\rm{FeSC}}\left(\varepsilon\right)$ (dashed line), and $\mathcal{N}^{-}_{\rm{FeSC}}\left(\varepsilon\right)$ (dot-dashed line) within the energy range $\left|\varepsilon\right|\lesssim 2\Delta_{0}^{\small\text{FeSC}}$. It is evident that for $\mu=0$ and $0.5$, see Fig. S1(a-b), the electron-like contribution does not come into play at low energies, and in fact the hole DoS contribution dominates the TE response of the system (as discussed in the main text). For $\mu=1$, see Fig. S1(c), the electronic component starts to play a role, and for $\mu=1.5$, see Fig. S1(d), both the electron and hole contributions effectively partake in the overall DoS and appear clearly gapped. The last two panels, for $\mu=2$ and $2.5$, demonstrate that the quasi-linear behaviour of the DoS at low energies is entirely ascribable to the electronic component, the hole component being still gapped. The sign of the TE coefficient is roughly consistent with this behaviour, since in the hole-dominated case it is typically positive, whereas in the electron-dominated case it is negative. It is also interesting to look at the behaviour of the quantity $N^{\pm}(\mu)=\pm\int_{0}^{\pm\infty}\mathcal{N}_{\rm{FeSC}}\left(\varepsilon,\mu\right)d\varepsilon$; in particular, in Fig. S2(a) we focus on the difference $N^{-}(\mu)-N^{+}(\mu)$ as a function of $\mu$. We recall that the undoped compound, having the half-filled, two-electrons-per-site configuration, requires $\mu=1.54$, namely the value used throughout the manuscript, at which $N^{-}-N^{+}=0$. For $\mu$ lower (higher) than the half-filling value, the positive (negative) energy portion of the DoS predominates, i.e., $N^{-}-N^{+}<0$ ($N^{-}-N^{+}>0$). A change of slope is evident at the threshold value $\mu^{th}\sim 1.15$; this is because, as the doping level increases, the electron-like band starts to be strongly occupied (e.g., see right panels of Fig. S1). 
This mechanism explains why for $\mu\sim\mu^{th}$ there is a strong change in the way the TE peak moves with doping, as illustrated in Figs. 2(c) and (d) of the main text. In particular, we see that for $\mu\gtrsim\mu^{th}$ ($\mu\lesssim\mu^{th}$) the TE peak shifts towards lower (higher) temperatures with increasing electron doping. At the lowest doping the thermoelectricity changes sign, since it is hole dominated and the electron-like band is almost completely depleted. S-.3 TE coefficients of a tunneling barrier It is useful to express the TE coefficient, $\alpha$, and the electric and thermal conductances, $\sigma$ and $\kappa$, $$\begin{pmatrix}\sigma\\ \alpha\\ \kappa\end{pmatrix}=\frac{G_{T}}{e}\!\int_{-\infty}^{\infty}\begin{pmatrix}e\\ \varepsilon\\ \varepsilon^{2}\big{/}eT\end{pmatrix}\frac{\mathcal{N}_{L}\left(\varepsilon\right)\mathcal{N}_{R}\left(\varepsilon\right)d\varepsilon}{4k_{\text{B}}T\cosh^{2}\!\left(\varepsilon/2k_{\text{B}}T\right)},$$ (S4) in such a way as to group all terms excluding the DoSs into "weight functions", that is $$x=\frac{G_{T}}{e}\int_{-\infty}^{\infty}W_{x}(\varepsilon,T)\mathcal{N}_{L}(\varepsilon)\mathcal{N}_{R}(\varepsilon)d\varepsilon\qquad\text{with}\qquad x=(\sigma,\alpha,\kappa).$$ (S5) The behavior of the weight functions, $W_{x}\left(\varepsilon,T\right)$ with $x=(\sigma,\alpha,\kappa)$, normalised to their maximum value, is shown in Fig. S2(b). We immediately note that all three functions quickly go to zero for $\left|\varepsilon\right|\gtrsim 10\;T$, thus limiting the energy range relevant for the calculation of the linear TE coefficients at a given temperature. We also observe that $W_{\sigma}$ and $W_{\kappa}$ are even functions of energy, unlike $W_{\alpha}$, which is odd; from this characteristic derives the dependence of the sign of the Seebeck coefficient, $S=-\alpha/(\sigma T)$, on the predominant particle/hole contribution. In Fig. 
S3 we show how the TE coefficient, $\alpha$, and the electric and thermal conductances, $\sigma$ and $\kappa$, vary in the case discussed in Fig. 2 of the main text (note that in Fig. S3 we are showing dimensionless normalised quantities). In particular: panels (a), (b), and (c) show $\alpha(T,\Delta_{0}^{\text{FeSC}})$, $\sigma(T,\Delta_{0}^{\text{FeSC}})$, and $\kappa(T,\Delta_{0}^{\text{FeSC}})$ at $\mu=1.54$, while panels (d), (e), and (f) show $\alpha(T,\mu)$, $\sigma(T,\mu)$, and $\kappa(T,\mu)$ at $\Delta_{0}^{\text{FeSC}}=0.1$, in the case of a FeSC-I-N junction. Instead, panels (g), (h), and (i) show $\alpha(T,\Delta_{0})$, $\sigma(T,\Delta_{0})$, and $\kappa(T,\Delta_{0})$ at $\mu=1.54$ and $\Delta_{0}^{\text{FeSC}}=0.1$, in the case of a FeSC-I-S junction. This figure allows us to definitively state that, taken individually, these quantities are unable to provide the information that instead emerges clearly when they are recombined to form the Seebeck coefficient, $S$, and the figure of merit, ZT, as shown in Fig. 2 of the article. Finally, we also observe that the black region in Fig. S3(d) indicates negative $\alpha$ values, giving $S>0$. S-.4 TE effect and other FeSC order parameter symmetries For the sake of completeness, we also show how the Seebeck coefficient and TE efficiency behave considering a $d_{xy}$ pairing symmetry. As can be seen from Fig. S4, the main characteristics discussed in the manuscript for the $s_{\pm}$ symmetry are also evident here, with a clear shift towards lower temperatures, in line with what was discussed in Fig. 3 of the main text for a nodal case. Another quite evident difference is that the sign of $S$ is reversed with respect to what is shown in Fig. 2 of the main article, see Figs. S4(a)-(c)-(e): this indicates that where the hole contribution is predominant for the $s_{\pm}$ symmetry, the particle contribution now prevails (and vice versa). 
In the case of an FeSC-I-N junction, we also note that the threshold doping value around which the sign of $S$ changes is somewhat higher than in the case shown in the article (the switch occurs roughly at $\mu\sim 1.25$), see Fig. S4(b). Finally, we observe that the maximum values obtained for $S$ and ZT in this case are practically the same as those obtained in the case of $s_{\pm}$ pairing symmetry. However, considering an FeSC-I-S junction, the region of the parameter space where these maxima are reached is different, with the ZT peak in Fig. S4(f) located at $(T,\Delta_{0})_{max}\simeq(0.23,2.1)\times 10^{-4}$; in other words, in this case the maximum TE efficiency would be attainable for a superconducting tunnel junction at $T\simeq 40\;\text{mK}$ formed between a FeSC and a SC with $T_{c}\simeq 210\;\text{mK}$. Finally, Fig. S5 shows that the TE response of a tunnel junction distinguishes the nodal from the nodeless case more clearly as the gap value increases, see panels (a) and (b) for $\Delta_{0}^{\text{FeSC}}=0.05$ and $0.2$, respectively. We also tested the results with the hopping parameters of FeSC selenides, see Fig. S5(c) for $\Delta_{0}^{\text{FeSC}}=0.1$ and $\mu=1.54$. References Raghu et al. [2008] S. Raghu, X.-L. Qi, C.-X. Liu, D. J. Scalapino, and S.-C. Zhang, Phys. Rev. B 77, 220503 (2008). Parish et al. [2008] M. M. Parish, J. Hu, and B. A. Bernevig, Phys. Rev. B 78, 144514 (2008). Seo et al. [2008] K. Seo, B. A. Bernevig, and J. Hu, Phys. Rev. Lett. 101, 206404 (2008). Liu et al. [2018] G. Liu, S. Fang, X. Zheng, Z. Huang, and H. Lin, J. Phys.: Condens. Matter 30, 445604 (2018). Yamase and Zeyher [2013] H. Yamase and R. Zeyher, Phys. Rev. B 88, 180502 (2013). Dumitrescu et al. [2016] P. T. Dumitrescu, M. Serbyn, R. T. Scalettar, and A. Vishwanath, Phys. Rev. B 94, 155127 (2016). Nica et al. [2017] E. M. Nica, R. Yu, and Q. Si, npj Quantum Mater. 2, 24 (2017). Ptok et al. [2020] A. Ptok, K. J. Kapcia, and P. Piekarz, Front. Phys. 
8 (2020), 10.3389/fphy.2020.00284.
Scaling Law for Three-body Collisions in Identical Fermions with $p$-wave Interactions Jun Yoshida${}^{1,2}$ j_yoshida@ils.uec.ac.jp    Taketo Saito${}^{1,2}$    Muhammad Waseem${}^{1,2}$    Keita Hattori${}^{1,2}$    Takashi Mukaiyama${}^{3}$ ${}^{1}$Department of Engineering Science, University of Electro-Communications, Tokyo 182-8585, Japan ${}^{2}$Institute for Laser Science, University of Electro-Communications, Chofugaoka, Chofu, Tokyo 182-8585, Japan ${}^{3}$Graduate School of Engineering Science, Osaka University, Machikaneyama, Toyonaka, Osaka 560-8531, Japan (November 25, 2020) Abstract We experimentally confirmed the threshold behavior and the scattering-length scaling law of three-body loss coefficients in an ultracold spin-polarized gas of ${}^{6}$Li atoms near a $p$-wave Feshbach resonance. We measured the three-body loss coefficients as functions of temperature and the scattering volume, and found that the threshold law and the scattering-volume scaling law hold only in a limited temperature and magnetic-field region. The scaling behavior of the three-body loss coefficients provides a useful baseline characteristic, which helps locate the few-body resonances in $p$-wave interacting identical fermions. Three-body collisions, which at times present the largest obstacle to achieving exotic many-body quantum states in ultracold atoms Iskin1 ; Iskin2 ; rich_phase , provide a key signature of three-body correlations. The capability of tuning inter-atomic interactions using Feshbach resonances has made a significant contribution toward clarifying the relation between three-body collisional properties and the scattering length in a system of ultracold atoms with $s$-wave interactions. 
In identical bosons with large $s$-wave scattering lengths, it has been theoretically discussed and experimentally confirmed that the three-body loss coefficient $L_{3}$ depends on the scattering length $a_{s}$ as $L_{3}\propto a_{s}^{4}$ in the large scattering length regime a4_1 ; a4_2 ; a4_3 ; a4_4 ; a4_5 ; a4_experiment ; a4_experiment2 ; a4_experiment3 ; Shotan . This scaling behavior of $L_{3}$ has provided the baseline characteristic of the three-body losses used to mark out the three-body Efimov resonances efimov_experiment and four-body resonances Ferlaino in the system of identical bosons. With the three-body physics of $s$-wave interacting systems now clarified, attention turns to the three-body collisional properties arising from enhanced $p$-wave interactions  Esry ; Suno ; Jona . In the low collision energy limit, three-body collisions in indistinguishable fermions with $p$-wave interactions should obey the threshold law $L_{3}\propto E^{2}$, where $E$ is the collision energy Esry . In addition, dimensional analysis based on the threshold-law assumption tells us that the three-body collision coefficients depend on the $p$-wave scattering volume to the power of $8/3$ Suno . However, this scaling law holds only when the contribution from the effective range is negligible; it is therefore unclear over which parameter range the scaling behavior actually applies. Since the first observation of the $p$-wave Feshbach resonance, extensive studies on the atomic losses ENS_loss ; JILA_loss ; MIT_loss , the determination of the scattering parameters Ticknor ; Nakasuji , the creation of $p$-wave molecules JILA_mol ; Fuchs ; inada ; Maier , and the determination of $p$-wave contacts Luciuk have taken place. Although systematic studies on the three-body collision coefficients have been conducted, the scaling behavior has eluded experimental confirmation to date. 
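The dimensional argument quoted above can be checked mechanically. The sketch below (an illustration, not from the original work) tracks (length, time) exponents and confirms that $k^{4}V_{B}^{8/3}$ carries dimension $\mathrm{m}^{4}$, so a pre-factor with units $\mathrm{m^{2}\,s^{-1}}$ is needed to reproduce the $\mathrm{m^{6}\,s^{-1}}$ units of a three-body loss coefficient.

```python
from fractions import Fraction as F

# Dimensional bookkeeping for the scaling law L3 ~ k^4 V_B^{8/3}.
# Dimensions are stored as (length, time) exponent pairs; the three-body
# loss coefficient in dn/dt = -L3 n^3 carries m^6 s^-1.
def dim(length, time):
    return (F(length), F(time))

def mul(a, b):            # multiply two dimensioned quantities
    return (a[0] + b[0], a[1] + b[1])

def power(a, p):          # raise a dimensioned quantity to a power
    return (a[0]*F(p), a[1]*F(p))

k = dim(-1, 0)            # wave number, m^-1
V_B = dim(3, 0)           # p-wave scattering volume, m^3
L3 = dim(6, -1)           # three-body loss coefficient, m^6 s^-1

rhs = mul(power(k, 4), power(V_B, F(8, 3)))      # k^4 V_B^{8/3} -> m^4
prefactor = (L3[0] - rhs[0], L3[1] - rhs[1])     # units the pre-factor must carry
print(prefactor)          # (2, -1), i.e. m^2 s^-1
```

This is consistent with the pre-factor $A=0.02~\mathrm{m^{2}\,s^{-1}}$ obtained later in the paper.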
This is presumably because the scaling behavior only applies in the regime far detuned from the $p$-wave Feshbach resonance, i.e., the limited scattering volume regime Suno , where the atomic lifetime owing to three-body loss is comparable to the one-body atomic lifetime. As in $s$-wave interacting bosons, knowledge of the scaling behavior would be useful to observe the super-Efimov effect predicted in a two-dimensional spin-polarized Fermi gas with $p$-wave interactions super efimov . In this paper, we measured the three-body loss coefficients as functions of temperature and magnetic field in an ultracold spin-polarized gas of ${}^{6}$Li atoms near a $p$-wave Feshbach resonance, and we experimentally confirmed the threshold behavior $L_{3}\propto E^{2}$ and the scattering volume scaling of the three-body loss coefficient $L_{3}\propto|V_{B}|^{8/3}$, where $V_{B}$ is the scattering volume. We focused on the large-detuning magnetic field regime to suppress the resonance effect. We also worked in the low temperature regime, where the threshold behavior still holds; however, the ${}^{6}$Li atoms were maintained at a temperature higher than the Fermi temperature, so that a Gaussian spatial density profile for the atomic cloud could be assumed. One advantage of using ${}^{6}$Li as opposed to ${}^{40}$K is that the atomic loss in ${}^{6}$Li is purely caused by three-body losses: since the lowest spin state of ${}^{6}$Li is not a stretched state, we can work on the Feshbach resonances in the lowest energy state, where dipolar losses do not occur JILA_mol ; Kurlov . A gas of ${}^{6}$Li atoms was prepared by all-optical means as described in Nakasuji . We trapped two-component fermionic atoms of ${}^{6}$Li in the $\ket{1}\equiv\ket{F=1/2,m_{F}=1/2}$ and $\ket{2}\equiv\ket{F=1/2,m_{F}=-1/2}$ states and performed evaporative cooling at a magnetic field of 300 Gauss. 
The gas was cooled to a temperature around $T/T_{\rm F}\simeq 1$, where $T$ is the temperature of the ${}^{6}$Li atoms and $T_{\rm F}$ is the Fermi temperature. We limited our experiment to gases with temperatures of $T/T_{\rm F}\simeq 1$ to ensure that the atomic density profile could be well described by a Gaussian function. After the evaporative cooling, we applied resonant light to the atoms in the $\ket{2}$ state to remove them from the trap, preparing a spin-polarized Fermi gas of atoms in the $\ket{1}$ state. The magnetic field was then scanned quickly to below the $p$-wave Feshbach resonance for the atoms in the $\ket{1}$ state, located at a magnetic field of 159 Gauss. Next, the magnetic field was set to the value at which the loss of the atoms was measured. We measured the number of remaining atoms after various holding times to obtain the decay curve of the atoms at particular magnetic fields. In our experiment, we limited the loss of atoms to less than $30\%$ of the initial number of atoms to suppress the influence of the deformation of the momentum distribution. The decay of the number of atoms is described by the rate equation for the atomic density, $${\dot{n}}=-l_{3}(E,B)n^{3},$$ (1) where $n$ is the number density and $l_{3}(E,B)$ is the three-body loss coefficient as a function of the collision energy $E$ and the magnetic field $B$. Since the trap-averaged three-body loss was measured, $l_{3}(E,B)$ is averaged over $E$ assuming the Maxwell-Boltzmann distribution as Suno ; Zhenhua $$L_{3}(T,B)=\frac{1}{2(k_{\rm B}T)^{3}}\int l_{3}(E,B)E^{2}e^{-E/k_{\rm B}T}{\rm d}E.$$ (2) The decay curve was measured three times for each experimental condition; one decay curve comprised 40 shots with various holding times. 
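As a consistency check of Eq. (2) (a sketch, not the actual analysis code of the experiment): inserting the threshold form $l_{3}=cE^{2}$ into the thermal average gives $L_{3}=12c(k_{\rm B}T)^{2}$, i.e. a quadratic temperature dependence, and Eq. (1) integrates to a closed form suitable for fitting decay curves. Units are chosen with $k_{\rm B}=1$, and the values of $c$, $T$, $n_{0}$, and $t$ below are arbitrary.

```python
import numpy as np

# Thermal average of Eq. (2) with the threshold form l3(E) = c E^2.
# Analytically: integral of E^4 exp(-E/T) dE = 24 T^5, so L3 = 12 c T^2.
c, T = 1.0, 2.0                          # arbitrary units, k_B = 1
E = np.linspace(0.0, 60*T, 400001)       # cutoff far into the Boltzmann tail
dE = E[1] - E[0]
L3 = np.sum(c*E**2 * E**2 * np.exp(-E/T)) * dE / (2*T**3)
print(L3, 12*c*T**2)                     # numerical vs analytic average

# Eq. (1), dn/dt = -L3 n^3, has the closed-form solution
# n(t) = n0 / sqrt(1 + 2 L3 n0^2 t), the shape fitted to decay curves.
n0, t = 1.0, 0.5
n_t = n0/np.sqrt(1 + 2*L3*n0**2*t)
```

The closed-form solution shows why only moderate atom loss can be tolerated: at later times the decay flattens and becomes harder to distinguish from one-body loss.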
Our experimental setup had an atomic trap lifetime of 100 sec, determined by background gas collisions; we therefore limited our measurements to experimental conditions in which the three-body loss rate was much faster than the one-body loss rate. Figure 1 shows the double logarithmic plot of the three-body loss coefficients $L_{3}$ as a function of the atomic temperature at magnetic field detunings of 0.22 (red circles) and 0.45 Gauss (brown rhombi). $L_{3}$ increases gradually in the low temperature region and rises rapidly above a certain temperature point. According to the threshold behavior predicted by Esry et al. Esry , the three-body loss coefficient has a quadratic dependence on the collision energy. Since our measurements are limited to the temperature region where the Maxwell-Boltzmann distribution applies, the thermally averaged $L_{3}$ shows a threshold behavior of $L_{3}\propto T^{2}$. The black dashed lines in Fig. 1 show the fitting results for $L_{3}\propto T^{2}$, with the pre-factor as a free parameter, in the low temperature region. $L_{3}$ shows a quadratic dependence, as expected from the threshold behavior. In the high temperature region, $L_{3}$ deviates from the threshold behavior quite rapidly, rising with the eighth power of temperature or even faster. The red and brown dashed vertical lines show the temperature values for the data with the same color at which $k_{\rm B}T/\Delta\mu(B-B_{\rm res})=0.1$, where $\Delta\mu$ and $B_{\rm res}$ are the relative magnetic moment of the molecular energy level with respect to the atomic energy level and the resonance magnetic field, respectively. Notably, the temperature points at which the two sets of data deviate from the threshold behavior correspond to a similar value of $k_{\rm B}T/\Delta\mu(B-B_{\rm res})$. Since this value indicates how far the magnetic field is detuned from the Feshbach resonance, the sharp increase of the data may be related to the resonance effect. However, $k_{\rm B}T/\Delta\mu(B-B_{\rm res})=0.1$ means that the detuning is quite large. 
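The low-temperature fit described above can be sketched as follows: on a double-logarithmic plot, $L_{3}\propto T^{2}$ is a straight line of slope 2, and the pre-factor is the only free parameter. The data below are synthetic values chosen purely for illustration, not the measured coefficients.

```python
import numpy as np

# Threshold-law fit sketch: the slope of log(L3) vs log(T) should be 2.
# The temperatures (microkelvin) and pre-factor C are illustrative only.
T = np.array([1.0, 1.5, 2.0, 3.0, 4.0])
C = 3.0e-38                     # free pre-factor, arbitrary magnitude
L3 = C*T**2                     # idealized threshold-law data

slope, intercept = np.polyfit(np.log(T), np.log(L3), 1)
print(slope)                    # ~2: quadratic threshold behavior
```

On real data the fit would be restricted to the low-temperature points, since the measured $L_{3}$ rises much faster once $k_{\rm B}T/\Delta\mu(B-B_{\rm res})$ approaches 0.1.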
Quantitative understanding of $L_{3}$ in the high temperature region requires further study specific to this temperature regime. Next, we discuss the magnetic field dependence of the three-body loss coefficient. Figure 2 shows $L_{3}$ vs the magnetic field detuning $B-B_{\rm res}$ for measurements at temperatures of 2.7 (green circles), 3.9 (blue triangles), and 5.7 $\mu$K (purple squares), respectively. Going from large to small detuning, $L_{3}$ increases gradually and then rises rapidly at certain magnetic field detunings, similar to the temperature dependence of $L_{3}$ shown in Fig. 1. From the dimensional analysis together with the threshold behavior, $L_{3}$ is expected to vary with the $p$-wave scattering volume as $L_{3}\propto{V_{B}}^{8/3}$ Suno , based on the assumption that the effective range contribution is negligible. In Fig. 2, the black dashed lines show the fitting results of the ${V_{B}}^{8/3}$ dependence in the relatively large detuning regions for the three data sets. It is clear that the experimental results are consistent with the scattering length scaling law $L_{3}\propto{V_{B}}^{8/3}$ in certain magnetic field detuning regions. $L_{3}$ deviates rapidly in the region relatively close to the Feshbach resonance. The green, blue and purple dashed vertical lines show the magnetic field detuning points where $k_{\rm B}T/\Delta\mu(B-B_{\rm res})=0.1$ for the data with the same color. Notably, the three sets of data deviate from the scattering length scaling behavior at the point corresponding to $k_{\rm B}T/\Delta\mu(B-B_{\rm res})=0.1$, which is similar to the temperature dependence shown in Fig. 1. Further investigation is required for a deeper understanding of $L_{3}$ near the resonance. Now that the threshold behavior and the scattering length scaling law have been confirmed separately, we plot all the data from Fig. 1 and Fig. 
2 into one figure by taking $L_{3}$ on the vertical axis and ${k_{T}}^{4}V_{B}^{8/3}$ on the horizontal axis (as shown in Fig. 3). Here, $k_{T}$ is the thermal wave number of the atoms, defined by $k_{T}=\sqrt{3mk_{\rm B}T/2\hbar^{2}}$. The red circles and brown rhombi are the $L_{3}$ data of the temperature dependence from Fig. 1, and the green circles, blue triangles, and purple squares are the $L_{3}$ data of the magnetic field detuning dependencies from Fig. 2. As can be seen, the experimental data congregate in the small ${k_{T}}^{4}V_{B}^{8/3}$ region and indicate that $L_{3}$ depends linearly on ${k_{T}}^{4}V_{B}^{8/3}$. The black solid line shows the dependence $L_{3}\propto{k_{T}}^{4}V_{B}^{8/3}$, with the pre-factor taken as a free parameter to match the data in the small ${k_{T}}^{4}V_{B}^{8/3}$ region. We mentioned for Fig. 1 that the data are consistent with the threshold behavior, obtained by fitting the data in the low temperature region with a quadratic function of the temperature, but there the pre-factors were taken as free parameters. The fact that the red circles and brown rhombi in Fig. 3 match each other in the low ${k_{T}}^{4}V_{B}^{8/3}$ region indicates that the difference of the pre-factors for the red circles and brown rhombi can be explained by the scattering volume dependence $V_{B}^{8/3}$. The same argument applies to the $L_{3}$ data comprising the green circles, blue triangles, and purple squares shown in Fig. 3. Since these three data sets match one another in Fig. 3, the difference of the pre-factors obtained when fitting the data in Fig. 2 can be explained by the threshold behavior $L_{3}\propto T^{2}$. The inset of Fig. 3 shows the data points selected from the main figure that match the scaling law $L_{3}\propto{k_{T}}^{4}V_{B}^{8/3}$. From a fit of these data with a linear function, we derive $L_{3}=A\times{k_{T}}^{4}V_{B}^{8/3}$ with $A=0.02$ ${\rm m}^{2}{\rm s}^{-1}$. The vertical lines in Fig. 
3 indicate the points corresponding to $k_{B}T/\Delta\mu(B-B_{\rm res})=0.1$ for the data shown with the same color. The data obviously deviate from the ${k_{T}}^{4}V_{B}^{8/3}$ dependence where the vertical lines are located, because this is where either the threshold behavior or the scattering length scaling law breaks down (as shown in Figs. 1 and 2). Considering the expression for the $p$-wave collisional phase shift, the effective range term becomes critical when $k_{\rm e}k_{T}^{2}V_{B}$ is of the order of unity. In our experiment, we find that $k_{\rm e}k_{T}^{2}V_{B}\approx 0.1$ at the points marked by the vertical lines in Fig. 3. It is not surprising that the effective range term is a key factor in this parameter region; however, it is not yet clear whether the significant increase of $L_{3}$ at higher ${k_{T}}^{4}V_{B}^{8/3}$ values can be explained by the contribution from the effective range term. Further study is required for a quantitative understanding of the characteristics beyond the scaling behavior. In conclusion, we experimentally confirmed the threshold behavior and the scattering length scaling law of the three-body loss coefficients in an ultracold spin-polarized gas of ${}^{6}$Li atoms near a $p$-wave Feshbach resonance. We measured the three-body loss coefficients as functions of temperature to confirm the quadratic dependence of $L_{3}$ on the temperature in the low temperature region. We also measured the scattering volume dependence of $L_{3}$ to confirm the scattering length scaling law $L_{3}\propto V_{B}^{8/3}$ in the small $V_{B}$ region. The knowledge gained from the scaling behavior will be useful for observing resonances arising from few-body states, just as $L_{3}\propto a^{4}$ made a significant contribution to finding the Efimov states in $s$-wave interacting identical bosons. We would like to thank Dr. Zhenhua Yu for fruitful discussions and Dr. Shinsuke Haze for his support throughout the experiment. 
This work was supported by a Grant-in-Aid for Scientific Research on Innovative Areas (Grant No. 24105006), and a Grant-in-Aid for challenging Exploratory Research (Grant No. 17K18752). MW acknowledges the support of a Japanese government scholarship (MEXT). References (1) M. Iskin and C. A. R. Sá de Melo, Phys. Rev. Lett. 96, 040402 (2006). (2) M. Iskin and C. J. Williams, Phys. Rev. A. 77, 041607 (2008). (3) V. Gurarie, L. Radzihovsky, and A. V. Andreev, Phys. Rev. Lett. 94, 230403 (2005). (4) P. O. Fedichev, M. W. Reynolds, and G. V. Shlyapnikov , Phys. Rev. Lett. 77, 2921 (1996). (5) E. Nielsen, J. H. Macek, Phys. Rev. Lett. 83, 1566 (1999). (6) B. D. Esry, C. H. Greene, and James P. Burke, Jr., Phys. Rev. Lett. 83, 1751 (1999). (7) P. F. Bedaque, Eric Braaten, and H.-W. Hammer, Phys. Rev. Lett. 85, 908 (2000). (8) E. Braaten and H.-W. Hammer, Phys. Rev. Lett. 87, 160407 (2001). (9) T. Weber, J. Herbig, M. Mark, H. -C. Nägerl, and R. Grimm, Phys. Rev. Lett. 91, 123201 (2003). (10) J. Stenger, S. Inouye, M. R. Andrews, H. -J. Miesner, D. M. Stamper-Kurn, and W. Ketterle, Phys. Rev. Lett. 82, 2422 (1999) (11) B. S. Rem, A. T. Grier, I. Ferrier-Barbut, U. Eismann, T. Langen, N. Navon, L. Khaykovich, F. Werner, D. S. Petrov, F. Chevy, and C. Salomon, Phys. Rev. Lett. 110, 163202 (2013) (12) Z. Shotan, O. Machtey, S. Kokkelmans, and L. Khaykovich, Phys. Rev. Lett. 113, 053202 (2014). (13) T. Kraemer , M. Mark , P. Waldburger , J. G. Danzl , C. Chin , B. Engeser , A. D. Lange , K. Pilch , A. Jaakkola, H.-C. Nägerl, R. Grimm, Nature 440, 04626 (2006). (14) F. Ferlaino, S. Knoop, M. Berninger, W. Harm, J. P. D’Incao, H. -C. Nägerl, and R. Grimm, Phys. Rev. Lett. 102, 140401 (2009). (15) M. Jona-Lasinio, L. Pricoupenko, and Y. Castin, Phys. Rev. A 77, 043611 (2008). (16) B. D. Esry, C. H. Greene, and H. Suno , Phys. Rev. A 65, 010705 (2001). (17) H. Suno, B. D. Esry, and Chris H. Greene, Phys. Rev. Lett. 90, 053202 (2003). (18) J. Zhang, E. G. M. van Kempen, T. 
Bourdel, L. Khaykovich, J. Cubizolles, F. Chevy, M. Teichmann, L. Tarruell, S. J. J. M. F. Kokkelmans, and C. Salomon, Phys. Rev. A 70, 030702 (2004). (19) C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, Phys. Rev. Lett. 90, 053201 (2003). (20) C. H. Schunck, M. W. Zwierlein, C. A. Stan, S. M. F. Raupach, W. Ketterle, A. Simoni, E. Tiesinga, C. J. Williams, and P. S. Julienne, Phys. Rev. A71, 045601 (2005). (21) C. Ticknor, C. A. Regal, D. S. Jin, and J. L. Bohn, Phys. Rev. A69, 042712 (2004). (22) T. Nakasuji, J. Yoshida and T. Mukaiyama, Phys. Rev. A. 88, 012710 (2013). (23) J. P. Gaebler, J. T. Stewart, J. L. Bohn, and D. S. Jin, Phys. Rev. Lett. 98, 200403 (2007). (24) J. Fuchs, C. Ticknor, P. Dyke, G. Veeravalli, E. Kuhnle, W. Rowlands, P. Hannaford, and C. J. Vale, Phys. Rev. A77, 053616 (2008). (25) Y. Inada, M. Horikoshi, S. Nakajima, M. Kuwata-Gonokami, M. Ueda, and T. Mukaiyama, Phys. Rev. Lett. 101, 100401 (2008). (26) R. A. W. Maier, C. Marzok, C. Zimmermann, and Ph. W. Courteille Phys. Rev. A81, 064701 (2010). (27) C. Luciuk, S. Trotzky, S. Smale, Z. Yu, S. Zhang, and J. H. Thywissen, Nat. Phys. 12, 599 (2016). (28) Y. Nishida, S. Moroz, and D. T. Son, Phys. Rev. Lett. 110, 235301(2013). (29) D. V. Kurlov and G. V. Shlyapnikov, Phys. Rev. A 95, 032710 (2017). (30) Z. Yu, private communication.
Stellar populations in NGC 5128 with the VLT: evidence for recent star formation†
†Based on observations collected at the European Southern Observatory, Paranal, Chile, within Observing Programmes 63.N-0229 and 65.N-0164, and in part on observations collected by the NASA/ESA Hubble Space Telescope, which is operated by AURA, Inc., under NASA contract NAS5-26555.
M. Rejkuba (1,2), D. Minniti (2), D.R. Silva (1), T.R. Bedding (3)
(1) European Southern Observatory, Karl-Schwarzschild-Strasse 2, D-85748 Garching, Germany; e-mail: mrejkuba@eso.org, dsilva@eso.org
(2) Department of Astronomy, P. Universidad Católica, Casilla 306, Santiago 22, Chile; e-mail: dante@astro.puc.cl
(3) School of Physics, University of Sydney 2006, Australia; e-mail: bedding@physics.usyd.edu.au
(Received date / Accepted date)
Key Words: Galaxies: elliptical and lenticular, cD – Galaxies: stellar content – Stars: fundamental parameters – Galaxies: individual: NGC 5128
Offprint requests: M. Rejkuba, e-mail: mrejkuba@eso.org
We resolve stars of the nearest giant elliptical galaxy, NGC 5128, using the VLT with FORS1 and ISAAC. We construct deep $U$, $V$ and $K_{s}$ color-magnitude and color-color diagrams in two different halo fields (in the halo and in the north-eastern diffuse shell). In the outer, shell field, at $\sim 14$ kpc from the center of the galaxy, there is significant recent star formation, with stars as young as 10 Myr, approximately aligned with the prominent radio and X-ray jet from the nucleus of the host AGN. Ionized gas filaments are evident in ultraviolet images near the area where neutral H I and CO molecular gas were previously observed. 
The underlying stellar population of the halo of the giant elliptical is predominantly old, with a very broad metallicity distribution. The presence of an extended giant branch reaching $M_{\rm bol}=-5$ mag suggests the existence of a significant intermediate-age AGB population in the halo of this galaxy. 1 Introduction Galaxies show a wide range of star formation activity and a large range of metallicities and ages in their stellar populations. Understanding the physical nature and origins of their stars is fundamental to understanding the formation and evolution of galaxies. The current knowledge of star formation histories of galaxies along the Hubble sequence is summarized by Kennicutt (kennicutt ). This knowledge is mainly based on integrated photometric and spectroscopic studies through the predictions of population synthesis of the integrated light of star clusters (Bica bica ) and galaxies (Bruzual & Charlot bc93 , Maraston maraston ). The predictions of population synthesis can now be tested directly not only for globular clusters (e.g. Vazdekis et al. vazdekis ), but also for nearby galaxies. Stellar evolution theory provides predictions of the features expected in color-magnitude diagrams (CMDs) for stellar populations with different ages and metallicities (Renzini & Fusi Pecci renzini88 , Chiosi et al. chiosi , Gallart gallart98 , Aparicio aparicio98 , Tolstoy tolstoy ). Coupled with the improvements in telescope size, detector sensitivity, field of view and spatial resolution compared with those of a decade ago, direct observations of the stellar content of nearby galaxies are becoming one of the most active areas of extragalactic research. In the Local Group, indicators of the old stellar populations, such as old main-sequence turn-offs or at least the horizontal-branch magnitudes, are within reach of available instrumentation (e.g. Hurley-Keller et al. hurley-keller , Gallart et al. gallart99 , Held et al. held , Rejkuba et al. rejkubaWLM ). 
The recent results of studies of the Local Group galaxies have been summarized by Mateo (mateo ), van den Bergh (vdb99 ; vdb00 ) and Grebel (grebel ). The Local Group contains galaxies representative of almost all the classes, except the important giant elliptical class of galaxies. The closest giant elliptical is NGC 5128, the dominant galaxy in the Centaurus group at a distance of $3.6\pm 0.3$ Mpc (Harris et al. harris99 , Soria et al. soria , Hui et al. hui , Tonry & Schechter tonry ). It has been extensively studied over the last 50 years (for an exhaustive review see Israel israel ). Its popularity is due not only to its brightness, but also to its many unusual features: (i) there is a prominent dust band containing young stars and H II regions (Unger et al. unger , Wild & Eckart wild , Graham graham79 ); (ii) there is an active nucleus with a radio and X-ray jet, radio lobes and optical filaments (Cooper et al. cooper , Feigelson et al. feigelson , Kraft et al. kraft , Schreier et al. schreier , Clarke et al. clarke , Blanco et al. blanco , Dufour & van den Bergh dufour ); (iii) Malin et al. (malin ) discovered a large number of faint narrow shells of stars surrounding the galaxy; (iv) Schiminovich et al. (schiminovich ) detected $4\times 10^{8}$ M${}_{\odot}$ of H I gas associated with the stellar shells, but slightly displaced outside the shells; and (v) most recently, molecular CO gas has been found associated with the H I gas and the stellar shells (Charmandaris et al. charmandaris ). All of these are clear indications of a recent interaction with a gas-rich galaxy. The high resolution and sensitivity of WFPC2 on the Hubble Space Telescope (HST) enabled the first studies of the resolved old stellar populations in the halo of NGC 5128 (Soria et al. soria , Harris et al. harris99 , Harris & Harris harris00 , Mould et al. mould00 ). NICMOS on HST was used to resolve the stars in the near IR (Marleau et al. 
marleau00 ) in the same field as the optical study of Soria et al. (soria ; $\sim$9 kpc south of the center of the galaxy). In these two studies, a small intermediate-age population of $\sim 5$ Gyr has been found, comprising up to $\sim 10\%$ of the total stellar population in the halo. On the other hand, there are almost no intermediate-age stars in the field further out in the halo, at $\sim 20$ kpc from the galaxy center (Harris et al. harris99 ), nor at $\sim 31$ kpc (Harris & Harris harris00 ). The comparison of the two results may indicate the presence of a gradient in the stellar population in the halo, as suggested by Marleau et al. (marleau00 ). However, the small field of view of HST places serious limitations on the conclusions in cases where strong gradients in galaxy populations exist (see, for example, the case of Local Group dwarf galaxies like Leo I (Gallart et al. gallart99 , Held et al. held ) or WLM (Minniti & Zijlstra mz96 ; mz97 )). We present here wide-wavelength-range photometry of the resolved stellar populations in NGC 5128, obtained from the ground with the Very Large Telescope (VLT) in the $U$, $V$ and $K_{s}$ bands. The deep and high-resolution VLT imaging, coupled with a much larger field of view than that of HST, enables us to address the question of gradients in the stellar populations in the halo of this giant elliptical galaxy. The proximity of NGC 5128 provides an unusual opportunity for a direct study of shell stars. We use infrared-optical color-magnitude diagrams of the shell to study the ages and metallicities of the stars belonging to the cannibalized galaxy. 2 The Data The observations were carried out with the ESO Very Large Telescope (VLT) at Paranal Observatory, Chile. They consist of optical (Bessel $U$- and $V$-band) and near-IR ($K_{s}$-band) images. We observed two fields in the halo of NGC 5128. Field 1 ($\alpha_{2000}=13^{h}26^{m}23.5^{s}$, $\delta_{2000}=-42^{\circ}52\arcmin 00\arcsec$; Fig. 
1) was centered on the prominent N-E shell, $\sim 14\arcmin$ away from the center of the galaxy. Field 2 ($\alpha_{2000}=13^{h}25^{m}26^{s}$, $\delta_{2000}=-43^{\circ}10\arcmin 00\arcsec$; Fig. 2) was chosen to overlap with the HST field of Soria et al. (soria ) and lies at a distance of $\sim 9\arcmin$ from the center of the galaxy. The optical data were obtained on 1999 July 11 and 12, while the $K_{s}$-band images cover several epochs and will be used to search for long-period variable stars. All of the observations were secured in service mode. Calibration data (bias, dark and flat-field images) as well as photometric standards from the Landolt (optical; landolt ) and Persson et al. (IR; persson ) catalogues were supplied by the ESO calibration plan. The observations are summarized in Table 1. 2.1 Optical Observations and Data Reduction Two 15-minute exposures in both $U$ and $V$ were acquired for each of the two fields in NGC 5128 using FORS1 (FOcal Reducer and low dispersion Spectrograph) on VLT Unit Telescope 1 (Antu). The FORS1 detector is a 2048$\times$2048 Tektronix CCD, thinned and anti-reflection coated. The pixel size is 24$\times$24 $\mu$m. The field of view is $6\farcm8\times 6\farcm8$ and the scale is $0\farcs2$/pixel. For service observations in direct-imaging mode, the FORS1 CCD is read out in four-port read-out mode. We used the ESO pipeline reductions, which are based on a MIDAS package specially developed to reduce images read out with 4 different amplifiers. With it, the overscan for each amplifier was subtracted individually; after bias subtraction, the images were flat-field corrected. 2.2 Near-IR Observations and Data Reduction For the near-IR $K_{s}$-band observations we used ISAAC, also on Antu. In this wavelength domain (0.9–2.5 $\mu$m) the detector is a 1024$\times$1024 Hawaii Rockwell array. 
The field of view is $2\farcm5\times 2\farcm5$ and the scale is $0\farcs147$/pixel. A series of 10-second exposures were taken at each epoch, usually in groups of six, with the number of repeats depending on weather conditions and any technical problems. The total exposure time for each epoch is given in Table 1. An additional $K_{s}$-band observation of Field 2, with a total on-target exposure time of 45 minutes, was acquired with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT) at La Silla Observatory, Chile. The field of view of SOFI is $4\farcm94\times 4\farcm94$ and the scale is $0\farcs292$/pixel. We also obtained observations with the NIC3 array on HST, using the F222M filter, which is similar to (although narrower than) the standard $K$ filter. The NIC3 field of view is $51\farcs2\times 51\farcs2$, much smaller than that of ISAAC. The standard procedure in reducing IR data consists of (i) dark subtraction, (ii) flat-field correction, (iii) sky subtraction, and (iv) registering and combining the images. We did not use the ESO pipeline reduction product. Good sky subtraction in a crowded field like that of a galactic halo is particularly important. For that step we used the DIMSUM package (Stanford et al. dimsum ) within IRAF (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation). In DIMSUM the sky subtraction is made in two passes. In the first, a median sky is computed for each image from the six frames that are closest in time. The shifts between the sky-subtracted frames are then computed and all the images are stacked together using a rejection algorithm to remove cosmic rays. 
An object mask is computed for the coadded image and then shifted back in order to create object masks for the individual frames. In the second pass, the sky subtraction is made using the object masks to avoid overestimation of the sky level. These masks are also used to check that the bright object cores were not removed as cosmic rays in the previous pass. After the mask-pass sky subtraction, all frames belonging to a single epoch are registered with the imalign task and combined with the imcombine task in IRAF. 3 The Photometry 3.1 Data analysis The photometric reduction of the combined $U$-, $V$- and $K_{s}$-band images was performed using the DAOPHOT II programme (Stetson stetson ; stetsonALF ). First, we located all the objects that were $>3\sigma$ above the background on individual images. More than 50 relatively bright, unsaturated, isolated, stellar objects were chosen to create the variable PSF for each image. ALLSTAR fitting of the PSF to all the objects produced the object lists, which were matched with DAOMATCH and DAOMASTER; only objects with good photometry in at least two frames were kept. The final photometric catalogue was obtained with ALLFRAME, which uses as input the photometry lists from ALLSTAR and fits the PSF to all frames (in the $U$, $V$ and $K$ bands) simultaneously. Again, only the objects detected in at least 2 images were kept. Using the information on the location of stars in all bands simultaneously improved our photometry, which is deeper by $\sim 1$ mag with respect to the ALLSTAR photometry. Moreover, the treatment of close companions, in particular those that have different colors, is much better with ALLFRAME. NGC 5128 is close enough that its globular clusters appear slightly resolved, in the sense of having a larger FWHM and a non-stellar PSF (Minniti et al. mi+96 , Rejkuba rejkuba01 ). 
Restricting the sharpness and goodness-of-fit parameters to $-0.7<$SHARP$<0.7$, we rejected most of the galaxies, star clusters and other extended objects, as well as remaining blemishes and cosmic rays, from the final photometry list. Stars with large photometric uncertainties in one or more filters ($\sigma\geq 0.5$ mag) were also rejected. Finally, the images were visually checked and a few remaining extended background objects (e.g. a partially resolved star-forming region in a background spiral galaxy) were discarded. With this selection our final $U,V$ photometry catalogue contains 1581 and 1944 stars in Fields 1 and 2, respectively, while the $V,K$ catalogue contains 5172 and 8005 stars; the numbers of stars with good photometry in the $U$, $V$ and $K$ bands are 508 and 663 in Fields 1 and 2, respectively. 3.2 The photometric calibration For the photometric calibration of the optical images, standard stars from the catalogue of Landolt (landolt ) were used. We checked the photometric quality of the two nights separately and, since both were photometric with almost identical zeropoints, we combined the standard stars for the two nights. Using a total of 14 stars in 4 different fields, spanning the color range $-1.321<(U-V)<4.162$, we derived the following calibration transformations: $$u_{\rm inst}=U-24.262(\pm 0.089)+0.379(\pm 0.068)\,X-0.042(\pm 0.007)\,(U-V)$$ (1) $$v_{\rm inst}=V-27.348(\pm 0.042)+0.213(\pm 0.034)\,X$$ (2) where $X$ is the mean airmass of the observations, $u_{\rm inst}$ and $v_{\rm inst}$ are instrumental magnitudes, and $U$ and $V$ are magnitudes from the Landolt (landolt ) catalogue. The one-sigma scatter around the mean was 0.031 mag for the $U$ band and 0.023 mag for the $V$ band (Fig. 3, left panel). Adding the color term ($U-V$) in the transformations slightly reduces the scatter in the $V$-band to 0.017 mag (Fig. 3, right panel). 
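As an illustration of how Eqs. (1) and (2) are applied, the transformations can be inverted numerically to recover calibrated magnitudes from the instrumental ones. This is a minimal sketch (the function name and convergence tolerance are ours, not part of the reduction pipeline); because Eq. (1) contains a color term in $(U-V)$, the inversion is iterative:

```python
# Invert the photometric calibration of Eqs. (1)-(2).
# Coefficients are taken from the text; the function name is illustrative.

def calibrate_uv(u_inst, v_inst, airmass, tol=1e-4, max_iter=50):
    """Return calibrated (U, V) from instrumental magnitudes.

    Eq. (2): v_inst = V - 27.348 + 0.213 * X
    Eq. (1): u_inst = U - 24.262 + 0.379 * X - 0.042 * (U - V)
    """
    # V follows directly from Eq. (2).
    V = v_inst + 27.348 - 0.213 * airmass
    # Start with no color term and iterate, since U appears on both sides.
    U = u_inst + 24.262 - 0.379 * airmass
    for _ in range(max_iter):
        U_new = u_inst + 24.262 - 0.379 * airmass + 0.042 * (U - V)
        if abs(U_new - U) < tol:
            break
        U = U_new
    return U_new, V
```

The iteration converges in a few steps because the color coefficient (0.042) is small.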
However, the calibration equation without the color term was preferred for the $V$-band, since the scientific data in that filter go much deeper and $U$ magnitudes for some objects could not be measured accurately enough. Observations at our reference $K_{s}$-band epoch (G in Table 1) were taken in photometric conditions. During the same night, 2000 July 8, three standard stars from the list of Persson et al. (persson ) were observed. Each standard star was observed at 5 different positions on the IR array. In this way, a total of 15 independent measurements were obtained. However, the number of measurements with different airmass was only 3, so we preferred to adopt the mean extinction coefficient measured on Paranal for the $K_{s}$-band, 0.05 mag/airmass. The derived zeropoint of the G-epoch observations is $24.23\pm 0.04$. The following calibration equation was applied to our data: $$k_{s,\rm inst}=K_{s}-24.23(\pm 0.04)+0.05\,X$$ (3) where $X$ is the airmass of the observations, $k_{s,\rm inst}$ is the instrumental magnitude and $K_{s}$ is the calibrated magnitude. The photometry of all other $K$-band epochs was measured with respect to the reference epoch. 3.3 Completeness and contamination We made extensive tests to measure completeness and magnitude uncertainties as a function of magnitude and radial distance from the center of the galaxy. The completeness of the $U$-, $V$- and $K_{s}$-band photometry has been calculated using the ADDSTAR programme within DAOPHOT. We made twenty artificial-star experiments, each time adding $\sim 3000$ stars to the first frame. The stars were added on a regular grid separated by $2\times R_{PSF}+1$ pixels in order not to alter the crowding (where $R_{PSF}$ is the PSF radius used for fitting the image with the worst seeing), and had magnitudes randomly distributed over the observed range. 
The position of the first star in the list was chosen randomly, so that over the 20 different experiments the added stars were uniformly distributed over the whole field. After the appropriate coordinate shifts were applied and the magnitudes were converted to the instrumental system, taking into account the observed magnitudes and colors, the same stars from the first frame were also added to all other frames. Their photometry was recomputed in the same way as for the original images. The stellar PSF obtained from the field stars of the respective image was used in the simulations. Incompleteness in Field 1, defined by a recovery rate of 50% in the artificial-star experiments, sets in around magnitude 25 in the $U$-band (but depends strongly on $U-V$ color; see Fig. 4) and 22.5 in the $K_{s}$-band (Fig. 5). The corresponding numbers for Field 2 are 25 for the $U$-band and 21.3 for the $K_{s}$-band. The difference in the $K_{s}$-band between the two fields is due to (1) better seeing in Field 1 ($FWHM=2.1$ pix vs. 2.7 pix) and (2) higher surface brightness in Field 2. The dependence of the completeness on radial distance from the center of the galaxy ($\alpha_{2000}=13^{\rm h}25^{\rm m}26\fs4$, $\delta_{2000}=-43^{\circ}01\arcmin 05\farcs1$) was calculated for the magnitude bin around the 90% completeness level in the $U$- and $V$-bands (Fig. 6) and around the 65% completeness level in the $K_{s}$-band. There is no significant spatial variation of completeness in our data. To assess the accuracy of our photometry, we calculated the difference between the input and recovered magnitudes for each magnitude bin (Figs. 4, 5). Our photometry is reliable down to the incompleteness limit, and blending does not seriously affect our data (see the discussion in Sect. 4.2). For magnitudes fainter than the 50% completeness limit, the measured values are systematically brighter, because of the bias towards brighter fluctuations and blending due to crowding. 
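The artificial-star procedure described above can be sketched as follows (function names are ours; the grid spacing of $2\times R_{PSF}+1$ pixels, the random starting offset and the 50% completeness definition follow the text):

```python
# Sketch of one artificial-star experiment and the completeness estimate.
# Illustrative only: function names and frame dimensions are ours.

def artificial_star_grid(nx, ny, r_psf, x0, y0):
    """Positions for one experiment: a regular grid with spacing
    2*R_PSF + 1 pixels (so the added stars never overlap each other
    and the crowding is unchanged), offset by a starting point
    (x0, y0) that is chosen randomly for each of the 20 experiments."""
    step = 2 * r_psf + 1
    return [(x, y)
            for x in range(x0, nx, step)
            for y in range(y0, ny, step)]

def completeness(n_recovered, n_added):
    """Recovery fraction in one magnitude bin; the magnitude at which
    this drops to 0.5 defines the completeness limit quoted above."""
    return n_recovered / n_added
```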
The colors of the recovered stars with magnitudes fainter than the 50% completeness limits are redder than the input colors in the $UV$ CMDs (Fig. 4), due to the larger incompleteness in the $U$- than in the $V$-band. In the $VK$ CMDs the colors of the recovered stars range from slightly redder than the input color for the very red stars, due to the dominant incompleteness in the $V$-band, to bluer for the faintest and bluest stars, due to the dominant incompleteness in the $K$-band (Fig. 5). We did not correct our data for this systematic shift, because magnitudes fainter than the 50% completeness limit will not be used in further analysis. Contamination by foreground Galactic stars and unresolved background galaxies is important because of the low Galactic latitude of our fields ($b=19\fdg5$) and the very deep photometric limits observed. We used the Besançon group model of stellar population synthesis of the Galaxy, available through the Web (http://www.obs-besancon.fr/www/modele/modele_ang.html; Robin & Creze robin&creze , Robin et al. robin ), to simulate the total number and the optical magnitude and color distributions of Galactic foreground stars in our fields. The simulated catalogue has 1827 stars in the FORS1 field ($6\farcm8\times 6\farcm8$) in the magnitude interval $18<V<30$. In order to get a realistic estimate of the number of stars that would be observed in our fields, a correction for completeness is necessary. Therefore, we added the stars from the simulated catalogue (with magnitudes scaled to correspond to instrumental magnitudes) to our images and re-measured their magnitudes. In this way, realistic photometric uncertainties were applied and the number of stars recovered in the two fields was corrected for completeness. A total of 340 and 350 stars with $U$- and $V$-band photometry satisfying the profile-fitting and photometric-uncertainty selection criteria were measured in Fields 1 and 2, respectively. 
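The completeness-corrected model counts are what enables the statistical subtraction of foreground stars from the CMDs used in Sects. 4 and 5. One common implementation, sketched here with matching tolerances of our own choosing (the paper does not quote them), removes for each model star the closest unmatched observed star within a color-magnitude window:

```python
# Sketch of statistical foreground subtraction from a CMD.
# The tolerances d_color and d_mag are illustrative, not from the paper.

def subtract_foreground(observed, model, d_color=0.2, d_mag=0.5):
    """observed, model: lists of (color, mag) tuples.

    For each star in the completeness-corrected Galaxy-model catalogue,
    delete the closest unmatched observed star lying within the stated
    color and magnitude tolerances; return the cleaned catalogue."""
    remaining = list(observed)
    for mc, mm in model:
        best, best_d2 = None, None
        for i, (oc, om) in enumerate(remaining):
            if abs(oc - mc) <= d_color and abs(om - mm) <= d_mag:
                d2 = (oc - mc) ** 2 + (om - mm) ** 2
                if best_d2 is None or d2 < best_d2:
                    best, best_d2 = i, d2
        if best is not None:
            remaining.pop(best)
    return remaining
```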
Most of the foreground stars have $0<U-V<3$ (Fig. 7). All of the stars in the red part of the Field 2 CMD brighter than $V\sim 22$ are expected to be foreground stars (see Sect. 4.1). In order to adjust for the expected number of Galactic stars, we normalized the models to the observed number of the reddest stars in Field 2. Thus the total number of foreground stars was increased by 31% in both fields. The Galaxy model simulation supplies not only the colors of the simulated stars, but also their metallicities, ages, spectral types and luminosity classes. Using all these data and the Kurucz (kurucz ) model atmospheres (http://cfaku5.harvard.edu/grids.html), we derived the foreground contamination in the $K$-band. After correction for completeness, the expected number of foreground stars in the ISAAC images is 112 and 91 in the $VK$ CMDs of Fields 1 and 2, respectively. All of the foreground stars have $1.0<V-K<4.5$ (Fig. 7, right panel). The measured number of compact background galaxies in the FORS1 images, taken in similar observing conditions, is $\la 400$ in the magnitude range $V=20-25$ mag for the sharpness-parameter selection $-1<sharp<1$ (Jerjen & Rejkuba jr01 ). Our tighter selection criteria ($-0.7<sharp<0.7$) eliminated most of them. In the smaller field of view of ISAAC, the predicted number of background galaxies is $\sim 330$ in the magnitude interval $16<K_{s}<23$ and $\sim 180$ between $16<K_{s}<22$ (Saracco et al. saracco ). Most of the background galaxies are resolved and are rejected by the sharpness and magnitude-uncertainty requirements on our photometry. Only a few compact galaxies might contaminate the sample. 3.4 Comparison with published data Several recent studies of resolved stellar populations in NGC 5128 exist in the literature, all but one made with HST in the F606W ($V$) and F814W ($I$) or F110W ($J$) and F160W ($H$) photometric bands, which are not very sensitive to recent star formation. Our Field 2 is centered on the field of Soria et al. 
(soria ), which was also observed in the near-IR (F110W and F160W filters) by Marleau et al. (marleau00 ), but a direct comparison is not possible due to the different photometric bands used. The only possible direct comparison is for our Field 1 photometry, which partially overlaps with the HST photometry of Mould et al. (mould00 ) and the ground-based photometry of Fasset & Graham (fg00 ). The latter authors observed a wider field than ours in the $U$, $B$ and $V$ Glass filters but, due to the smaller telescope aperture (2.5 m), their photometry is much shallower. The HST photometry is deeper, but covers a much smaller area. The mean difference between the $V$ magnitudes of the Fasset & Graham and Mould et al. photometry is $-0.13\pm 0.07$ mag, in the sense of the HST photometry having a systematically fainter zero point (Fasset & Graham fg00 ). A comparison of the 26 stars in common between our data and that of Fasset & Graham for the brightest blue stars (their Table 3) is presented in Fig. 8. The mean difference for all 26 stars is negligible for the $V$-band photometry, but it amounts to 0.13 mag in the $U$-band. From Fig. 8 a systematic trend with magnitude is apparent in the $V$ and $U$ bands. Excluding star #10 (see Tab. 2), which has an HST $V$ magnitude of 21.04 (thus close to our value; star R4 in Table 1 of Mould et al.), and the extended sources #4, #5 (see Rejkuba rejkuba01 ; filled triangles in Fig. 8) and #18, these trends can be represented by the following equations: $$V_{\rm FORS1}-V_{\rm FG}=0.030\times V_{\rm FORS1}-0.65$$ (4) $$U_{\rm FORS1}-U_{\rm FG}=0.089\times U_{\rm FORS1}-2.05$$ (5) with rms equal to 0.055 for $V$ and 0.096 for $U$. At least part of this difference may be due to the different filters used (Bessell bessel ). Fasset & Graham further neglected the color term in their calibration due to insufficient color coverage of their standards, while we found that the color term is important for our $U$-band calibration. 
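The trends of Eqs. (4) and (5) are straight-line least-squares fits of the magnitude difference against magnitude. A self-contained sketch (with synthetic input values, not the actual 26-star sample):

```python
# Ordinary least-squares fit y = a*x + b, as used for the
# magnitude-difference trends of Eqs. (4)-(5).  Function name is ours.

def fit_linear_trend(x, y):
    """Return slope a and intercept b of the least-squares line."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

With the fitted coefficients of Eq. (4), our magnitudes can be placed on the Fasset & Graham scale via $V_{\rm FG}=V_{\rm FORS1}-(0.030\,V_{\rm FORS1}-0.65)$.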
4 The Color-Magnitude Diagrams 4.1 Optical CMDs Optical CMDs probe young and intermediate age populations. Theoretically, the $U$ and $V$ light is dominated by young main sequence stars (Buzzoni buzzoni ). Figure 9 shows ($U-V$) – $V$ color-magnitude diagrams for stars in both observed halo fields of NGC 5128. Due to better seeing the saturation magnitude of the $V$-band is $\sim 0.3$ mag fainter for Field 2. Most of the stars redder than $(U-V)\sim 0$ belong to our own Galaxy (see Fig. 7). The most important characteristic of the $UV$ CMD of Field 1 is the upper main sequence, visible as the blue plume at $(U-V)\sim-1$ mag. By contrast, there are no such young massive stars in Field 2. The Besançon Galaxy model (see previous section) was used to “clean” the CMDs of foreground star contamination. In Fig. 10 we show $UV$ CMDs after the subtraction of foreground stars. Overplotted are isochrones from Bertelli et al. (bertelli94 ) for $Z=0.004$ and for log(age)=7.0, 7.3, 7.5 for Field 1 and log(age)=7.0, 7.5 and 7.8 for Field 2. The isochrones were shifted to the distance of NGC 5128, assuming a distance modulus of $(m-M)_{V}=27.8$ and reddening of $E(B-V)=0.11$ mag (Schlegel et al. schlegel ). The difference between the two diagrams is striking: the blue main sequence containing stars as young as $\sim 10$ Myr that is present in Field 1 is completely absent from the CMD of Field 2. Well separated from the main sequence in Field 1 is the sequence of blue core-helium burning (BHeB) stars at $(U-V)\sim 0.5$. The width of the gap and the tightness of the main sequence indicate low differential extinction in the field. In Field 2, the stars with $0.5<(U-V)<2.8$ and brighter than $V\la 22.5$ have no corresponding main sequence stars and thus cannot be young stars migrating from blue to red during their core-He burning phase, as is the case for most of the objects in Field 1 with colors $0.5<(U-V)<2.5$. 
These stars in Field 2 are most probably remaining foreground contamination, indicating that foreground contamination may also affect the numbers of blue and red HeB stars in Field 1 (see next paragraph). Only a few stars lie along the isochrone of log(age$)=7.5$ (Fig. 10), while most of them are consistent with much older ages. We conclude that there are no stars younger than $\sim 40$ Myr in Field 2. Note that isochrones for metallicities higher than $Z=0.004$ extend, on the red supergiant edge, to redder values of $U-V$ than the reddest stars in Field 1 and thus do not fit our observations well. The ratio of blue to red supergiants depends strongly on metallicity (Langer & Maeder lm95 , Maeder & Meynet mm01 ) and in principle could be used to constrain the metallicity of the youngest population in NGC 5128. Counting the numbers of blue and red supergiants for stars more massive than $\sim 12$ M${}_{\odot}$, we find their ratio to be $B/R<0.7-0.8$ (although with high uncertainty due to the possible foreground contamination), in good agreement with the observed $B/R$ value in the SMC cluster NGC 330 (see the discussion by Langer & Maeder lm95 ). The metallicity of $Z=0.004$ (corresponding to [Fe/H]$=-0.7$ dex) is appropriate for the SMC. However, since the $B/R$ value depends also on other parameters such as stellar mass, rotation and degree of overshooting (Maeder & Meynet mm01 ), a more detailed comparison with models and the determination of the metallicity of this youngest stellar population in NGC 5128 is warranted (Rejkuba et al., in preparation). The low metallicity implied by the fit of the isochrones to the $UV$ CMDs probably reflects the metallicity of the gas left in the halo of NGC 5128 by the accreted satellite. The atomic H I (Schiminovich et al. schiminovich ) and molecular CO gas (Charmandaris et al. charmandaris ) present in Field 1 are slightly offset from the position of the diffuse stellar shell, the obvious remnant of the accreted galaxy. 
4.2 Optical–Near IR CMDs Optical-near IR CMDs probe old and intermediate-age stellar populations. Theoretically, more than two thirds of the light in $K$-band is dominated by cool stars on the red giant branch (RGB) and asymptotic giant branch (AGB), and by red dwarfs (Buzzoni buzzoni ). The red dwarfs are too faint to be detected at the distance of NGC 5128, and thus our $VK$ CMDs are entirely dominated by RGB and AGB stars (Fig. 11). In Fig. 12 we show $VK$ CMDs of Fields 1 and 2 after the statistical subtraction of foreground stars. Overlaid are fiducial RGB sequences of Galactic globular clusters (from left to right: M15, 47 Tuc, NGC 6553 and NGC 6528; Ferraro et al. ferraro ) spanning a large range of metallicities ($-2.17\leq$[Fe/H]$\leq-0.23$ dex). As before, we used a distance modulus of 27.8 and reddening corresponding to $E(B-V)=0.1$ ($E(V-K)=0.274$ and $A_{K}=0.0347$; Rieke & Lebofsky rl85 ) to adjust the magnitudes and colors of RGB fiducials to those of NGC 5128. Obviously, most of the stars in Fig. 12 belong to the RGB. The right edge of the RGB is quite sharp, with most of the stars being more metal-poor than 47 Tuc ([Fe/H]$=-0.71$ dex) and none appearing to be as metal-rich as NGC 6553 ([Fe/H]$=-0.29$ dex). However, the latter is due to incompleteness in $V$-band photometry. The spread in color of the RGB is larger than the photometric uncertainties (Fig. 13), indicating the presence of spread in metallicity and/or age. The most metal-poor stars have metallicities of $-2$ dex if their ages correspond to those of Galactic globular clusters. The population we probe with $VK$ photometry is more metal-poor than $-0.7$ dex. Our $V$-band images are not deep enough to detect more metal rich giants. Walsh et al. (walsh ) measured the mean oxygen abundance of five planetary nebulae in NGC 5128 to be [O/H]$=-0.5\pm 0.3$ dex, consistent with the presence of the large population of stars with metallicities below solar, as we observe in the $VK$ CMDs. 
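One consistent way to apply the shifts quoted above (a sketch under our own reading: the function name is ours, and we take $(m-M)_{V}=27.8$ as the apparent $V$-band distance modulus) is to add the distance modulus to the fiducial $V$ magnitude and redden the intrinsic color by $E(V-K)$:

```python
# Shift a fiducial RGB point into the observed frame of NGC 5128.
# Constants are the values quoted in the text; the recipe itself is
# our illustrative reading of how the fiducials were adjusted.

DM_V = 27.8    # apparent V-band distance modulus of NGC 5128
E_VK = 0.274   # E(V-K) for E(B-V) = 0.1 (Rieke & Lebofsky scaling)

def shift_fiducial(M_V, VK0):
    """Given a fiducial point (absolute V magnitude, intrinsic V-K
    color), return the observed (V-K, K) pair: add the apparent V
    distance modulus and redden the color."""
    V = M_V + DM_V
    VK = VK0 + E_VK
    K = V - VK
    return VK, K
```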
There are 1830 stars in Field 1 and 1197 in Field 2 which have good $K$-band photometry ($\sigma_{K}<0.5$, $-0.7<sharp<0.7$ and $\chi<2.0$) above the respective $K$-band completeness limits, but which have not been detected in the optical bands. They are uniformly distributed over the whole of the ISAAC images. The large number of very red stars with good photometry in $K_{s}$ and no counterpart in the $V$-band suggests that stars more metal-rich than [Fe/H]$=-0.7$ dex are present, as expected for a luminous giant elliptical galaxy. This is in good agreement with the results of Harris et al. (harris99 ) and Harris & Harris (harris00 ). 5 The Color-Color Diagram Fig. 14 shows the color-color diagram for the objects detected in the $U$-, $V$- and $K$-bands in Field 1 (open squares) and Field 2 (filled triangles) that had ALLFRAME photometry uncertainties $<0.5$ mag and magnitudes brighter than the 50% completeness limits in all bands. The error bars plotted are those given by ALLFRAME. A total of 99 objects in Field 1 and 90 in Field 2 satisfy these criteria. Most of them are located along the stellar sequence that crosses the color-color diagram diagonally. Matching the $UV$ and $VK$ catalogues after the statistical subtraction of the foreground contamination, the stellar sequence disappears and only 54 and 44 objects with good photometry in all three filters are left in Fields 1 and 2, respectively. We use the color-color diagram presented in Fig. 14 to measure the reddening and the contamination by background galaxies. 5.1 Reddening The objects crossing the color-color diagram diagonally from top right to bottom left belong to the Milky Way. The quality of the fit of the stellar sequence with isochrones depends on the age of the isochrone with respect to the stars. Because the foreground Milky Way stars span a range of ages, isochrones of different ages fit different parts of the stellar sequence. 
As an example, and in order to guide the eye, we overplotted the stellar loci of solar-metallicity stars with log(age/yr)$\,=7.0$ (Bertelli et al. bertelli94 ). The tight stellar sequence in the $U-V$ vs. $V-K$ color-color diagram shows that the foreground reddening in the two fields is the same. Its magnitude was measured by fitting the theoretical isochrones and amounts to $E(B-V)=0.15\pm 0.05$ mag. This is in excellent agreement with the Schlegel et al. (schlegel ) extinction maps and with the reddening determined by Fassett & Graham (fg00 ), who used $UBV$ color-color diagrams to measure a value of $E(B-V)=0.14\pm 0.02$ mag. 5.2 Star-galaxy separation Using the color-color plots it is possible to distinguish the foreground dwarf stars from the most compact unresolved galaxies. Galaxies are located in the lower right part of the diagram in Fig. 14. The resolved galaxies are not plotted here, since they were rejected by our selection of shape parameters in FIND and ALLFRAME. Note that the group of objects with $-0.5<(V-K)<2$ consists of the young stars detected in Field 1. 6 Jet-aligned Star Formation in the Halo The chains of bright blue compact objects and optical filaments, first noticed by Blanco et al. (blanco ), were recognized as young blue supergiants and OB associations by Graham & Price (gp81 ) and Graham (graham98 ). Our Field 1, situated $\sim 14$ kpc away from the center of the galaxy and coinciding with the NE radio lobe, contains part of this “string of blue knots and filaments.” In Fig. 15 we show the spatial distribution of the bluest (left) and the reddest (right) stars in both our fields. The red stars, belonging to the foreground Galactic population, are uniformly distributed in both fields. Field 2 has almost no very young blue stars, while the majority of the young population in Field 1 is aligned with the jet direction coming from the nucleus of the galaxy. 
The vertical structure to the right of the jet is located at the leftmost edge of a large H I cloud found by Schiminovich et al. (schiminovich ). In the northern part of this field, a CO molecular cloud was recently discovered by Charmandaris et al. (charmandaris ). Mould et al. (mould00 ), Graham (graham98 ) and Fassett & Graham (fg00 ) report on the presence of collimated star formation in the halo of NGC 5128. They favor the scenario in which a past interaction between the radio jet (Morganti et al. morganti99 ) and the H I cloud complex (Schiminovich et al. schiminovich ) is responsible for the star formation. Finding active star formation far out in the halo of an elliptical galaxy is unusual. Understanding the triggering mechanism and the origin of the star formation, as well as the nature of the underlying ionized gas (Graham graham98 , Morganti et al. morganti91 ; morganti92 ), is important for explaining the filamentary emission and jet-aligned structures observed in other galaxies (e.g., in NGC 1275: Lazareff et al. lazareff , Sabra et al. sabra ; in M87: Gavazzi et al. gavazzi ; and in other more distant radio sources: Rees rees , Best et al. bestetal , De Breuck et al. debreuck ). The alignment of the luminous blue stars with the radio axis of NGC 5128 (Fig. 15) may shed some light on the debated issue of the observed alignment of the radio and optical structures in high-$z$ radio galaxies (Rees rees , Best et al. bestetal ). One of the proposed alignment models is based on the assumption that the jets, tunneling through the galactic medium, affect the gas via shock heating, thereby inducing vigorous star formation (Begelman & Cioffi begelman , Rees rees , Daly daly ). In particular, as Rees (rees ) discussed in detail, the stars form from the cool-phase gas (with $T\leq 10^{4}$ K) in clouds massive or dense enough to be Jeans-unstable. 
This mechanism may work well for massive high-$z$ radio galaxies, in which most of the baryonic mass would still be gaseous. In present-day giant ellipticals, on the other hand, little gas is usually left to support strong star formation. However, probably owing to the previous accretion of a gas-rich galaxy, NGC 5128 has substantial cool, dense clouds of molecular CO and atomic H I gas in its halo (Schiminovich et al. schiminovich , Charmandaris et al. charmandaris ). We note that these clouds could form the clumpy ISM/IGM necessary for the ‘bursting bubble’ model proposed by Morganti et al. (morganti99 ). We have shown clear evidence for blue stars as young as $10^{7}$ yr (Fig. 10). We also see underlying emission, particularly in our $U$-band images, which probably comes from the [OII] 3727 emission line. High-resolution spectra with good signal-to-noise are needed to decide whether this emission is due to gas photoionized by the newly formed stars or by particles coming directly from the nucleus. Other low-redshift galaxies also show filamentary ionized-gas structures in their halos. In the case of M87 (Gavazzi et al. gavazzi ), the filament of ionized gas in the NE part of the halo coincides with the eastern radio lobe, much as in the far closer NGC 5128. It would be interesting to search for young stars associated with that filament too. 7 Intermediate Age Population In contrast with the conclusions of the HST studies (Harris et al. harris99 , Harris & Harris harris00 ) of stellar populations in NGC 5128, we detect not only old Population II stars, but also a significant number of stars with magnitudes brighter than the tip of the RGB ($M_{K}^{\rm Tip}=-0.64(\pm 0.12){\rm[M/H]}-6.93(\pm 0.14)$; Ferraro et al. ferraro ). We took bolometric corrections from Bessell & Wood (bw84 ) and the empirical fit of ($V-K$) vs. 
$T_{\rm eff}$ from Bessell, Castelli & Plez (bcp ) to transform our $K$–($V-K$) CMDs to the theoretical plane (Fig. 16). Overplotted on the H-R diagrams are the Padova tracks from Girardi et al. (g00 ) for first-ascent giant branch stars with masses M=0.6, 0.8, 1.2 and 1.6 M${}_{\odot}$ and metallicities $Z=0.004$ (full lines) and $Z=0.008$ (dashed lines). The sharp cut-off on the right side of the H-R diagrams is due to incompleteness in the $V$-band. In discussing the H-R diagram, we should consider blending. According to theoretical predictions (Renzini renzini ), at the fiducial galactocentric distances of 9, 12.8 and 17.9 kpc, the expected number of blends consisting of two stars at the tip of the RGB is $\sim 1300$, 300 and 55 stars (per $2.2\arcmin\times 2.2\arcmin$, i.e., the ISAAC field of view), respectively. In this calculation, the $B$-band surface brightness measurements from Mathieu, Dejonghe & Hui (MDH ) were used. Obviously, the innermost regions of the galaxy, at distances $<10$ kpc from the nucleus, are too crowded to give accurate photometry for RGB stars and for most of the stars above the tip of the RGB. We attempted to use the NIC3 HST images overlapping with our Field 2 (galactocentric distance $R_{gc}\sim 9$ kpc) data to improve the resolution. However, we found that many faint stars visible on the ISAAC images are completely within the noise of the NIC3 data. The resolution of our ISAAC data in the best seeing is as good as that of the NICMOS images, with the advantage of much better S/N and a more stable PSF. Confirmation of the number of AGB stars in Field 2 can be obtained only through the analysis of the LPVs (Rejkuba et al., in preparation). In Field 1 ($R_{gc}\sim 14$ kpc), our outer shell field, crowding is not as severe. The expected number of two-star blends at the RGB tip in this field ranges from 300 to 50, as the surface brightness drops across the field. 
The total number of such blends is therefore less than $\sim 200$ in the most pessimistic calculation. On the other hand, the number of stars above the tip of the RGB ($M_{\rm bol}\sim-3.7$; Girardi et al. g00 ) is 768. Subtracting the number of possible blends and allowing for a few of the brightest and bluest stars ($\sim 30$ stars with $M_{K}<-9$) to be the remaining foreground contamination, there are still more than 500 stars whose positions in the H-R diagram and CMD are consistent with an intermediate-age AGB population. In the inner halo field, Field 2, the number of stars above the tip of the RGB is 2844. After subtracting the number of expected blends ($\lesssim 1500$), the number of AGB stars is 2.5 times larger than in Field 1. This confirms the presence of gradients in the intermediate-age population within the halo of NGC 5128, as suggested by Marleau et al. (marleau00 ). The intermediate-age AGB population could easily have been missed in the $V$ and $I$-band HST studies, owing to the small field of view and the small ($<0.5$ mag) optical magnitude difference between the tip of the RGB and the tip of the AGB. The AGB stars are up to $\sim 2$ magnitudes brighter in $M_{K}$ and $M_{\rm bol}$ than RGB stars and thus are easily detectable in the near-IR. Thanks to this, Marleau et al. (marleau00 ) could detect some AGB stars in the much smaller NICMOS field. The brightest stars in M32, in the Galactic bulge and in the bulge of M31 have similar brightnesses, reaching $M_{\rm bol}=-5.5$ (Freedman freedman92 , Elston & Silva es92 , Frogel & Whitford fw87 , Rich & Mould rm91 ). Because of this similarity, Davidge & van den Bergh (dvdb01 ) suggested that the tip of the AGB could be used as a standard candle for the determination of distances. In Fig. 17 we present the $K_{s}$-band and bolometric magnitude luminosity functions. The tip of the AGB is observed at a bolometric magnitude of $-5$ in both fields (in spite of the crowding in Field 2), consistent with the adopted distance modulus for NGC 5128. 
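The arithmetic behind these counts is simple enough to spell out. The sketch below, assuming the Ferraro et al. tip calibration quoted in the previous section, the adopted distance modulus of 27.8 and $A_{K}=0.0347$, computes the apparent $K$ magnitude of the RGB tip and repeats the Field 1 bookkeeping (768 stars above the tip, minus $\sim 200$ blends and $\sim 30$ foreground stars, all taken from the text):

```python
# Apparent K magnitude of the RGB tip for a given metallicity,
# using M_K^Tip = -0.64*[M/H] - 6.93 (Ferraro et al. calibration),
# with mu = 27.8 and A_K = 0.0347 as adopted in the text.
MU, A_K = 27.8, 0.0347

def k_tip(m_h):
    m_k_tip = -0.64 * m_h - 6.93  # absolute K magnitude of the tip
    return MU + m_k_tip + A_K     # apparent (extincted) K magnitude

# Field 1 bookkeeping: stars above the tip, minus the pessimistic
# number of two-star blends and the foreground interlopers.
above_tip, blends, foreground = 768, 200, 30
agb_candidates = above_tip - blends - foreground
print(round(k_tip(-0.7), 2), agb_candidates)
```

For [M/H] $=-0.7$ this gives $K\approx 21.35$ and 538 remaining AGB candidates, consistent with the "more than 500" stars quoted above.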
Most of the stars brighter than this magnitude (as well as stars brighter than $M_{K}=-9$; see left panels in Fig. 17) probably belong to our Galaxy. 8 The Shell In our deep $V$-band image of Field 1, the prominent NE shell is easily recognizable as the region with higher surface brightness, and also with a higher density of stellar objects. All the stars that are spatially coincident with the shell on the $V$-band image are shown in Fig. 18. In the two left panels we plot the $UV$ CMD of all stars coincident with the position of the diffuse shell (middle panel), while the corresponding $VK$ CMD is on the right. The much smaller field of view of ISAAC is centered on the shell, so that the shell stars cover most of the image. Note that projected on the shell is the stream of blue, young objects aligned with the jet direction. Excluding the stars that are coincident with the star-forming area (lower panels), the stellar population belonging to the shell is very similar to the one observed in the halo of NGC 5128, i.e., our Field 2 population (Figs. 9 and 11). As in the rest of the galaxy, there is a significant spread in metallicity. The presence of stellar shells or ripples around early-type galaxies is usually explained as evidence of the accretion and later merging of a smaller companion galaxy. Simulations of the stellar component in merging galaxies have shown that the shells are created as a result of phase-wrapping or spatial-wrapping of the tidal debris (Quinn quinn ; Hernquist & Quinn hernquist&quinn89 ). The companion galaxies may contain different stellar populations with respect to the giant elliptical (e.g., most dwarf galaxies have much lower metallicities than giants; Mateo mateo ). Here, for the first time, we have obtained the magnitudes and colors of stars belonging to a shell. They are surprisingly similar to the rest of the halo stars in NGC 5128. 
However, due to the incompleteness of our $V$-band photometry, we cannot probe the most metal-rich part of the halo of NGC 5128. Very high resolution and deeper observations of the shell stars are possible with the Advanced Camera for Surveys at the Hubble Space Telescope. In order to better separate the stars that genuinely belong to shells from the “normal” halo stars in NGC 5128, and to confirm the present result, a more quantitative comparison between the stellar populations in our two fields is necessary (Rejkuba et al., in preparation). 9 Summary Using the VLT with FORS1 and ISAAC, we have resolved stars in the halo and in the diffuse north-eastern shell of the closest giant elliptical galaxy, NGC 5128. With the $U$, $V$ and $K_{s}$-band photometry we probe different stellar populations. The $U$ filter, in particular, is sensitive to the youngest stars. It has revealed a striking difference in the star formation history between the inner halo field (Field 2) and the outer halo field (Field 1), the latter coinciding with the diffuse shell that is presumably the signature of a recent merger. In Field 1, stars as young as 10 Myr are present, while there are no stars younger than $\sim 40$ Myr in Field 2. Thanks to the high quality of our $UV$ photometry, we also detect the gap between the main sequence and the blue core-helium-burning supergiants. The foreground and background contamination has been corrected for using the Besançon Galaxy model and our own color-color diagram. The recent star formation in Field 1 is approximately aligned with the direction of the jet coming from the AGN at the center of the galaxy. However, there is also a chain of young blue stars offset from the jet direction that is aligned with the edge of the molecular clouds. The presence of the atomic and molecular gas (Schiminovich et al. schiminovich , Charmandaris et al. 
charmandaris ) and their association with the diffuse stellar shell is compatible with the dynamical scenario of phase-wrapping following the merger of a smaller gas-rich galaxy with NGC 5128 (Quinn quinn ). In this scenario, the interaction of the material coming from the AGN in the center of the galaxy with the material left over from the merger stimulated the star formation in the halo, $\sim 15$ kpc away from the galactic centre. The metallicity of the newly formed stars gives an upper limit on the abundance of the gas of the accreted satellite of $Z=0.004$, a value typical of the SMC. While the $UV$ CMDs probed the youngest stellar populations, the combination of optical and near-IR filters is most sensitive to old and intermediate-age populations. In our $VK$ CMDs we observe a very wide and extended giant branch. The width indicates the presence of Population II stars with metallicities ranging from $-2$ to $-0.7$ dex. Unfortunately, our $V$-band images are not deep enough to detect more metal-rich stars, but the large number of stars detected on the $K$-band images without a counterpart in the $V$-band suggests an even redder and more metal-rich RGB population, in agreement with previous HST studies (Soria et al. soria , Harris et al. harris99 , Harris & Harris harris00 ). In contrast with the latter two studies, we find a very extended giant branch reaching up to $M_{\rm bol}=-5$, which reveals the presence of an intermediate-age AGB population. Our extensive crowding experiments and the theoretical predictions of the amount of blending (Renzini renzini ) demonstrate that blending due to crowding is not significant enough to mimic this population in the outer field (Field 1). In the inner field, the extent and the properties of the AGB population will be assessed through the study of the long period variables that have been detected in both fields (Rejkuba et al., in preparation). Acknowledgements. 
We thank Manuela Zoccali for help with DAOPHOT and ALLFRAME and Joel Vernet for help with IDL. We thank Francesco Ferraro, who kindly provided the tables of the $VK$ globular cluster fiducial RGB sequences in electronic form. Felipe Barrientos kindly provided the $U-V$ vs. $V-K$ colors for galaxies of different morphological types and redshifts. MR acknowledges the ESO studentship programme. TRB is grateful to the Australian Research Council and P. Universidad Católica for financial support and to the DITAC International Science & Technology Program. This work was supported by NASA Grant GO-07874.01-96A and by the Chilean Fondecyt No. 01990440 and 799884. Finally, we are grateful to the referee Mario Mateo for his helpful and thorough report. References (1998) Aparicio, A., 1998, in: “IAU Symposium 192 - The Stellar Content of Local Group Galaxies”, P. Whitelock & R. Cannon eds., p. 20 (1989) Begelman, M.C. & Cioffi, D.F., 1989, ApJ, 345, L21 (1994) Bertelli, G., Bressan, A., Chiosi, C., Fagotto, F. & Nasi, E., 1994, A&AS, 106, 275 (1995) Bessell, M.S., 1995, PASP, 107, 672 (1984) Bessell, M.S. & Wood, P.R., 1984, PASP, 96, 247 (1998) Bessell, M.S., Castelli, F. & Plez, B., 1998, A&A, 333, 231 (1996) Best, P.N., Longair, M.S. & Röttgering, H.J.A., 1996, MNRAS, 280, L9 (1988) Bica, E., 1988, A&A, 195, 76 (1975) Blanco, V.M., Graham, J.A., Lasker, B.M. & Osmer, P.S., 1975, ApJ, 198, L63 (1993) Bruzual, A.G. & Charlot, S., 1993, ApJ, 405, 538 (1995) Buzzoni, A., 1995, ApJS, 98, 69 (2000) Charmandaris, V., Combes, F. & van der Hulst, J.M., 2000, A&A, 356, L1 (1992) Chiosi, C., Bertelli, G. & Bressan, A., 1992, ARA&A, 30, 235 (1992) Clarke, D.A., Burns, J.O. & Norman, M.L., 1992, ApJ, 395, 444 (1965) Cooper, B.F.C., Price, R.M. & Cole, D.J., 1965, Aust.J.Phys., 18, 589 (1990) Daly, R.A., 1990, ApJ, 355, 416 (2001) Davidge, T.J. & van den Bergh, S., 2001, ApJ, 553, L133 (1999) De Breuck, C., van Breugel, W., Minniti, D. et al., 1999, A&A, 352, L51 (1978) Dufour, R.J. 
& van den Bergh, S., 1978, ApJ, 226, L73 (1992) Elston, R. & Silva, D.R., 1992, AJ, 104, 1360 (2000) Fassett, C.I. & Graham, J.A., 2000, ApJ, 538, 594 (1981) Feigelson, E.D., Schreier, E.J., Delvaille, J.P., Giacconi, R., Grindlay, J.E. & Lightman, A.P., 1981, ApJ, 251, 31 (2000) Ferraro, F.R., Montegriffo, P., Origlia, L. & Fusi Pecci, F., 2000, AJ, 119, 1282 (1992) Freedman, W.L., 1992, AJ, 104, 1349 (1987) Frogel, J.A. & Whitford, A.E., 1987, ApJ, 320, 199 (1998) Gallart, C., 1998, ApJ, 495, L43 (1999) Gallart, C., Freedman, W.L., Mateo, M., et al., 1999, ApJ, 514, 665 (2000) Gavazzi, G., Boselli, A., Vílchez, J.M., Iglesias-Paramo, J. & Bonfanti, C., 2000, A&A, 361, 1 (2000) Girardi, L., Bressan, A., Bertelli, G. & Chiosi, C., 2000, A&AS, 141, 1 (1979) Graham, J.A., 1979, ApJ, 232, 60 (1998) Graham, J.A., 1998, ApJ, 502, 245 (1981) Graham, J.A. & Price, R.M., 1981, ApJ, 247, 813 (2000) Grebel, E.K., 2000, in: “The Evolution of Galaxies. I. Observational Clues”, J.M. Vilchez, G. Stasinska & E. Perez eds., Kluwer, Dordrecht (2000) Harris, G.L.H. & Harris, W.E., 2000, AJ, 120, 2423 (1999) Harris, G.L.H., Harris, W.E. & Poole, G.B., 1999, AJ, 117, 855 (2000) Held, E.V., Saviane, I., Momany, Y. & Carraro, G., 2000, ApJ, 530, L85 (1989) Hernquist, L. & Quinn, P.S., 1989, ApJ, 342, 1 (1995) Hui, X., Holland, C.F., Freeman, K.C. & Dopita, M.A., 1995, ApJ, 449, 592 (1998) Hurley-Keller, D., Mateo, M. & Nemec, J., 1998, AJ, 115, 1840 (1998) Israel, F.P., 1998, A&AR, 8, 237 (2001) Jerjen, H. & Rejkuba, M., 2001, A&A, 371, 487 (1998) Kennicutt, R.C., 1998, ARA&A, 36, 189 (2000) Kraft, R.P., Forman, W., Jones, C., Kenter, A.T., et al., 2000, ApJ, 531, L9 (1998) Kurucz, R.L., 1998, IAU Symposium 189, T.R. Bedding, A.J. Booth & J. Davis eds., Kluwer, Dordrecht, p. 217 (1992) Landolt, A.U., 1992, AJ, 104, 340 (1995) Langer, N. & Maeder, A., 1995, A&A, 295, 685 (1989) Lazareff, B., Castets, A., Kim, D.-W. & Jura, M., 1989, ApJ, 336, L13 (2001) Maeder, A. 
& Meynet, G., 2001, A&A, in press; astro-ph/0105051 (1983) Malin, D.F., Quinn, P.J. & Graham, J.A., 1983, ApJ, 272, L5 (1998) Maraston, C., 1998, MNRAS, 300, 872 (2000) Marleau, F.R., Graham, J.R., Liu, M.C. & Charlot, S., 2000, AJ, 120, 1779 (1998) Mateo, M., 1998, ARA&A, 36, 435 (1996) Mathieu, A., Dejonghe, H. & Hui, X., 1996, A&A, 309, 30 (1997) Minniti, D. & Zijlstra, A.A., 1997, AJ, 117, 1743 (1996) Minniti, D. & Zijlstra, A.A., 1996, ApJ, 467, L13 (1996) Minniti, D., Alonso, M.V., Goudfrooij, P., Jablonka, P. & Meylan, G., 1996, ApJ, 467, 221 (1999) Morganti, R., Killeen, N.E.B., Ekers, R.D. & Oosterloo, T.A., 1999, MNRAS, 307, 750 (1992) Morganti, R., Fosbury, R.A.E., Hook, R.N., Robinson, A. & Tsvetanov, Z., 1992, MNRAS, 256, 1P (1991) Morganti, R., Robinson, A., Fosbury, R.A.E., di Serego Alighieri, S., Tadhunter, C.N. & Malin, D.F., 1991, MNRAS, 249, 91 (2000) Mould, J.R., Ridgewell, A., Gallagher, J.S. III et al., 2000, ApJ, 536, 266 (1998) Persson, S.E., Murphy, D.C., Krzeminski, W., Roth, M. & Rieke, M.J., 1998, AJ, 116, 2475 (1984) Quinn, P.J., 1984, ApJ, 279, 596 (1989) Rees, M.J., 1989, MNRAS, 239, 1P (1985) Rieke, M.J. & Lebofsky, M.J., 1985, ApJ, 288, 618 (2001) Rejkuba, M., 2001, A&A, 369, 812 (2000) Rejkuba, M., Minniti, D., Gregg, M.D., Zijlstra, A.A., Alonso, M.V. & Goudfrooij, P., 2000, AJ, 120, 801 (1998) Renzini, A., 1998, AJ, 115, 2459 (1988) Renzini, A. & Fusi Pecci, F., 1988, ARA&A, 26, 199 (1991) Rich, R.M. & Mould, J.R., 1991, AJ, 101, 1268 (1986) Robin, A. & Creze, M., 1986, A&A, 157, 71 (1996) Robin, A.C., Haywood, M., Creze, M., Ojha, D.K. & Bienayme, O., 1996, A&A, 305, 125 (2000) Sabra, B.M., Shields, J.C. & Filippenko, A.V., 2000, ApJ, 545, 157 (2001) Saracco, P., Giallongo, E., Cristiani, S., et al., 2001, A&A, in press; astro-ph/0104284 (1998) Schlegel, D.J., Finkbeiner, D.P. & Davis, M., 1998, ApJ, 500, 525 (1994) Schiminovich, D., van Gorkom, J.H., van der Hulst, J.M. 
& Kasow, S., 1994, ApJ, 423, L101 (1981) Schreier, E.J., Burns, J.O. & Feigelson, E.D., 1981, ApJ, 251, 523 (1996) Soria, R., Mould, J.R., Watson, A.M., et al., 1996, ApJ, 465, 79 (1995) Stanford, S.A., Eisenhardt, P.R.M. & Dickinson, M., 1995, ApJ, 450, 512 (1994) Stetson, P.B., 1994, PASP, 106, 250 (1987) Stetson, P.B., 1987, PASP, 99, 191 (1998) Tolstoy, E., 1998, in: “Dwarf Galaxies & Cosmology”, T.X. Thuan, C. Balkowski, V. Cayatte & J. Tran Thanh Van eds., astro-ph/9807154 (1990) Tonry, J.L. & Schechter, P.L., 1990, AJ, 100, 1794 (2000) Unger, S.J., Clegg, P.E., Stacey, G.J., et al., 2000, A&A, 355, 885 (2000) van den Bergh, S., 2000, “The Galaxies of the Local Group”, Cambridge University Press (1999) van den Bergh, S., 1999, A&ARv, 9, 273 (2001) Vazdekis, A., Salaris, M., Arimoto, N. & Rose, J.A., 2001, ApJ, 549, in press (1999) Walsh, J.R., Walton, N.A., Jacoby, G.H. & Peletier, R.F., 1999, A&A, 346, 753 (2000) Wild, W. & Eckart, A., 2000, A&A, 359, 483
Affine transformations of the plane and their geometrical properties Irina Busjatskaja    Yury Kochetkov Abstract In this work, aimed at students familiar with the basics of linear algebra, we demonstrate how the polar decomposition allows one to understand the metric properties of non-degenerate linear operators on $\mathbf{R}^{2}$. Introduction Let $\mathbf{R}^{2}$ be the standard two-dimensional Euclidean space and $\varphi$ an invertible, non-isometric linear operator on this space. Let $A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$ be the matrix of $\varphi$ in the standard basis $\mathbf{i}=\begin{pmatrix}1\\ 0\end{pmatrix}$, $\mathbf{j}=\begin{pmatrix}0\\ 1\end{pmatrix}$ of $\mathbf{R}^{2}$. Since $\varphi$ is invertible, $\det(A)=ad-bc\neq 0$. Let $P(\lambda)=\begin{vmatrix}a-\lambda&b\\ c&d-\lambda\end{vmatrix}$ be the characteristic polynomial of the operator $\varphi$. The roots $\lambda_{1},\lambda_{2}$ of the polynomial $P$ are either both real (real spectrum) or complex conjugate (complex spectrum). In the case of a real spectrum we will assume that $\lambda_{2}>\lambda_{1}>0$. In the real case there are two one-dimensional invariant subspaces $L_{1}$ and $L_{2}$ in $\mathbf{R}^{2}$, consisting of the eigenvectors with eigenvalues $\lambda_{1}$ and $\lambda_{2}$, respectively. In the complex case there are no non-trivial invariant subspaces. The matrix $A$ of the operator $\varphi$ satisfies the condition $$(a+d)^{2}-4(ad-bc)<0$$ (1) or, equivalently, $$(a-d)^{2}+4bc<0$$ (2) in the complex case, and the condition $$(a+d)^{2}-4(ad-bc)>0$$ (3) in the real case. Remark.  Since similar matrices have the same trace and determinant, conditions (1), (2) and (3) hold for the matrix of the operator in any basis. In this paper we will determine the range of the angles between the vectors $\mathbf{x}$ and $\varphi(\mathbf{x})$, using the polar decomposition of the given operator $\varphi$. 
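Conditions (1)–(3) amount to the sign of the discriminant of the characteristic polynomial. A small sketch (the function name is ours, chosen for illustration) that classifies the spectrum of a $2\times 2$ matrix and checks the algebraic identity $(a+d)^{2}-4(ad-bc)=(a-d)^{2}+4bc$:

```python
# Classify the spectrum of a 2x2 real matrix ((a, b), (c, d)) by the
# discriminant of its characteristic polynomial t^2 - (a+d)t + (ad-bc).
def spectrum(a, b, c, d):
    disc = (a + d) ** 2 - 4 * (a * d - b * c)
    # identity behind condition (2): the same discriminant, regrouped
    assert disc == (a - d) ** 2 + 4 * b * c
    if disc < 0:
        return "complex"   # complex conjugate eigenvalues
    return "real" if disc > 0 else "repeated"

print(spectrum(0, -1, 1, 0))  # rotation by 90 degrees: complex spectrum
print(spectrum(2, 0, 0, 3))   # diagonal matrix: real spectrum
```

Because the trace and determinant are similarity invariants, the result does not depend on the basis in which the matrix is written, exactly as the Remark above states.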
In § 1 we recall the notion of polar decomposition and its properties. In § 2 we consider an operator with a real spectrum and estimate the angle $\gamma$ between the vectors $\mathbf{x}$ and $\varphi(\mathbf{x})$. Theorem 1.  Let $\varphi$ be an operator with a real spectrum, let $\lambda_{1},\lambda_{2}$ be its positive eigenvalues and let $\beta$ be the acute angle between the eigenvectors. Then $0\leqslant\gamma\leqslant\arccos\dfrac{2\sqrt{\lambda_{1}\lambda_{2}}-(\lambda_{1}+\lambda_{2})\cos\beta}{\lambda_{1}+\lambda_{2}-2\sqrt{\lambda_{1}\lambda_{2}}\cos\beta}$. In § 3 we consider an operator $\varphi$ with a complex spectrum and, using its polar decomposition, obtain the following result: Theorem 2.  Let $\varphi$ be an operator with a complex spectrum. Then $\alpha-\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}\leqslant\gamma\leqslant\alpha+\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$, where $\alpha$ is the rotation angle of the isometric operator and $\sqrt{\lambda}$ and $\sqrt{\mu}$ are the eigenvalues of the positive self-adjoint operator in the polar decomposition of the operator $\varphi$. In § 4 we compute $\bar{\gamma}$, the mean value of the angle $\gamma$, for operators with a complex spectrum. Theorem 3.  For operators with a complex spectrum, $\dfrac{\pi}{2}-\dfrac{2}{\pi}\leqslant\bar{\gamma}\leqslant\dfrac{\pi}{2}+\dfrac{2}{\pi}$. In § 5 we compute the norm of the operator $\varphi$, and in § 6 we study the trajectories of points under the action of an operator with a complex spectrum such that $\det(A)=1$; we prove that each trajectory lies on an ellipse. Other applications of the polar decomposition can be found in [4]. 
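The bound of Theorem 1 can be checked numerically. A minimal sketch (the operator below, with $\lambda_{1}=1$, $\lambda_{2}=4$ and eigenvectors separated by $\beta=60^{\circ}$, is an arbitrary example chosen for illustration): we sample directions $\mathbf{x}$, compute the angle between $\mathbf{x}$ and $\varphi(\mathbf{x})$ and compare the largest sampled value with the bound.

```python
import math

# Numerical check of Theorem 1 for lambda1 = 1, lambda2 = 4 and
# eigenvectors u1, u2 at the acute angle beta = 60 degrees.
l1, l2, beta = 1.0, 4.0, math.pi / 3
u1, u2 = (1.0, 0.0), (math.cos(beta), math.sin(beta))

def angle(x1, x2):
    """Angle between x = x1*u1 + x2*u2 and phi(x) = l1*x1*u1 + l2*x2*u2."""
    vx = (x1 * u1[0] + x2 * u2[0], x1 * u1[1] + x2 * u2[1])
    vy = (l1 * x1 * u1[0] + l2 * x2 * u2[0], l1 * x1 * u1[1] + l2 * x2 * u2[1])
    c = (vx[0] * vy[0] + vx[1] * vy[1]) / (math.hypot(*vx) * math.hypot(*vy))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding

s = 2.0 * math.sqrt(l1 * l2)
bound = math.acos((s - (l1 + l2) * math.cos(beta)) /
                  (l1 + l2 - s * math.cos(beta)))
largest = max(angle(math.cos(t), math.sin(t))
              for t in (k * math.pi / 1800 for k in range(1, 1800)))
print(round(math.degrees(bound), 6), largest <= bound + 1e-9)
```

For these values the bound evaluates to exactly $60^{\circ}$ and no sampled direction exceeds it; the maximum is attained at $x_{1}/x_{2}=-\sqrt{\lambda_{2}/\lambda_{1}}$, in line with the proof given in § 2.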
§ 1 The polar decomposition of an invertible linear operator on the plane First let us note that if $\varphi$ is an operator on $\mathbf{R}^{2}$ with a complex spectrum, then $\det(A)>0$, where $A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$ is the matrix of $\varphi$ in the standard basis. Indeed, in this case the discriminant $(a+d)^{2}-4(ad-bc)$ of the characteristic polynomial of the matrix $A$ is negative, so $4(ad-bc)>(a+d)^{2}\geqslant 0$. In the real case we assume that the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of $\varphi$ are positive; here $\det(A)=\lambda_{1}\lambda_{2}$ is also positive. Now let us consider the self-adjoint operator $\varphi\circ\varphi^{*}$ (defined by $(\varphi\circ\varphi^{*})(\mathbf{x})=\varphi^{*}\bigl{(}\varphi(\mathbf{x})\bigr{)}$) and its (symmetric) matrix $A^{\mathrm{t}}A$. It is well known that the eigenvalues $\lambda$ and $\mu$ of the operator $\varphi\circ\varphi^{*}$ are positive and that the corresponding eigenvectors $\mathbf{e}_{1}$ and $\mathbf{e}_{2}$ can be chosen orthonormal. There exists an operator $\varphi^{\prime}$ such that $\varphi\circ\varphi^{*}=(\varphi^{\prime})^{2}$: $\varphi^{\prime}$ is a positive operator with matrix $B$ in the standard basis and matrix $$B^{\prime}=\begin{pmatrix}\sqrt{\lambda}&0\\ 0&\sqrt{\mu}\end{pmatrix}$$ (4) in the basis $\mathbf{e}_{1}$, $\mathbf{e}_{2}$. It must be noted that $B$ is a symmetric matrix, because $\varphi^{\prime}$ is a self-adjoint operator, and $B^{2}=A^{\mathrm{t}}A$. The product $A=(AB^{-1})B=OB$ is called the polar decomposition of the matrix $A$. Here the matrix $O=AB^{-1}$ is orthogonal. Indeed, $$O^{\mathrm{t}}O=(AB^{-1})^{\mathrm{t}}AB^{-1}=B^{-1}A^{\mathrm{t}}AB^{-1}=B^{-1}B^{2}B^{-1}=E.$$ The orthogonal matrix $O$ defines an operator $\psi$, which is either a rotation, if $\det(O)>0$, or a reflection, if $\det(O)<0$. 
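The construction above can be carried out numerically. The sketch below (helper names are ours) computes $B=\sqrt{A^{\mathrm{t}}A}$ for a $2\times 2$ matrix via the closed form $\sqrt{S}=(S+\sqrt{\det S}\,E)/\sqrt{\operatorname{tr}S+2\sqrt{\det S}}$, valid for a symmetric positive-definite $S$, then recovers the orthogonal factor $O=AB^{-1}$:

```python
import math

def mat_mul(P, Q):
    """2x2 matrix product."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[0][0], P[1][0]], [P[0][1], P[1][1]]]

def polar(A):
    """Polar decomposition A = O*B with B = sqrt(A^t A) and O orthogonal."""
    S = mat_mul(transpose(A), A)  # symmetric, positive definite
    sq_det = math.sqrt(S[0][0] * S[1][1] - S[0][1] * S[1][0])
    denom = math.sqrt(S[0][0] + S[1][1] + 2 * sq_det)
    # closed form for the square root of a 2x2 SPD matrix
    B = [[(S[i][j] + (sq_det if i == j else 0.0)) / denom for j in range(2)]
         for i in range(2)]
    det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    B_inv = [[B[1][1] / det_B, -B[0][1] / det_B],
             [-B[1][0] / det_B, B[0][0] / det_B]]
    return mat_mul(A, B_inv), B   # O, B

A = [[2.0, 1.0], [0.0, 1.0]]
O, B = polar(A)
OtO = mat_mul(transpose(O), O)
print(all(abs(OtO[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(2) for j in range(2)))  # O^t O = E
```

For this example $\det(A)=2>0$, so $\det(O)=+1$ and the orthogonal factor is a rotation, as the text explains next.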
Since $\det(A)>0$ and $\det(B)>0$, we have $\det(O)=\det(A)/\det(B)>0$, so $O$ defines a rotation through an angle $\alpha$; thus $$O=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}.$$ If $\varphi$ is an operator with a complex spectrum, then, as we know, $\det(A)>0$, so in this case too the orthogonal operator $\psi$ in the polar decomposition is a rotation. Let $A^{\prime}$ be the matrix of the operator $\varphi$ in the basis $\mathbf{e}_{1}$, $\mathbf{e}_{2}$. Since $A^{\prime}=OB^{\prime}$, we have $$A^{\prime}=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}\begin{pmatrix}\sqrt{\lambda}&0\\ 0&\sqrt{\mu}\end{pmatrix}.$$ Condition (2) may be rewritten as $$(\sqrt{\lambda}-\sqrt{\mu})^{2}\cos^{2}\alpha-4\sqrt{\lambda\mu}\,\sin^{2}\alpha<0,$$ i.e., $\cos^{2}\alpha\,(\sqrt{\lambda}+\sqrt{\mu})^{2}<4\sqrt{\lambda\mu}$. Thus $$\lvert\cos\alpha\rvert<\frac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$$ (5) in the case of a complex spectrum, and $$\lvert\cos\alpha\rvert>\frac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$$ (6) in the case of a real one. § 2 Angles of rotation of an operator with a real spectrum Let $\lambda_{1}$ and $\lambda_{2}$, $0<\lambda_{1}<\lambda_{2}$, be the eigenvalues of $\varphi$ and $\mathbf{u}_{1}$ and $\mathbf{u}_{2}$, $\lvert\mathbf{u}_{1}\rvert=\lvert\mathbf{u}_{2}\rvert=1$, the corresponding eigenvectors. Let $\beta$ be the acute angle between $\mathbf{u}_{1}$ and $\mathbf{u}_{2}$. The plane is divided into four parts (cones) by the two lines corresponding to the subspaces $L_{1}=\langle\mathbf{u}_{1}\rangle$ and $L_{2}=\langle\mathbf{u}_{2}\rangle$. Each open cone is invariant under the action of $\varphi$, and under this action each vector $\mathbf{x}$ moves toward the line $L_{2}$. Let us compute the maximal value $\gamma_{\text{max}}$ of the angle $\gamma$ between the vectors $\mathbf{x}$ and $\varphi(\mathbf{x})$ and estimate the angle $\gamma$. Theorem 1.  
Let $\varphi$ be an operator with a real spectrum, let $\lambda_{1},\lambda_{2}$ be its positive eigenvalues and let $\beta$ be the acute angle between the eigenvectors. Then $0\leqslant\gamma\leqslant\arccos\dfrac{2\sqrt{\lambda_{1}\lambda_{2}}-(\lambda_{1}+\lambda_{2})\cos\beta}{\lambda_{1}+\lambda_{2}-2\sqrt{\lambda_{1}\lambda_{2}}\cos\beta}$. Proof. We shall work in the basis $\{\mathbf{u}_{1},\mathbf{u}_{2}\}$. Let $x_{1}$ and $x_{2}$ be the coordinates of the vector $\mathbf{x}$ in this basis; then $\varphi(\mathbf{x})=\lambda_{1}x_{1}\mathbf{u}_{1}+\lambda_{2}x_{2}\mathbf{u}_{2}$. Since $\cos\gamma=\dfrac{(\mathbf{x},\varphi(\mathbf{x}))}{\lvert\mathbf{x}\rvert\,\lvert\varphi(\mathbf{x})\rvert}$ and the Gram matrix of the vectors $\mathbf{u}_{1},\mathbf{u}_{2}$ is $G=\begin{pmatrix}1&\cos\beta\\ \cos\beta&1\end{pmatrix}$, we have $\cos\gamma=\dfrac{(x_{1}\;x_{2})\,G\begin{pmatrix}\lambda_{1}x_{1}\\ \lambda_{2}x_{2}\end{pmatrix}}{\sqrt{(x_{1}\;x_{2})\,G\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}}\,\sqrt{(\lambda_{1}x_{1}\;\lambda_{2}x_{2})\,G\begin{pmatrix}\lambda_{1}x_{1}\\ \lambda_{2}x_{2}\end{pmatrix}}}$. 
$$\cos\gamma=\dfrac{\lambda_{1}x_{1}^{2}+(\lambda_{1}+\lambda_{2})x_{1}x_{2}\cos\beta+\lambda_{2}x_{2}^{2}}{\sqrt{x_{1}^{2}+2x_{1}x_{2}\cos\beta+x_{2}^{2}}\,\sqrt{\lambda_{1}^{2}x_{1}^{2}+2\lambda_{1}\lambda_{2}x_{1}x_{2}\cos\beta+\lambda_{2}^{2}x_{2}^{2}}}$$ (7) If we denote $\dfrac{x_{1}}{x_{2}}$ by $t$ and $\dfrac{\lambda_{2}}{\lambda_{1}}$ by $\delta$, then formula (7) may be rewritten as $$f(t)=\cos\gamma=\frac{t^{2}+(1+\delta)t\cos\beta+\delta}{\sqrt{t^{2}+2t\cos\beta+1}\,\sqrt{t^{2}+2\delta t\cos\beta+\delta^{2}}}.$$ (8) To find the extreme values of the function $f(t)$ we calculate its derivative $$f^{\prime}(t)=\frac{(\delta-1)^{2}(1-\cos^{2}\beta)(t^{3}-\delta t)}{\sqrt{(t^{2}+2t\cos\beta+1)^{3}}\,\sqrt{(t^{2}+2\delta t\cos\beta+\delta^{2})^{3}}}.$$ (9) From (9) it follows that the equation $f^{\prime}(t)=0$ has three roots: $t=0$, $t=\sqrt{\delta}$ and $t=-\sqrt{\delta}$. Hence the minimum of $f$ on $(0,+\infty)$ is $f(\sqrt{\delta})=\dfrac{2\sqrt{\delta}+(1+\delta)\cos\beta}{1+2\sqrt{\delta}\cos\beta+\delta}$, and the minimum of $f$ on $(-\infty,0)$ is $f(-\sqrt{\delta})=\dfrac{2\sqrt{\delta}-(1+\delta)\cos\beta}{1-2\sqrt{\delta}\cos\beta+\delta}$. Notice that if $\cos\beta\neq 0$ then $f(-\sqrt{\delta})<f(\sqrt{\delta})$, and hence $$\gamma_{\text{max}}=\arccos\frac{2\sqrt{\lambda_{1}\lambda_{2}}-(\lambda_{1}+\lambda_{2})\cos\beta}{\lambda_{1}+\lambda_{2}-2\sqrt{\lambda_{1}\lambda_{2}}\cos\beta}.$$ (10) Thus $0\leqslant\gamma\leqslant\arccos\dfrac{2\sqrt{\lambda_{1}\lambda_{2}}-(\lambda_{1}+\lambda_{2})\cos\beta}{\lambda_{1}+\lambda_{2}-2\sqrt{\lambda_{1}\lambda_{2}}\cos\beta}$. Remark 1.  If the basis $\mathbf{u}_{1},\mathbf{u}_{2}$ is orthonormal, so that $\cos\beta=0$, then from (10) it follows that $$\gamma_{\text{max}}=\arccos\frac{2\sqrt{\lambda_{1}\lambda_{2}}}{\lambda_{1}+\lambda_{2}}.$$ (11) Remark 2.  
If $\lambda_{1}<0$, $\lambda_{2}<0$, then the operator $\varphi$ is the composition of an operator with the real positive spectrum $\lvert\lambda_{1}\rvert,\lvert\lambda_{2}\rvert$ and the operator of the central symmetry, and the angle $\gamma$ can be estimated as $\pi-\arccos\dfrac{2\sqrt{\lvert\lambda_{1}\lambda_{2}\rvert}-(\lvert\lambda_{1}\rvert+\lvert\lambda_{2}\rvert)\cos\beta}{\lvert\lambda_{1}\rvert+\lvert\lambda_{2}\rvert-2\sqrt{\lvert\lambda_{1}\lambda_{2}\rvert}\cos\beta}\leqslant\gamma\leqslant\pi$. Remark 3.  If $\lambda_{1}$ and $\lambda_{2}$ are of different signs, then the vectors $\mathbf{x}$ and $\varphi(\mathbf{x})$ lie in adjacent cones and $0\leqslant\gamma\leqslant\pi$. § 3 Angles of rotations of the operator with the complex spectrum Let $\varphi$ be an operator with the complex spectrum and let $$A^{\prime}=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}\begin{pmatrix}\sqrt{\lambda}&0\\ 0&\sqrt{\mu}\end{pmatrix},\quad\alpha\neq k\pi,$$ (12) be its matrix in the base $\mathbf{e}_{1},\mathbf{e}_{2}$ (see § 1). Here $\alpha$ is the rotation angle of the orthogonal operator $\psi$, and $\lambda$ and $\mu$ are the eigenvalues of the positive operator $\varphi\circ\varphi^{*}=(\varphi^{\prime})^{2}$. As $\varphi$ has no real eigenvectors, it rotates each vector $\mathbf{x}\neq 0$ in one direction through some angle $\gamma=\gamma(\mathbf{x})\neq 0$. If $\gamma^{\prime}(\mathbf{x})$ is the angle between the vectors $\mathbf{x}$ and $\varphi^{\prime}(\mathbf{x})$, then $\gamma(\mathbf{x})=\alpha+\gamma^{\prime}(\mathbf{x})$. It must be noted that the angles $\gamma^{\prime}$ may be positive or negative, i.e. the rotation from $\mathbf{x}$ to $\varphi^{\prime}(\mathbf{x})$ may be counterclockwise or clockwise. Theorem 2.  Let $\varphi$ be an operator with the complex spectrum; then $$\alpha-\arccos\frac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}\leqslant\gamma\leqslant\alpha+\arccos\frac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}.$$ Proof.
As $\varphi^{\prime}$ is an operator with a real positive spectrum and the base $\mathbf{e}_{1},\mathbf{e}_{2}$ is orthonormal, we can use formula (11): $$\gamma_{\text{max}}^{\prime}=\arccos\frac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}.$$ (13) Now from (5) we see that $\lvert\gamma_{\text{max}}^{\prime}\rvert<\lvert\alpha\rvert$, thus the maximal rotation angle of $\varphi$ is the sum $\gamma_{\text{max}}=\alpha+\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$ and the minimal rotation angle of $\varphi$ is the difference $\gamma_{\text{min}}=\alpha-\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$. So $\alpha-\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}\leqslant\gamma\leqslant\alpha+\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$. Remark.  For an operator $\varphi$ with a real positive spectrum, its polar decomposition can also be used to calculate the angles $\gamma_{\text{min}}$ and $\gamma_{\text{max}}$, but the condition (6) shows that $\lvert\gamma_{\text{max}}^{\prime}\rvert>\lvert\alpha\rvert$, hence $\gamma_{\text{min}}$ and $\gamma_{\text{max}}$ have different signs. It means that such a $\varphi$ rotates vectors in different directions, while an operator $\varphi$ with the complex spectrum rotates all vectors in one direction, the direction of the angle $\alpha$. § 4 The mean value of the angle of rotation (complex case) Let $\varphi$ be an operator with the complex spectrum and $A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$ be its matrix in the standard base. Then, as we know from (1), $a^{2}-2ad+4bc+d^{2}<0$. This condition defines a domain $G$ in the four-dimensional real space with coordinates $a,b,c,d$.
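As a quick numerical illustration (not part of the original argument), the bound of Theorem 2 can be checked with a short script. The matrix `A` below is an arbitrarily chosen example with a complex spectrum; the polar decomposition $A=UP$ with $P=(A^{\mathrm{t}}A)^{1/2}$ positive and $U$ orthogonal plays the role of $\psi\circ\varphi^{\prime}$.

```python
import math
import numpy as np

# An example operator with a complex spectrum: tr(A)^2 - 4 det(A) < 0.
A = np.array([[0.0, -2.0],
              [1.0,  0.0]])
assert (A[0, 0] - A[1, 1])**2 + 4 * A[0, 1] * A[1, 0] < 0  # complex eigenvalues

# Polar decomposition A = U @ P, P = (A^T A)^{1/2} positive, U a rotation.
lam, V = np.linalg.eigh(A.T @ A)           # eigenvalues lambda <= mu of A^T A
P = V @ np.diag(np.sqrt(lam)) @ V.T
U = A @ np.linalg.inv(P)
alpha = math.atan2(U[1, 0], U[0, 0])       # rotation angle of the orthogonal factor

s_lam, s_mu = np.sqrt(lam)                 # sqrt(lambda), sqrt(mu)
# arccos( 2 (lambda mu)^{1/4} / (sqrt(lambda) + sqrt(mu)) ) from Theorem 2
bound = math.acos(2 * math.sqrt(s_lam * s_mu) / (s_lam + s_mu))

# Sample unit vectors and measure the signed rotation angle from x to Ax.
angles = []
for theta in np.linspace(0.0, 2.0 * math.pi, 721):
    x = np.array([math.cos(theta), math.sin(theta)])
    y = A @ x
    angles.append(math.atan2(x[0] * y[1] - x[1] * y[0], x @ y))

assert all(alpha - bound - 1e-9 <= g <= alpha + bound + 1e-9 for g in angles)
```

For this `A` one finds $\alpha=\pi/2$, $\lambda=1$, $\mu=4$, so every sampled rotation angle lies in $[\pi/2-\arccos(2\sqrt{2}/3),\;\pi/2+\arccos(2\sqrt{2}/3)]$, as the theorem predicts.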
In § 3 it was proved that $\alpha-\gamma_{\text{max}}^{\prime}<\gamma<\alpha+\gamma_{\text{max}}^{\prime}$, where $\gamma$ is the angle between the vectors $\mathbf{x}$ and $\varphi(\mathbf{x})$, $\alpha$ is the rotation angle of the orthogonal operator $\psi$, and $\gamma_{\text{max}}^{\prime}=\arccos\dfrac{2\sqrt[4]{\lambda\mu}}{\sqrt{\lambda}+\sqrt{\mu}}$ is the maximal rotation angle of the positive operator $\varphi^{\prime}$ ($\varphi\circ\varphi^{*}=(\varphi^{\prime})^{2}$). As $\lambda$ and $\mu$ are the eigenvalues of the matrix $A^{\mathrm{t}}A$, we have $\lambda+\mu=a^{2}+b^{2}+c^{2}+d^{2}$, $\lambda\mu=(ad-bc)^{2}$. Set $a=\dfrac{1}{\sqrt{2}}(x+t)$, $b=\dfrac{1}{\sqrt{2}}(y+z)$, $c=\dfrac{1}{\sqrt{2}}(y-z)$, $d=\dfrac{1}{\sqrt{2}}(t-x)$; then $\sqrt{\lambda}\,\sqrt{\mu}=\dfrac{1}{2}(z^{2}+t^{2}-x^{2}-y^{2})$, $\sqrt{\lambda}+\sqrt{\mu}=\sqrt{2t^{2}+2z^{2}}$, and the inequality $x^{2}+y^{2}-z^{2}<0$ describes the domain $G$ in the new coordinates. We also have $\gamma_{\text{max}}^{\prime}=\gamma_{\text{max}}^{\prime}(x,y,z,t)=\arccos\sqrt{1-\dfrac{x^{2}+y^{2}}{t^{2}+z^{2}}}=\arcsin\sqrt{\dfrac{x^{2}+y^{2}}{t^{2}+z^{2}}}$. Let $\bar{\gamma}$ be the mean value of the angle of rotation of the operators with the complex spectrum. Theorem 3.  For the operators with the complex spectrum, $\dfrac{\pi}{2}-\dfrac{2}{\pi}\leqslant\bar{\gamma}\leqslant\dfrac{\pi}{2}+\dfrac{2}{\pi}$. Proof. As $\alpha-\gamma_{\text{max}}^{\prime}<\gamma<\alpha+\gamma_{\text{max}}^{\prime}$, in order to estimate $\bar{\gamma}$ we have to calculate $\bar{\gamma}_{\text{max}}^{\prime}$, the mean value of the maximal rotation angle of the operators $\varphi^{\prime}$: $$\bar{\gamma}_{\text{max}}^{\prime}=\frac{\int_{G}\gamma_{\text{max}}^{\prime}(x,y,z,t)\,dv}{\int_{G}dv}.$$ (14) Let us consider the domain $G^{\prime}=G\cap C$, where $C=C^{1}\times C^{2}$, $C^{1}=\{(x,y);\,x^{2}+y^{2}<1\}$ and $C^{2}=\{(z,t);\,z^{2}+t^{2}<1\}$.
As $\gamma_{\text{max}}^{\prime}(x,y,z,t)$ is a homogeneous function, instead of calculating $\bar{\gamma}_{\text{max}}^{\prime}$ by formula (14) we can use the following one: $$\bar{\gamma}_{\text{max}}^{\prime}=\frac{\int_{G^{\prime}}\gamma_{\text{max}}^{\prime}(x,y,z,t)\,dv}{\int_{G^{\prime}}dv}.$$ (15) In order to simplify the integration we apply the polar coordinates $x=\rho\cos\eta$, $y=\rho\sin\eta$, $t=r\cos\xi$, $z=r\sin\xi$, so that $G^{\prime}=\{(\eta,\xi,r,\rho);\,0\leqslant\eta\leqslant 2\pi,\,0\leqslant\xi\leqslant 2\pi,\,0\leqslant r\leqslant 1,\,0\leqslant\rho\leqslant\lvert r\sin\xi\rvert\}$. Now we can compute both integrals in formula (15): $$\int_{G^{\prime}}dv=4\int_{0}^{2\pi}d\eta\int_{0}^{1}r\,dr\int_{0}^{\tfrac{\pi}{2}}d\xi\int_{0}^{r\sin\xi}\rho\,d\rho=\frac{\pi^{2}}{4},$$ (16) $$\int_{G^{\prime}}\gamma_{\text{max}}^{\prime}\,dv=4\int_{0}^{2\pi}d\eta\int_{0}^{1}r\,dr\int_{0}^{\tfrac{\pi}{2}}d\xi\int_{0}^{r\sin\xi}\rho\arcsin\frac{\rho}{r}\,d\rho=\frac{\pi}{2}.$$ (17) So from (15)–(17) it follows that $\bar{\gamma}_{\text{max}}^{\prime}=\dfrac{2}{\pi}$. In order to get a sensible result for $\bar{\gamma}$ we have to restrict the values of the angles $\alpha$, the angles of rotation of the operators $\psi$. Let $\alpha\in[0,\pi]$; then $\alpha$ is uniquely determined by $\cos\alpha$. Since similar matrices have the same trace, $\cos\alpha\,(\sqrt{\lambda}+\sqrt{\mu})=a+d$ or, after the change of variables, $\cos\alpha=\dfrac{t}{\sqrt{t^{2}+z^{2}}}$. As $\sin\alpha>0$ for $\alpha\in[0,\pi]$, we have $\sin\alpha=\dfrac{z}{\sqrt{t^{2}+z^{2}}}$, $\alpha=\arccos\dfrac{t}{\sqrt{t^{2}+z^{2}}}$, and, in order to compute $\bar{\alpha}$, the mean value of the angles $\alpha$, we have to consider only the part of the domain $G$ corresponding to $z>0$. As $\alpha=\alpha(x,y,z,t)$ is a homogeneous function, we can use the domain $G^{\prime}$ with the condition $z>0$.
$$\bar{\alpha}=\frac{8}{\pi^{2}}\int_{0}^{2\pi}d\eta\int_{0}^{1}r\,dr\int_{0}^{\pi}\xi\,d\xi\int_{0}^{r\sin\xi}\rho\,d\rho=\frac{\pi}{2}.$$ Thus we have the final result $\dfrac{\pi}{2}-\dfrac{2}{\pi}\leqslant\bar{\gamma}\leqslant\dfrac{\pi}{2}+\dfrac{2}{\pi}$. § 5 The norm of the operator Now let $\varphi$ be an operator with the complex spectrum or an operator with the positive real spectrum, and let $A^{\prime}=\begin{pmatrix}\cos\alpha&-\sin\alpha\\ \sin\alpha&\cos\alpha\end{pmatrix}\begin{pmatrix}\sqrt{\lambda}&0\\ 0&\sqrt{\mu}\end{pmatrix},\quad\alpha\neq k\pi,$ be its matrix in the base $\mathbf{e}_{1},\mathbf{e}_{2}$ (see § 1). The operator $\varphi$ not only rotates the vector $\mathbf{x}$ but also changes its length. If $\mathbf{x}=x_{1}\mathbf{e}_{1}+x_{2}\mathbf{e}_{2}$, then $\varphi^{\prime}(\mathbf{x})=\sqrt{\lambda}\,x_{1}\mathbf{e}_{1}+\sqrt{\mu}\,x_{2}\mathbf{e}_{2}$. As the orthogonal operator $\psi$ does not change the length of vectors, $\lvert\varphi(\mathbf{x})\rvert=\lvert\varphi^{\prime}(\mathbf{x})\rvert$. So $\dfrac{\lvert\varphi(\mathbf{x})\rvert^{2}}{\lvert\mathbf{x}\rvert^{2}}=\dfrac{\lambda(x_{1})^{2}+\mu(x_{2})^{2}}{(x_{1})^{2}+(x_{2})^{2}}=\mu+\dfrac{(\lambda-\mu)(x_{1})^{2}}{(x_{1})^{2}+(x_{2})^{2}}$. It is easy to estimate all possible values of this ratio and to calculate the norm of the operator $\varphi$: $$\min(\lambda,\mu)\leqslant\frac{\lvert\varphi(\mathbf{x})\rvert^{2}}{\lvert\mathbf{x}\rvert^{2}}\leqslant\max(\lambda,\mu).$$ (18) If $x_{1}=0$, then $\dfrac{\lvert\varphi(\mathbf{x})\rvert}{\lvert\mathbf{x}\rvert}=\sqrt{\mu}$, while if $x_{2}=0$, then $\dfrac{\lvert\varphi(\mathbf{x})\rvert}{\lvert\mathbf{x}\rvert}=\sqrt{\lambda}$; hence from (18) it follows that the norm of the operator $\varphi$ is $\|\varphi\|=\max(\sqrt{\lambda},\sqrt{\mu})$. Notice that if $\dfrac{(x_{2})^{2}}{(x_{1})^{2}}=\dfrac{\lambda-1}{1-\mu}$, then the operator $\varphi$ preserves the length of the vector $\mathbf{x}$.
We can find a vector $\mathbf{x}$ satisfying this condition if and only if $\min(\lambda,\mu)\leqslant 1\leqslant\max(\lambda,\mu)$. § 6 Some curves related to the operator with the complex spectrum Let $\varphi$ be an operator with the complex spectrum such that $\det(A)=1$. Then its eigenvalues are the complex conjugate numbers $\exp(i\theta)$, $\exp(-i\theta)$ of modulus $1$. Let $\Phi$ be the complexification of $\varphi$ and let $\mathbf{z}=\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix}=\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}+i\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\mathbf{u}+i\mathbf{v}$ be the eigenvector of $\Phi$ corresponding to the eigenvalue $\exp(i\theta)$; then $\Phi(\mathbf{z})=A\cdot\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix}=A\cdot\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}+iA\cdot\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=\varphi(\mathbf{u})+i\varphi(\mathbf{v})$, where $\mathbf{u}\in\mathbf{R}^{2}$, $\mathbf{v}\in\mathbf{R}^{2}$. As $\Phi(\mathbf{z})=\exp(i\theta)\cdot\mathbf{z}=(\cos\theta+i\sin\theta)\cdot(\mathbf{u}+i\mathbf{v})=(\cos\theta\cdot\mathbf{u}-\sin\theta\cdot\mathbf{v})+i(\sin\theta\cdot\mathbf{u}+\cos\theta\cdot\mathbf{v})$, we obtain $$\varphi(\mathbf{u})=\cos\theta\cdot\mathbf{u}-\sin\theta\cdot\mathbf{v},$$ (19) $$\varphi(\mathbf{v})=\sin\theta\cdot\mathbf{u}+\cos\theta\cdot\mathbf{v}.$$ (20) Now consider the conjugate complex vector $\bar{\mathbf{z}}=\mathbf{u}-i\mathbf{v}$. Since $\Phi(\bar{\mathbf{z}})=A\cdot\bar{\mathbf{z}}=\overline{A\cdot\mathbf{z}}=\overline{\exp(i\theta)}\cdot\bar{\mathbf{z}}=\exp(-i\theta)\cdot\bar{\mathbf{z}}$, the vector $\bar{\mathbf{z}}$ is the eigenvector of $\Phi$ corresponding to the eigenvalue $\exp(-i\theta)$.
The vectors $\mathbf{z}$ and $\bar{\mathbf{z}}$ are linearly independent, hence the vectors $\mathbf{u}=\dfrac{\mathbf{z}+\bar{\mathbf{z}}}{2}$ and $\mathbf{v}=\dfrac{\mathbf{z}-\bar{\mathbf{z}}}{2i}$ are linearly independent too and can be taken as a base of the space $\mathbf{R}^{2}$. It follows from (19) and (20) that the matrix of the operator $\varphi$ in the base $\mathbf{u},\mathbf{v}$ is $$A^{\prime\prime}=\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}.$$ (21) This matrix describes a rotation of the plane through the angle $\theta$ if and only if the base $\mathbf{u},\mathbf{v}$ is orthonormal, but for an operator with the complex spectrum this is not true in general. Let us consider two coordinate systems on the plane. One of them has the $x$-axis and the $y$-axis corresponding to the base $\mathbf{i},\mathbf{j}$, and the other has the $x^{\prime}$-axis and the $y^{\prime}$-axis corresponding to the base $\mathbf{u},\mathbf{v}$. Let $x^{\prime}$ and $y^{\prime}$ be the coordinates of a vector $\mathbf{x}$ in the base $\mathbf{u},\mathbf{v}$; then, according to (21), the coordinates of the vector $\varphi(\mathbf{x})$ in this base, $\cos\theta\cdot x^{\prime}-\sin\theta\cdot y^{\prime}$ and $\sin\theta\cdot x^{\prime}+\cos\theta\cdot y^{\prime}$, satisfy the condition $(\cos\theta\cdot x^{\prime}-\sin\theta\cdot y^{\prime})^{2}+(\sin\theta\cdot x^{\prime}+\cos\theta\cdot y^{\prime})^{2}=(x^{\prime})^{2}+(y^{\prime})^{2}$.
It means that under the action of $\varphi$ each point $M_{0}(x_{0}^{\prime},y_{0}^{\prime})$ remains on the curve $\Gamma$: $$\Gamma\colon(x^{\prime})^{2}+(y^{\prime})^{2}=(x_{0}^{\prime})^{2}+(y_{0}^{\prime})^{2}.$$ (22) Let $P=\begin{pmatrix}p&q\\ s&t\end{pmatrix}$ be the change-of-basis matrix (from the base $\mathbf{u},\mathbf{v}$ to the base $\mathbf{i},\mathbf{j}$); then $$\begin{pmatrix}x\\ y\end{pmatrix}=P^{-1}\begin{pmatrix}x^{\prime}\\ y^{\prime}\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}x^{\prime}\\ y^{\prime}\end{pmatrix}=P\begin{pmatrix}x\\ y\end{pmatrix}.$$ (23) Now we are able to obtain the equation of the curve $\Gamma$ in the Cartesian coordinate system: $\Gamma\colon(x^{\prime})^{2}+(y^{\prime})^{2}=(px+qy)^{2}+(sx+ty)^{2}=(p^{2}+s^{2})x^{2}+2(pq+st)xy+(q^{2}+t^{2})y^{2}=r^{2}$, where $r^{2}=(x_{0}^{\prime})^{2}+(y_{0}^{\prime})^{2}$. The curve $\Gamma$ is a second-order curve and its quadratic form is $f(x,y)=(p^{2}+s^{2})x^{2}+2(pq+st)xy+(q^{2}+t^{2})y^{2}$, with matrix $\mathbf{A}_{f}=\begin{pmatrix}p^{2}+s^{2}&pq+st\\ pq+st&q^{2}+t^{2}\end{pmatrix}$. The three fundamental invariants $\Delta$, $\delta$ and $S$ determine a second-order curve up to a motion of the Euclidean plane: $\Delta=\begin{vmatrix}p^{2}+s^{2}&pq+st&0\\ pq+st&q^{2}+t^{2}&0\\ 0&0&-r^{2}\end{vmatrix}$, $\delta=\begin{vmatrix}p^{2}+s^{2}&pq+st\\ pq+st&q^{2}+t^{2}\end{vmatrix}$, $S=p^{2}+s^{2}+q^{2}+t^{2}$. As $S>0$, $\delta=(p^{2}+s^{2})(q^{2}+t^{2})-(pq+st)^{2}=(sq-pt)^{2}=(\det P)^{2}>0$ and $\Delta=-\delta r^{2}<0$, the curve $\Gamma$ is an ellipse. The directions of the major and minor axes coincide with the directions of the eigenvectors corresponding to the eigenvalues $\mu^{\prime}>\lambda^{\prime}>0$ of the matrix $\mathbf{A}_{f}$. So the canonical equation of the ellipse is $\dfrac{x^{2}}{a^{2}}+\dfrac{y^{2}}{b^{2}}=1$, where $a=\dfrac{r}{\sqrt{\lambda^{\prime}}}$, $b=\dfrac{r}{\sqrt{\mu^{\prime}}}$, and irrespective of $r$ all these ellipses are similar.
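The invariance of the quadratic form $(x^{\prime})^{2}+(y^{\prime})^{2}$ under $\varphi$ can be checked numerically; the sketch below uses an arbitrarily chosen matrix with $\det A=1$ and a complex spectrum (an illustration, not part of the original argument).

```python
import numpy as np

# An operator with eigenvalues e^{+-i*theta}: det A = 1, complex spectrum.
theta = 0.7
A = np.array([[np.cos(theta), -2.0 * np.sin(theta)],
              [0.5 * np.sin(theta), np.cos(theta)]])
assert abs(np.linalg.det(A) - 1.0) < 1e-12

# Eigenvector z = u + i v for the eigenvalue with positive imaginary part.
w, W = np.linalg.eig(A)
z = W[:, np.argmax(w.imag)]
B = np.column_stack([z.real, z.imag])  # columns u, v: the new base
Pm = np.linalg.inv(B)                  # change of coordinates (x', y') = Pm (x, y)

def f(p):
    """The quadratic form (x')^2 + (y')^2 of the curve Gamma in Cartesian coordinates."""
    q = Pm @ p
    return float(q @ q)

# The orbit of a point under phi stays on one level set of f (one ellipse).
p = np.array([1.0, 2.0])
values = []
for _ in range(50):
    values.append(f(p))
    p = A @ p

assert max(values) - min(values) < 1e-9 * max(values)
```

Up to floating-point error, every point of the orbit gives the same value of $f$, i.e. the orbit lies on a single ellipse, in agreement with the derivation above.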
Thus under the action of $\varphi$ each point of the plane remains on an ellipse. The orbit of a point is either a finite set of points of the ellipse, if the angle $\theta$ is commensurable with $\pi$, or an infinite dense subset of the ellipse otherwise. email: ibusjatskaja@hse.ru, yukochetkov@hse.ru
Relationship between Hawking radiation from black holes and spontaneous excitation of atoms Hongwei Yu and Wenting Zhou Department of Physics and Institute of Physics, Hunan Normal University, Changsha, Hunan 410081, China Abstract Using the formalism that separates the contributions of vacuum fluctuations and radiation reaction to the rate of change of the mean atomic energy, we show that a two-level atom in interaction with a quantum massless scalar field in both the Hartle-Hawking and the Unruh vacuum in a 1+1 dimensional black hole background spontaneously excites as if there were thermal radiation at the Hawking temperature emanating from the black hole. Our calculation therefore ties the existence of Hawking radiation to the spontaneous excitation of a two-level atom placed in vacuum in the exterior of a black hole, and shows a pleasing consistency between two different physical phenomena, Hawking radiation and the spontaneous excitation of atoms, which are quite prominent in their own right. Introduction. Hawking radiation from black holes, as one of the most striking effects that arise from the combination of quantum theory and general relativity, has attracted widespread interest in the physics community. Currently, several derivations of Hawking radiation have been proposed, including Hawking’s original one, which calculates the Bogoliubov coefficients between the quantum scalar field modes of the in vacuum state and those of the out vacuum Hawking:rv ; Hawking:sw , an elegant one based upon Euclidean quantum gravity Gibbons:1976ue , which has been interpreted as a calculation of tunneling through a classically forbidden trajectory Parikh:1999mf , the approach based upon string theory SC ; Peet , and a recent proposal which ties its existence to the cancellation of gravitational anomalies at the horizon Robinson:2005pd .
Here we discuss yet another approach, which calculates the spontaneous excitation rate of a two-level atom interacting with massless quantum scalar fields in vacuum states in a black hole background. Our investigation ties the existence of Hawking radiation to the spontaneous excitation of a two-level atom placed in vacuum in the exterior of a black hole, thus revealing an interesting relationship between the existence of Hawking radiation from black holes and the spontaneous excitation of atoms in vacuum. Spontaneous emission, on the other hand, is one of the most important features of atoms, and so far mechanisms such as vacuum fluctuations Welton48 ; CPP83 , radiation reaction Ackerhalt73 , or a combination of them Milonni88 have been put forward to explain why spontaneous emission occurs. The ambiguity in physical interpretation arises because of the freedom in the choice of ordering of commuting operators of the atom and field in a Heisenberg picture approach to the problem. The controversy was resolved when Dalibard, Dupont-Roc and Cohen-Tannoudji (DDC) Dalibard82 ; Dalibard84 proposed a formalism which distinctively separates the contributions of vacuum fluctuations and radiation reaction by demanding a symmetric operator ordering of atom and field variables. The DDC formalism has recently been generalized to study the spontaneous excitation of uniformly accelerated atoms in interaction with vacuum scalar and electromagnetic fields in a flat spacetime Audretsch94 ; H. Yu ; ZYL06 . These studies show that when an atom is accelerated, the delicate balance between vacuum fluctuations and radiation reaction that ensures the stability of a ground-state atom in vacuum is altered, making transitions to excited states possible for ground-state atoms even in vacuum.
In this paper, we apply this generalized DDC formalism to investigate the spontaneous excitation of an atom held static in the exterior region of a black hole and interacting with vacuum quantum massless scalar fields in two dimensions, and show that the atom spontaneously excites as if it were irradiated by or immersed in thermal radiation at the Hawking temperature, depending on whether the scalar field is in the Unruh or the Hartle-Hawking vacuum. In other words, atoms feel the Hawking radiation from black holes. Formalism. When vacuum fluctuations are concerned in a curved spacetime, a delicate issue arises as to how the vacuum state of the massless scalar field is determined. Normally, the vacuum state is associated with the non-occupation of positive frequency modes. However, the positive frequency of the field modes is defined with respect to the time coordinate; therefore, to define positive frequency, one has to first specify a definition of time. In a spherically symmetric black hole background, one choice is the Schwarzschild time, which is a natural definition of time in the exterior region. However, the vacuum state associated with this choice of time coordinate (the Boulware vacuum) becomes problematic in the sense that the expectation value of the energy-momentum tensor, evaluated in a freely falling frame, diverges at the horizon. Other possibilities that avoid this problem are the Unruh vacuum Unruh and the Hartle-Hawking vacuum Hartle-Hawking . The Unruh vacuum is defined by taking the modes that are incoming from $\mathscr{J}^{-}$ to be positive frequency with respect to the Schwarzschild time, while those that emanate from the past horizon are taken to be positive frequency with respect to the Kruskal coordinate $\bar{u}$, the canonical affine parameter on the past horizon. The Unruh vacuum is regarded as the vacuum state that best approximates the state that would obtain following the gravitational collapse of a massive body.
The Hartle-Hawking vacuum, on the other hand, is defined by taking the incoming modes to be positive frequency with respect to $\bar{v}$, the canonical affine parameter on the future horizon, and the outgoing modes to be positive frequency with respect to $\bar{u}$. Let us note that the Hartle-Hawking state does not correspond to our usual notion of a vacuum, since it has thermal radiation incoming to the black hole from infinity and describes a black hole in equilibrium with a sea of thermal radiation. Consider, in two dimensions, a two-level atom in interaction with a quantum real massless scalar field in a spherically symmetric black hole background, of which the metric is given by $$ds^{2}=\bigg{(}1-{2M\over r}\bigg{)}\;du\,dv={2M\over r}e^{-r/2M}d\bar{u}\,d\bar{v}\;,$$ (1) where $$u=t-r^{*},\;\;v=t+r^{*},\;\;r^{*}=r+2M\ln[(r/2M)-1],\;\;\bar{u}=-e^{-\kappa u}/\kappa,\;\;\bar{v}=e^{\kappa v}/\kappa\;.$$ (2) Here $\kappa=1/4M$ is the surface gravity of the black hole. Without loss of generality, let us assume a pointlike two-level atom on a stationary space-time trajectory $x(\tau)$, where $\tau$ denotes the proper time on the trajectory. The stationarity of the trajectory guarantees the existence of stationary atomic states, $|+\rangle$ and $|-\rangle$, with energies $\pm{1\over 2}\omega_{0}$ and a level spacing $\omega_{0}$. The atom’s Hamiltonian, which controls the time evolution with respect to $\tau$, is given, in Dicke’s notation Dicke54 , by $$H_{A}(\tau)=\omega_{0}R_{3}(\tau)\;,$$ (3) where $R_{3}={1\over 2}|+\rangle\langle+|-{1\over 2}|-\rangle\langle-|$ is the pseudospin operator commonly used in the description of two-level atoms Dicke54 .
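The equality of the two forms of the metric in Eq. (1) can be verified numerically. The sketch below (an illustration, using the standard Kruskal convention $\bar{v}=e^{\kappa v}/\kappa$, so that $d\bar{u}/du=e^{-\kappa u}$ and $d\bar{v}/dv=e^{\kappa v}$) checks that the coefficients of $du\,dv$ and $(2M/r)e^{-r/2M}\,d\bar{u}\,d\bar{v}$ agree at a few radii:

```python
import math

M = 1.0
kappa = 1.0 / (4.0 * M)  # surface gravity of the black hole, Eq. (2)

def metric_coefficients(r, t):
    """Return the du dv coefficient and its Kruskal-coordinate counterpart."""
    rstar = r + 2.0 * M * math.log(r / (2.0 * M) - 1.0)  # tortoise coordinate
    u, v = t - rstar, t + rstar
    dubar_du = math.exp(-kappa * u)   # from ubar = -e^{-kappa u}/kappa
    dvbar_dv = math.exp(kappa * v)    # from vbar =  e^{ kappa v}/kappa
    lhs = 1.0 - 2.0 * M / r
    rhs = (2.0 * M / r) * math.exp(-r / (2.0 * M)) * dubar_du * dvbar_dv
    return lhs, rhs

checks = [metric_coefficients(r, t=0.3) for r in (2.5, 3.0, 10.0)]
assert all(abs(lhs - rhs) < 1e-12 * max(1.0, abs(lhs)) for lhs, rhs in checks)
```

The agreement rests on the identity $e^{2\kappa r^{*}}=e^{r/2M}\,(r/2M-1)$, which cancels the explicit prefactor exactly.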
The free Hamiltonian of the quantum scalar field that governs its time evolution with respect to $\tau$ is $$H_{F}(\tau)=\int d^{3}k\,\omega_{\vec{k}}\,a^{\dagger}_{\vec{k}}\,a_{\vec{k}}\,{dt\over d\tau}\;.$$ (4) Here $a^{\dagger}_{\vec{k}}$, $a_{\vec{k}}$ are the creation and annihilation operators for a mode with momentum ${\vec{k}}$. Following Ref. Audretsch94 , we assume that the interaction between the atom and the quantum field is described by the Hamiltonian $$H_{I}(\tau)=c\,R_{2}(\tau)\,\phi(x(\tau))=\mu\;\omega_{0}\,R_{2}(\tau)\,\phi(x(\tau))\;,$$ (5) where $c$ is a coupling constant which we assume to be small, $R_{2}={1\over 2}i(R_{-}-R_{+})$, and $R_{+}=|+\rangle\langle-|$, $R_{-}=|-\rangle\langle+|$. The coupling is effective only on the trajectory $x(\tau)$ of the atom. Note that here we have defined a dimensionless parameter $\mu=c/\omega_{0}$. We can now write down the Heisenberg equations of motion for the atom and field observables. The field is always considered to be in its vacuum state $|0\rangle$. We will separately discuss the two physical mechanisms that contribute to the rate of change of atomic observables: the contribution of vacuum fluctuations and that of radiation reaction. For this purpose, we can split the solution of the field $\phi$ of the Heisenberg equations into two parts: a free or vacuum part $\phi^{f}$, which is present even in the absence of coupling, and a source part $\phi^{s}$, which represents the field generated by the interaction between the atom and the field. Following DDC Dalibard82 ; Dalibard84 , we choose a symmetric ordering between atom and field variables and consider the effects of $\phi^{f}$ and $\phi^{s}$ separately in the Heisenberg equations of an arbitrary atomic observable $G$. Then, we obtain the individual contributions of vacuum fluctuations and radiation reaction to the rate of change of $G$.
Since we are interested in the spontaneous excitation of the atom, we will concentrate on the mean atomic excitation energy $\langle H_{A}(\tau)\rangle$. The contributions of vacuum fluctuations (vf) and radiation reaction (rr) to the rate of change of $\langle H_{A}\rangle$ can be written as (cf. Refs. Dalibard82 ; Dalibard84 ; Audretsch94 ; H. Yu ; ZYL06 ) $$\left\langle{dH_{A}(\tau)\over d\tau}\right\rangle_{vf}=2i\,c^{2}\int_{\tau_{0}}^{\tau}d\tau^{\prime}\,C^{F}(x(\tau),x(\tau^{\prime})){d\over d\tau}\chi^{A}(\tau,\tau^{\prime})\;,$$ (6) $$\left\langle{dH_{A}(\tau)\over d\tau}\right\rangle_{rr}=2i\,c^{2}\int_{\tau_{0}}^{\tau}d\tau^{\prime}\,\chi^{F}(x(\tau),x(\tau^{\prime})){d\over d\tau}C^{A}(\tau,\tau^{\prime})\;,$$ (7) with $|\rangle=|a,0\rangle$ representing the atom in the state $|a\rangle$ and the field in the vacuum state $|0\rangle$. Here the statistical functions of the atom, $C^{A}(\tau,\tau^{\prime})$ and $\chi^{A}(\tau,\tau^{\prime})$, are defined as $$C^{A}(\tau,\tau^{\prime})={1\over 2}\langle a|\{R_{2}^{f}(\tau),R_{2}^{f}(\tau^{\prime})\}|a\rangle\;,$$ (8) $$\chi^{A}(\tau,\tau^{\prime})={1\over 2}\langle a|[R_{2}^{f}(\tau),R_{2}^{f}(\tau^{\prime})]|a\rangle\;,$$ (9) and those of the field as $$C^{F}(x(\tau),x(\tau^{\prime}))={1\over 2}{\langle}0|\{\phi^{f}(x(\tau)),\phi^{f}(x(\tau^{\prime}))\}|0\rangle\;,$$ (10) $$\chi^{F}(x(\tau),x(\tau^{\prime}))={1\over 2}{\langle}0|[\phi^{f}(x(\tau)),\phi^{f}(x(\tau^{\prime}))]|0\rangle\;.$$ (11) $C^{A}$ is called the symmetric correlation function of the atom in the state $|a\rangle$, and $\chi^{A}$ its linear susceptibility.
$C^{F}$ and $\chi^{F}$ are the Hadamard function and the Pauli-Jordan (or Schwinger) function of the field, respectively. The explicit forms of the statistical functions of the atom are given by $$C^{A}(\tau,\tau^{\prime})={1\over 2}\sum_{b}|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\left(e^{i\omega_{ab}(\tau-\tau^{\prime})}+e^{-i\omega_{ab}(\tau-\tau^{\prime})}\right)\;,$$ (12) $$\chi^{A}(\tau,\tau^{\prime})={1\over 2}\sum_{b}|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\left(e^{i\omega_{ab}(\tau-\tau^{\prime})}-e^{-i\omega_{ab}(\tau-\tau^{\prime})}\right)\;,$$ (13) where $\omega_{ab}=\omega_{a}-\omega_{b}$ and the sum runs over a complete set of atomic states. Spontaneous excitation of atoms. First let us apply the above formalism to the case of the Hartle-Hawking vacuum for the scalar field. Consider an atom held static at a radial distance $R$ from the black hole. The Wightman function for massless scalar fields in the Hartle-Hawking vacuum in (1+1) dimensions is given by QFT $$D_{H}^{+}\,(x,x^{\prime})=-\frac{1}{4\pi}\ln{\frac{4e^{2\kappa R^{*}}\sinh^{2}\bigl{(}\kappa_{R}\,\Delta\tau/2-i\varepsilon\bigr{)}}{\kappa^{2}}}\;,$$ (14) where $$\Delta\tau=\Delta\,t\sqrt{g_{00}}=\Delta\,t\sqrt{1-\frac{2M}{R}}\;,\;\;\kappa_{R}=\frac{\kappa}{\sqrt{1-\frac{2M}{R}}}\;.$$ (15) This leads to the following statistical functions of the scalar field $$C^{F}(x(\tau),x(\tau^{\prime}))=-\frac{1}{8\pi}\,\biggl{[}2\ln\frac{4e^{2\kappa R^{*}}}{\kappa^{2}}+\ln\sinh^{2}\biggl{(}\frac{\kappa_{R}}{2}\Delta\tau-i\varepsilon\biggr{)}+\ln\sinh^{2}\biggl{(}\frac{\kappa_{R}}{2}\Delta\tau+i\varepsilon\biggr{)}\biggr{]}\;,$$ (16) $$\chi^{F}(x(\tau),x(\tau^{\prime}))=-\frac{1}{8\pi}\,\biggl{[}\ln\sinh^{2}\biggl{(}\frac{\kappa_{R}}{2}\Delta\tau-i\varepsilon\biggr{)}-\ln\sinh^{2}\biggl{(}\frac{\kappa_{R}}{2}\Delta\tau+i\varepsilon\biggr{)}\biggr{]}\;.$$ (17) Plugging the above expressions into Eq. (6) and Eq. (7) and performing the integration yields the contribution of the vacuum fluctuations to the rate of change of the mean atomic energy for the atom held static at a distance $R$ from the black hole, $$\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{vf}=-\mu^{2}\,\biggl{[}\sum_{\omega_{a}>\omega_{b}}\omega_{0}^{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl{(}\frac{1}{2}+\frac{1}{e^{\,\omega_{ab}\,(2\pi/\kappa_{R})}-1}\biggr{)}-\sum_{\omega_{a}<\omega_{b}}\omega_{0}^{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl{(}\frac{1}{2}+\frac{1}{e^{\,|\omega_{ab}|\,(2\pi/\kappa_{R})}-1}\biggr{)}\biggr{]}\;,$$ (18) and that of the radiation reaction, $$\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{rr}=-\mu^{2}\,\biggl{[}\sum_{\omega_{a}>\omega_{b}}\frac{\omega_{0}^{2}}{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}+\sum_{\omega_{a}<\omega_{b}}\frac{\omega_{0}^{2}}{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\biggr{]}\;.$$ (19) Here we have extended the integration range to infinity for sufficiently long times $\tau-\tau_{0}$.
Adding up the two contributions, we obtain the total rate of change of the mean atomic energy $$\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{tot}=-\mu^{2}\,\biggl{[}\sum_{\omega_{a}>\omega_{b}}\omega_{0}^{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl{(}1+\frac{1}{e^{\,\omega_{ab}\,(2\pi/\kappa_{R})}-1}\biggr{)}-\sum_{\omega_{a}<\omega_{b}}\omega_{0}^{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\frac{1}{e^{\,|\omega_{ab}|\,(2\pi/\kappa_{R})}-1}\biggr{]}\;.$$ (20) One can see that, for a ground-state atom held static at a radial distance $R$ from the black hole, only the second term ($\omega_{a}<\omega_{b}$) contributes, and this contribution is positive, revealing that the atom spontaneously excites: transitions from the ground state to the excited state occur spontaneously in the exterior region of the black hole. The most striking feature is that the spontaneous excitation rate is what one would obtain if the atom were immersed in a thermal bath of radiation at the temperature $$T=\frac{\kappa}{2\pi}\frac{1}{\sqrt{1-\frac{2M}{R}}}=(g_{00})^{-1/2}\,T_{H}\;,$$ (21) where $T_{H}=\kappa/2\pi$ is just the Hawking temperature of the black hole. In fact, the above result is the well-known Tolman relation Tolman , which gives the proper temperature as measured by a local observer. For an atom at spatial infinity ($R\rightarrow\infty$), the temperature felt by the atom, $T$, approaches $T_{H}$. Therefore, an atom infinitely far away from the black hole (in the asymptotic region) would spontaneously excite as if in a thermal bath of radiation at the Hawking temperature. However, as the atom approaches the horizon ($R\rightarrow 2M$), the temperature $T$ diverges.
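The Tolman relation (21) and its two limits can be made concrete with a few lines of code (a sketch in natural units $G=c=\hbar=k_{B}=1$; the numerical values are purely illustrative):

```python
import math

def hawking_T(M):
    """Hawking temperature T_H = kappa/(2*pi) with surface gravity kappa = 1/(4M)."""
    return 1.0 / (8.0 * math.pi * M)

def local_T(M, R):
    """Tolman-shifted temperature felt by a static atom at radius R > 2M, Eq. (21)."""
    return hawking_T(M) / math.sqrt(1.0 - 2.0 * M / R)

M = 1.0
T_H = hawking_T(M)

# R -> infinity: the local temperature approaches the Hawking temperature.
assert abs(local_T(M, 1e8) - T_H) / T_H < 1e-7
# R -> 2M: the local temperature diverges.
assert local_T(M, 2.000001 * M) > 1e2 * T_H
```

The divergence near the horizon mirrors the diverging acceleration a static atom needs there, as discussed in the next paragraph.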
This can be understood from the fact that, in order to remain at a fixed distance from the black hole, the atom must accelerate relative to the local freely-falling frame; this acceleration, which diverges at the horizon, gives rise to an additional thermal effect Unruh . Now let us briefly discuss what happens if the Hartle-Hawking vacuum is replaced by the Unruh vacuum. It is then easy to show that the statistical functions of the scalar field become $$\displaystyle C^{F}\,(x(\tau),x(\tau^{\prime})\,)=-\,\frac{1}{8\pi}\,\biggl\{\,2\ln\frac{2e^{KR^{*}}}{k\sqrt{1-\frac{2M}{R}}}+\ln\biggl[\sinh\biggl(\frac{a}{2}\Delta\tau-i\varepsilon\biggr)\,\biggl(\Delta\tau-i\varepsilon\biggr)\biggr]+\ln\biggl[\sinh\biggl(\frac{a}{2}\Delta\tau+i\varepsilon\biggr)\,\biggl(\Delta\tau+i\varepsilon\biggr)\biggr]-a(\tau+\tau^{\prime})\biggr\}\;,$$ (22) $$\displaystyle\chi^{F}\,(x(\tau),x(\tau^{\prime})\,)=-\,\frac{1}{8\pi}\,\biggl\{\,\ln\biggl[\sinh\biggl(\frac{a}{2}\Delta\tau-i\varepsilon\biggr)\,\biggl(\Delta\tau-i\varepsilon\biggr)\biggr]-\ln\biggl[\sinh\biggl(\frac{a}{2}\Delta\tau+i\varepsilon\biggr)\,\biggl(\Delta\tau+i\varepsilon\biggr)\biggr]\,\biggr\}\;,$$ (23) and the contribution of the vacuum fluctuations to the rate of change of the mean atomic energy of the atom held static at a distance $R$ from the black hole is given by $$\displaystyle\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{vf}=-\mu^{2}\,\biggl[\sum_{\omega_{a}>\omega_{b}}\;\omega_{0}^{2}\;|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl(\frac{1}{2}+\frac{1}{2}\,\frac{1}{e^{\,\omega_{ab}\,(2\pi/\kappa_{R})}-1}\biggr)-\sum_{\omega_{a}<\omega_{b}}\;\omega_{0}^{2}\;|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl(\frac{1}{2}+\frac{1}{2}\,\frac{1}{e^{\,|\omega_{ab}|\,(2\pi/\kappa_{R})}-1}\biggr)\biggr]\;,$$ (24) and that of the radiation reaction by $$\displaystyle\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{rr}=-\mu^{2}\,\biggl[\sum_{\omega_{a}>\omega_{b}}\,\frac{\omega_{0}^{2}}{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}+\sum_{\omega_{a}<\omega_{b}}\,\frac{\omega_{0}^{2}}{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggr]\;.$$ (25) Consequently, adding up the two contributions, we obtain the total rate of change of the mean atomic energy $$\displaystyle\left\langle\frac{dH_{A}(\tau)}{d\tau}\right\rangle_{tot}=-\mu^{2}\,\biggl[\sum_{\omega_{a}>\omega_{b}}\,\omega_{0}^{2}\;|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\biggl(1+\frac{1}{2}\,\frac{1}{e^{\,\omega_{ab}\,(2\pi/\kappa_{R})}-1}\biggr)-\sum_{\omega_{a}<\omega_{b}}\;\omega_{0}^{2}\,|\langle a|R_{2}^{f}(0)|b\rangle|^{2}\,\frac{1}{2}\,\frac{1}{e^{\,|\omega_{ab}|\,(2\pi/\kappa_{R})}-1}\biggr]\;.$$ (26) It is interesting to note that the thermal-like term here is half of that in the Hartle-Hawking case (cf. Eq. (20)). This is consistent with our understanding that the Unruh vacuum corresponds to the state resulting from the collapse of a massive body to form a black hole: the atom, held static at a radial distance $R$, spontaneously excites as if it were irradiated by a beam of outgoing thermal radiation at the temperature $T=\kappa_{R}/2\pi$; in other words, the atom feels the Hawking radiation. The Hartle-Hawking vacuum, by contrast, contains thermal radiation at the Hawking temperature incoming from infinity and describes an eternal black hole in thermal equilibrium with that incoming radiation. The spontaneous excitation rate therefore doubles. 
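The factor-of-two relation between the thermal terms of Eqs. (20) and (26) can be checked with a small illustrative script (the chosen frequency and temperature are arbitrary):

```python
import math

def n_bar(omega, T):
    """Planck factor 1/(exp(omega/T) - 1) appearing in Eqs. (20) and (26)."""
    return 1.0 / (math.exp(omega / T) - 1.0)

omega, T = 1.0, 0.5           # arbitrary units; T plays the role of kappa_R/(2*pi)

# Thermal part of the spontaneous-excitation term (omega_a < omega_b):
hartle_hawking = n_bar(omega, T)     # Eq. (20)
unruh = 0.5 * n_bar(omega, T)        # Eq. (26)

# The excitation rate in the Hartle-Hawking vacuum is twice the Unruh one.
assert abs(hartle_hawking / unruh - 2.0) < 1e-12
```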
Summary By evaluating the rate of change of the mean atomic energy of a two-level atom interacting with a massless scalar field, in both the Hartle-Hawking and the Unruh vacuum of a 1+1 dimensional black hole background, we have demonstrated that an inertial atom far away from the black hole (in the asymptotic region) would spontaneously excite as if there were thermal radiation at the Hawking temperature emanating from the black hole; in other words, atoms feel the Hawking radiation from black holes. Our discussion can therefore be considered as providing another approach to the derivation of Hawking radiation. Our result also reveals an interesting relationship between the existence of Hawking radiation from black holes and the spontaneous excitation of a two-level atom in vacuum in the exterior of a black hole, and shows a pleasing consistency between two different physical phenomena, Hawking radiation and the spontaneous excitation of atoms, each quite prominent in its own right. This work was supported in part by the National Natural Science Foundation of China under Grants No. 10375023 and No. 10575035, and by the Program for New Century Excellent Talents in University (No. 04-0784). References (1) S. Hawking, Nature (London) 248, 30 (1974). (2) S. Hawking, Commun. Math. Phys. 43, 199 (1975). (3) G. Gibbons and S. Hawking, Phys. Rev. D 15, 2752 (1977). (4) M. Parikh and F. Wilczek, Phys. Rev. Lett. 85, 5042 (2000). (5) A. Strominger and C. Vafa, Phys. Lett. B 379, 99 (1996). (6) A. Peet, hep-th/0008241. (7) S. P. Robinson and F. Wilczek, Phys. Rev. Lett. 95, 011303 (2005); S. Iso, H. Umetsu, and F. Wilczek, Phys. Rev. Lett. 96, 151302 (2006); Phys. Rev. D 74, 044017 (2006). (8) T. A. Welton, Phys. Rev. 74, 1157 (1948). (9) G. Compagno, R. Passante and F. Persico, Phys. Lett. A 98, 253 (1983). (10) J. R. Ackerhalt, P. L. Knight and J. H. Eberly, Phys. Rev. Lett. 30, 456 (1973). (11) P. W. Milonni, Phys. Scr. T21, 102 (1988); P. W. Milonni and W. A. Smith, Phys. Rev. A 11, 814 (1975). (12) J. Dalibard, J. Dupont-Roc and C. Cohen-Tannoudji, J. Phys. (France) 43, 1617 (1982). (13) J. Dalibard, J. Dupont-Roc and C. Cohen-Tannoudji, J. Phys. (France) 45, 637 (1984). (14) J. Audretsch and R. Müller, Phys. Rev. A 50, 1755 (1994). (15) H. Yu and S. Lu, Phys. Rev. D 72, 064022 (2005). (16) Z. Zhu, H. Yu and S. Lu, Phys. Rev. D 73, 107501 (2006). (17) W. G. Unruh, Phys. Rev. D 14, 870 (1976). (18) J. Hartle and S. Hawking, Phys. Rev. D 13, 2188 (1976). (19) R. H. Dicke, Phys. Rev. 93, 99 (1954). (20) N. D. Birrell and P. C. W. Davies, Quantum Field Theory in Curved Space (Cambridge Univ. Press, Cambridge, 1982). (21) R. Tolman, Phys. Rev. 35, 904 (1930); R. Tolman and P. Ehrenfest, Phys. Rev. 36, 1791 (1930).
FEEBO: An Empirical Evaluation Framework for Malware Behavior Obfuscation Sebastian Banescu Technische Universität München, Germany    Tobias Wüchner Technische Universität München, Germany    Marius Guggenmos Technische Universität München, Germany    Martín Ochoa Technische Universität München, Germany    Alexander Pretschner Technische Universität München, Germany Abstract Program obfuscation is increasingly popular among malware creators. Objectively comparing different malware detection approaches with respect to their resilience against obfuscation is challenging. To the best of our knowledge, there is no common empirical framework for evaluating the resilience of malware detection approaches w.r.t. behavior obfuscation. We propose and implement such a framework that obfuscates the observable behavior of malware binaries. To assess the framework’s utility, we use it to obfuscate known malware binaries and then investigate the impact on the detection effectiveness of different $n$-gram based detection approaches. We find that the obfuscation transformations employed by FEEBO significantly affect the precision of such detection approaches. Several $n$-gram-based approaches can hence be concluded not to be resilient against this simple kind of obfuscation. 1 Introduction Malware continues to be a relevant cyber security threat. While in the early days of the Internet malware was often developed for the pure sake of curiosity, malware development today follows a clear-cut business model. The motivations to develop and utilize malware range from cyber espionage, theft of confidential data, denial of service of commercial services, and even blackmail, to tampering with military or civilian infrastructures. Industry and academia continuously devise countermeasures to cope with this threat in the form of advanced malware detection approaches. However, malware developers are often several steps ahead of the state of the art. 
Most commercial antivirus software still relies, in principle, on some form of signature-based analysis of the persistent representation of potential malware. Not surprisingly, almost all modern malware families employ some means to confuse and hamper signature-based approaches. Such countermeasures range from simple techniques (e.g. build-time encryption and runtime decryption) to more sophisticated ones (e.g. control-flow obfuscation or anti-debugging mutations) [6]. Given the control-flow obfuscation of today’s malware, one intuitively appropriate detection strategy is so-called behavioral detection. The idea is to look at the malware’s runtime behavior rather than its static code. This behavior includes issued function or system calls, or, in general, every runtime interaction with system resources. By construction, behavioral detection approaches are barely affected by control-flow obfuscation. However, although behavioral detection techniques compensate for the effects of (build-time) control-flow obfuscation to a large extent, they are often vulnerable to more advanced (run-time) behavior obfuscation techniques that “blur” the externally visible behavior of malware. Examples of such behavior obfuscation techniques include the injection of bogus system calls or the deliberate randomized re-ordering of call execution sequences. While control-flow obfuscation of malware, and the respective countermeasures on the detection side, have been well researched [6], the effects of behavior obfuscation on the effectiveness of detection approaches have so far received very little attention in the literature. Behavior obfuscation in itself has been discussed from a theoretical perspective [7], but we are not aware of any empirical investigations of the effects of behavior obfuscation on real-world malware. To provide a foundation for such empirical evaluations, we propose a behavior obfuscation framework which we call FEEBO. 
Given an arbitrary malware sample as input, it applies a diverse set of behavior obfuscation transformations to the sample’s externally visible behavior, which is defined by its issued system calls. This makes it possible to “inject” behavior obfuscation mechanisms into malware samples in a structured and targeted way, regardless of whether or not the specific malware sample performs any behavior obfuscation itself. Considering that behavior obfuscation at the system call level is still rarely done by real-world malware, this approach allows us to get one step ahead of malware developers and reason about the impact of such obfuscation techniques on state-of-the-art detection approaches before they are implemented and released into the wild. Contributions: a) To the best of our knowledge, we are the first to propose an empirical malware behavior obfuscation framework that is able to behaviorally obfuscate standard malware binaries. b) With FEEBO we establish a basis for a wide range of reproducible behavioral obfuscation resilience experiments. c) Our evaluations show that for certain configurations, the precision of $n$-gram  [15] based detection approaches is significantly affected by behavioral obfuscation. Organization: We introduce the concept of behavior obfuscation and discuss two representative $n$-gram-based behavioral detection techniques in §2. Then we describe the design and implementation of FEEBO in §3. We show the effectiveness of a prototypical implementation of our framework and discuss its limitations in §4. We discuss possible application areas of our approach and give an outlook on future work in §5. 2 Preliminaries We start with some relevant concepts from the literature. In particular we recall related work on behavior obfuscation and on detection based on $n$-grams. 2.1 Behavior Obfuscation This paper is inspired by the work of Péchoux and Ta [11] on behavior obfuscation of malware. They divide the behavior, i.e. the executed operations of a program (e.g. 
malware) into (i) internal computations and (ii) system calls. Internal computations operate only on the process memory of the corresponding program; they only affect and are affected by the information stored inside this process’ memory. System calls represent interactions with the operating system (OS) kernel, i.e. there is a transfer of control from the corresponding program to the kernel and back. Therefore, system calls affect and are affected by the information stored anywhere in the OS memory. The sequence of system calls performed by a program is called its observable (execution) path or behavior. Péchoux and Ta show that it is possible to transform (obfuscate) the observable path of known malware samples such that the original malware functionality is preserved by: (i) inserting system calls before and/or after system calls in the observable path, (ii) reordering system calls in the observable path, and (iii) substituting system calls by other system calls which provide at least the same functionality. In contrast to our work, their goal is to obtain a trace that is similar to a goodware trace (mimicry). We, on the other hand, focus on randomly generating sets of malware “mutants” to assess their effect on behavioral detection approaches that analyze the system calls executed by malware. There is an important difference between behavior obfuscation and control-flow obfuscation. Control-flow obfuscation applies transformations at the source code or intermediate representation level in order to make a program harder to understand for a human or an automated analysis engine. Such code transformations include virtualization obfuscation, insertion of bogus code via opaque predicates, function splitting, and control-flow flattening [6]. These transformations will typically not have an effect on the observable execution path of the program. 
On the other hand, behavior obfuscation strictly implies changing the observable execution path of the program being obfuscated. 2.2 Behavioral Malware Detection In contrast to approaches that focus on the persistent representation of malware, behavioral detection approaches discriminate malware from goodware by establishing characteristic behavior profiles. Such approaches range from using raw system call traces and short sequences of calls, so-called $n$-grams [9, 14, 15], to more elaborate concepts that model the semantic interdependencies between different calls in call-graphs [10, 5, 4]. There also exist approaches that model behavior by abstracting system calls into induced data flows [1, 3]. These approaches are based on traces of issued system calls and are thus likely to be affected by the aforementioned behavior obfuscation transformations. In this study we focus on approaches that are based on $n$-grams as a behavior model, due to their prevalence in academic publications [15]. We are aware that findings based on this model do not necessarily generalize. Nevertheless we are convinced that such an evaluation is a good starting point for reasoning about the effects of behavior obfuscation in general, and it will be the basis for future work. To cover a broad range of $n$-gram based detection approaches, we follow the categorization schema of Canali et al. [2]. We consider $n$-grams built on system calls without arguments as atoms, either a) considering or b) ignoring the ordering of calls for their construction. To test these approaches we first executed known malware and goodware in a sandboxed environment and monitored their executed system calls. This procedure yielded labeled event logs, which we tokenized with a sliding window, moving a window of defined but fixed size over the respective log, thus yielding sets of $n$-grams of system calls. 
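The sliding-window tokenization, together with the distinction between the order-sensitive (a) and order-insensitive (b) variants, can be sketched as follows (a minimal illustration with made-up system call names; the real pipeline operates on monitored event logs):

```python
from collections import Counter

def ngrams(trace, n):
    """Slide a window of fixed size n over a system call trace (variant (a))."""
    return [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]

def frequency_vectors(trace, n, alphabet):
    """Order-insensitive variant (b): per-window counts of each system call."""
    return [[Counter(gram)[s] for s in alphabet] for gram in ngrams(trace, n)]

trace = ["NtOpenFile", "NtReadFile", "NtReadFile", "NtClose", "NtOpenFile"]
alphabet = sorted(set(trace))        # ['NtClose', 'NtOpenFile', 'NtReadFile']

ordered = ngrams(trace, 4)           # two distinct 4-grams
unordered = frequency_vectors(trace, 4, alphabet)

# The two windows differ as sequences but share the same count vector,
# which is exactly the information variant (b) discards.
assert ordered[0] != ordered[1]
assert unordered[0] == unordered[1] == [1, 1, 2]
```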
For the first $n$-gram approach (a), which considers the ordering of system calls, we directly feed the obtained $n$-grams as features into a supervised machine learning classifier. For the second $n$-gram approach (b), which does not consider the ordering of system calls, we count the number of occurrences of each system call in the $n$-gram, build a feature vector from these counts, and feed these vectors into the classifier. Note that the feature vectors are independent of the ordering of system calls in the $n$-gram. Figure 1 depicts the resulting feature vectors for both approaches when applied to a small sample call trace (left). The middle shows $n$-grams for approach (a), consisting of 4 system calls in each row (i.e., 4-grams); the contents of the cells are the initials of the system calls from the trace on the left. The table on the right shows $n$-grams for approach (b), which consist of the frequency of every system call (given in the table header) for a 4-gram in each row. 3 Our Approach to Behavior Obfuscation Transforming (obfuscating) x86 binary programs without debugging symbols is a non-trivial task which involves binary rewriting [12]. This task becomes even more challenging when the binaries we want to transform are malware, which employ anti-disassembly techniques [8]. However, since we only want the binary to exhibit a different observable behavior in terms of system calls, we have taken an alternative approach based on binary instrumentation [13]. In a nutshell, binary instrumentation allows one to intercept any system call performed by the target binary. One can choose to execute, delay, drop or even swap the intercepted system call, and to perform additional instructions, including making further system calls. We have implemented the following two behavior obfuscation transformations: (i) system call insertion and (ii) system call reordering. 
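At the level of a recorded trace, these two transformations can be approximated as follows (a hypothetical sketch: FEEBO itself intercepts live system calls via binary instrumentation and handles side-effecting call parameters, which we omit here; call names and parameter values are illustrative):

```python
import random
from collections import Counter

def insert_calls(trace, p_i, min_i, max_i, rng):
    """System call insertion: after each call, with probability p_i, insert
    between min_i and max_i calls drawn from those executed so far."""
    out = []
    for call in trace:
        out.append(call)
        if rng.random() < p_i:
            for _ in range(rng.randint(min_i, max_i)):
                out.append(rng.choice(out))
    return out

def reorder_calls(trace, side_effecting, p_r, queue_size, rng):
    """System call reordering: delay calls from the set S of side-effecting
    calls in a FIFO queue, flushed in original order once it fills up."""
    out, queue = [], []
    for call in trace:
        if call in side_effecting and rng.random() < p_r:
            queue.append(call)
            if len(queue) == queue_size:
                out.extend(queue)
                queue.clear()
        else:
            out.append(call)
    out.extend(queue)  # flush whatever is still delayed at program exit
    return out

rng = random.Random(0)
trace = ["open", "read", "write", "write", "close"] * 4
obfuscated = reorder_calls(insert_calls(trace, 0.25, 2, 5, rng),
                           side_effecting={"write"}, p_r=0.5,
                           queue_size=3, rng=rng)

# Obfuscation only duplicates and displaces calls; it never invents new ones.
assert len(obfuscated) >= len(trace)
assert set(obfuscated) == set(trace)
assert all(Counter(obfuscated)[c] >= k for c, k in Counter(trace).items())
```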
These are relatively simple techniques in comparison to the substitution of system calls with functionally equivalent system calls; we leave the implementation of the latter to future work. 3.1 System Call Insertion With a given probability $p_{i}$, system call insertion adds, after each system call made by the obfuscated application, a number of additional system calls randomly chosen from the previously executed system calls. The number of inserted system calls is randomly chosen between $\mathit{min_{i}}$ and $\mathit{max_{i}}$, two further input parameters of FEEBO. To prevent these inserted calls from changing the original functionality of the application, we modify the values of their parameters in case the system calls belong to a set $S$ of calls that have side-effects, such as writing to a file. The values of the changed parameters are chosen such that they do not collide with existing data, e.g., files. Furthermore, system calls that access a unique system resource are excluded. For instance, if we were to insert the system call that sets the clipboard data, we would need to also insert a second call to restore the clipboard data, since there is only one clipboard on each system. For example, with $p_{i}=0.25$, $\mathit{min_{i}}=2$ and $\mathit{max_{i}}=5$, every system call made by the application has a 25% chance of triggering the insertion of between 2 and 5 randomly chosen system calls after the execution of the intercepted call. This obfuscation transformation changes the externally visible behavior by inserting a random number of system calls at random locations of the original execution trace. The intuition is that it should be effective against $n$-gram-based detection approaches, since they rely on patterns. 3.2 System Call Reordering System call reordering could naïvely be implemented by delaying a sequence of system calls in a buffer which is randomly permuted before execution. 
This would most likely break the functionality of the transformed program or even cause it to crash. Instead, every system call in $S$ executed by the transformed application can be delayed with probability $p_{r}$ and placed in a queue (of size $n$) for later execution. The reason only calls in $S$ are delayed is that calls outside $S$ generally read information which applications need to continue their proper execution. Moreover, we use a queue for the delayed system calls because we want to preserve the original ordering of system calls that have side effects, like writing to a file. Once the queue reaches a certain size, our tool executes the delayed calls in their original order. Each of the delayed calls can additionally trigger the insertion of other system calls, with parameters similar to those described in §3.1, i.e. a probability of insertion denoted $p_{ri}$ and minimum and maximum numbers of inserted system calls denoted by $\mathit{min}_{ri}$ and $\mathit{max}_{ri}$, respectively. For example, for $p_{r}=0.5$, $n=5$, $p_{ri}=0.75$, $\mathit{min}_{ri}=1$ and $\mathit{max}_{ri}=2$, every system call from $S$ made by the application is delayed with a 50% probability. Once 5 calls have been delayed, they are executed. Each of the delayed executions has a 75% probability of triggering the insertion of one or two other system calls. 3.3 Obfuscation Profiles The ranges of the input parameters of the previously described obfuscation transformations are shown in Table 1. The insertion and reordering probabilities range from 0 to 1. The minimum and maximum numbers of inserted system calls, as well as the size of the reordering queue, are positive integers. Their upper bound depends on the data type and the architecture of the system they are running on. Based on these parameters of system call insertion and system call reordering we can configure various obfuscation profiles, e.g. 
“always insert 2 system calls after each system call in the original observable path”, “do not insert any calls, only reorder” or “insert 1 system call after each reordered call”. We will see concrete detection values for different obfuscation profiles in §4. 4 Evaluation To assess the applicability of FEEBO, we obfuscated a set of real-world malware with the help of FEEBO and then applied the previously introduced behavioral detection approaches, based on $n$-grams of system calls, to the resulting obfuscated system call traces. Setup. We executed 100 malware samples within an installation of the Cuckoo malware analysis sandbox (http://www.cuckoosandbox.org/), where we replaced the behavior monitor with FEEBO to obtain a variety of obfuscated behavior traces of those samples. In addition, we collected the traces of 100 known goodware samples, which we did not obfuscate, to use as a comparison baseline for later training the detection classifiers. The large range of values that the obfuscation parameters can take (see Table 1) quickly leads to a combinatorial explosion of the obfuscation profiles. Moreover, to capture a critical mass of system calls sufficiently large to allow training a classifier with good accuracy, we need to monitor a malware sample for at least 3 minutes. With one configuration profile, capturing the obfuscated traces of 100 malware samples would then take 300 minutes, which, with the help of parallel execution of multiple VMs on 5 cores, we could cut down to about one hour per run. Therefore, we conducted experiments with 375 different combinations of the obfuscation parameters. More specifically, we set all probabilistic parameters, like the insertion or reordering probability, to selected values between 0% and 100%, i.e. $p_{\left\{i,r,ri\right\}}\in\left\{0.0,0.25,0.5,0.75,1.0\right\}$, and particularly interesting discrete parameters to fixed values between 1 and 10, i.e. $max_{i}\in\left\{1,5,10\right\}$. 
All other parameters were set to fixed values, i.e. $min_{\{i,ri\}}=1$, $max_{ri}=3$ and $n=5$. Conducting one evaluation for each configuration profile (i.e. one combination of the aforementioned parameters and value ranges) amounts to $5\times 5\times 5\times 3=375$ runs, which sums up to a total runtime of about 16 days. Experiments. Using the resulting execution traces, we trained the respective classifiers on the feature vectors computed from the non-obfuscated baseline traces and applied the generated classifiers to the remaining obfuscated event traces. In the ideal case, where the applied obfuscations have no effect on the externally visible behavior, the detection rate should remain at 100%. With this setting we could investigate the effects of the applied obfuscation transformations on detection accuracy. To assess the effects of different $n$-gram sizes we repeated this procedure for all $n$-grams with $n$ between $3$ and $10$. Figure 2 summarizes the experimental findings. As a measure of the degree of obfuscation, we calculated the Levenshtein distance between the respective traces, as it represents the number of atomic insertion, deletion, and substitution operations that are needed to transform one event trace into another. For computing the Levenshtein distance we abstracted our traces to only the names of the system calls (not their parameters), which form the elements of our alphabet. Correspondingly, the $x$-axis of each diagram represents the average obfuscation degree of all considered event traces, whereas the $y$-axis represents the detection rate (percentage of correctly identified malware samples) achieved by the different detection approaches. To visualize the development of the median detection rate for increasing obfuscation degree we also plot trend-lines for each $n$-gram. 
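A standard dynamic-programming implementation of this distance over syscall-name sequences looks as follows (our own sketch; the example traces are invented):

```python
def levenshtein(a, b):
    """Number of single-call insertions, deletions, and substitutions
    needed to turn sequence a into sequence b (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # delete x
                            curr[j - 1] + 1,          # insert y
                            prev[j - 1] + (x != y)))  # substitute x -> y
        prev = curr
    return prev[-1]

original = ["open", "read", "write", "close"]
obfuscated = ["open", "stat", "read", "write", "write", "close"]

# Two pure insertions ("stat" and a duplicated "write") separate the traces.
assert levenshtein(original, obfuscated) == 2
assert levenshtein(original, original) == 0
```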
We split the evaluation results into three parts: the first row represents the results of the experiments where both types of obfuscation transformations, i.e. call reordering and call insertion, were applied; the second row illustrates the results of the insertion experiments; and the last row the results of the reordering experiments. As we can see, the applied obfuscation transformations have a significant effect on the detection effectiveness of the $n$-gram approaches. From the first row of Figure 2 we can deduce a roughly quadratic relationship between an increase of the obfuscation degree and a decrease of the detection rate. We can also see that the spread in classification accuracy, i.e. the standard deviation of the detection rate, rises significantly the more obfuscation is applied. Furthermore, higher-order $n$-grams are more sensitive to obfuscation. Looking at the remaining diagrams, we notice that insertion transformations seem to have a bigger impact on detection accuracy than reordering transformations, which is reflected in a significantly steeper slope of the trend-lines in the insertion diagrams than in the reordering diagrams. We can also say that, for very small $n$-gram sizes, reordering transformations seem to have barely any influence on the detection rate, as can be seen from the almost constantly high detection rates. Finally, our evaluations did not reveal any significant difference in obfuscation resilience between the ordered and unordered types of $n$-gram approaches. Discussion and threats to validity. First note that although we only conducted one execution run per configuration-parameter constellation, the fact that several parameter configurations lead to samples with a similar Levenshtein distance allows us to achieve a good saturation of the obfuscation spectrum. 
Given that we obtain 375 distinct sets of 100 obfuscated event traces, one for each profile in our experiment, this gives a rather high density of 41 data-points in a range of 20 units on the $x$-axis in the first row of Figure 2, which corresponds to the “both insertion and reordering” obfuscation profile. However, the density is 7 data-points in a range of 20 units for the second and third rows, which correspond to the “insertion-only” and “reordering-only” obfuscation profiles, respectively. We intentionally did not report false positive rates of the $n$-gram approaches in our evaluation, because they are not relevant for our experiments: we do not change or obfuscate the set of goodware. Currently our experimental setting assumes the presence of a certain ground truth, i.e. the availability of a critical mass of unobfuscated malware for classifier training. If malware developers start to make more use of behavioral obfuscation mechanisms, the availability of such a basic training set is not guaranteed. Using obfuscated malware for both testing and training the classifiers will likely diminish their effectiveness even more. For future work, we therefore also plan to investigate whether these factors impact the results. Having performed some initial experiments with Naïve Bayes, Gaussian-kernel SVMs, and Random Forest classifiers, we can confirm that the choice of the baseline classifier does not have a significant effect on the relative obfuscation sensitivity of the considered $n$-gram approaches. The functionality of any obfuscated program should include the functionality of the original (non-obfuscated) program. For many software transformation engines, such as optimizing compilers, this is a strict requirement. However, even very widely used compilers such as GCC or Clang have been found to contain optimizations that break the functionality of the original source code [16]. 
The system call reordering transformation described above suffers from the same issue, i.e. it may change the functionality of malware such that it becomes ineffective. Arguably, however, in the case of obfuscating widely used malware it is more important to avoid detection, even if the obfuscation engine outputs some samples which are not effective. We do not yet possess statistics regarding the number of effective malware samples output by our tool; we plan to study this as part of future work. We still consider our results valuable, given that checking for behavioral equality is in general not decidable, and that in our experiments none of the obfuscated malware samples crashed during execution. In sum, we can draw two main conclusions from our experiments: a) FEEBO is able to effectively obfuscate the behavior of real-world malware, with a significant effect on the effectiveness of behavioral detection approaches; b) the considered type of $n$-gram approaches is highly sensitive to the evaluated forms of behavior obfuscation. 5 Conclusions and Future Work We have introduced FEEBO, a framework to conduct empirical experiments on the effects of behavior obfuscation on malware detection. To this end, we developed a prototype that can apply certain obfuscation transformations to the externally visible behavior of malware samples. To evaluate the effectiveness of the implemented obfuscation transformations, and of our approach in general, we investigated the effects of a wide range of behavior obfuscation transformations on the detection capabilities of two representative $n$-gram behavioral detection approaches. We could show that both types of $n$-gram approaches are considerably vulnerable to the applied obfuscation transformations. We are aware that our presented evaluation results are not comprehensive in their present form. In particular, for future work we plan to repeat the experiments for a bigger configuration space and larger malware sets. 
We also plan to investigate the effects of a lack of ground truth by training the classifiers on obfuscated malware samples instead of solely unobfuscated ones. In terms of possible extensions of FEEBO, we plan to implement additional obfuscation transformations that e.g. also tackle the substitution of certain system calls with semantically equivalent ones. Although we release FEEBO (https://www22.in.tum.de/tools/feebo/) to parties from academia and industry, for ethical reasons we will provide a version that is not capable of generating self-contained obfuscated malware binaries. Instead, FEEBO needs to be manually installed in the evaluation environment, together with an installation of Intel Pin [13], which hopefully hampers misuse of FEEBO by malware developers. References [1] S. Bhatkar, A. Chaturvedi, and R. Sekar. Dataflow anomaly detection. In S&P, pages 15–pp. IEEE, 2006. [2] D. Canali, A. Lanzi, D. Balzarotti, C. Kruegel, M. Christodorescu, and E. Kirda. A quantitative study of accuracy in system call-based malware detection. In ISSTA ’12, pages 122–132. ACM, 2012. [3] L. Cavallaro and R. Sekar. Taint-enhanced anomaly detection. Information Systems Security, pages 160–174, 2011. [4] M. Christodorescu, S. Jha, and C. Kruegel. Mining specifications of malicious behavior. In India Software Engineering Conference, pages 5–14, 2008. [5] M. Christodorescu, S. Jha, S. Seshia, D. Song, and R. Bryant. Semantics-Aware Malware Detection. S&P’05, pages 32–46, 2005. [6] C. Collberg and J. Nagra. Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection. Addison-Wesley Professional, 1st edition, 2009. [7] M. Dalla Preda, M. Christodorescu, S. Jha, and S. Debray. A semantics-based approach to malware detection. ACM Transactions on Programming Languages and Systems, 30(5):1–54, 2008. [8] C. Eagle. The IDA pro book: the unofficial guide to the world’s most popular disassembler. No Starch Press, 2011. [9] S. Forrest, S. A. Hofmeyr, A. 
Somayaji, and T. A. Longstaff. A sense of self for unix processes. In S&P ’96, pages 120–128, Washington, DC, USA, 1996. IEEE Computer Society. [10] Y. Park, D. S. Reeves, and M. Stamp. Deriving common malware behavior through graph clustering. Computers & Security, 2013. [11] R. Péchoux and T. D. Ta. A categorical treatment of malicious behavioral obfuscation. In Theory and Applications of Models of Computation, pages 280–299. Springer, 2014. [12] M. Prasad and T.-c. Chiueh. A binary rewriting defense against stack based buffer overflow attacks. In USENIX Annual Technical Conference, General Track, pages 211–224, 2003. [13] V. J. Reddi, A. Settle, D. A. Connors, and R. S. Cohn. Pin: a binary instrumentation tool for computer architecture research and education. In Proceedings of the 2004 workshop on Computer architecture education, page 22. ACM, 2004. [14] K. Rieck, P. Trinius, C. Willems, and T. Holz. Automatic analysis of malware behavior using machine learning. J. of Computer Security, pages 639–668, 2011. [15] C. Wressnegger, G. Schwenk, D. Arp, and K. Rieck. A close look on n-grams in intrusion detection: anomaly detection vs. classification. In Workshop on Artificial intelligence and security, pages 67–76, 2013. [16] X. Yang, Y. Chen, E. Eide, and J. Regehr. Finding and Understanding Bugs in C Compilers. SIGPLAN Not., 46(6):283–294, June 2011.
Optimal consensus control of the Cucker-Smale model Rafael Bailo${}^{1}$, Mattia Bongini${}^{2}$, José A. Carrillo${}^{1}$, Dante Kalise${}^{1}$ ${}^{1}$ Department of Mathematics, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom (e-mail: {carrillo,dkaliseb,r.bailo}@ic.ac.uk) ${}^{2}$ CEREMADE, Université Paris Dauphine, Place du Maréchal de Lattre de Tassigny, 75775 Paris Cedex 16, France (email: bongini@ceremade.dauphine.fr) Abstract We study the numerical realisation of optimal consensus control laws for agent-based models. For a nonlinear multi-agent system of Cucker-Smale type, consensus control is cast as a dynamic optimisation problem for which we derive first-order necessary optimality conditions. In the case of a smooth penalization of the control energy, the optimality system is numerically approximated via a gradient-descent method. For sparsity-promoting, non-smooth $\ell_{1}$-norm control penalizations, the optimal controllers are realised by means of heuristic methods. For an increasing number of agents, we discuss the approximation of the consensus control problem by following a mean-field modelling approach. Keywords: Agent-based models, Cucker-Smale model, consensus control, optimal control, first-order optimality conditions, sparse control, mean-field modelling. 1 Introduction This paper concerns control problems for second-order, nonlinear, multi-agent systems (MAS). In such dynamics, the state of each agent is characterized by a pair $(x_{i},v_{i})$, representing variables which we refer to as position and velocity respectively. The uncontrolled system consists of simple binary interaction rules between the agents, such as attraction, repulsion, and alignment forces. This commonly leads to self-organization phenomena such as flocking or formation arrays.
However, this behaviour strongly depends on the cohesion of the initial configuration of the system, and therefore control design is relevant in order to generate an external intervention able to steer the dynamics towards a desired configuration. For second-order MAS, the study of consensus emergence and control is of particular interest. In this context, we understand consensus as a traveling formation in which every agent has the same velocity. Self-organized consensus emergence for the Cucker-Smale model, see Cucker and Smale (2007), has already been characterized in Ha and Liu (2009); Ha et al. (2010); Carrillo et al. (2010a). The problem of consensus control for Cucker-Smale type models has been discussed in Caponigro et al. (2013); Bongini et al. (2015). A related problem is the design of controllers achieving a given formation, which has previously been addressed in Perea et al. (2009); Borzì and Wongkaew (2015). In this work we focus on the design of control laws enforcing consensus emergence. For this, we cast the consensus control problem in the framework of optimal control theory, for which an ad-hoc computational methodology is presented. We begin by considering a finite horizon control problem, in which the deviation of the population with respect to consensus is penalised along with a quadratic control term. In this smooth case we derive first-order optimality conditions, which are then numerically realised via a first-order gradient descent method: the so-called Barzilai-Borwein (BB) method. While the use of gradient descent methods is a standard tool for the numerical approximation of optimal control laws, see Borzí and Schulz (2011), the use of the BB method for large-scale agent-based models is relatively recent, Deroo et al. (2012), and we report on its use as a reliable method for optimal consensus control problems (OCCP) in nonlinear MAS.
While the control performance is satisfactory, our setting allows the controller to act differently on every agent at every instant. The question of a more parsimonious control design remains open. As an extension of the proposed methodology, we address the finite horizon OCCP with a non-smooth, sparsity-promoting control penalisation. While this control synthesis is sparse, acting on a few agents over a finite time frame, its numerical realisation is far more demanding due to the lack of smoothness in the cost functional. To circumvent this difficulty, the numerical realisation of the control synthesis resorts to the use of heuristics related to particle swarm optimisation (PSO), and to nonlinear model predictive control (NMPC). Finally, based on the works Carrillo et al. (2010a, b); Fornasier and Solombrino (2014); Bongini et al. (2017); Albi et al. (2017a), we discuss the resulting mean-field optimal control problem: that obtained as the number of agents $N$ tends to $\infty$ and the micro-state $(x_{i}(t),v_{i}(t))$ of the population is replaced by an agent density function $f(t,x,v)$. The rest of the paper is organized as follows: In Section 2 we revisit the Cucker-Smale model and results on consensus emergence and control. In Section 3 we address the OCCP via first-order necessary optimality conditions and its numerical realisation. Section 4 introduces the sparse OCCP and its approximation. Concluding, in Section 5 we present a mean-field modelling approach for OCCP when the number of agents is sufficiently large. 
2 The Cucker-Smale model and consensus emergence We consider a set of $N$ agents with state $(x_{i}(t),v_{i}(t))\in\mathbb{R}^{d}\times\mathbb{R}^{d}$ interacting under second-order Cucker-Smale type dynamics $$\frac{dx_{i}}{dt}=v_{i}\,,\quad\frac{dv_{i}}{dt}=\frac{1}{N}\sum_{j=1}^{N}a(\|x_{i}-x_{j}\|)(v_{j}-v_{i})\,,$$ (2.1) $$\mathbf{x}(0)=\mathbf{x}_{0}\,,\quad\mathbf{v}(0)=\mathbf{v}_{0}\,,$$ (2.2) where $a(r)$ is a communication kernel of the type $$a(r)=\frac{K}{(1+r^{2})^{\beta}}\,,$$ (2.3) and we use the notation $\mathbf{x}(t)=(x_{1}(t),\ldots,x_{N}(t))^{t}$, $\mathbf{v}(t)=(v_{1}(t),\ldots,v_{N}(t))^{t}\in\mathbb{R}^{dN}$. Both $\|\cdot\|$ and $\|\cdot\|_{2}$ are used indistinctly for the $\ell_{2}$-norm, while $\|\cdot\|_{1}$ stands for the $\ell_{1}$-norm. We will focus on the study of consensus emergence, i.e. the convergence towards a configuration in which $$v_{i}=\bar{v}=\frac{1}{N}\sum_{j=1}^{N}v_{j}\quad\forall i.$$ (2.4) Note that for a system of the type (2.1)-(2.2), a consensus configuration will remain as such without any external intervention, and positions will evolve in planar formation. The emergence of consensus as a self-organization phenomenon, induced either by a sufficiently cohesive initial configuration $(\mathbf{x}_{0},\mathbf{v}_{0})$ or by a strong interaction $a(r)$, is a problem of interest in its own right. To study consensus emergence, it is useful to define the bilinear form $B:\mathbb{R}^{dN}\times\mathbb{R}^{dN}\to\mathbb{R}$, $$B(\mathbf{w},\mathbf{v})=\frac{1}{2N^{2}}\sum_{i,j=1}^{N}\|w_{i}-v_{j}\|^{2}\,.$$ Note that for a population in consensus, $B(\mathbf{v},\mathbf{v})\equiv 0$.
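The decay of the velocity spread $V(t)=B(\mathbf{v}(t),\mathbf{v}(t))$ under the free dynamics (2.1)-(2.2) can be illustrated with a direct simulation. The following sketch uses forward-Euler time stepping and illustrative parameter values ($K=1$, $\beta=0.3$, $N=20$); these choices are assumptions for the example, not values taken from the paper:

```python
import numpy as np

def kernel(r, K=1.0, beta=0.3):
    # Communication kernel a(r) = K / (1 + r^2)^beta, cf. (2.3)
    return K / (1.0 + r**2)**beta

def cs_rhs(x, v):
    # Right-hand side of the Cucker-Smale system (2.1)
    N = len(x)
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)   # ||x_i - x_j||
    a = kernel(r)
    dv = (a[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    return v, dv

def velocity_spread(v):
    # V = B(v, v) = (1 / 2N^2) * sum_{i,j} ||v_i - v_j||^2
    N = len(v)
    diff = v[:, None, :] - v[None, :, :]
    return (diff**2).sum() / (2 * N**2)

rng = np.random.default_rng(0)
N, dim, dt, steps = 20, 2, 0.01, 2000
x, v = rng.normal(size=(N, dim)), rng.normal(size=(N, dim))
V0 = velocity_spread(v)
for _ in range(steps):                 # forward-Euler integration of (2.1)-(2.2)
    dx, dv = cs_rhs(x, v)
    x, v = x + dt * dx, v + dt * dv
```

Since $\beta=0.3\leq 1/2$, the unconditional consensus result applies, and $V(t)$ is expected to decay towards zero along the simulated trajectory.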
A solution $(\mathbf{x}(t),\mathbf{v}(t))$ to (2.1)-(2.2) tends to a consensus configuration if and only if $$V(t):=B(\mathbf{v}(t),\mathbf{v}(t))\rightarrow 0\quad\text{ as }\quad t\rightarrow+\infty.$$ Analogously, we define $X(t):=B(\mathbf{x}(t),\mathbf{x}(t))$. We briefly recall some well-known results on self-organized consensus emergence. {thm} Unconditional consensus emergence (Cucker and Smale (2007); Carrillo et al. (2010b)). Given an interaction kernel $a(r)=\frac{K}{(1+r^{2})^{\beta}}$ with $K>0$ and $0\leq\beta\leq\frac{1}{2}$, the Cucker-Smale dynamics (2.1)-(2.2) converge asymptotically to consensus, i.e. $V(t)\leq e^{-\lambda t}V(0)$ for some $\lambda>0$. {thm} Conditional consensus emergence (Ha and Liu (2009); Ha et al. (2010)). For $a(r)=\frac{K}{(1+r^{2})^{\beta}}$ with $K>0$ and $\frac{1}{2}\leq\beta$, if $$\sqrt{V(0)}<\int\limits_{\sqrt{X(0)}}^{+\infty}a(2\sqrt{N}s)\,ds\,,$$ then the Cucker-Smale dynamics (2.1)-(2.2) converge asymptotically to consensus. In this work we are concerned with inducing consensus through the synthesis of an external forcing term $\mathbf{u}(t)=(u_{1}(t),\ldots,u_{N}(t))^{t}$ in the form $$\frac{dx_{i}}{dt}=v_{i}\,,$$ (2.5) $$\frac{dv_{i}}{dt}=\frac{1}{N}\sum_{j=1}^{N}a(\|x_{i}-x_{j}\|)(v_{j}-v_{i})+u_{i}(t)\,,$$ (2.6) $$x_{i}(0)=x_{i,0}\,,\quad v_{i}(0)=v_{i,0}\,,\qquad i=1,\ldots,N\,,$$ (2.7) where the control signals $u_{i}\in{\mathcal{U}}=\{u:\mathbb{R}_{+}\longrightarrow U\}$ and $U$ is a compact subset of $\mathbb{R}^{d}$. 3 The optimal control problem and first-order necessary conditions In this section we entertain the problem of obtaining a forcing term $\mathbf{u}(t)$ which will either induce consensus on an initial configuration $(\mathbf{x}_{0},\mathbf{v}_{0})$ that would otherwise diverge, or which accelerates the rate of convergence for initial data that would naturally self-organise.
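Before proceeding, note that the conditional consensus criterion of Section 2 can be checked numerically from the initial data alone. A minimal sketch for $\beta=1$, where the tail integral of $a(2\sqrt{N}s)=K/(1+4Ns^{2})$ has a closed form in terms of $\arctan$; the choice $\beta=1$, the agent configurations, and the helper names are assumptions made for illustration:

```python
import numpy as np

def B(w, v):
    # Bilinear form B(w, v) from Section 2
    N = len(w)
    diff = w[:, None, :] - v[None, :, :]
    return (diff**2).sum() / (2 * N**2)

def satisfies_consensus_bound(x, v, K=1.0):
    # Sufficient condition sqrt(V(0)) < int_{sqrt(X(0))}^inf a(2 sqrt(N) s) ds,
    # evaluated for beta = 1 using the antiderivative (K / 2 sqrt(N)) arctan(2 sqrt(N) s)
    N = len(x)
    X0, V0 = B(x, x), B(v, v)
    tail = K * (np.pi / 2 - np.arctan(2 * np.sqrt(N * X0))) / (2 * np.sqrt(N))
    return np.sqrt(V0) < tail

N = 10
x = np.stack([np.linspace(0.0, 0.3, N), np.zeros(N)], axis=1)      # cohesive positions
v_slow = np.stack([np.linspace(0.0, 0.02, N), np.zeros(N)], axis=1)  # small spread
v_fast = np.stack([np.linspace(0.0, 5.0, N), np.zeros(N)], axis=1)   # large spread
cohesive_ok = satisfies_consensus_bound(x, v_slow)
dispersed_ok = satisfies_consensus_bound(x, v_fast)
```

For the cohesive configuration the bound holds and self-organized consensus is guaranteed; for the large velocity spread the sufficient condition fails, which is precisely the situation that motivates external control.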
Formally, for $T>0$ and given a set of admissible control signals ${\mathcal{U}}^{N}=[L^{\infty}(0,T;\mathbb{R}^{d})]^{N}$ for the entire population, we seek a solution to the minimisation problem $$\underset{\mathbf{u}(\cdot)\in{\mathcal{U}}^{N}}{\min}\ \mathcal{J}(\mathbf{u}(\cdot);\mathbf{x}_{0},\mathbf{v}_{0}):=\int_{0}^{T}\ell(\mathbf{v}(t),\mathbf{u}(t))\,dt\,,$$ (3.8) with the running cost defined as $$\ell(\mathbf{v},\mathbf{u}):=\frac{1}{N}\sum_{j=1}^{N}\left(\left\|\bar{v}-v_{j}\right\|^{2}+\gamma\left\|u_{j}\right\|^{2}\right),$$ (3.9) with $\gamma>0$, subject to the dynamics (2.5)-(2.7). 3.1 First-order optimality conditions While existence of a minimiser $\mathbf{u}^{*}$ of (3.8) follows from the smoothness and convexity properties of the system dynamics and the cost, the Pontryagin Minimum Principle of Pontryagin et al. (1962) yields first-order necessary conditions for the optimal control. Let $(p_{i}(t),q_{i}(t))\in\mathbb{R}^{d}\times\mathbb{R}^{d}$ be adjoint variables associated to $(x_{i},v_{i})$; then the optimality system consists of a solution $(\mathbf{x}^{*},\mathbf{v}^{*},\mathbf{u}^{*},\mathbf{p}^{*},\mathbf{q}^{*})$ satisfying (2.5)-(2.7) along with the adjoint equations $$-\frac{dp_{i}}{dt}=\frac{1}{N}\sum_{j=1}^{N}\frac{a^{\prime}\left(\left\|x_{j}-x_{i}\right\|\right)}{\left\|x_{j}-x_{i}\right\|}\left\langle q_{j}-q_{i},v_{j}-v_{i}\right\rangle\left(x_{j}-x_{i}\right)\,,$$ (3.10) $$-\frac{dq_{i}}{dt}=p_{i}+\frac{1}{N}\sum_{j=1}^{N}a\left(\left\|x_{j}-x_{i}\right\|\right)\left(q_{j}-q_{i}\right)-2\left(\bar{v}-v_{i}\right),$$ (3.11) $$p_{i}(T)=0\,,\quad q_{i}(T)=0\,,\qquad i=1,\ldots,N\,,$$ (3.12) and the optimality condition $$\mathbf{u}(t)=\underset{\mathbf{w}\in\mathbb{R}^{dN}}{\operatorname{argmin}}\sum_{j=1}^{N}\left(\left\langle q_{j},\frac{dv_{j}}{dt}\right\rangle+\gamma\left\|w_{j}\right\|^{2}\right)=-\frac{1}{2\gamma}\mathbf{q}^{t}\,.$$ (3.13) 3.2 A gradient-based realisation of the optimality system The adjoint system (3.10)-(3.13) is used to implement a gradient descent method for the numerical realisation of the optimal control law. It can be readily verified that the gradient of the cost $\mathcal{J}$ in (3.8) is given by $$\nabla\mathcal{J}\left(\mathbf{u}\right)=\mathbf{q}+2\gamma\mathbf{u}\,,$$ (3.14) obtained by differentiating the Lagrangian associated to (3.8) with respect to $\mathbf{u}$. With this expression for $\nabla\mathcal{J}$, the gradient iteration is presented in Algorithm 1. In Algorithm 1, the gradient is first obtained by integrating the forward-backward optimality system, and then the step $\alpha_{k}$ in 4) is chosen as in the Barzilai-Borwein method, see Barzilai and Borwein (1988). 3.3 Numerical experiments Figure 3.1 shows a comparison of the free two-dimensional dynamics of a sample initial condition $(\mathbf{x}_{0},\mathbf{v}_{0})$ and the system under an approximation to the optimal control $\mathbf{u}^{*}$ found with this algorithm. A 4th-order Runge-Kutta scheme was used to integrate the differential equations for the state and the adjoint with end time $T=10$, time step $dt=0.1$ (resulting in $N_{T}=51$ points for the time discretisation), and a stopping tolerance for the gradient norm of $10^{-3}$. The condition $(\mathbf{x}_{0},\mathbf{v}_{0})$ is chosen such that consensus would not be reached naturally; long-time numerics of the free system show that $V(t)$ converges to an asymptotic value around $\bar{V}=0.4$. In the controlled setting, consensus is reached rapidly, as can be seen from the trajectories themselves as well as from the fast convergence of the functional $V(t)$ to zero. Furthermore, the norm of the control also decreases in time as the system is steered towards and into the self-organisation region.
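The Barzilai-Borwein step used in Algorithm 1 can be sketched on a toy problem. In the sketch below the gradient is passed as a callable standing in for (3.14); in the full method each gradient evaluation requires a forward sweep of (2.5)-(2.7) and a backward sweep of (3.10)-(3.12). The quadratic toy cost is an assumption made purely for illustration:

```python
import numpy as np

def bb_gradient_descent(grad, u0, max_iter=200, tol=1e-8):
    # Gradient descent with the Barzilai-Borwein (BB) step size.
    u = u0.copy()
    g = grad(u)
    alpha = 1e-2                             # initial step, before BB data exists
    for _ in range(max_iter):
        u_new = u - alpha * g
        g_new = grad(u_new)
        if np.linalg.norm(g_new) < tol:
            return u_new
        s, y = u_new - u, g_new - g          # secant pair
        alpha = np.dot(s, s) / np.dot(s, y)  # BB1 step size
        u, g = u_new, g_new
    return u

# Toy strongly convex cost J(u) = ||u - b||^2, with gradient 2(u - b)
b = np.array([1.0, -2.0, 0.5])
u_star = bb_gradient_descent(lambda u: 2.0 * (u - b), np.zeros(3))
```

For this isotropic quadratic the BB step recovers the inverse curvature exactly, so the iteration converges after only a couple of steps; in the OCCP setting the same update is applied to the discretised control signal.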
4 The sparse consensus control and its approximation via heuristics In this section, we address the problem of enforcing sparsity on the optimal consensus strategy. As shown in Albi et al. (2017b); Bongini and Fornasier (2014); Caponigro et al. (2013); Kalise et al. (2017), one way to do so is to use as control cost the $\ell_{1}$-norm $\|\cdot\|_{1}$ in the minimization problem (3.8), instead of the standard squared $\ell_{2}$-norm $\|\cdot\|_{2}^{2}$. However, the choice of the non-differentiable control cost $\|\cdot\|_{1}$ gives rise to a non-smooth cost functional $\mathcal{J}$, for which gradient-based numerical solvers like the one presented in Section 3.2 are not directly suitable. To circumvent the non-smoothness of $\mathcal{J}$, we shall resort to a metaheuristic procedure known as particle swarm optimization (PSO). 4.1 Particle Swarm Optimization First introduced in Kennedy and Eberhart (1995); Shi and Eberhart (1998), PSO is a numerical procedure that solves a minimization problem by iteratively trying to improve a candidate solution. PSO generates a population of points, called particles, in the discrete control space of solutions $\mathcal{U}^{N}=\mathbb{R}^{d\times N\times N_{T}}$. Each particle is treated as a point in this $D=d\times N\times N_{T}$-dimensional space with coordinates $(z_{i1},\ldots,z_{iD})$, and the cost functional is evaluated at each of these points. The best previous position of each particle (i.e., the one for which the cost functional is minimal), denoted $(m_{i1},\ldots,m_{iD})$, is recorded, together with the index $h$ of the best particle among all the particles. We then let the particles evolve according to the system $$z_{ij}:=z_{ij}+w_{ij}\,,\qquad w_{ij}:=w_{ij}+c_{1}\xi(m_{ij}-z_{ij})+c_{2}\eta(m_{hj}-z_{ij})\,,$$ where $c_{1},c_{2}>0$ are two constant parameters and $\xi,\eta$ are two random variables with support in $[0,1]$.
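The update rule above translates almost line by line into code. In this generic sketch an inertia factor on $w_{ij}$ is added for numerical stability, in the spirit of the modified PSO of Shi and Eberhart (1998); the toy $\ell_{1}$-type cost and all parameter values are illustrative assumptions:

```python
import numpy as np

def pso(cost, D, n_particles=30, iters=200, c1=1.5, c2=1.5, inertia=0.7, seed=0):
    # Minimal PSO iteration following Section 4.1
    rng = np.random.default_rng(seed)
    z = rng.uniform(-5.0, 5.0, size=(n_particles, D))   # particle positions
    w = np.zeros((n_particles, D))                      # particle increments
    m = z.copy()                                        # per-particle best positions
    m_cost = np.array([cost(p) for p in z])
    h = int(np.argmin(m_cost))                          # index of the global best
    for _ in range(iters):
        xi = rng.uniform(size=(n_particles, D))
        eta = rng.uniform(size=(n_particles, D))
        w = inertia * w + c1 * xi * (m - z) + c2 * eta * (m[h] - z)
        z = z + w
        c = np.array([cost(p) for p in z])
        better = c < m_cost                             # update personal bests
        m[better], m_cost[better] = z[better], c[better]
        h = int(np.argmin(m_cost))
    return m[h], float(m_cost[h])

# Toy non-smooth cost mimicking an l1-type control penalty
best, best_cost = pso(lambda u: np.sum(np.abs(u - 1.0)), D=4)
```

Since only cost evaluations are used, the routine applies unchanged to the non-smooth functional $\mathcal{J}$ with $\|\cdot\|_{1}$ penalisation.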
PSO is a metaheuristic: it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. In particular, PSO does not use the gradient of the problem being optimized, which means it does not require the optimization problem to be differentiable; nonetheless, it typically yields a decrease of the cost functional. Notice that whenever the dimension of the control space $D$ is very large (that is, either $d$, $N$ or $N_{T}$ is large), the problem suffers from the curse of dimensionality, as the evaluation of $\mathcal{J}$ at all the particles and the subsequent search for the best position becomes prohibitively expensive. To mitigate this difficulty, we shall optimise within a nonlinear model predictive control (NMPC) loop with a short prediction horizon. 4.2 Nonlinear Model Predictive Control For a prediction horizon of $H$ steps, for $k=1,\ldots,N_{T}-H$ and a discrete time version of the dynamics (2.1)-(2.2), we minimise the following performance index $$\sum^{H}_{h=0}\frac{1}{N}\sum^{N}_{j=1}\bigg(\|\overline{v}^{k+h}-v^{k+h}_{j}\|_{2}^{2}+\gamma\|u^{k+h}_{j}\|^{r}_{r}\bigg),$$ (4.16) for some $\ell_{r}$-norm with $r\geq 1$, generating a sequence of controls $(\mathbf{u}^{k},\mathbf{u}^{k+1},\ldots,\mathbf{u}^{k+H})$ from which only the first term $\mathbf{u}^{k}$ is taken to evolve the dynamics from $k$ to $k+1$. The system’s state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and a new predicted state path. Although this approach is suboptimal with respect to the full time frame optimisation presented in Section 3, in practice it produces very satisfactory results. For $H=1$, the NMPC approach recovers an instantaneous controller, whereas for $H=N_{T}-1$ it solves the full time frame problem (3.8).
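The receding-horizon logic can be sketched independently of the inner optimiser. The sketch below replaces the PSO solver with brute-force enumeration over a small finite control set, and uses a toy unstable scalar system; both are assumptions made so the example stays self-contained:

```python
import itertools

def nmpc(x0, step, stage_cost, U, H, T_steps):
    # At each step k, minimise the H-step cost (cf. (4.16)) over open-loop
    # control sequences, apply only the first control, then re-sample the state.
    x, applied = x0, []
    for _ in range(T_steps):
        best_seq, best_cost = None, float("inf")
        for seq in itertools.product(U, repeat=H):   # brute-force horizon search
            xk, cost = x, 0.0
            for u in seq:                            # predict H steps ahead
                cost += stage_cost(xk, u)
                xk = step(xk, u)
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        u0 = best_seq[0]                             # apply only the first control
        applied.append(u0)
        x = step(x, u0)
    return x, applied

# Toy unstable scalar dynamics with an l1-type control penalty
step = lambda x, u: 1.1 * x + u
stage = lambda x, u: x**2 + 0.1 * abs(u)
xT, u_seq = nmpc(2.0, step, stage, U=(-1.0, 0.0, 1.0), H=3, T_steps=20)
```

Despite the unstable drift, the receding-horizon controller keeps the state near the origin; in the OCCP setting the enumeration is replaced by the PSO search over the $H$-step control block.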
Such flexibility is complemented with a robust behaviour, as the optimisation is re-initialized at every time step, allowing the controller to address perturbations along the optimal trajectory. For further references, see Mayne et al. (2000). 4.3 Numerical experiments We now report the results of the numerical simulations of (3.8) with $\|\cdot\|_{1}$ and $\|\cdot\|_{2}^{2}$ together with the PSO-NMPC setup described above. The aim is to check whether the optimal control obtained with the $\ell_{1}$-control cost is sparser than the one obtained with the $\ell_{2}$-norm. To do so, we shall compare their norms at each time, since a sparse control will be equal to 0 most of the time. Starting from an initial configuration $(\mathbf{x}_{0},\mathbf{v}_{0})$ that does not converge to consensus, we compare the effect of different NMPC horizons $H\in\{3,10\}$ and of different $\ell_{r}$-control costs for $r\in\{1,2\}$ in (4.16) on the optimal control strategy. 4.4 The case $H=3$ In this case, the controller optimizes $H=3$ periods ahead instead of the full time frame. Figure 4.2 shows the controlled dynamics of the agents with the optimal control obtained for $r=1$ (top) and $r=2$ (bottom). Both controls decisively improve the alignment behaviour with respect to the uncontrolled dynamics. Figure 4.3 shows the behaviour of the $V$ functional: for both controlled dynamics, the velocity spread $V(t)$ goes steadily to 0. To see how sparse the controls are, in Figure 4.4 we show, for each control strategy, the corresponding heat map, i.e., the matrix $\mathcal{H}$ such that $\mathcal{H}_{ik}$ contains the norm of the control acting on the $i$-th agent at step $k$: the stronger the control, the brighter the entry. We can notice that the heat map for $r=1$ is sparser than the one for $r=2$, being concentrated on a few bright spots. This corroborates the findings of Bongini and Fornasier (2014); Caponigro et al.
(2013), where the sparsifying power of an $\ell_{1}$-control cost was shown. 4.5 The case $H=10$ We now consider the case $H=10$. Since the effects of the controls on the dynamical system are the same as for the case $H=3$, we just show the heat maps of the controls to compare their sparsity, see Figure 4.5. We notice that the heat map of the $\ell_{1}$ control is less bright than the one for $r=2$, showing that the $\ell_{1}$-cost is again able to enforce sparsity better than the standard $\ell_{2}$-cost. The final black strip in the heat map indicates that the system has reached the consensus region and the optimal control is just the zero vector. Hence, in this particular case the control obtained with the $\ell_{1}$-cost is actually better than the $\ell_{2}$ control at steering the system towards a consensus configuration. 5 The Continuous Control Problem In this section we consider the continuous control problem that results in the limit of the discrete problem from Section 3 as $N\rightarrow\infty$. Formally, the forced Cucker-Smale dynamics (2.5)-(2.7) can be written continuously as a Vlasov-type transport equation $$\frac{\partial f}{\partial t}+\left(v\cdot\nabla_{x}\right)f+\nabla_{v}\cdot\left[\left(A(x,v)*f+u(t,x,v)\right)f\right]=0,$$ (5.17) where $A(x,v)=a(|x|)v$, $f:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}$ is a probability density for the state, and $u:[0,T]\times\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ is a forcing term. An equivalent minimisation problem can be posed: $$\underset{u\in{\mathcal{U}}}{\min}\ \mathcal{J}(u;f_{0}):=\int_{0}^{T}\ell(f,u)\,dt\,,$$ (5.18) for fixed $T>0$ and a running cost $\ell(f,u)$ given by the expression $$\int_{\mathbb{R}^{2d}}\left\|v-\int_{\mathbb{R}^{2d}}w\,df(y,w)\right\|^{2}+\gamma\left\|u(t,x,v)\right\|^{2}\,df(x,v).$$ (5.19) The solutions of the discrete control problem (3.8) converge to those of the continuous problem (5.17)-(5.19) in some sense, see Fornasier and Solombrino (2014).
This can be verified numerically by fixing an initial distribution $f_{0}$ and studying the sequence of solutions of the discrete problem with initial conditions sampled from said distribution; a subsequence is known to converge as $N\rightarrow\infty$. Besides the solution, the optimal value of the objective functional $\mathcal{J}^{*}_{N}$ is also expected to converge, which can be verified. Initial conditions without natural consensus were constructed by sampling $x$ from a superposition of two Gaussian distributions and letting $v=x$. The marginal distributions of $f_{0}$ in $x$ and in $v$ are thus both given by (here $\mathcal{N}(\mu,\Sigma)$ stands for the bivariate Normal distribution with mean vector $\mu$ and positive semi-definite covariance matrix $\Sigma$) $$\frac{1}{2}\left[\mathcal{N}((2,2),I)+\mathcal{N}((-2,2),I)\right],$$ (5.20) where $I$ is the identity matrix, see Figure 5.6. A sequence of such discrete problems was solved for several values of $N$. Figure 5.7 shows the comparison between the free and controlled trajectories for various values of $N$. The Runge-Kutta scheme was used to solve the differential equations for the state and the adjoint with end time $T=5$, time step $dt=0.1$ (resulting in $51$ points for the time discretisation), and a stopping tolerance of $10^{-2}$. Figure 5.8 shows the evolution of the optimal cost $\mathcal{J}^{*}_{N}$, which appears to be of order $\mathcal{O}(1)$, as expected for convergence as $N\rightarrow\infty$. Figure 5.9 shows the marginal distribution of $f(T)$ in $v$ for the free and forced settings on the same scale; the controlled case yields a singular distribution indicating consensus. Figure 5.10 shows a heat map of the optimal control $u_{i}^{*}(t)$. We observed that the average norm of the control is $\mathcal{O}(1)$ as $N\rightarrow\infty$; furthermore, the time at which the control becomes nearly zero is roughly constant for large $N$.
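The construction of the initial data from the mixture (5.20), with $v=x$, can be sketched as follows (the helper name and the sample size are illustrative):

```python
import numpy as np

def sample_initial_condition(N, seed=0):
    # Sample x from the Gaussian mixture (5.20) and set v = x,
    # producing initial data without natural consensus.
    rng = np.random.default_rng(seed)
    means = np.array([[2.0, 2.0], [-2.0, 2.0]])
    comp = rng.integers(0, 2, size=N)            # mixture component per agent
    x = means[comp] + rng.normal(size=(N, 2))    # identity covariance
    return x, x.copy()                           # v = x

x0, v0 = sample_initial_condition(100)
```

Feeding these samples to the discrete problem for increasing $N$ yields the sequence of solutions whose convergence is discussed above.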
Table 5.1 shows the evolution of the number of optimisation iterations (i.e. loops on Algorithm 1) as well as the computation CPU time in hours; notice that the number of iterations remains roughly constant, while the computation time scales quadratically in $N$. References Albi et al. (2017a) Albi, G., Choi, Y.P., Fornasier, M., and Kalise, D. (2017a). Mean-field control hierarchy. Appl. Math. Optim., 76(1), 93–175. Albi et al. (2017b) Albi, G., Fornasier, M., and Kalise, D. (2017b). A Boltzmann approach to mean-field sparse feedback control. IFAC-PapersOnLine, 50(1), 2898 – 2903. Barzilai and Borwein (1988) Barzilai, J. and Borwein, J.M. (1988). Two-Point Step Size Gradient Methods. IMA J. Numer. Anal., 8(1), 141–148. Bongini and Fornasier (2014) Bongini, M. and Fornasier, M. (2014). Sparse stabilization of dynamical systems driven by attraction and avoidance forces. Netw. Heterog. Media, 9(1), 1–31. Bongini et al. (2015) Bongini, M., Fornasier, M., and Kalise, D. (2015). (Un)conditional consensus emergence under perturbed and decentralized feedback controls. Discrete Contin. Dyn. Syst. Ser. A, 35(9), 4071–4094. Bongini et al. (2017) Bongini, M., Fornasier, M., Rossi, F., and Solombrino, F. (2017). Mean-field pontryagin maximum principle. J. Opt. Theory Appl., 175(1), 1–38. Borzí and Schulz (2011) Borzí, A. and Schulz, V. (2011). Computational Optimization of Systems Governed by Partial Differential Equations. Society for Industrial and Applied Mathematics. Borzì and Wongkaew (2015) Borzì, A. and Wongkaew, S. (2015). Modeling and control through leadership of a refined flocking system. Math. Models Methods Appl. Sci., 25(2), 255–282. Caponigro et al. (2013) Caponigro, M., Fornasier, M., Piccoli, B., and Trélat, E. (2013). Sparse stabilization and optimal control of the Cucker-Smale model. Math. Control Relat. Fields, 3, 447–466. Carrillo et al. (2010a) Carrillo, J.A., Fornasier, M., Rosado, J., and Toscani, G. (2010a). 
Asymptotic flocking dynamics for the kinetic Cucker-Smale model. SIAM J. Math. Anal., 42, 218–236. Carrillo et al. (2010b) Carrillo, J.A., Fornasier, M., Toscani, G., and Vecil, F. (2010b). Particle, Kinetic, and Hydrodynamic Models of Swarming. In G. Naldi, L. Pareschi, and G. Toscani (eds.), Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, Series: Modelling and Simulation in Science and Technology, 297–336. Birkhäuser, Boston, MA. Cucker and Smale (2007) Cucker, F. and Smale, S. (2007). Emergent behavior in flocks. IEEE Trans. Automat. Control, 52(5), 852–862. Deroo et al. (2012) Deroo, F., Ulbrich, M., Anderson, B.D.O., and Hirche, S. (2012). Accelerated iterative distributed controller synthesis with a barzilai-borwein step size. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 4864–4870. Fornasier and Solombrino (2014) Fornasier, M. and Solombrino, F. (2014). Mean-field optimal control. ESAIM Control Optim. Calc. Var., 20(4), 1123–1152. Ha et al. (2010) Ha, S.Y., Ha, T., and Kim, J.H. (2010). Emergent behavior of a Cucker-Smale type particle model with nonlinear velocity couplings. IEEE Trans. Automat. Control, 55(7), 1679–1683. Ha and Liu (2009) Ha, S.Y. and Liu, J.G. (2009). A simple proof of the Cucker-Smale flocking dynamics and mean-field limit. Commun. Math. Sci., 7(2), 297–325. Kalise et al. (2017) Kalise, D., Kunisch, K., and Rao, Z. (2017). Infinite horizon sparse optimal control. J. Opt. Theory Appl., 172(2), 481–517. Kennedy and Eberhart (1995) Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Proceedings of IEEE International Conference on Neural Networks IV, volume 1000, 1942–1948. IEEE. Mayne et al. (2000) Mayne, D.Q., Rawlings, J.B., Rao, C.V., and Scokaert, P.O. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789–814. Perea et al. (2009) Perea, L., Gómez, G., and Elosegui, P. (2009). 
Extension of the Cucker-Smale control law to space flight formations. AIAA Journal of Guidance, Control, and Dynamics, 32, 527–537. Pontryagin et al. (1962) Pontryagin, L.S., Boltyanskii, V.G., Gamkrelidze, R.V., and Mishchenko, E.F. (1962). The mathematical theory of optimal processes. Interscience Publishers John Wiley & Sons, Inc.  New York-London. Shi and Eberhart (1998) Shi, Y. and Eberhart, R. (1998). A modified particle swarm optimizer. In Evolutionary Computation Proceedings, 1998. IEEE World Congress on Computational Intelligence., The 1998 IEEE International Conference on, 69–73. IEEE.
The Mass Distribution of CL0939+4713 obtained from a ‘Weak’ Lensing Analysis of a WFPC2 image Carolin Seitz${}^{1}$, Jean-Paul Kneib${}^{2}$, Peter Schneider${}^{1}$ & Stella Seitz${}^{1}$ ${}^{1}$ Max-Planck-Institut für Astrophysik, Postfach 1523, D-85740 Garching, Germany. ${}^{2}$ Institute of Astronomy, Madingley Road, Cambridge CB3 0HA, UK. Abstract The image distortions of high-redshift galaxies caused by gravitational light deflection of foreground clusters of galaxies can be used to reconstruct the two-dimensional surface mass density of these clusters. We apply an unbiased parameter-free reconstruction technique to the cluster CL0939+4713 (Abell 851), observed with the WFPC2 on board the HST. We demonstrate that a single deep WFPC2 observation can be used for cluster mass reconstruction despite its small field of view and the irregular shape of the data field (especially for distant clusters). For CL0939, we find a strong correlation between the reconstructed mass distribution and the bright cluster galaxies, indicating that mass follows light on average. The detected anti-correlation between the faint galaxies and the reconstructed mass is most likely an effect of the magnification (anti) bias, which was detected previously in the cluster A1689. Because of the high redshift of CL0939 ($z_{d}=0.41$), the redshift distribution of the lensed, faint galaxies has to be accounted for in the reconstruction technique. We derive an approximate global transformation for the surface mass density which leaves the mean image ellipticities invariant, resulting in an uncertainty in the normalization of the mass.
From the non-negativity of the surface mass density, we derive lower limits on the mass inside the observed field of $0.75({h^{-1}_{50}\hbox{\thinspace\rm Mpc}})^{2}$ ranging from $M>3.6\times 10^{14}h^{-1}_{50}M_{\odot}$ to $M>6.3\times 10^{14}h^{-1}_{50}M_{\odot}$ for a mean redshift of $\left\langle z\right\rangle=1$ to $\left\langle z\right\rangle=0.6$ of the faint galaxy images with $R\in(23,25.5)$. However, we can break the invariance transformation for the mass using the magnification effect on the observed number density of the background galaxies. Assuming a mean redshift of $\left\langle z\right\rangle=0.8$ and a fraction of $x=15\%$ ($x=20\%$) of cluster galaxies in the observed galaxy sample with $R\in(23,25.5)$ we obtain for the mass inside the field $M\approx 5\times 10^{14}h^{-1}_{50}M_{\odot}$ ($M\approx 7\times 10^{14}h^{-1}_{50}M_{\odot}$), which corresponds to $M/L\approx 100h_{50}$ ($M/L\approx 140h_{50}$). 1. Introduction Since the pioneering work of Tyson, Valdes and Wenk (1990) it has been realized that the weak shearing effects that clusters imprint on images of faint background galaxies can be used to obtain the mass distribution of these lensing clusters (for a recent review on cluster lensing, see Fort & Mellier 1994; see also Kochanek 1990; Miralda-Escudé 1991). Kaiser & Squires (1993, hereafter KS) have derived an explicit expression for the two-dimensional surface mass density as a function of the shear (or tidal gravitational field) caused by the cluster, which in turn can be obtained from the distorted images of background galaxies. This inversion method has been applied to several clusters observed from the ground (Fahlman et al. 1994, Smail et al. 1995a, Kaiser et al. 1995), demonstrating the applicability of this new method to determine mass profiles and total mass estimates of clusters. The detection of sheared images far out in the cluster 0024+16 (Bonnet, Mellier & Fort 1994; Kassiola et al.
1994) shows that weak lensing can investigate previously unexplored regions in clusters. Recently, the KS inversion technique has been modified and generalized to account for strong lensing, as it should occur near the center of clusters (Schneider & Seitz 1995, Seitz & Schneider 1995a, Kaiser 1995), and for the finite data field defined by the CCD size (Schneider 1995, Kaiser et al. 1995, Bartelmann 1995, Seitz & Schneider 1995b, henceforth SS). In SS, a detailed quantitative comparison between the various inversion techniques has been made, and it was demonstrated that the inversion formula derived in SS is the most accurate of the unbiased ones. In particular, if the cluster mass distribution is significantly more extended than the data field (i.e. the CCD), the SS inversion formula is significantly more accurate than the other currently known inversion techniques. Such a situation generally occurs if the data are taken with the WFPC2 on board the Hubble Space Telescope (HST), owing to its fairly small field of view. Hence, if WFPC2 images are used for the reconstruction of the surface mass density of a cluster, it is necessary to use a finite-field inversion formula such as the one derived in SS. As was pointed out in Schneider & Seitz 1995, even then the mass density can be derived only up to a global invariance transformation, which is the mass-sheet degeneracy found by Gorenstein et al. (1988). The invariance transformation may be broken if the magnification effects are taken into account which changes the local number density of images of an appropriately chosen subset of faint galaxies (Broadhurst, Taylor & Peacock 1995), and which changes the size of galaxy images at fixed surface brightness – which is unchanged by gravitational light deflection (Bartelmann & Narayan 1995). 
In particular, this latter paper demonstrates that the inclusion of magnification effects may improve the cluster mass inversion considerably, and can also provide a unique means to determine the redshift distribution down to very faint magnitudes. In this paper we present the first application of a finite-field cluster inversion to the deep WFPC2 observation (10 orbits) of the distant cluster CL 0939+4713 retrieved from the HST archive. These data have been used for the study of the morphology of the cluster galaxies and the Butcher–Oemler effect by Dressler et al. (1994a). In Sect. 2 we briefly describe the data, and discuss the image identification and the determination of the image shapes, which is used for the estimate of the local image distortion. Sect. 3 briefly summarizes the inversion method, the results of which are presented in Sect. 4 and discussed in Sect. 5. 2. Observation and data analysis Fig. 1a. Observations of the cluster CL0939+4713 obtained with WFPC2 using the F702W filter and an exposure time of $21000\,$s. The side-length of the data field is $2\hbox{$.\!\!^{\prime}$}5$ ($1h^{-1}_{50}$ Mpc for an EdS universe with $H_{0}=50h_{50}$ km/s/Mpc; 1 arcsec on the sky corresponds to $6.51h_{50}^{-1}$ kpc). Dressler & Gunn (1992) propose the cluster center to be close to the 3 bright galaxies in the upper left corner of the lower left CCD. North is at the bottom, East to the right. Cl0939+4713 was observed in January 1994 with the WFPC2 camera on the Hubble Space Telescope (Dressler et al. 1994a). The observation consists of 10 single orbits of 2100 seconds each (a total exposure time of 5h50min) and is probably the deepest cluster observation taken with the HST/WFPC2. The exposures were divided into two groups of 5, with a shift of 10 pixels East and 20 pixels South between the two groups. After STScI pipeline processing, the data were shifted and combined to remove cosmic rays and hot pixels using standard STSDAS/IRAF routines.
A mosaic of the 3 WFC chips and the PC chip was constructed; however, owing to the smaller pixel size of the PC chip, and hence its brighter isophotal limit, we discarded the PC chip from the analysis. The image was then run through the SExtractor package (Bertin & Arnouts 1995) to detect objects and measure their magnitudes, mean isophotal surface brightness and second moments. All objects with isophotal areas larger than 12 pixels and higher than 2$\sigma$/pixel ($\mu_{F702W}=25.3{\rm\ mag/arcsec^{2}}$) were selected. For each object the unweighted first and second moments were computed to determine its center, size, ellipticity and orientation. To convert instrumental F702W magnitudes into standard $R$ we use the synthetic zero point and color corrections listed in Holtzman et al. (1995). For the color term we choose $(V-R)\simeq 0.6$, typical of a $z\sim 0.8$ late-type spiral. The color correction is then $+0.2$ mag, and remains small for other choices of the colour term. The typical photometric errors of our faintest objects, $R<26.5$, are $\delta R\sim$0.1–0.2. A neural network algorithm was used to identify stellar objects; $22$ of these were detected. A galaxy catalogue was then constructed with a total of $572$ galaxies down to $R=26.5$. Fig. 1a shows the full WFPC2 image of the cluster, and Fig. 1b a zoomed image of the region marked in Fig. 1a. A detailed inspection revealed an arc candidate and a likely gravitational pair in this central cluster region. These strong lensing features confirm that the cluster is over-critical and probably indicate its densest part. Spectroscopic observation of the bright pair (R=22.5, and R=22.9 for its counter-image candidate) will confirm or refute the lensing hypothesis. If it is indeed a gravitational pair, it will strongly constrain the mass distribution of the very central part of the cluster. Fig. 1b. A zoomed image of the region marked in Fig. 1a.
We find an arc candidate A0 and a likely gravitational pair P1 & P2 with the counter-image candidate P3. Fig. 2 shows the number vs. magnitude diagram of the galaxies detected in the field (solid line). These numbers are compared with the field galaxy counts in the R band from Smail et al. (1995b). It is clear that most of the galaxies with $R\in(19,22)$ are likely cluster members. Furthermore, there is a likely contamination from cluster members of $\sim 150$ objects down to $R=25.5$ within the WFC field; the cluster contamination is expected to be higher in the central part than in the outer part. Fig. 2. The number vs. magnitude diagram of all galaxies detected within the WFC field (solid line). The dotted line shows the number counts – rescaled to the area of the WFC field – from Smail et al. (1995b), which yield $N(R)\propto 10^{\gamma R}$, with $\gamma=0.32$ and normalization such that $N(R<27)=7.3\times 10^{5}$ per square degree. Assuming that the dotted line represents the counts of the faint galaxies, the dash-dotted histogram gives the number counts versus magnitude of the cluster galaxies. From the second moments of light $Q_{ij}$ we calculate for each galaxy image the (complex) ellipticity $$\chi={Q_{11}-Q_{22}+2{\rm i}Q_{12}\over Q_{11}+Q_{22}}=\left|\chi\right|{\rm e}^{2{\rm i}\vartheta}\ .\eqno(2.1a)$$ However, it is more convenient to work in terms of the ellipticity parameter ${\epsilon}$, which has the same phase $\vartheta$ as $\chi$, and modulus $$\left|{\epsilon}\right|={1-r\over 1+r}\quad{\rm with}\quad r=\sqrt{{1-\left|\chi\right|}\over{1+\left|\chi\right|}}\ .\eqno(2.1b)$$ We then use the image ellipticities ${\epsilon}_{k}$ of the galaxies at positions ${\bf\theta}_{k}$ to calculate the local mean image ellipticity on a grid ${\bf\theta}_{ij}$, $$\bar{\epsilon}({\bf\theta}_{ij})={\sum_{k=1}^{N_{\rm gal}}{\epsilon}_{k}\;u\left(\left|{\bf\theta}_{ij}-{\bf\theta}_{k}\right|\right)\over\sum_{k=1}^{N_{\rm gal}}u\left(\left|{\bf\theta}_{ij}-{\bf\theta}_{k}\right|\right)}\ ,\eqno(2.2)$$ with the weight factor $$u(d)=\exp(-d^{2}/s^{2})\;,$$ and a smoothing length $s$, which, unless noted otherwise, is chosen as $s=0\hbox{$.\!\!^{\prime}$}3$ (117 $h^{-1}_{50}$ kpc). The resulting map of $\bar{\epsilon}({\bf\theta})$ is shown in Fig. 3, using all images covering more than $12$ pixels (pixel size $0\hbox{$.\!\!^{\prime\prime}$}1$) and four different magnitude cuts for the galaxies: $R\in(24,25.5)$ for the upper left panel, $R\in(23,25.5)$ for the upper right, $R\in(22,25.5)$ for the lower left and $R\in(21,25.5)$ for the lower right. The corresponding numbers of galaxies used for constructing the shear maps are 226, 295, 343 and 383, respectively, so that the average number of galaxies within one smoothing length of a grid point ${\bf\theta}_{ij}$ is about 13, 17, 20 and 22. The cut at faint magnitudes was chosen so as not to be too strongly contaminated by the circularization effect of measuring small galaxies with poor signal-to-noise. We find that the “shear field” is quite robust under adding brighter galaxies to the sample: the direction of the local shear vector is almost unchanged, while its absolute value decreases on average for the brighter galaxy samples, especially in regions close to the cluster center. The reason for this is that the modulus of the expectation value of the mean image ellipticity is smaller for background galaxies closer to the cluster and zero for cluster and foreground galaxies, both leading to a decrease in the mean image ellipticities for the brighter galaxy samples.
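As a concrete illustration of Eqs. (2.1a,b) and (2.2), the short Python sketch below computes the complex ellipticities from measured second moments and the gaussian-weighted local mean ellipticity. This is an illustrative sketch, not the pipeline used for the paper; all function and variable names are our own, and the second moments are assumed to be given.

```python
import numpy as np

def ellipticity(q11, q12, q22):
    """Complex ellipticities chi and epsilon from second moments, Eqs. (2.1a,b)."""
    chi = (q11 - q22 + 2j * q12) / (q11 + q22)
    r = np.sqrt((1 - np.abs(chi)) / (1 + np.abs(chi)))
    # epsilon has the same phase 2*vartheta as chi, and modulus (1-r)/(1+r)
    eps = (1 - r) / (1 + r) * np.exp(1j * np.angle(chi))
    return chi, eps

def mean_ellipticity(grid_points, gal_pos, eps, s=0.3):
    """Gaussian-weighted local mean image ellipticity, Eq. (2.2).

    grid_points : (M, 2) grid positions; gal_pos : (N, 2) galaxy positions;
    eps : (N,) complex image ellipticities; s : smoothing length (arcmin).
    """
    d2 = np.sum((grid_points[:, None, :] - gal_pos[None, :, :])**2, axis=-1)
    u = np.exp(-d2 / s**2)            # weight factor u(d) = exp(-d^2/s^2)
    return (u @ eps) / u.sum(axis=1)  # weighted mean on each grid point
```

With a denser galaxy sample, a smaller smoothing length could be afforded; here, as in the text, the choice of `s` trades noise against resolution.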
The direction of the expectation value of the mean image ellipticities is not changed, since the mean image orientation of the background galaxies does not depend on their distance to the cluster, and since cluster and foreground galaxies show no preferred alignment at all. Fig. 3. The orientation and absolute value of the local mean image ellipticities $\bar{\epsilon}=\left|\bar{\epsilon}\right|\left(\cos 2\varphi+i\sin 2\varphi\right)$ of galaxies with $R\in(24,25.5)$ (upper left), $R\in(23,25.5)$ (upper right), $R\in(22,25.5)$ (lower left) and $R\in(21,25.5)$ (lower right). We exclude galaxies covering less than $12$ pixels (pixel size $0\hbox{$.\!\!^{\prime\prime}$}1$). We choose a smoothing length of $s=0\hbox{$.\!\!^{\prime}$}3$; as is clearly seen, the ‘coherence length’ of the shear pattern is larger than this smoothing length. The vectors displayed make an angle $\varphi$ with the x-axis, and a mean image ellipticity $\left|\bar{\epsilon}\right|=1$ would correspond to a vector of length $0\hbox{$.\!\!^{\prime}$}4$ 3. Method of Reconstruction In this section we briefly describe the reconstruction of the cluster surface mass density from the observed map of mean image ellipticities. Due to the high redshift of the cluster ($z_{d}=0.41$) we cannot assume that all source galaxies are at the same effective distance from the cluster, i.e. that their $D(z_{d},z)/D(z)$ is the same.
Therefore, we relate the critical surface mass density $$\Sigma_{\rm crit}(z)={c^{2}D(z)\over 4\pi GD(z_{d})D(z_{d},z)}$$ for a source at redshift $z$ to the critical surface mass density for a source at ‘infinity’ by defining $w(z)$ through $$w(z)\;\Sigma_{\rm crit}(z)=\lim_{z\to\infty}\Sigma_{\rm crit}(z)\ ,$$ for $z>z_{d}$, and $w(z)=0$ for $z\leq z_{d}$, and obtain for the dimensionless surface mass density $\kappa({\bf\theta},z)$ and the shear $\gamma({\bf\theta},z)$ $$\kappa({\bf\theta},z)=w(z)\,\kappa_{\infty}({\bf\theta})\quad,\quad\gamma({\bf\theta},z)=w(z)\,\gamma_{\infty}({\bf\theta})\quad.\eqno(3.1)$$ The form of $w(z)$ depends on the geometry of the universe; for an Einstein-de Sitter universe we have $$w(z)=\cases{0&for $z\leq z_{d}$;\cr{\sqrt{1+z}-\sqrt{1+z_{d}}\over\sqrt{1+z}-1}&for $z>z_{d}$.\cr}\eqno(3.2)$$ The following description of the reconstruction of the surface mass density is based on the simplifying assumption that the cluster is not critical, i.e., $(1-w\kappa_{\infty})^{2}-w^{2}\gamma_{\infty}^{2}>0$ for all sources. We point out, however, that all the mass maps shown in this paper have been calculated without this assumption, using the more complicated method described in Seitz & Schneider (1995c); since the reconstruction is much easier to describe for non-critical clusters, we restrict the description to that case. We also note that the results change only marginally if this assumption is introduced.
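Eq. (3.2), together with the moments of Eq. (3.4) below, is easily evaluated numerically. The following sketch assumes the parameterized redshift distribution of Eq. (3.8), introduced later in this section; the function names and the integration grid are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import gamma

def w_eds(z, zd=0.41):
    """Relative lensing strength w(z) for an Einstein-de Sitter universe, Eq. (3.2)."""
    z = np.asarray(z, dtype=float)
    w = (np.sqrt(1 + z) - np.sqrt(1 + zd)) / (np.sqrt(1 + z) - 1)
    return np.where(z > zd, w, 0.0)   # w = 0 for sources in front of the cluster

def p_s(z, z_mean=1.0, beta=1.0):
    """Source redshift distribution, Eq. (3.8); z0 fixed via <z> = z0*Gamma(4/b)/Gamma(3/b)."""
    z0 = z_mean * gamma(3 / beta) / gamma(4 / beta)
    return beta * z**2 / (gamma(3 / beta) * z0**3) * np.exp(-(z / z0)**beta)

# Moments <w>, <w^2> and f = <w^2>/<w>^2 of Eq. (3.4), by simple numerical integration
z = np.linspace(1e-4, 20.0, 200001)
dz = z[1] - z[0]
p = p_s(z)
w = w_eds(z)
w1 = np.sum(p * w) * dz        # <w>
w2 = np.sum(p * w**2) * dz     # <w^2>
f = w2 / w1**2
```

Since $0\leq w(z)<1$, one always has $\left\langle w^{2}\right\rangle<\left\langle w\right\rangle$, while $f\geq 1$ by the Cauchy–Schwarz inequality; both properties are easy sanity checks on the numerics.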
As described in Seitz & Schneider (1995c), the local expectation value of the image ellipticity can be approximated through $$\left\langle{\epsilon}\right\rangle\approx{-\left\langle w\right\rangle\gamma_{\infty}\over 1-f\left\langle w\right\rangle\kappa_{\infty}}\ ,\eqno(3.3)$$ if $\kappa_{\infty}\lesssim 0.8$ and if the mean redshift of the sources is $\left\langle z\right\rangle\gtrsim 0.7$ for this particular cluster redshift. In (3.3) we used the definitions $$\left\langle w^{k}\right\rangle=\int_{0}^{\infty}dz\;p_{s}(z)\;w^{k}(z)\qquad{\rm and}\qquad f={\left\langle w^{2}\right\rangle\over\left\langle w\right\rangle^{2}}\ ,\eqno(3.4)$$ where $p_{s}(z)$ is the redshift distribution of the sources. From (3.3) we find that the transformation $\kappa_{\infty}\to\kappa_{\infty}^{\prime}$ with $$\lambda\left(1-\kappa_{\infty}{\left\langle w^{2}\right\rangle\over\left\langle w\right\rangle}\right)=1-\kappa^{\prime}_{\infty}{\left\langle w^{2}\right\rangle\over\left\langle w\right\rangle}\ ,\eqno(3.5)$$ which implies that $\gamma^{\prime}_{\infty}=\lambda\gamma_{\infty}$, leaves the image ellipticities unchanged.
Therefore, using only image ellipticities for the reconstruction, $1-f\left\langle w\right\rangle\kappa_{\infty}$ can be derived only up to a multiplicative constant. Using the relation between the gradient of the surface mass density and the derivatives of the shear (Kaiser 1995), we obtain from (3.3), with $K({\bf\theta}):=\ln[1-f\left\langle w\right\rangle\kappa_{\infty}({\bf\theta})]$, $${\bf\nabla}K={1\over 1-f^{2}\left|\left\langle{\epsilon}\right\rangle\right|^{2}}\pmatrix{1-f\left\langle{\epsilon}_{1}\right\rangle&-f\left\langle{\epsilon}_{2}\right\rangle\cr -f\left\langle{\epsilon}_{2}\right\rangle&1+f\left\langle{\epsilon}_{1}\right\rangle\cr}\pmatrix{-f\left\langle{\epsilon}_{1}\right\rangle_{1}-f\left\langle{\epsilon}_{2}\right\rangle_{2}\cr -f\left\langle{\epsilon}_{2}\right\rangle_{1}+f\left\langle{\epsilon}_{1}\right\rangle_{2}\cr}=:{\bf u}({\bf\theta})\ ,\eqno(3.6)$$ with $\left\langle{\epsilon}\right\rangle=\left\langle{\epsilon}_{1}\right\rangle+{\rm i}\left\langle{\epsilon}_{2}\right\rangle$ and gradients $\left\langle{\epsilon}_{i}\right\rangle_{j}=\partial\left\langle{\epsilon}_{i}\right\rangle/\partial\theta_{j}$. Since the mean image ellipticity $\bar{\epsilon}$ provides an unbiased estimator of the expectation value $\left\langle{\epsilon}\right\rangle$, we set $\left\langle{\epsilon}\right\rangle\approx\bar{\epsilon}$. Then ${\bf u}({\bf\theta})$ can be determined from observations, for an assumed value of $f$, which characterizes the redshift distribution of the sources.
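A minimal numerical sketch of Eq. (3.6): given gridded maps of the smoothed ellipticity components $\left\langle{\epsilon}_{1}\right\rangle$, $\left\langle{\epsilon}_{2}\right\rangle$ and an assumed value of $f$, the vector field ${\bf u}({\bf\theta})$ can be evaluated with finite differences. The grid convention (axis 0 along $\theta_{1}$) and all names are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

def u_field(e1, e2, f, spacing=1.0):
    """Vector field u of Eq. (3.6) from gridded mean ellipticities <eps_1>, <eps_2>.

    e1, e2  : 2-d arrays on a regular grid (axis 0 = theta_1, axis 1 = theta_2)
    f       : ratio <w^2>/<w>^2 characterizing the source redshift distribution
    spacing : grid spacing used for the finite differences
    """
    # partial derivatives <eps_i>,j = d<eps_i>/d theta_j
    e1_1, e1_2 = np.gradient(e1, spacing)
    e2_1, e2_2 = np.gradient(e2, spacing)
    det = 1.0 - f**2 * (e1**2 + e2**2)   # prefactor 1 - f^2 |<eps>|^2
    b1 = -f * e1_1 - f * e2_2            # right-hand vector of Eq. (3.6)
    b2 = -f * e2_1 + f * e1_2
    u1 = ((1 - f * e1) * b1 - f * e2 * b2) / det
    u2 = (-f * e2 * b1 + (1 + f * e1) * b2) / det
    return u1, u2
```

For a spatially constant ellipticity field the derivatives, and hence ${\bf u}$, vanish, consistent with a mass sheet producing no gradient of $K$.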
From that, $K({\bf\theta})$ is obtained as $$K({\bf\theta})=\int_{\cal U}\;d^{2}\theta^{\prime}\;{\bf H}({\bf\theta},{\bf\theta}^{\prime})\cdot{\bf u}({\bf\theta}^{\prime})+\bar{K}\ .\eqno(3.7)$$ In Eq. (3.7), the kernel ${\bf H}({\bf\theta},{\bf\theta}^{\prime})$ is calculated for the data field $\cal U$ according to the method suggested by Seitz & Schneider (1995b). So far we have not calculated the kernel ${\bf H}$ for the irregularly-shaped WFPC2 field. Therefore, we reconstruct $K$ on two rectangular fields with side-lengths of about $2\hbox{$.\!\!^{\prime}$}5\times 1\hbox{$.\!\!^{\prime}$}25$ and $1\hbox{$.\!\!^{\prime}$}25\times 2\hbox{$.\!\!^{\prime}$}5$. Since the additive constant is free, we shift one of the resulting $K$-maps such that the mean of $K$ inside the overlapping region of the two data fields is the same. The resulting mass map is then obtained by joining these two independent reconstructions at the diagonal of the lower right CCD. This means that all mass maps shown here display a discontinuity at this diagonal; however, the jump across this line is always remarkably small, indicating the relative uncertainty of the reconstruction. The redshift distribution of the field galaxies down to the faint magnitude limits considered here is poorly known. Redshift surveys of considerably brighter galaxies indicate that the redshift distribution is fairly broad, and a high-redshift tail cannot be excluded (see Lilly 1993, Colless et al. 1993, and Cowie et al. 1995, and references therein).
We therefore adopt the same parameterization of $p_{s}(z)$ as used in Brainerd, Blandford & Smail (1995), $$p_{s}(z)={\beta z^{2}\over\Gamma\left(3/\beta\right)z_{0}^{3}}\exp\left(-\left(z/z_{0}\right)^{\beta}\right)\ .\eqno(3.8)$$ We consider different values of the parameter $\beta$, for which the mean redshift is given through $\left\langle z\right\rangle={z_{0}\Gamma(4/\beta)/\Gamma(3/\beta)}$. In Fig. 4 we show the distribution $p_{s}(z)$ for $\beta=1,1.5,3$ and $\left\langle z\right\rangle\in\left\{0.8,1.0,1.5\right\}$ (right panels), and the moments $\left\langle w\right\rangle$, $\left\langle w^{2}\right\rangle$ and $f=\left\langle w^{2}\right\rangle/\left\langle w\right\rangle^{2}$ as a function of the mean redshift $\left\langle z\right\rangle$ for the redshift $z_{d}=0.41$ of CL 0939+4713 (left panels). Kneib et al. (1995) used the cluster lens Abell 2218 to determine the mean redshift of the faint galaxies and found $\left\langle z\right\rangle\sim 0.8$ for $23.5<R<25.5$, which is above, but consistent with, the no-evolution expectation. Our parameterization of the redshift distribution should therefore be close to the true distribution of the faint galaxies. Fig. 4. The redshift distribution $p_{s}(z)$, defined in Eq. (3.8), is shown in the right panels for a mean redshift of $\left\langle z\right\rangle=0.8$ (top), $\left\langle z\right\rangle=1$ (middle) and $\left\langle z\right\rangle=1.5$ (bottom), for the parameters $\beta=1$ (solid line), $\beta=1.5$ (dotted line) and $\beta=3$ (dashed line). The left panels show the moments $\left\langle w\right\rangle$, $\left\langle w^{2}\right\rangle$ and the ratio $f=\left\langle w^{2}\right\rangle/\left\langle w\right\rangle^{2}$ defined in Eq. (3.4) as a function of the mean redshift $\left\langle z\right\rangle$ for a cluster redshift $z_{d}=0.41$ 4.
Results 4.1 The reconstructed mass distribution In Fig. 5 we show the reconstructed surface mass density for different mean redshifts $\left\langle z\right\rangle$, using in each case galaxies with $R\in(23,25.5)$ and the invariance transformation (3.5) such that the minimum of the resulting $\kappa_{\infty}$-map is roughly zero, to avoid unphysical negative surface mass densities. We see that for a mean redshift of about $\left\langle z\right\rangle=0.6-0.8$ of the faint galaxies in this magnitude interval, the cluster is quite strong and could indeed be (marginally) critical. We identify four main features: the two local maxima (the ‘first’ in the lower left quadrant of the field, and the ‘second’ at the boundary between the lower left and lower right quadrants), the overall increase of $\kappa_{\infty}$ towards the first maximum in the lower two quadrants, and a minimum in the upper right quadrant. Fig. 5. The reconstructed surface mass density of the cluster CL 0939+4713. For the reconstruction we use 295 galaxy images with $R\in(23,25.5)$ and assume that their redshift distribution is given through (3.8) with $\beta=1$ and $\left\langle z\right\rangle=0.6$ (upper left), $\left\langle z\right\rangle=0.8$ (upper right), $\left\langle z\right\rangle=1$ (lower left) or $\left\langle z\right\rangle=1.5$ (lower right). For all these reconstructions we use a smoothing length of $s=0\hbox{$.\!\!^{\prime}$}3$ in the weight function appearing in Eq. (2.2). Since we have no data on the upper left quadrant, and therefore cannot reconstruct the surface mass density there, we arbitrarily set $\kappa=0$ in this quadrant; this leads to the ‘funny’ shape in the level plots and the jumps in the corresponding contour plots, which are seen throughout this paper. The small discontinuity along the diagonal of the lower right quadrant is due to joining together two independent reconstructions, as described in the text Fig. 6.
The reconstructed surface mass density obtained for the galaxy samples corresponding to the shear fields shown in Fig. 3: $R\in(24,25.5)$ (upper left), $R\in(23,25.5)$ (upper right), $R\in(22,25.5)$ (lower left) and $R\in(21,25.5)$ (lower right). For all four reconstructions we assume that the redshift distribution is given through (3.8) with $\beta=1$ and $\left\langle z\right\rangle=0.8$ In Fig. 6 we show the mass density distributions corresponding to the four shear fields presented in Fig. 3, assuming the redshift distribution (3.8) with $\beta=1$ and $\left\langle z\right\rangle=0.8$. As expected from the shear fields, the mass distribution does not change dramatically. We find an overall decrease in the mass density for the brighter galaxy samples from $R\in(23,25.5)$ to $R\in(21,25.5)$. However, the faintest sample with $R\in(24,25.5)$ gives a mass distribution with a slightly smaller maximum than that of $R\in(23,25.5)$. We believe this is not significant and may simply reflect the smaller number of images used. Fig. 7. The reconstructed surface mass density for different values of the smoothing length $s$, with $\left\langle z\right\rangle=1$ and $\beta=1$. For the upper left panel we use $s=0\hbox{$.\!\!^{\prime}$}2$, for the upper right $s=0\hbox{$.\!\!^{\prime}$}25$, for the lower left $s=0\hbox{$.\!\!^{\prime}$}35$ and for the lower right $s=0\hbox{$.\!\!^{\prime}$}4$. Note that the main features are common to all mass distributions shown To investigate the stability of the reconstructed mass distribution, we repeat the reconstruction for the same parameters $\left\langle z\right\rangle=1$ and $\beta=1$ as used for the reconstruction shown in the lower left panel of Fig. 5 ($s=0\hbox{$.\!\!^{\prime}$}3$), but vary the smoothing length. The results are shown in Fig. 7 for $s=0\hbox{$.\!\!^{\prime}$}2$ (upper left), $s=0\hbox{$.\!\!^{\prime}$}25$ (upper right), $s=0\hbox{$.\!\!^{\prime}$}35$ (lower left) and $s=0\hbox{$.\!\!^{\prime}$}4$ (lower right).
We find that, independent of the smoothing length, all main features can be recovered. Obviously, a smoothing length of $s=0\hbox{$.\!\!^{\prime}$}2$ gives too noisy a reconstruction, whereas $s=0\hbox{$.\!\!^{\prime}$}4$ may smooth out too much of the structure. From visual inspection we decided to use a smoothing length of $s\approx 0\hbox{$.\!\!^{\prime}$}3$ in the remainder of this paper. We note that a fixed smoothing length is not necessarily the best choice; a smoothing length adapted to the local signal strength may be more appropriate. Such a local adaptation can be objectively controlled with local $\chi^{2}$-statistics, or by using regularized maximum-likelihood inversion techniques (Bartelmann et al. 1995). As a further check of the stability and reliability of the reconstructed mass distribution, we perform a bootstrap analysis: we use the data set consisting of the position vectors and ellipticities of the $N_{\rm gal}=295$ faint background galaxies with $R\in(23,25.5)$ and generate a number of synthetic data sets by drawing $N_{\rm gal}$ galaxies at a time with replacement from the original data set. For each of the synthetic data sets we perform the mass reconstruction. Mass distributions from three different bootstraps are shown in the upper left, upper right and lower left panels of Fig. 8. Taking into account that in the bootstrap analysis on average a fraction $1/e\approx 37\%$ of all galaxies is not used at all, the fact that the main features are still recovered increases our confidence in the reconstruction. The average mass density of 30 bootstraps, shown in the lower right panel of Fig. 8, is very similar to the mass distribution shown in the lower left of Fig. 5, where all galaxies and the same smoothing length $s=0\hbox{$.\!\!^{\prime}$}3$ are used.
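The bootstrap procedure just described can be sketched as follows; `reconstruct` stands in for the full finite-field mass inversion, which is of course far more involved than anything shown here, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility (our choice)

def bootstrap_maps(positions, ellipticities, reconstruct, n_boot=30):
    """Draw N_gal galaxies with replacement and redo the mass reconstruction
    for each synthetic data set; return the average map and all realizations."""
    n_gal = len(positions)
    maps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_gal, size=n_gal)  # sampling with replacement
        maps.append(reconstruct(positions[idx], ellipticities[idx]))
    return np.mean(maps, axis=0), maps
```

The fraction of galaxies absent from a single realization is $(1-1/N)^{N}\to 1/e$, the $\approx 37\%$ quoted in the text; the scatter between realizations then gauges the noise of the reconstruction.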
Comparing the mass reconstructions obtained from different bootstrapping realizations, one can see that the relative variations are considerably larger near the boundary of the data field. This is due to the fact that a point on the boundary has fewer neighboring galaxies, so that less information on the local shear is taken into account there. We want to stress, however, that this is a ‘random’ noise component, and not a systematic boundary effect. Fig. 8. Three mass distributions (upper left and right, lower left) resulting from different bootstrapping realizations (see text) for $\left\langle z\right\rangle=1$, $\beta=1$ and $s=0\hbox{$.\!\!^{\prime}$}3$. The lower right panel shows the average of 30 bootstrapping mass distributions 4.2 Correlation between mass and light We want to compare the reconstructed mass distribution with the light distribution of different samples of galaxies. For this we calculate the gaussian-smoothed light distribution via $$\sigma_{\rm light}({\bf\theta})={\sum_{k}\exp\left(-{\left|{\bf\theta}-{\bf\theta}_{k}\right|^{2}\over s^{2}}\right)\;10^{-0.4\left(m_{k}-20\right)}\over\int_{\cal U}d^{2}\theta^{\prime}\exp\left(-{\left|{\bf\theta}-{\bf\theta}^{\prime}\right|^{2}\over s^{2}}\right)}\ ,\eqno(4.1)$$ where $\cal U$ is the data field (i.e. the three quadrants), ${\bf\theta}_{k}$ and $m_{k}$ are the positions and magnitudes of the galaxies used, and a smoothing scale of $s=0\hbox{$.\!\!^{\prime}$}3$ is used. The denominator in (4.1) corrects for boundary effects. In Fig. 9a we show the light distribution of all galaxies [roughly $r\in(17,23)$] detected by Dressler & Gunn (1992) on a field of $4\hbox{${}^{\prime}$}\times 4\hbox{${}^{\prime}$}$. Comparing this with the mass distribution shown in Figs.
5 and 6, we detect a remarkable correlation: the position of the maximum in the mass density corresponds reasonably well with the position of the maximum in the light distribution, which is located approximately where Dressler & Gunn (1992) proposed the cluster center to be. The secondary mass maximum corresponds to a group of bright galaxies. It is more prominent in the light than in the mass distribution, and is displaced slightly to the left relative to the position of the secondary maximum in the mass. The minimum of the mass distribution corresponds to a region where very few bright galaxies are observed. Dressler et al. (1994b) studied the morphology of the bright (cluster) galaxies with the HST (WFPC1); we show in Fig. 9b the light distribution of their identified E/S0 galaxies, tracing the old cluster galaxy population. The position of the secondary maximum in this light distribution corresponds better with the position of the secondary mass maximum, and the correspondence with the other features is equally good. Hence we conclude that there is a correlation between the reconstructed mass distribution and the light distribution of the bright galaxies, which are mostly cluster galaxies. Fig. 9. (a) The gaussian-smoothed light distribution defined through Eq. (4.1) of all galaxies [roughly with $r\in(17,23)$] detected by Dressler & Gunn (1992) on a field of $4\hbox{${}^{\prime}$}\times 4\hbox{${}^{\prime}$}$. We use a smoothing length $s=0\hbox{$.\!\!^{\prime}$}3$. (b) The gaussian-smoothed light distribution of all E/S0 galaxies identified in the field of about $2\hbox{$.\!\!^{\prime}$}5\times 2\hbox{$.\!\!^{\prime}$}5$ [WFPC1, Dressler et al. (1994b)]. The area covered by the HST (WFPC2) observations is indicated by the solid lines To investigate the correlation between mass and light more quantitatively, we calculate from each mass distribution $\kappa_{\infty}({\bf\theta})$ obtained in a bootstrap realization (see Sect.
4.1) the number $$V:={1\over N}\sum_{\rm galaxies}\bigl[\kappa_{\infty}({\bf\theta}_{\rm galaxy})-\left\langle\kappa_{\infty}\right\rangle\bigr]\ ,\eqno(4.2)$$ for different samples of $N$ galaxies, where $\left\langle\kappa_{\infty}\right\rangle$ is the average of $\kappa_{\infty}$ over our data field ${\cal U}$. If the galaxies were randomly distributed, the expectation value of $V$ would be zero, whereas a positive (negative) correlation of the galaxies with the reconstructed mass density is indicated by $V>0$ ($V<0$). Fig. 10. (a): The normalized distribution $p(V)$ of the quantity $V$ defined in Eq. (4.2), calculated from reconstructed mass distributions for $1000$ bootstrap data sets drawn with replacement from the observed one [$R\in(23,25.5)$]. The solid curve shows the distribution for all galaxies detected from the WFPC2 observations, the dotted curve for all galaxies with $R\in(24,25.5)$, the long-dashed curve for all galaxies detected in the two right CCD frames with $R\in(24,25)$, and the dashed curve for the E/S0 galaxies identified by Dressler et al. (1994b). (b): The mean correlation coefficient $\left\langle V\right\rangle$ for the different galaxy samples chosen. The tilde indicates that the subsample has no well-determined flux threshold From $1000$ bootstrap realizations we find the distributions $p(V)$ shown in Fig. 10a for the E/S0 galaxies identified by Dressler et al. (1994b) (DG: E/S0: dashed curve), all galaxies detected from the WFPC2 data regardless of their size (solid curve) and all galaxies with $R\in(24,25.5)$ (dotted curve) – also regardless of size. Clearly, a strong positive correlation between the reconstructed mass and the DG:E/S0 galaxies is detected. Next, a weaker positive correlation between ‘all’ galaxies and the mass distribution, and an anti-correlation between the mass distribution and the faint galaxies [$R\in(24,25.5)$] is visible.
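For a gridded $\kappa_{\infty}$-map, the statistic $V$ of Eq. (4.2) amounts to a few lines; pixel indexing and names below are illustrative, not from the paper.

```python
import numpy as np

def correlation_v(kappa_map, galaxy_pix, kappa_mean=None):
    """Mass-light correlation statistic V of Eq. (4.2).

    kappa_map  : reconstructed surface mass density on a grid
    galaxy_pix : (N, 2) integer (row, col) pixel indices of a galaxy sample
    """
    if kappa_mean is None:
        kappa_mean = kappa_map.mean()  # <kappa_infty> over the data field
    rows, cols = galaxy_pix[:, 0], galaxy_pix[:, 1]
    # V > 0: sample traces the mass; V < 0: anti-correlation
    return np.mean(kappa_map[rows, cols] - kappa_mean)
```

A sample placed symmetrically about the field mean gives $V=0$, matching the expectation for randomly distributed galaxies.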
This anti-correlation appears surprising at first sight, as certainly some of the faint galaxies belong to the cluster, and, as we have argued above, the cluster galaxies are positively correlated with the mass. To investigate this point further, we have calculated the distribution of $V$ for the same magnitude interval, but leaving out the lower left CCD, where the contribution from cluster galaxies is expected to be strongest. The resulting distribution is also plotted in Fig. 10a, indicated with an asterisk; it shows an even stronger anti-correlation. In Fig. 10b we show the mean correlation coefficient $\left\langle V\right\rangle$ from $1000$ simulations as a function of the magnitude range of the galaxies. We find that $\left\langle V\right\rangle$ is strongly correlated with the faintness of the magnitude interval $(m_{1},m_{2})$ chosen. It decreases towards the fainter samples and eventually becomes negative. This is due to the larger fraction of background galaxies contributing to the counts in $(m_{1},m_{2})$ for fainter slices. We now turn to a possible explanation for the anti-correlation of the faint galaxies with the reconstructed surface mass density: the locally observed number density $n_{L}(>S)$ of lensed background galaxies with flux larger than $S$ is related to the unlensed number density $n_{0}(>S)$ through the local magnification $\mu$ caused by the cluster, $$n_{L}(>S,{\bf\theta})={1\over\mu({\bf\theta})}\;n_{0}\left(>{S\over\mu({\bf\theta})},{\bf\theta}\right)\ ,\eqno(4.3)$$ where $$\mu({\bf\theta})=\int_{0}^{\infty}dz\;p_{s}(z)\;{1\over\left|[1-w(z)\kappa_{\infty}({\bf\theta})]^{2}-w^{2}(z)\gamma_{\infty}^{2}({\bf\theta})\right|}\ ,\eqno(4.4)$$ is the redshift-averaged local magnification, weighted by the redshift distribution of the galaxies.
The first factor in (4.3) is due to the increase of the solid angle, whereas the argument of $n_{0}$ indicates that a magnified source can be ‘intrinsically’ fainter by a factor $\mu$ and still be included in a flux-limited sample. Which of the two competing effects wins depends on the slope of the source counts $n_{0}(>S)$. The observed galaxy counts in the R band show that they are well fitted by $N(R)\propto 10^{\gamma R}$ with $\gamma=0.32$. Therefore, we obtain from Eq. (4.3) $${n_{L}(>S)\over n_{0}(>S)}=\mu^{2.5\gamma-1}\ .\eqno(4.5)$$ The magnification (see also Broadhurst, Taylor & Peacock 1995, hereafter BTP) can thus be obtained via $$\mu=\left[{n_{L}(>S)\over n_{0}(>S)}\right]^{\alpha}\qquad{\rm with}\qquad\alpha={1\over 2.5\gamma-1}\ .\eqno(4.6)$$ For $\gamma=0.32$, the exponent in Eq. (4.6) is $\alpha=-5$, and a suppression of background galaxies is expected in regions of high magnification, i.e., of high surface mass density. In Fig. 11 we show the gaussian-smoothed number density [defined as in (4.1), but without flux weighting] of the faint galaxies with $R\in(24,25.5)$. We see a local maximum where we detect the minimum of the mass, indicating that we have found the expected anti-correlation. However, we also find two local maxima in the number density of the faint objects where we detect the maxima of the mass. Note that we have not corrected the faint galaxy density for the occupation of some CCD area by bright (cluster) galaxies, which is of course strongest near the cluster center; this means that the contribution of cluster members to the faint galaxy counts is slightly stronger than indicated by comparing Fig. 11 with Fig. 9. We thus conclude that a non-negligible fraction of the faint galaxies are cluster members, as also follows from Fig. 2. Fig. 11. The gaussian-smoothed number density of the faint galaxies with $R\in(24,25.5)$.
We use a smoothing length $s=0\hbox{$.\!\!^{\prime}$}3$. Assuming that the cluster galaxies have an average correlation coefficient $\left\langle V\right\rangle_{c}>0$ with the mass distribution, independent of the magnitude of the galaxies, and that the background galaxies have an average correlation coefficient $\left\langle V\right\rangle_{b}<0$, again independent of their magnitudes, one can then derive the fraction $x$ of cluster galaxies in a magnitude slice $(m_{1},m_{2})$ by measuring $$\left\langle V\right\rangle_{m_{1},m_{2}}=x\left\langle V\right\rangle_{c}+(1-x)\left\langle V\right\rangle_{b}\ .$$ Using $\left\langle V\right\rangle=\left\langle V\right\rangle_{c}$ from the galaxies with $R\in(18,22)$ and $\left\langle V\right\rangle_{b}$ from the galaxies with $R\in(24,25.5)$ (the starred sample excluding the lower left CCD), we estimate the fraction of cluster galaxies to be $83\%$ for the ‘DG E/S0’ sample, $76\%$ for the DG sample, $60\%$ for the galaxies with $R\in(18,24)$, $11\%$ for $R\in(23,25.5)$ and $5\%$ for $R\in(24,25.5)$. Of course these values are crude estimates only, but they do not appear unreasonable. To summarize this subsection, the correlation of bright galaxies with the reconstructed surface mass density shows that in this cluster ‘mass follows light’ on average. Hence, overdensities of bright galaxies correspond to local maxima in the projected mass density. The significant anti-correlation of faint galaxies with the reconstructed mass profile is most likely an effect of the magnification (anti-)bias, which has been pointed out by Broadhurst, Taylor & Peacock (1995) and which was detected in the cluster A1689 (Broadhurst 1995). 4.3 Limits on the mass inside the data field Requiring that the surface mass density cannot be negative, one can obtain a lower limit on the mass inside the data field by applying the invariance transformation (3.5) such that the minimum of the resulting $\kappa_{\infty}$-map is zero. 
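The lower-limit construction just described can be sketched numerically. This is a minimal sketch only: the $\kappa_{\infty}$ map is synthetic, the critical surface density and pixel area are assumed numbers, and only the additive freedom of the invariance transformation (3.5) is used.

```python
import numpy as np

# Synthetic stand-in for the reconstructed kappa_infinity map (the real
# analysis uses the inversion result); shape and amplitudes are assumptions.
rng = np.random.default_rng(0)
kappa = 0.3 + 0.2 * rng.standard_normal((64, 64))

# Fix the normalization conservatively: shift the map so that its minimum
# vanishes (only the additive freedom of the transformation is used here).
kappa_min = kappa - kappa.min()

# Lower limit on the projected mass: sum of kappa times the critical
# surface density times the pixel area; both numbers below are assumed.
sigma_crit = 2.8e15                # M_sun Mpc^-2, hypothetical
pixel_area = (1.0 / 64.0) ** 2     # Mpc^2, for a ~1 Mpc side length field
M_min = kappa_min.sum() * sigma_crit * pixel_area
```

Any other allowed normalization only raises the mass, which is why this shift yields a lower limit.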
Using the galaxies with $R\in(23,25.5)$, we find as a lower bound on the total mass inside the data field (side length about $1$ Mpc $h^{-1}_{50}$) $M/(10^{14}h_{50}^{-1}M_{\odot})\geq 6.3$ ($4.3,3.6,2.8$) for a mean redshift $\left\langle z\right\rangle=0.6$ ($0.8,1.0,1.5$). These limits depend only slightly on the actual form of the assumed redshift distribution, as shown in Fig. 12. Fig. 12. Lower limits $M_{\rm min}$ on the total mass inside the data field in units of $M_{14}=10^{14}h_{50}^{-1}M_{\odot}$ as a function of the assumed mean redshift $\left\langle z\right\rangle$ (left) or the mean $\left\langle w\right\rangle$ (right) of the images used for the reconstruction, assuming the redshift distribution (3.8). Crosses show the results for $\beta=1$, triangles for $\beta=1.5$ and dashes for $\beta=3$. The smoothing length is $s=0\hbox{$.\!\!^{\prime}$}3$. The most conservative upper limit on the mass we can give is $M<10^{15}h^{-1}_{50}M_{\odot}$, because this corresponds to $\left\langle\kappa_{\infty}\right\rangle=1$ and would most probably produce several giant arcs, which are not observed. Using Eq. (4.6), one can in principle derive the local magnification $\mu({\hbox{$\bf\theta$}})$ and could therefore break the global invariance transformation (3.5) by measuring $\mu({\hbox{$\bf\theta$}})$ at one particular point in the cluster. In practice, however, we face the following difficulties: (1) since lensing effects are nowhere weak on the whole field, we have no measurement of $n_{0}(>S)$; (2) we have no colour information for the galaxies and therefore cannot distinguish well between faint cluster galaxies and lensed background galaxies; (3) possible clustering of background objects and confusion with cluster member galaxies lead to high noise in the estimate of $\mu({\hbox{$\bf\theta$}})$ and a poor resolution. Therefore, we cannot derive an accurate high-resolution map of the magnification from Eq. (4.6). 
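The noisiness of the estimator in Eq. (4.6) is easy to see numerically: with the observed slope $\gamma=0.32$ the exponent is $\alpha=-5$, so small fluctuations in the counts ratio translate into large changes in $\mu$. The counts ratio below is a made-up number for illustration.

```python
# Magnification from the counts ratio, inverting Eq. (4.5):
# mu = (n_L / n_0)^alpha with alpha = 1 / (2.5*gamma - 1).
gamma = 0.32                         # observed slope of the R-band counts
alpha = 1.0 / (2.5 * gamma - 1.0)    # = -5 for this slope

def magnification(counts_ratio):
    """Local magnification estimated from n_L(>S) / n_0(>S)."""
    return counts_ratio ** alpha

# A hypothetical 10% depletion of the counts already implies mu ~ 1.7,
# illustrating the steep (alpha = -5) sensitivity of the estimator.
mu = magnification(0.9)
```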
Nevertheless, we can use the magnification (anti-)bias to derive (crude) estimates of the total mass inside the data field. Crucial for this is the assumption that the unlensed number counts $n_{0}(>S)$ can be taken from the literature. We use the amplitude and slope given in Smail et al. (1995b), which yields for the data field $\cal U$ and galaxies with $R\in(23,25.5)$ $N_{0}(23,25.5)=278$. The observed number $N^{\rm obs}(23,25.5)=N^{FG}+N^{CG}=295$ is a sum of cluster galaxies (CG) and galaxies belonging to the faint galaxy field population (FG). Now, we assume that a fraction $x$ of the observed galaxies $N^{\rm obs}$ are cluster galaxies and obtain $$\left\langle\mu^{2.5\gamma-1}\right\rangle_{\cal U}={N^{FG}\over N_{0}}={(1-x)N^{\rm obs}\over N_{0}}\ .$$ (4.7) For $x=0.15$ ($0.2$) we obtain $\left\langle\mu^{2.5\gamma-1}\right\rangle_{\cal U}=0.849$ ($0.796$). We perform the reconstruction assuming a value of $\lambda$, or equivalently, a value of $\left\langle\kappa_{\infty}\right\rangle$, calculate from the resulting mass map $\kappa_{\infty}$ and the shear map $\gamma_{\infty}$ the local expectation value of the magnification (4.4) of the sources, and finally average $\mu^{2.5\gamma-1}$ over the field $\cal U$ to obtain $\left\langle\mu^{2.5\gamma-1}\right\rangle_{\cal U}$. Next, we search for that value of $\left\langle\kappa_{\infty}\right\rangle$ which gives the value of $\left\langle\mu^{2.5\gamma-1}\right\rangle_{\cal U}$ corresponding to a certain fraction $x$ of cluster galaxies. In Fig. 13 we show the mass estimates for $x=0.15$ (solid curve) and $x=0.2$ (dotted curve) as a function of the assumed mean redshift of the galaxies used for the reconstruction. For comparison we show again the minimum mass (dashed curve) as shown in Fig. 12 for $\beta=1$. 
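The search for $\left\langle\kappa_{\infty}\right\rangle$ described above amounts to one-dimensional root finding. The sketch below uses synthetic maps, a constant shear, a single source plane instead of the redshift average (4.4), and an assumed target value for the field-averaged bias; all of these are stand-ins for the real reconstruction.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the reconstructed maps; a constant shear and a
# single source plane replace the redshift-averaged Eq. (4.4) for brevity.
kappa0 = 0.25 + 0.05 * rng.standard_normal((32, 32))
shear = 0.10
slope = 0.32      # count slope gamma entering the exponent 2.5*gamma - 1
target = 0.90     # assumed <mu^(2.5*gamma-1)> inferred from the counts

def mean_bias(offset):
    """Field-averaged mu^(2.5*gamma-1) after shifting <kappa_infinity>."""
    kappa = kappa0 + offset
    mu = 1.0 / np.abs((1.0 - kappa) ** 2 - shear ** 2)
    return np.mean(mu ** (2.5 * slope - 1.0))

# On this bracket the map stays subcritical and mean_bias decreases
# monotonically with the offset, so a simple bisection fixes the
# normalization of the mass map.
lo, hi = -0.2, 0.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_bias(mid) > target:
        lo = mid
    else:
        hi = mid
offset = 0.5 * (lo + hi)
```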
From these lower limits on the mass we conclude that the fraction of cluster galaxies in the galaxy sample with $R\in(23,25.5)$ is $x\gtrsim 0.1$. Fig. 13. The mass $M$ inside the data field in units of $M_{14}=10^{14}h^{-1}_{50}M_{\odot}$ as a function of the assumed mean redshift $\left\langle z\right\rangle$ (left) or the mean $\left\langle w\right\rangle$ (right) of the images used for the reconstruction, assuming the redshift distribution (3.8) with $\beta=1$. The solid curve shows the total mass in the field $\cal U$ assuming that $x=15\%$ of all galaxies detected within $R\in(23,25.5)$ are cluster galaxies; the dotted curve shows the result for $x=20\%$. For comparison we show $M_{\rm min}$ from Fig. 12 (dashed curve). We note that we recently became aware of a paper by Belloni et al. (1995) which could allow for a better separation between cluster and background galaxies for the galaxies with $R\in(23,25.5)$: Belloni et al. used multiband photometry to obtain the redshifts of 275 bright galaxies with $R<22.5$, i.e., to determine the fraction of cluster galaxies in the sample of galaxies with $R<22.5$. From that and the known slope of the counts of faint field galaxies, we can probably derive a better estimate of the fraction of cluster galaxies in the faint galaxy sample with $R\in(23,25.5)$. 4.4 The mass to light ratio We calculate the total light of all galaxies inside the field $\cal U$ detected by Dressler et al. (1994b), leaving aside those galaxies whose measured redshift excludes cluster membership. 
Magnitudes for these galaxies are given in Gunn $r$ ($\bar{\lambda}=655$ nm), which corresponds to a rest-frame wavelength of $\lambda\approx 464$ nm; i.e., the measured $r$ magnitudes correspond to $B$ ($\bar{\lambda}=443$ nm) magnitudes in the rest-frame. As a result we obtain for this sample a total luminosity of $(L/L_{\odot})_{B}=5\times 10^{12}h^{-2}_{50}$. From this and the mass estimates shown in Fig. 13 we derive the $M/L$ values shown in Fig. 14. If the Dressler et al. (1994b) sample of galaxies with $r\in(17,23)$ represents the luminosity of cluster galaxies well, and if the mean redshift of the faint galaxies [$R\in(23,25.5)$] used for the mass reconstruction is $\left\langle z\right\rangle=0.8$, then we derive from the non-negativity of the surface mass density a lower limit of $M/L\gtrsim 93h_{50}$ (dashed curve in Fig. 14) for the cluster CL0939+4713, and the values $M/L\approx 102h_{50}$ for a fraction $x=0.1$ of cluster galaxies in the faint galaxy sample (solid curve) and $M/L\approx 142h_{50}$ for $x=0.2$ (dotted curve). From the absence of giant luminous arcs we derive, independently of the assumed redshift of the sources or the fraction of cluster galaxies, a robust upper limit $M/L\lesssim 200h_{50}$. 
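The band conversion and the $M/L$ arithmetic above are easy to check. In the sketch below the luminosity is the value quoted in the text, while the mass fed into the ratio is a hypothetical value of the kind read off Fig. 13, not a fitted number.

```python
# Observed Gunn r band redshifted from the cluster: at z_d = 0.41 the
# effective wavelength 655 nm corresponds to 655/(1+z_d) nm rest-frame.
z_d = 0.41
lam_rest = 655.0 / (1.0 + z_d)     # ~464 nm, close to the B band (443 nm)

# M/L in solar units; L_B is the luminosity quoted in the text, the mass
# below is a hypothetical entry of the kind shown in Fig. 13.
L_B = 5.0e12                       # (L/L_sun)_B * h50^-2

def mass_to_light(mass):
    """M/L in h50 (M_sun/L_sun)_B for a mass in M_sun h50^-1."""
    return mass / L_B

ratio = mass_to_light(5.1e14)      # ~102 h50 for M = 5.1e14 M_sun h50^-1
```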
Of course, the sample chosen for calculating the luminosity of the cluster includes a certain fraction of background and foreground galaxies, but this is partly compensated because we will miss a certain fraction of faint cluster galaxies. To derive a conservative upper limit on the total luminosity of the cluster, or a conservative lower limit on $M/L$, we calculate the total luminosity of all galaxies detected in the field (down to $R=26.5$) and obtain almost twice the value of $(L/L_{\odot})_{B}$ found above, or half the values of $M/L$ shown in Fig. 14. Fig. 14. The values of $(M/L)_{B}$ in units of $h_{50}(M_{\odot}/L_{\odot})_{B}$ obtained for the masses shown in Fig. 13. To calculate the luminosity $L$ we used all galaxies identified by Dressler & Gunn (1992) in that field, excluding those galaxies for which a measured redshift shows that they are not cluster members. The fact that the $M/L$ values (see Fig. 14) are small compared to the $M/L$ found for other clusters from weak lensing analyses (see Fahlman et al. 1994, or Smail et al. 1995a) is not too surprising, taking into account that CL0939+4713 is an optically selected (Abell) cluster at the high redshift $z_{d}=0.41$. A moderately bright cluster at this redshift would probably not have entered the Abell catalog. In addition, the $M/L$ ratio quoted here is uncorrected for cosmic evolution, which can be quite substantial from $z=0.4$ to today in the B-band, so that the corresponding $M/L$ value ‘today’ would be considerably higher. 4.5 The ROSAT PSPC image In Fig. 15 we show the ROSAT PSPC image of the cluster, obtained from a gaussian smoothing of the counts with a smoothing length of $s=0\hbox{$.\!\!^{\prime}$}2$. The center of the coordinate frame coincides with the lower left corner of the HST image shown in Fig. 1. To align the PSPC image with the HST image we used a star; thus, the alignment should be better than one PSPC pixel (pixel size $15\hbox{${}^{\prime\prime}$}$). 
The PSPC image shows the main X-ray emission around the cluster center, which roughly corresponds to the region where we derive a maximum of the light and a maximum of the mass distribution. The PSPC data are analysed in a forthcoming paper by S. Schindler. A more detailed comparison between the X-ray and mass distributions has to await the ROSAT HRI image. Fig. 15. The PSPC image of the cluster CL0939+4713. The contour lines show the photon counts, ranging from 9.5 down to 1.5 with a spacing of 1. North is at the bottom and east to the right. 5 Discussion Using deep WFPC2 data, we have reconstructed the projected (dark) matter distribution of the cluster Cl0939+4713. The distortion of faint background galaxies was used to construct a ‘shear map’ of the cluster, from which an unbiased, nonlinear estimate of the surface mass density was constructed. The resulting mass map is defined up to an overall invariance transformation, a generalization of the so-called mass-sheet degeneracy. The mass distribution is strongly correlated with the projected distribution of the bright cluster galaxies; in particular, the maximum of the mass map coincides with the cluster center as determined from the light distribution, a secondary maximum of the map corresponds to a concentration of cluster galaxies, and a deep mass minimum occurs where the number density of cluster galaxies is lowest. We also note that the main mass (and light) maximum corresponds to a maximum in the X-ray emission, as seen with the ROSAT PSPC. The anti-correlation of mass with faint galaxies is interpreted as the difference between a positive correlation of mass with faint cluster galaxies (mainly seen towards the cluster center) and a magnification anti-bias (BTP), which is expected due to the flatness of the galaxy number counts. 
Our analysis shows that the recently developed cluster inversion techniques can be applied to (sufficiently deep) WFPC2 data, needed to image faint galaxies precisely, despite the fact that its field-of-view is fairly limited. It is essential to use an unbiased finite-field inversion technique in this case, and also, since the cluster center is (nearly) critical, to account for strong lensing effects. Also, owing to the fairly large redshift of the lensing cluster, the redshift distribution of the background galaxies has to be taken into account explicitly; only in the weak lensing regime does this distribution not enter the reconstruction, but only the mean of the distance ratio $D_{ds}/D_{s}$. We have checked the robustness of the mass reconstruction by using different magnitude cuts for the galaxies and by extensive bootstrap simulations. The main features of the mass map – the two mass maxima, the pronounced minimum, and the overall gradient toward the cluster center – are stable. The anti-correlation of mass with the faint galaxies, and the strong correlation with cluster galaxies, further increase our confidence in the reconstruction. We have derived estimates of the cluster mass contained within the WFC aperture, which depend on the assumed redshift of the background galaxies. A robust lower limit on the mass follows from the non-negativity of the surface mass density; a robust upper limit comes from the absence of giant luminous arcs. To derive a narrower mass range, one needs to fix the parameter $\lambda$ contained in the invariance transformation (3.5). This can be done by using the magnification anti-bias (BTP). In the only case where this effect has been demonstrated before (A1689, Broadhurst 1995), a colour criterion was used to ensure that the galaxies are likely background galaxies. Since we lack colour information, faint cluster members and foreground galaxies cannot be separated from the background galaxies. 
Nevertheless, a plausible range for $\lambda$ can be obtained, leading to the mass estimates shown in Fig. 13. The exploration of this novel method to reconstruct the density distribution of clusters has only just begun. In contrast to current ground-based data, for which image ellipticities have to be substantially corrected for seeing effects, WFPC2 data provide relatively ‘clean’ probes of image ellipticities. The small field of view of WFPC2 limits the extent to which clusters can be mapped (especially for nearby clusters), unless mosaics are taken, but high-resolution mass maps such as the one constructed here are invaluable tools for investigating substructure in cluster mass distributions and its relation to substructure in the distribution of galaxies and X-ray emission. Acknowledgement. CS would like to thank Hans-Walter Rix for many useful discussions and for introducing her to the IRAF software package, and Sabine Schindler for help concerning the PSPC data. We gratefully acknowledge enthusiastic discussions with Ian Smail, Richard Ellis, Yannick Mellier and Bernard Fort on cluster lenses. This work was supported by the “Sonderforschungsbereich 375-95 für Astro–Teilchenphysik” der Deutschen Forschungsgemeinschaft. JPK acknowledges support from a HCM-EU fellowship and the hospitality of the MPA during a visit when this project was started. References Bartelmann, M. 1995, A&A, 303, 643. Bartelmann, M. & Narayan, R. 1995, ApJ, 451, 60. Bartelmann, M., Narayan, R., Seitz, S. & Schneider, P. 1995, submitted to ApJ. Bertin & Arnouts 1995, A&A, submitted. Belloni, P., Bruzual, A.G., Thimm, G.J. & Röser, H.J. 1995, A&A, 297, 61. Bonnet, H., Mellier, Y. & Fort, B. 1994, ApJ, 427, L83. Brainerd, T.G., Blandford, R. & Smail, I. 1995, preprint. Broadhurst, T.J., Taylor, A.N. & Peacock, J.A. 1995, ApJ, 438, 49. Broadhurst, T.J. 1995, preprint. Colless, M., Ellis, R.S., Broadhurst, T.J., Taylor, K. & Peterson, B. 1993, MNRAS, 261, 19. Cowie, L.L., Hu, E.M. & Songaila, A. 
1995, Nature, accepted. Dressler, A. & Gunn, J.E. 1992, ApJS, 78, 1. Dressler, A., Oemler, A., Sparks, W.B. & Lucas, R.A. 1994a, ApJ, 435, L23. Dressler, A., Oemler, A., Butcher, H. & Gunn, J.E. 1994b, ApJ, 430, 107. Fahlman, G.G., Kaiser, N., Squires, G. & Woods, D. 1994, ApJ, 437, 63. Fort, B. & Mellier, Y. 1994, A&AR, 5, 239. Gorenstein, M.V., Falco, E.E. & Shapiro, I.I. 1988, ApJ, 327, 693. Holtzman, J.A., Burrows, C.J., Casertano, S., Hester, J.J., Trauger, J.T., Watson, A.M. & Worthey, G. 1995, preprint. Kassiola, A., Kovner, I., Fort, B. & Mellier, Y. 1994, ApJ, 429, L9. Kaiser, N. & Squires, G. 1993, ApJ, 404, 441 (KS). Kaiser, N. 1995, ApJ, 439, L1. Kaiser, N., Squires, G., Fahlman, G.G., Woods, D. & Broadhurst, T. 1994, preprint astro-ph/9411029. Kneib, J.P., Ellis, R.S., Smail, I.R., Couch, W.J. & Sharples, R. 1995, ApJ, submitted. Kochanek, C.S. 1990, MNRAS, 247, 135. Lilly, S. 1993, ApJ, 411, 501. Miralda-Escudé, J. 1991, ApJ, 370, 1. Schneider, P. 1995, A&A, in press. Schneider, P. & Seitz, C. 1995, A&A, 294, 411. Smail, I., Ellis, R.S. & Fitchett, M.J. 1995a, MNRAS, 273, 277. Smail, I., Hogg, D.W., Yan, L. & Cohen, J.G. 1995b, ApJ, 449, L105. Seitz, C. & Schneider, P. 1995a, A&A, 297, 287. Seitz, C. & Schneider, P. 1995c, in preparation. Seitz, S. & Schneider, P. 1995b, A&A, in press. Tyson, J.A., Valdes, F. & Wenk, R.A. 1990, ApJ, 349, L1.
Star product representation of coherent state path integrals Jasel Berra–Montiel${}^{1,2}$ ${}^{1}$ Facultad de Ciencias, Universidad Autónoma de San Luis Potosí Campus Pedregal, Av. Parque Chapultepec 1610, Col. Privadas del Pedregal, San Luis Potosí, SLP, 78217, Mexico ${}^{2}$ Dual CP Institute of High Energy Physics, Mexico jasel.berra@uaslp.mx Abstract In this paper, we determine the star product representation of coherent state path integrals. By employing the properties of generalized delta functions with complex arguments, the Glauber-Sudarshan P-function corresponding to a non-diagonal density operator is obtained. Then, we compute the Husimi-Kano Q-representation of the time evolution operator in terms of the normal star product. Finally, the optical equivalence theorem allows us to express the coherent state path integral as a star exponential of the Hamiltonian function for the normal product. Keywords: star product, quasi-probability distributions, path integral 1 Introduction The path integral quantization remains to date one of the main tools for understanding quantum mechanics and quantum field theory; in particular, the formalism has proved to be extremely helpful for studying perturbative approximations of a wide diversity of physical phenomena [1]. The notion of path integration was extended to the complex plane with the introduction of coherent states [2], [3], [4], motivating prolific progress in the development of semiclassical methods focused on the analysis of non-integrable systems and quantum chaos [5]. From another perspective, the phase space formulation of quantum mechanics, based on the early works of Wigner [6], Weyl [7] and Moyal [8], made it possible to represent the quantum theory as a statistical theory defined on the classical phase space. Within this picture, the employment of coherent states has found wide-ranging applications in quantum optics and quantum information [9], [10]. 
The main reason lies in the pioneering works of Cahill and Glauber [11], where a family of $s$-parametrized quasi-probability distributions was introduced in order to characterize non-classical effects by means of the overcomplete basis of coherent states, independently of the adopted ordering prescription. Later on, in 1978, P. Sharan showed that the Feynman path integral, in the position and momentum representation, can be expressed as a Fourier transform of the star exponential for the Moyal product. This result was generalized to the case of scalar field theories in [13], by means of Berezin calculus and the c-equivalence of star products. In this paper, we analyze the coherent state path integrals within the context of the star product quantization. By employing the properties of generalized delta functions with complex arguments, the Glauber-Sudarshan P-function corresponding to a non-diagonal density operator is obtained. Then, we compute the Husimi-Kano Q-representation of the time evolution operator in terms of the normal star product. Finally, the optical equivalence theorem allows us to express the coherent state path integral as a star exponential for the normal product. Determining this star product representation by means of quasi-probability distributions may allow us to analyze several aspects related to some inconsistencies associated with regularization techniques and the continuum limit encountered in coherent state path integrals [14]. From another perspective, it will also be relevant to implement the formalism described here in order to establish the star product representation of spin foam integrals, in the context of loop quantum gravity and loop quantum cosmology, since, recently, a family of quasi-probability distributions has been determined for these theories [15], [16], [17]. This paper is organized as follows: in section 2, we briefly review the basic construction of coherent state path integration. 
In section 3, by using the Glauber-Sudarshan and the Husimi-Kano quasi-probability distributions, the star product representation of the coherent state path integral is obtained. Finally, we present some concluding remarks in section 4. 2 Path integral quantization in the coherent state representation In this section, we derive the matrix elements associated with the time evolution operator (in natural units with $\hbar=1$) $$\hat{U}(T)=e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})},$$ (1) by means of the path integral quantization technique within the coherent state representation, where $\hat{H}(\hat{a},\hat{a}^{\dagger})$ corresponds to the normal ordered Hamiltonian operator and $T=t_{f}-t_{i}$ stands for the total time lapse. Let us denote by $\ket{\alpha_{i}}$ and $\ket{\alpha_{f}}$ two arbitrary initial and final coherent states. Thus, the matrix element $\braket{\alpha_{f}}{\hat{U}(T)}{\alpha_{i}}$ can be expressed as $$\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha_{i}}=\lim_{\epsilon\to 0,\,N\to\infty}\braket{\alpha_{f}}{\left(1-i\epsilon\hat{H}(\hat{a},\hat{a}^{\dagger})\right)^{N}}{\alpha_{i}},$$ (2) where, as usual, a partition of the time interval $T=t_{f}-t_{i}$ into $N$ infinitesimal time slices, each of length $\epsilon$, has been introduced. Now, following the formulation established, for example, in [18], let us insert an over-complete set of coherent states $\ket{\alpha}$ at each time $t_{j}$ of the partition, through the completeness relation given as an integral over the complex plane, $$\int\frac{d^{2}\alpha}{\pi}\ket{\alpha}\bra{\alpha}=1,$$ (3) where $d^{2}\alpha=d{\rm Re}(\alpha)\,d{\rm Im}(\alpha)$. 
The coherent states $\ket{\alpha}$ are defined as $$\ket{\alpha}=\hat{D}(\alpha)\ket{0},$$ (4) for $\alpha\in\mathbb{C}$, where $\hat{D}(\alpha)=\exp(\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a})$ corresponds to the displacement operator [9], and $\ket{0}$ stands for the vacuum ground eigenstate of a harmonic oscillator, such that $\hat{a}\ket{0}=0$. Expanding the exponential appearing in the displacement operator $\hat{D}(\alpha)$ and using the Baker-Campbell-Hausdorff formula [19] makes it clear that $$\ket{\alpha}=\hat{D}(\alpha)\ket{0}=e^{-\frac{1}{2}|\alpha|^{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n},$$ (5) for $\ket{n}$ the number state satisfying $(\hat{a}^{\dagger})^{n}\ket{0}=\sqrt{n!}\ket{n}$. In addition, the product of two coherent states proves to be $$\braket{\beta}{\alpha}=e^{-\frac{1}{2}|\alpha|^{2}-\frac{1}{2}|\beta|^{2}+\beta^{*}\alpha},$$ (6) which means that the coherent states are not orthogonal. With this information at hand, we compute $$\braket{\alpha_{f}}{\left(1-i\epsilon\hat{H}(\hat{a},\hat{a}^{\dagger})\right)^{N}}{\alpha_{i}}=\int\left(\prod_{j=1}^{N-1}\frac{d^{2}\alpha_{j}}{\pi}\right)\exp\left[-\frac{1}{2}|\alpha_{0}|^{2}-\frac{1}{2}|\alpha_{N}|^{2}-\sum_{j=1}^{N-1}|\alpha_{j}|^{2}+\sum_{j=1}^{N}\alpha_{j}^{*}\alpha_{j-1}-i\sum_{j=1}^{N}\epsilon H(\alpha_{j-1},\alpha^{*}_{j})\right],$$ (7) where $H(\alpha_{j-1},\alpha^{*}_{j})=\braket{\alpha_{j}}{\hat{H}(\hat{a},\hat{a}^{\dagger})}{\alpha_{j-1}}$ is the function obtained from the normal ordered Hamiltonian operator $\hat{H}(\hat{a},\hat{a}^{\dagger})$ by making the substitutions $\hat{a}^{\dagger}\to\alpha^{*}_{j}$ and $\hat{a}\to\alpha_{j-1}$, and we have imposed the boundary conditions $\alpha_{0}=\alpha_{i}$, $\alpha_{N}=\alpha_{f}$. 
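Equations (5) and (6) can be verified numerically by truncating the Fock expansion; the sketch below compares the overlap computed from the expansion (5) with the closed form (6), for two arbitrarily chosen amplitudes.

```python
import numpy as np
from math import factorial

def coherent(alpha, dim=40):
    """Coherent state |alpha> in a truncated Fock basis, Eq. (5)."""
    n = np.arange(dim)
    amps = alpha ** n / np.sqrt([float(factorial(k)) for k in n])
    return np.exp(-0.5 * abs(alpha) ** 2) * amps

alpha, beta = 0.7 + 0.3j, -0.2 + 0.5j
overlap = np.vdot(coherent(beta), coherent(alpha))        # <beta|alpha>
closed = np.exp(-0.5 * abs(alpha) ** 2 - 0.5 * abs(beta) ** 2
                + np.conj(beta) * alpha)                  # Eq. (6)
```

`np.vdot` conjugates its first argument, which is exactly the bra in the inner product; for $|\alpha|,|\beta|\lesssim 1$ the truncation error at 40 levels is negligible.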
By making use of the identity [20] $$\sum_{j=1}^{N}\alpha^{*}_{j}\alpha_{j-1}-\sum_{j=1}^{N-1}|\alpha_{j}|^{2}=\frac{1}{2}(\alpha^{*}_{N}\alpha_{N-1}+\alpha^{*}_{1}\alpha_{0})+\frac{\epsilon}{2}\sum_{j=1}^{N-1}\left(\alpha_{j}\frac{\alpha^{*}_{j+1}-\alpha^{*}_{j}}{\epsilon}-\alpha^{*}_{j}\frac{\alpha_{j}-\alpha_{j-1}}{\epsilon}\right),$$ (8) the continuum limit $(\epsilon\to 0,N\to\infty)$ of the matrix element depicted in (2) reads (for further details see [18], [20]) $$\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a}^{\dagger},\hat{a})}}{\alpha_{i}}=\int\mathcal{D}^{2}\left(\frac{\alpha}{\pi}\right)e^{\frac{1}{2}(|\alpha_{f}|^{2}-|\alpha_{i}|^{2})}e^{-\int_{t_{i}}^{t_{f}}dt\,\mathcal{L}(\alpha(t),\alpha^{*}(t))},$$ (9) where $$\mathcal{L}(\alpha(t),\alpha^{*}(t))=\alpha^{*}(t)\dot{\alpha}(t)+iH(\alpha(t),\alpha^{*}(t)),$$ (10) and we have used the notation $$\mathcal{D}^{2}\left(\frac{\alpha}{\pi}\right)=\lim_{N\to\infty}\prod_{j=1}^{N}\frac{d{\rm Re}(\alpha_{j})\,d{\rm Im}(\alpha_{j})}{\pi},$$ (11) to denote the formal integration measure. As argued in [18], these expressions comply with the consistency conditions required by the coherent state representation of transition amplitudes at zero temperature [21], [22], and also fulfill the correct requirements stated by functional methods for calculating integrals in the complex plane. In case the functional integrals are Gaussian, the partition functions can be completely determined by means of the stationary phase method. 
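As a cross-check of the finite-dimensional ingredients behind (2)–(9), for the particular choice $\hat{H}=\omega\,\hat{a}^{\dagger}\hat{a}$ (an assumption made here purely for illustration) the matrix element of $\hat{U}(T)$ can be computed in a truncated Fock basis and compared with the closed form that follows from Eqs. (5) and (6): the evolution simply rotates the coherent amplitude, $\alpha_{i}\to\alpha_{i}e^{-i\omega T}$.

```python
import numpy as np
from math import factorial

dim = 40
n = np.arange(dim)

def coherent(alpha):
    """Coherent state in the truncated Fock basis, Eq. (5)."""
    amps = alpha ** n / np.sqrt([float(factorial(k)) for k in n])
    return np.exp(-0.5 * abs(alpha) ** 2) * amps

# Evolution operator for the assumed Hamiltonian H = omega a^dagger a,
# which is diagonal in the Fock basis.
omega, T = 1.3, 0.8
U = np.diag(np.exp(-1j * omega * T * n))

a_i, a_f = 0.4 + 0.2j, -0.3 + 0.6j
numeric = np.vdot(coherent(a_f), U @ coherent(a_i))

# Closed form from Eqs. (5)-(6) with the rotated amplitude.
closed = np.exp(-0.5 * abs(a_f) ** 2 - 0.5 * abs(a_i) ** 2
                + np.conj(a_f) * a_i * np.exp(-1j * omega * T))
```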
3 The star product representation In order to construct the star product representation of the coherent state path integral depicted in (9), let us introduce the family of $s$-parametrized quasi-probability distributions in integral form [9], [11], given as $$F(\alpha,s)=\frac{1}{\pi^{2}}\int d^{2}\beta\,G(\beta,s)e^{\alpha\beta^{*}-\alpha^{*}\beta},$$ (12) where $G(\beta,s)$ denotes the $s$-ordered generalized characteristic function $$G(\beta,s)=\tr\left\{\hat{D}(\beta)\hat{\rho}\right\}e^{\frac{s}{2}|\beta|^{2}}.$$ (13) In the former expression, the operator $\hat{D}(\beta)$ corresponds to the displacement operator and $\hat{\rho}$ stands for the density operator of a quantum system. The parameter $s$ is related to the different ordering prescriptions of the operators $\hat{a}$ and $\hat{a}^{\dagger}$; for the value $s=1$ we obtain the Glauber-Sudarshan P-function (normal ordering) [23], [24], for $s=0$ we get the Wigner distribution (Weyl-symmetric ordering) [6], and $s=-1$ determines the Husimi-Kano Q-function (anti-normal ordering) [10], [11] (for a generalization to fields see [26]). Let us now consider the case of the non-diagonal density operator defined by $\hat{\rho}=\ket{\alpha_{i}}\bra{\alpha_{f}}$. 
For any operator $\hat{B}$, we can make use of the $s$-ordered quasi-probability distributions (12) in order to express the matrix element $\braket{\alpha_{f}}{\hat{B}}{\alpha_{i}}$ as $$\braket{\alpha_{f}}{\hat{B}}{\alpha_{i}}=\tr\left\{\hat{B}\hat{\rho}\right\}=\frac{1}{\pi^{2}}\int d^{2}\alpha\,d^{2}\beta\,\tr\left\{\hat{D}(\beta)\hat{B}\hat{\rho}\right\}e^{\frac{s}{2}|\beta|^{2}+\alpha\beta^{*}-\alpha^{*}\beta}.$$ (14) Since the path integral representation analyzed in the previous section was determined through the normal ordering prescription, for the case $s=1$ the matrix element (14) can be written as $$\braket{\alpha_{f}}{\hat{B}}{\alpha_{i}}=\int d^{2}\alpha\,P(\alpha)\braket{\alpha}{\hat{B}}{\alpha}=\int d^{2}\alpha\,P(\alpha)B_{Q}(\alpha,\alpha^{*}),$$ (15) where $P(\alpha)$ corresponds to the Glauber-Sudarshan P-representation of the density operator $\hat{\rho}$, and $B_{Q}(\alpha,\alpha^{*})$ stands for the Husimi-Kano Q-representation of the operator $\hat{B}$ [9], [25]. To be more precise, $P(\alpha)$ is a quasi-probability distribution of the form [27] $$P(\alpha)=\frac{e^{|\alpha|^{2}}}{\pi^{2}}\int d^{2}\beta\,\braket{-\beta}{\hat{\rho}}{\beta}e^{|\beta|^{2}+\beta^{*}\alpha-\beta\alpha^{*}},$$ (16) and the function $B_{Q}(\alpha,\alpha^{*})$ fulfills the properties associated with a probability distribution, in the sense of positivity, and is given by $$B_{Q}(\alpha,\alpha^{*})=\braket{\alpha}{\hat{B}}{\alpha}.$$ (17) Equation (15) is known in the literature as the optical equivalence theorem [11], [23], and states that the expectation value of any normally ordered operator $\hat{B}$ is given by the average of the Q-representation of the operator $\hat{B}$ over the P-representation associated with the density operator $\hat{\rho}$. 
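For the special case of a diagonal density operator $\hat{\rho}=\ket{\alpha_{0}}\bra{\alpha_{0}}$, the P-function reduces to $\delta^{2}(\alpha-\alpha_{0})$ and the theorem (15) collapses to $\tr\{\hat{\rho}\hat{B}\}=B_{Q}(\alpha_{0})$. A numerical check in a truncated Fock basis, with $\hat{B}=(\hat{a}^{\dagger})^{2}\hat{a}^{2}$ chosen here as an example (so $B_{Q}(\alpha_{0})=|\alpha_{0}|^{4}$):

```python
import numpy as np
from math import factorial

dim = 40
n = np.arange(dim)

def coherent(alpha):
    """Coherent state in the truncated Fock basis, Eq. (5)."""
    amps = alpha ** n / np.sqrt([float(factorial(k)) for k in n])
    return np.exp(-0.5 * abs(alpha) ** 2) * amps

# Annihilation operator on the superdiagonal, and the sample normally
# ordered observable B = (a^dagger)^2 a^2.
a_op = np.diag(np.sqrt(n[1:].astype(float)), 1)
B = a_op.T @ a_op.T @ a_op @ a_op      # a_op is real, so .T is its dagger

alpha0 = 0.6 - 0.4j
psi = coherent(alpha0)
trace_val = np.vdot(psi, B @ psi)      # tr(rho B) for rho = |alpha0><alpha0|
q_val = abs(alpha0) ** 4               # B_Q(alpha0) = |alpha0|^4
```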
Now, with the aim of obtaining the star product representation of coherent state path integrals, let us represent the matrix element $\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha_{i}}$ in terms of quasi-probability distributions. By using the optical equivalence theorem (15), we see that $$\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha_{i}}=\int d^{2}\alpha\,P(\alpha)\braket{\alpha}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha},$$ (18) where $\hat{H}(\hat{a},\hat{a}^{\dagger})$ corresponds to the normally ordered Hamiltonian operator, and $P(\alpha)$ denotes the Glauber-Sudarshan P-representation of the non-diagonal density operator $\hat{\rho}=\ket{\alpha_{i}}\bra{\alpha_{f}}$. In order to compute the integral (18), it is first necessary to analyze the Husimi-Kano Q-representation of the time evolution operator $\hat{U}(T)=e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}$. Let $\hat{B}(\hat{a},\hat{a}^{\dagger})$ and $\hat{C}(\hat{a},\hat{a}^{\dagger})$ be two normally ordered operators; this means that we can express both as the normally ordered expansions $$\hat{B}=\sum_{m,n=0}^{\infty}b_{mn}(\hat{a}^{\dagger})^{m}\hat{a}^{n},\;\;\;\;\hat{C}=\sum_{p,q=0}^{\infty}c_{pq}(\hat{a}^{\dagger})^{p}\hat{a}^{q},$$ (19) with $b_{mn},c_{pq}\in\mathbb{C}$. 
In terms of the expectation value over coherent states [28], we observe that $$\displaystyle\braket{\alpha}{\hat{B}\hat{C}}{\alpha}$$ $$\displaystyle=$$ $$\displaystyle\sum_{m,n,p,q}^{\infty}b_{mn}c_{pq}\braket{\alpha}{(\hat{a}^{\dagger})^{m}\hat{a}^{n}(\hat{a}^{\dagger})^{p}\hat{a}^{q}}{\alpha},$$ (20) $$\displaystyle=$$ $$\displaystyle\int\frac{d^{2}\beta}{\pi}\sum_{m,n,p,q}^{\infty}b_{mn}c_{pq}(\alpha^{*})^{m}\alpha^{q}\braket{\alpha}{\hat{a}^{n}}{\beta}\braket{\beta}{(\hat{a}^{\dagger})^{p}}{\alpha},$$ $$\displaystyle=$$ $$\displaystyle\int\frac{d^{2}\beta}{\pi}\sum_{m,n,p,q}^{\infty}b_{mn}c_{pq}(\alpha^{*})^{m}\alpha^{q}\beta^{n}(\beta^{*})^{p}\braket{\alpha}{\beta}\braket{\beta}{\alpha},$$ $$\displaystyle=$$ $$\displaystyle\int\frac{d^{2}\beta}{\pi}B(\beta,\alpha^{*})C(\alpha,\beta^{*})|\!\braket{\alpha}{\beta}\!|^{2},$$ where the second line follows from inserting the resolution of the identity $\frac{1}{\pi}\int d^{2}\beta\,\ket{\beta}\bra{\beta}$. The expression (20) is known as the integral representation of the normal star product between the functions $B(\alpha,\alpha^{*}):=\braket{\alpha}{\hat{B}}{\alpha}$ and $C(\alpha,\alpha^{*}):=\braket{\alpha}{\hat{C}}{\alpha}$, and is commonly denoted as $(B\star_{N}C)(\alpha,\alpha^{*})$. Alternatively, one can perform a series expansion, yielding the differential representation of the normal star product [29], [30], $$(B\star_{N}C)(\alpha,\alpha^{*})=B(\alpha,\alpha^{*})\exp\left\{\frac{\overleftarrow{\partial}}{\partial\alpha}\frac{\overrightarrow{\partial}}{\partial\alpha^{*}}\right\}C(\alpha,\alpha^{*}),$$ (21) which is completely equivalent to the expression (20). 
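The differential representation (21) can be checked on low-order symbols. The sketch below is illustrative: it treats $\alpha$ and $\alpha^{*}$ as independent symbols (named `alpha` and `alpha_c`, an assumption of the holomorphic calculus) and verifies that the star product reproduces the operator orderings $\hat{a}\hat{a}^{\dagger}=\hat{a}^{\dagger}\hat{a}+1$ and $(\hat{a}^{\dagger}\hat{a})^{2}=\hat{a}^{\dagger 2}\hat{a}^{2}+\hat{a}^{\dagger}\hat{a}$:

```python
import sympy as sp

al, alc = sp.symbols('alpha alpha_c')  # alpha and alpha^* treated as independent symbols

def star_N(B, C, kmax=10):
    """Differential form of the normal star product, eq. (21):
    (B *_N C) = sum_k (1/k!) (d^k B / d alpha^k)(d^k C / d alpha_c^k)."""
    return sp.expand(sum(sp.diff(B, al, k) * sp.diff(C, alc, k) / sp.factorial(k)
                         for k in range(kmax)))

# Q-symbols: <alpha|a|alpha> = alpha, <alpha|a^dag|alpha> = alpha^*.
# Operator product a a^dag = a^dag a + 1 has Q-symbol alpha^* alpha + 1:
assert star_N(al, alc) == al*alc + 1
# The star commutator reproduces [a, a^dag] = 1:
assert star_N(al, alc) - star_N(alc, al) == 1
# (a^dag a)(a^dag a) = a^dag a^dag a a + a^dag a  ->  symbol |alpha|^4 + |alpha|^2:
assert star_N(alc*al, alc*al) == al**2*alc**2 + al*alc
```

Each assert compares the star product of Q-symbols against the Q-symbol of the normally reordered operator product, as the integral representation (20) demands.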
Considering that the evolution operator $\hat{U}(T)$, according to the spectral theorem [19], can be defined by a convergent power series $$\hat{U}(T)=e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}=1-iT\hat{H}(\hat{a},\hat{a}^{\dagger})+\frac{(-iT)^{2}}{2!}\hat{H}^{2}(\hat{a},\hat{a}^{\dagger})+\cdots,$$ (22) the Husimi-Kano Q-representation of the evolution operator can be written as $$\braket{\alpha}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha}=\exp_{\star_{N}}\left\{-iTH(\alpha,\alpha^{*})\right\},$$ (23) where the normal star exponential is given by $$\exp_{\star_{N}}\left\{-iTH(\alpha,\alpha^{*})\right\}=1-iTH(\alpha,\alpha^{*})+\frac{(-iT)^{2}}{2!}(H\star_{N}H)(\alpha,\alpha^{*})+\cdots.$$ (24) Our next task is to determine the Glauber-Sudarshan P-function associated with the non-diagonal density operator $\hat{\rho}=\ket{\alpha_{i}}\bra{\alpha_{f}}$. At first glance, this seems to be problematic, since the P-function is usually related to the diagonal representation of the density operator in a coherent state basis [23]. These problems can be avoided by introducing the positive P-function [31], which corresponds to a generalization of the P-representation and is obtained by doubling the dimension of the phase space along with solving a Fokker-Planck type equation. For our purposes, however, it is more convenient to employ generalized delta functions to describe the non-diagonal parts of the density operator. A generalized delta function $\tilde{\delta}(z)$ is a delta function with complex arguments, symbolically represented as [32] $$\tilde{\delta}(z)=\frac{1}{2\pi}\int_{-\infty}^{\infty}dx\,e^{-izx},$$ (25) such that $$\int_{-\infty}^{\infty}dx\,f(x)\tilde{\delta}(x-z)=f(z),$$ (26) where $z\in\mathbb{C}$. The test function $f(z)$ is analytic in the complex variable $z$, and must also decay sufficiently rapidly at large distances along the real axis, that is, $f(x+iy)$ is of order $O(|x|^{-N})$ as $|x|\to\infty$ for all $N$. 
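For the harmonic oscillator symbol $H=\omega\alpha^{*}\alpha$, the star exponential (24) has the known closed form $\exp_{\star_{N}}\{-iT\omega\alpha^{*}\alpha\}=\exp\left[\alpha^{*}\alpha\left(e^{-i\omega T}-1\right)\right]$, a standard Q-representation result that is not derived in the text. The sketch below (illustrative symbols, truncation at order $T^{4}$) checks the series (24) against this closed form:

```python
import sympy as sp

al, alc = sp.symbols('alpha alpha_c')     # alpha and alpha^* as independent symbols
w, T = sp.symbols('omega T', positive=True)

def star_N(B, C, kmax=8):
    # differential normal star product, eq. (21)
    return sp.expand(sum(sp.diff(B, al, k) * sp.diff(C, alc, k) / sp.factorial(k)
                         for k in range(kmax)))

H = w * alc * al                           # normal symbol of the harmonic oscillator
# Star powers 1, H, H*H, H*H*H, ... entering the star exponential, eq. (24)
powers = [sp.Integer(1)]
for _ in range(4):
    powers.append(star_N(powers[-1], H))
series = sum((-sp.I*T)**k / sp.factorial(k) * powers[k] for k in range(5))

# Closed form of the star exponential for this Hamiltonian (assumed standard result):
closed = sp.exp(alc*al*(sp.exp(-sp.I*w*T) - 1))
closed_taylor = sp.series(closed, T, 0, 5).removeO()

assert sp.simplify(sp.expand(series - closed_taylor)) == 0
```

The star powers generated here are the Touchard polynomials $\omega^{k}(\alpha^{*}\alpha+\dots)$ one expects from normally reordering $(\hat{a}^{\dagger}\hat{a})^{k}$, and the series agrees with the Taylor expansion of the closed form term by term.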
As described in [33], the idea of introducing generalized delta functions is to extend the ordinary Dirac delta function to the complex plane by making use of contour integrals (see [32], [33] for more details). One of the most compelling properties of the generalized delta function involves separate integrations over the real and imaginary parts of a given function, that is, $$\displaystyle\int_{-\infty}^{\infty}dRe(\alpha)\,f(Re(\alpha),Im(\alpha))\tilde{\delta}(Re(\alpha)-z)$$ $$\displaystyle=$$ $$\displaystyle f(z,Im(\alpha)),$$ (27) $$\displaystyle\int_{-\infty}^{\infty}dIm(\alpha)\,f(Re(\alpha),Im(\alpha))\tilde{\delta}(Im(\alpha)-z)$$ $$\displaystyle=$$ $$\displaystyle f(Re(\alpha),z),$$ (28) where $\alpha\in\mathbb{C}$ and $Re(\alpha),Im(\alpha)$ stand for its real and imaginary parts, respectively. These formulas have previously been applied in the positive P-representation to determine the Glauber-Sudarshan P-function associated with a cat state [31], [33]. With the preceding formulas, we now compute the P-representation corresponding to the non-diagonal density operator $\ket{\alpha_{i}}\bra{\alpha_{f}}$. By using the explicit expression for $P(\alpha)$ in (16) and the definition of the generalized delta function (25), we obtain $$\displaystyle P(\alpha)$$ $$\displaystyle=$$ $$\displaystyle\frac{e^{|\alpha|^{2}}}{\pi^{2}}\int d^{2}\beta\,\braket{-\beta}{\alpha_{i}}\braket{\alpha_{f}}{\beta}e^{|\beta|^{2}+\beta^{*}\alpha-\beta\alpha^{*}},$$ (29) $$\displaystyle=$$ $$\displaystyle\tilde{\delta}\left(\frac{\alpha_{i}+\alpha_{f}^{*}}{2}-Re(\alpha)\right)\tilde{\delta}\left(i\frac{\alpha_{i}-\alpha_{f}^{*}}{2}-Im(\alpha)\right).$$ Finally, we are ready to obtain the star product representation of the matrix element $\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a}^{\dagger},\hat{a})}}{\alpha_{i}}$. 
By substituting the Husimi-Kano representation of the evolution operator (23) and the Glauber-Sudarshan P-function of the density operator (29) into the expression (18), and using the properties of the generalized delta function (27), (28), we have $$\braket{\alpha_{f}}{e^{-iT\hat{H}(\hat{a},\hat{a}^{\dagger})}}{\alpha_{i}}=\braket{\alpha_{f}}{\alpha_{i}}\left.\exp_{\star_{N}}\left\{-iTH(\alpha,\alpha^{*})\right\}\right|_{\alpha=\alpha_{i},\alpha^{*}=\alpha_{f}^{*}}.$$ (30) Since the coherent path integral is given by the matrix element (9), its star product representation follows immediately from equation (30). This means that the path integral for coherent states is realized by the normal star exponential of the classical Hamiltonian function. It is worthwhile to mention that if we instead choose $\braket{\alpha_{f}}{\alpha_{i}}=e^{\alpha_{f}^{*}\alpha_{i}}$ as the inner product between coherent states, the formula (30) agrees with the expression found in [13], where the star-quantization of the free scalar field was introduced by means of Berezin calculus and the c-equivalence between the Moyal and normal star products in the holomorphic representation. However, since the formalism presented here is developed through the analysis of $s$-parametrized quasi-probability distributions, it may prove relevant for exploring the repercussions of a change of ordering on the path integral, by continuously varying the parameter $s$ without making use of the c-equivalence of star products, which in some cases may prove quite demanding. 4 Conclusions In this paper, the star product representation of coherent state path integrals was determined. In particular, by employing the properties of generalized delta functions with complex arguments, we obtained the Glauber-Sudarshan P-function associated with a non-diagonal density operator. 
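Equation (30) can be verified numerically for the harmonic oscillator $\hat{H}=\omega\hat{a}^{\dagger}\hat{a}$. The sketch below is an illustrative check under two stated assumptions: normalized coherent states, so $\braket{\alpha_{f}}{\alpha_{i}}=e^{-|\alpha_{f}|^{2}/2-|\alpha_{i}|^{2}/2+\alpha_{f}^{*}\alpha_{i}}$, and the standard closed form $\exp_{\star_{N}}\{-iT\omega\alpha^{*}\alpha\}=\exp[\alpha^{*}\alpha(e^{-i\omega T}-1)]$ for this Hamiltonian. It compares the direct matrix element in a truncated Fock basis with the right-hand side of (30):

```python
import numpy as np
from math import factorial

N = 60                         # truncated Fock basis
n = np.arange(N)

def coherent(alpha):
    c = np.array([alpha**k / np.sqrt(float(factorial(k))) for k in range(N)])
    return np.exp(-abs(alpha)**2 / 2) * c

w, T = 1.3, 0.7
a_i, a_f = 0.8 + 0.2j, -0.3 + 0.5j

# Left side: <alpha_f| e^{-iT H} |alpha_i>; for H = w a^dag a the evolution
# operator is diagonal in the Fock basis with phases e^{-i w T n}
U = np.diag(np.exp(-1j * w * T * n))
lhs = np.vdot(coherent(a_f), U @ coherent(a_i))

# Right side of eq. (30): overlap times the star exponential
# evaluated at alpha = alpha_i, alpha^* = alpha_f^*
overlap = np.exp(-abs(a_f)**2/2 - abs(a_i)**2/2 + np.conj(a_f)*a_i)
star_exp = np.exp(np.conj(a_f) * a_i * (np.exp(-1j*w*T) - 1))
rhs = overlap * star_exp

assert abs(lhs - rhs) < 1e-10
```

Both sides collapse analytically to $\exp(-|\alpha_{f}|^{2}/2-|\alpha_{i}|^{2}/2+\alpha_{f}^{*}\alpha_{i}e^{-i\omega T})$, which is what the numerical agreement confirms.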
Then, we computed the Husimi-Kano Q-representation of the time evolution operator in terms of the normal star product. Finally, the optical equivalence theorem allowed us to express the coherent path integral in terms of the star exponential of the classical Hamiltonian for the normal product. We claim that our developments will be helpful to analyze several inconsistencies related to regularization and continuum limits encountered in coherent state path integration [14]. From a different perspective, it will also be relevant to implement the techniques described here to establish the star product representation of spin foam models in the loop quantum gravity and loop quantum cosmology framework, since recently a family of $s$-parametrized quasi-probability distributions and their relation with the Wigner-Weyl representation have been obtained [15], [16], [17]. We intend to dedicate a future publication to address this work in progress. Acknowledgments The author would like to acknowledge financial support from CONACYT-Mexico under the project CB-2017-283838. References [1] Kleinert H., Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, (World Scientific, Singapore, 2006). [2] Klauder J. R. and Skagerstam B. S., Coherent States, Applications in Physics and Mathematical Physics, (World Scientific, Singapore, 1985). [3] Baranger M., de Aguiar M. A. M., Keck F., Korsch H. J. and Schellhaass B., Semiclassical Approximations in Phase Space with Coherent States, J. Phys. A 34 7227 (2001), arXiv:quant-ph/0105153. [4] Kochetov E. A., Quasiclassical path integral in coherent-state manifolds, J. Phys. A: Math. Gen. 31 4473 (1998). [5] Gutzwiller M. C., Chaos in classical and quantum physics, (Springer, New York, 1990). [6] Wigner E., On the Quantum Correction For Thermodynamic Equilibrium, Phys. Rev. 40 749 (1932). [7] Weyl H., Group Theory and Quantum Mechanics, (Dover, New York, 1931). [8] Moyal J. 
E., Quantum mechanics as a statistical theory, Proc. Cambridge Philos. Soc. 45 99–124 (1949). [9] Scully M. O. and Zubairy M. S., Quantum Optics, (Cambridge University Press, Cambridge, 2001). [10] Schleich W. P., Quantum Optics in Phase Space, (Wiley-VCH, Berlin, 2001). [11] Cahill K. E. and Glauber R. J., Ordered expansions in boson amplitude operators, Phys. Rev. 177 1857–1881 (1969). [12] Sharan P., Star-product representation of path integrals, Phys. Rev. D 20 414 (1979). [13] Dito J., Star-Product Approach to Quantum Field Theory: The Free Scalar Field, Lett. Math. Phys. 20 125–134 (1990). [14] Wilson J. H. and Galitski V., Breakdown of the Coherent State Path Integral: Two Simple Examples, Phys. Rev. Lett. 106 110401 (2011), arXiv:1012.1328 [quant-ph]. [15] Berra-Montiel J. and Molgado A., Polymer quantum mechanics as a deformation quantization, Class. Quantum Grav. 36 025001 (2019), arXiv:1805.05943 [gr-qc]. [16] Berra-Montiel J., The Polymer representation for the scalar field: A Wigner functional approach, Class. Quantum Grav. 37 025006 (2020), arXiv:1908.09194 [gr-qc]. [17] Berra-Montiel J. and Molgado A., Quasi-probability distributions in Loop Quantum Cosmology, (2020), arXiv:2007.01324 [gr-qc]. [18] Su J. C., Correct functional-integral expressions for partition functions in the coherent-state representation, Phys. Lett. A 268 279–285 (2000). [19] Hall B. C., Quantum Theory for Mathematicians, (Springer, New York, 2013). [20] Novikov A., Kleinekathöfer U. and Schreiber M., Coherent-state path integral approach to the damped harmonic oscillator, J. Phys. A: Math. Gen. 37 3019–3040 (2004). [21] Itzykson C. and Zuber J. B., Quantum Field Theory, (McGraw-Hill, New York, 1980). [22] Faddeev L. D. and Slavnov A. A., Gauge Fields: Introduction to Quantum Theory, (Benjamin Cummings Company Inc., Massachusetts, 1980). [23] Sudarshan E. C. G., Equivalence of Semiclassical and Quantum Mechanical Descriptions of Statistical Light Beams, Phys. Rev. Lett. 
10 277 (1963). [24] Klauder J. R. and Sudarshan E. C. G., Fundamentals of Quantum Optics, (W. A. Benjamin, Inc., New York, 1968). [25] Gerry C. C. and Knight P. L., Introductory Quantum Optics, (Cambridge University Press, Cambridge, 2005). [26] Berra-Montiel J. and Molgado A., Coherent representation of fields and deformation quantization, (2020), arXiv:2005.14333 [quant-ph]. [27] Mehta C. L., Diagonal Coherent-State Representation of Quantum Operators, Phys. Rev. Lett. 18 752 (1967). [28] Lizzi F. and Vitale P., Matrix Bases for Star Products: a Review, SIGMA 10 086 (2014), arXiv:1403.0808 [hep-th]. [29] Hirshfeld A. C. and Henselder P., Star Products and Perturbative Quantum Field Theory, Ann. Phys. 298 382–393 (2002), arXiv:hep-th/0208194. [30] Alexanian G., Pinzul A. and Stern A., Generalized coherent state approach to star products and applications to the fuzzy sphere, Nucl. Phys. B 600 531 (2000), arXiv:hep-th/0010187. [31] Drummond P. D. and Gardiner C. W., Generalised P-representations in quantum optics, J. Phys. A: Math. Gen. 13 2353–2368 (1980). [32] Poularikas A. D., Transforms and Applications Handbook, 2nd ed. (CRC Press, Boca Raton, 2000). [33] Brewster R. A. and Franson J. D., Generalized Delta Functions and Their Use in Quantum Optics, J. Math. Phys. 59 012102 (2018), arXiv:1605.04321 [quant-ph].
Directional axion detection

Stefan Knirck,${}^{a}$ Alexander J. Millar,${}^{a}$ Ciaran A. J. O’Hare,${}^{b}$ Javier Redondo${}^{a,b}$ and Frank D. Steffen${}^{a}$

Prepared for submission to JCAP. MPP-2018-56

Abstract We develop a formalism to describe extensions of existing axion haloscope designs to those that possess directional sensitivity to incoming dark matter axion velocities. The effects are measurable if experiments are designed to have dimensions that approach the typical coherence length for the local axion field. With directional sensitivity, axion detection experiments would have a greatly enhanced potential to probe the local dark matter velocity distribution. We develop our formalism generally, but apply it to specific experimental designs, namely resonant cavities and dielectric disk haloscopes. We demonstrate that these experiments are capable of measuring the daily modulation of the dark matter signal and using it to reconstruct the three-dimensional velocity distribution. This allows one to measure the Solar peculiar velocity, probe the anisotropy of the dark matter velocity ellipsoid and identify cold substructures such as the recently discovered streams near to Earth. Directional experiments can also identify features over much shorter timescales, potentially facilitating the mapping of debris from axion miniclusters.

${}^{a}$ Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München, Germany ${}^{b}$ Universidad de Zaragoza, P. 
Cerbuna 12, 50009 Zaragoza, España

E-mail: cohare@unizar.es, knirck@mpp.mpg.de, millar@mpp.mpg.de, jredondo@unizar.es, steffen@mpp.mpg.de

Contents
1 Introduction
2 Axions and dark matter
2.1 The local axion field
2.2 Detecting axions
2.3 The velocity distribution
2.4 Streams
2.5 Signal modulations
3 Directional axion haloscopes
3.1 General formalism
3.2 Resonant cavities
3.3 Dielectric haloscope
3.4 Benchmark experimental parameters
4 Statistical analysis
4.1 Profile likelihood ratio test
4.2 Measuring modulations
4.3 Parameter constraints
5 Results
5.1 Measuring the daily modulation
5.2 Measuring the anisotropy of the DM halo
5.3 Measuring a stream
5.4 Prospects for minicluster streams
6 Summary
A Lab velocity and modulation parameters
B Analytic formulae for the test statistic
B.1 Linear experiments
B.2 Quadratic experiments

1 Introduction The axion is a very light pseudoscalar particle that appears as a consequence of the solution of Peccei and Quinn [1, 2] to the strong CP problem of quantum chromodynamics (QCD). The axion has long been an alluring particle candidate to explain the dark matter (DM) that seems to dominate the mass content of the Universe. But in recent years, with the persistent lack of unambiguous positive signals for any weakly interacting massive particles (WIMPs) from direct and indirect probes, the axion has been enjoying growing popularity. Through a variety of mechanisms, axions can sizably contribute to the abundance of dark matter. The subject of axion cosmology is reviewed comprehensively in ref. [3]. Cold dark matter can be produced via the oscillations of the axion field associated with the vacuum realignment mechanism [4, 5, 6, 7]. 
In the scenario in which the Peccei–Quinn symmetry is broken before inflation and not restored thereafter, this contribution depends on the single initial misalignment angle $\theta_{\mathrm{I}}$ in our observable patch of the Universe, with any contributions from topological defects diluted away by inflation. In contrast, in the scenario with post-inflationary Peccei–Quinn symmetry breaking, many different $\theta_{\mathrm{I}}$ values occur and effects associated with topological defects (domain walls and cosmic strings) have to be taken into account [8, 9, 10, 11, 12]. Furthermore, in this scenario sufficiently overdense regions of the axion field that enter matter-radiation equality earlier than their surroundings will have their axions gravitationally bound faster than the surrounding Hubble expansion. The collapse of the mass inside the horizon at this time leaves behind stable clumps of axions called ‘miniclusters’ [13, 14, 15, 16, 17, 18, 19]. These miniclusters may also host solitonic oscillating configurations of the axion field, variously called oscillatons, axitons [15], axion stars [20, 21, 22, 23], Bose stars [24, 25] or drops [26]. If any of these objects are abundant enough (and indeed stable enough to survive the formation of galactic halos) there may be prospects for their direct [27, 28] or indirect detection [29, 30, 31, 32, 33, 34, 35, 36, 37] today. Laboratory searches for axions, and their phenomenological generalisation, the axion-like particle (ALP), predominantly rely on their coupling to photons $g_{a\gamma}$. This coupling conveniently allows for the mixing of axions to photons inside magnetic fields. Hence if such particles exist there is the possibility for a measurable flux of ALPs emitted by the Sun (potentially to be observed by the helioscope CAST [38] and in the future by IAXO [39]). Moreover, ALPs could be produced and detected in a purely laboratory setup (such as in the ‘light-shining-through-a-wall’ experiment [40] ALPS [41]). 
However if axions comprise a significant fraction of galactic DM then the value of the axion field should be perpetually oscillating around us at the frequency of the axion mass. So if an experiment can proffer our local DM population a strong enough magnetic field in which to convert, and we are able to precisely measure the subsequent electromagnetic (EM) response, then we will find the axion. Of course the axion mass is unknown, and existing constraints on the ALP-photon coupling tell us that a signal, if present, must be terribly small. So in looking for the axion an experiment must be able to cover a range of frequencies as well as somehow enhance the signal to something measurable. Historically the most popular way to enhance a potential axion signal experimentally is to couple it to the resonant mode of a cavity. The ADMX collaboration [42] found great success with this method and have recently achieved sufficient sensitivity to probe the DFSZ QCD axion model for the first time in a dark matter search [43]. ADMX are now followed by fervent activity from bright-eyed resonant cavity enthusiasts such as HAYSTAC [44, 45, 46, 47, 48], CULTASK [49, 50, 51], Orpheus [52], ORGAN [53, 54] and RADES [55]. The resonant cavity can, and indeed has, set extremely stringent constraints on the axion-photon coupling thanks to rapid development in highly sensitive receiver and amplification technology with noise temperatures nearing the quantum limit. However there are substantial difficulties to be encountered in designing cavities for higher $m_{a}$ since higher resonant frequencies generally require smaller volumes. Such smaller experiments would suffer in signal strength and therefore sensitivity, unless novel modifications and complex structures are employed, as envisioned in the recent RADES proposal [55]. Cavities are well suited to cover axion masses in the range 1–40 $\mu$eV. 
The search towards higher masses however might be better handled by entirely different designs. For instance some are considering measuring DM axion-induced photon emission from magnetised surfaces. This property is to be exploited by MADMAX [56, 57], which is designed to coherently enhance the emitted photons with a series of dielectric disks (see e.g. ref. [58]). Similarly BRASS [59] is planned to measure this effect as well, but inside a dish antenna configuration, thus achieving a huge effective volume (see also refs. [60, 61, 62, 63]). These experiments will be free from the volume-frequency restriction of the resonator, so are the natural choice to probe larger values of $m_{a}$. The low mass window, below the reach of ADMX, still waits to be explored as well. The vanguard of this region comprises experiments that persuade the axion field to generate a secondary magnetic flux by circulating the primary axion-induced electric field [64, 65]. The ABRACADABRA [65] and DM-Radio [66] groups are making progress with this approach, as well as BEAST [67] which looks to measure the axion-induced electric field directly. In this paper we neglect the discussion of the detection of axion couplings to fermions, suffice it to say that there are experiments in the planning, such as CASPEr [68, 69] and QUAX [70, 71, 72, 73], to look for them. For an up to date review of all past, present and future experimental searches for axions see ref. [74]. The primary calling of an axion search experiment is, one will be surprised to hear, to find the axion. However there is good motivation for asking what such a fortunate experiment might be able to offer particle and astrophysics, beyond the initial identification of the axion mass. One possible avenue that has recently been spotted beyond the horizon is the possibility of haloscopes fulfilling their namesake and becoming devices for doing astronomy. 
Although usually unimportant when exploring over a relatively large range of masses, the thermal distribution of DM velocities would cause a very small spread in the frequency of emitted photons with a width roughly given by the virial velocity dispersion of the DM halo [75]. Past axion searches with ADMX have incorporated some of these astrophysical uncertainties, for example by searching for discrete flows of axions [76, 77, 78] or applying constraints to different halo models [79, 80]. Furthermore there would also be an order 1% modulation of this lineshape in time due to the relative velocity of the Earth and Sun with respect to the DM halo ‘wind’ [81, 82, 80]. These are the signals that we can be confident must be present in any successful axion detection and would be essential cross-checks for confirming the discovery of DM. However irregularities in the shape of the axion spectrum and its time evolution would naturally be expected in a halo formed from the hierarchical merger and accretion of subhalos. These irregularities are of additional interest for the study of the history of the Milky Way (MW), galaxy formation in general, as well as improving our understanding of the mechanisms of axion DM production mentioned earlier. More fundamentally the phase space structure of the DM halo on solar system scales ($<$mpc) can only be explored by a terrestrial DM experiment. This epistemology, ‘axion astronomy’, was introduced and studied in detail recently in refs. [83, 84]. In this paper we aim to enhance the prospects of axion astronomy in future haloscopes by introducing directional effects, first suggested in ref. [85] in the context of cavities. The advantages offered by directional DM detection are well known in the WIMP community [86], especially with regard to DM astronomy [87, 88]. 
We demonstrate here the prospects for the case of axions, which in many instances (as with the findings of the aforementioned non-directional studies) greatly exceed those for WIMPs. The most striking effect when considering directionality in axion experiments is the extremely prominent $\mathcal{O}(1)$ daily modulation present when an experiment has an elongated axis. For comparison, the daily modulation in a non-directional experiment is at the $0.2\%$ level. We suggest that one might be able to construct some manner of axion observatory, if multiple experiments are placed adjacent to one another, pointing along orthogonal axes. Although the axion velocity effects can be written in a unified framework, we highlight the technical restrictions on doing astronomy in three example haloscope designs — two using a resonant cavity setup and one using layered dielectric disks — covering axion masses between 10 and 100 $\mu$eV. We illustrate these designs in figure 1. See table 1 and section 3.4 for further numerical details on the required experimental parameters for each. We structure this paper as follows. To begin, in section 2 we sketch a description of the behaviour of the axion DM field at ultralocal scales; this will inform our input to the calculation of the expected signal and will allow us to connect a detected signal with the astrophysical velocity distribution for DM, which we also review briefly in this section. Then in section 3 we develop our formalism for describing directional effects in axion experiments. We show a general description at first before detailing how one would apply this formalism in practice. In section 4 we outline the statistical analysis methodology we will adopt in order to give analytic estimates of the experimental requirements for axion astronomy. We present these results in section 5, before concluding in section 6. 
2 Axions and dark matter 2.1 The local axion field The axion DM field $a(\mathbf{x},\,t)$ is born as a coherent state that retains a very large occupation number until today. It is appropriate then to describe it as a classical field. We consider a large box of volume $V_{\odot}$ centered around the Solar System and describe the axion field as a superposition of plane waves of momentum $\mathbf{p}$, $$a(\mathbf{x},t)=\sqrt{V_{\odot}}\int\frac{\textrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}\frac{1}{2}\left[a(\mathbf{p})e^{i(\mathbf{p}\cdot\mathbf{x}-\omega_{\bf p}t)}+a^{*}(\mathbf{p})e^{-i(\mathbf{p}\cdot\mathbf{x}-\omega_{\bf p}t)}\right]\,,$$ (2.1) where $\omega_{\bf p}$ is given implicitly by the dispersion relation $\omega_{\bf p}^{2}=|\mathbf{p}|^{2}+m_{a}^{2}$. (In the gravitational field of the Galaxy the dispersion relation is modified by the gravitational potential, $\Phi$, by $m_{a}^{2}\to m_{a}^{2}(1+2\Phi(\mathbf{x}))$ at first order. The overall effect of the Galaxy can be reabsorbed in a redefinition of time, while the spatial variations due to local inhomogeneities in our volume will be neglected.) The average energy density is $$\bar{\rho}_{a}=\frac{1}{V_{\odot}}\int_{V_{\odot}}{\rm d}^{3}\mathbf{x}\,\rho_{a}(\mathbf{x})=\int\frac{\textrm{d}^{3}\mathbf{p}}{(2\pi)^{3}}\frac{1}{2}\omega_{\bf p}^{2}|a(\mathbf{p})|^{2},$$ (2.2) which must be consistent with local determinations of the dark matter density inferred astronomically at relatively large scales, $\sim\mathcal{O}(100\,{\rm pc}$ – ${\rm kpc})$. We have densities $\bar{\rho}_{a}\simeq\rho_{0}\simeq 0.4$ GeV cm${}^{-3}$ locally, where $\rho_{a}$ is the density of axions and $\rho_{0}$ is the astronomically measured value. The group velocity of axion waves is $\mathbf{v}=d\omega/d\mathbf{p}=\mathbf{p}/\omega$. A change of variables in eq. 
(2.2) allows us to identify the DM velocity distribution with the Fourier decomposition, $$\bar{\rho}_{a}\equiv\bar{\rho}_{a}\int\textrm{d}^{3}\mathbf{v}\,f(\mathbf{v})\quad,\quad f(\mathbf{v})\simeq\frac{1}{\bar{\rho}_{a}}\frac{m_{a}^{3}}{(2\pi)^{3}}\frac{1}{2}m_{a}^{2}|a(\mathbf{p})|^{2}\,,$$ (2.3) where we have used $\omega\sim m_{a}$ in the multiplicative factors. Since DM velocities are of the order of $10^{-3}$, they amount to corrections of order $10^{-6}$ in the formula. (We use natural units $\hbar=c=1$ throughout, except at certain points when we reintroduce $c$ for clarity.) We have a relatively clear idea of the distribution of DM on $\lesssim$ kpc scales, both from observations as well as from N-body and hydrodynamic simulations: the density ought to be essentially homogeneous and the velocity distribution will be something resembling a Maxwellian, $$f({\bf v})\sim\exp\left(-\frac{|{\bf v}|^{2}}{2\sigma_{v}^{2}}\right)\,.$$ (2.4) The precise description of this is dealt with in section 2.3. We must admit however a degree of ignorance when we discuss the DM distribution on the much smaller scales we can probe in an experimental campaign. In 10 years of observation, our laboratories together with the Sun sample only $\sim 2$ mpc of the MW halo. At these scales we have no direct handle on the distribution of DM in simulations or through observation, so we must rely on methods of extrapolation. In particular, the question of the ultrafine homogeneity of the MW halo is such a critical one for any successful direct detection of DM that many attempts have already been made to address it. The possibility of a distribution too clumpy to realistically observe from Earth is a grave one. To soothe one’s anxiety, take note of the result of Vogelsberger & White [89]. In this study the authors follow particle trajectories placed inside an N-body distribution, to trace the subgrid evolution of accreted structure. 
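The claim that $v\sim 10^{-3}$ velocities induce only $\mathcal{O}(10^{-6})$ corrections can be illustrated with a quick Monte Carlo over the Maxwellian of eq. (2.4). The sketch below uses illustrative numbers only ($\sigma_{v}=10^{-3}$ in units of $c$, isotropic, no laboratory boost included):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_v = 1e-3                 # velocity dispersion in units of c, cf. eq. (2.4)

# Draw 3D velocities from the Maxwellian f(v) ~ exp(-|v|^2 / 2 sigma_v^2)
v = rng.normal(0.0, sigma_v, size=(1_000_000, 3))
v2 = np.sum(v**2, axis=1)

# Non-relativistic dispersion: omega ~ m_a (1 + v^2/2), so the fractional
# frequency shift of each mode is v^2/2
shift = v2 / 2

# Mean shift is 3 sigma_v^2 / 2 = 1.5e-6: an O(10^-6) correction, as stated
assert abs(shift.mean() - 1.5 * sigma_v**2) < 2e-8
assert shift.mean() < 1e-5
```

The same distribution of $v^{2}/2$ is what sets the thermal width of the haloscope lineshape discussed later in the text.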
They find that the typical DM distribution we would sample at Earth is the sum of many $\sim 10^{14}$ ancient streams, with half of all particles contained in streams with densities less than $10^{-7}\rho_{0}$ today. With these claims — supported by other analyses using a range of alternative approaches to the same problem [90, 91, 92, 93, 94] — we notice that the general opinion tends towards the conclusion of relative homogeneity on the relevant mpc scales. Nevertheless, we must keep in mind the possibility of any non-gravitational ‘beyond-CDM’ interactions that would not be accounted for in these particle-agnostic studies. Even the case of axionic DM alone would warrant a devoted analysis, but this is beyond the scope of our paper. Instead we simply adopt the assumption of homogeneity (as suggested by the aforementioned simulations). This is far from a new argument — almost every theoretical study of direct DM detection works from this assumption — but in the case of axions there are unexpected consequences. So we should identify the impact of this assumption on our analysis. The assumption of homogeneity is usually made in a statistical way. The axion density at a point can be expanded into the modes of the field (see ref. [19] for a similar treatment in the context of miniclusters), $$\rho_{a}({\bf x})=\int\frac{{\rm d}^{3}{\bf p}}{(2\pi)^{3}}\int\frac{{\rm d}^{3}{\bf q}}{(2\pi)^{3}}\frac{m_{a}^{2}}{2}\left[e^{i({\bf p}-{\bf q})\cdot{\bf x}}\cos\big{(}(\omega_{{\bf q}}-\omega_{{\bf p}})t\big{)}a({\bf p})a^{*}({\bf q})+...\right]$$ (2.5) where the ellipsis stands for factors of order ${\cal O}(a^{2}p^{2}/m_{a}^{2})$, which are therefore negligible. At ${\bf x}~{}=~{}0$, $t=0$, the density is given by the square of the integral over the complex amplitudes of the modes, $m_{a}\int d^{3}{\bf p}\,a({\bf p})$. Assuming the distribution of amplitudes with momentum $\mathbf{p}$ is Gaussian, the integral is also Gaussian. This means that the modulus squared (i.e. 
the energy density) will be distributed according to an exponential distribution, $$\frac{d{\cal P}}{d\rho_{a}}=\frac{1}{\bar{\rho}_{a}}\exp\left(-\frac{\rho_{a}}{\bar{\rho}_{a}}\right).$$ (2.6) We can use this distribution for any point, since ${\bf x}={\bf 0}$ should not be particularly special; it only follows from the randomness of the amplitude coefficients. In our local volume however, $a({\bf p})$ is not in fact a statistical variable at all, it is just one fixed complex number (that we would like to eventually measure). But when we sum these complex numbers to measure $\rho_{a}({\bf x})$ over a volume swept out during an observation, we must account for the phase factor $e^{i{\bf p}\cdot{\bf x}}$ and the oscillatory cosine, which are not constant. Therefore, beyond a certain length and time, the phases at one end of the integral will be uncorrelated with the ones at ${\bf x}={\bf 0}$ and the density we observe will be drawn again from the exponential distribution. The length and time of coherence can be read from the distribution of modes, noting that $|a({\bf p})|^{2}$ are exponentially suppressed above $p_{c}\sim m_{a}\sigma_{v}$. So ${\bf p}\cdot{\bf x}\ll 1$ is only true for length scales $|{\bf x}|\ll 1/p_{c}\equiv L_{c}$ and timescales $t\ll 1/(p_{c}^{2}/2m_{a})\equiv t_{c}$. With these coherence scales in mind, consider making repeated observations that sweep out a large enough volume where $V\gg L_{c}^{3}$. The fluctuations in the measured density between each of these volumes will now be smaller than suggested by the exponential distribution. This is because in each $V$ we have $N=V/L_{c}^{3}$ coherence volumes, meaning the integral encodes a random walk over many uncorrelated phases. Eventually the standard deviation of eq. (2.6) will get suppressed by $1/\sqrt{N}\propto\sqrt{L_{c}^{3}/V}$. 
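The statistics described here are easy to reproduce numerically: a sum of many modes with random phases gives a complex Gaussian amplitude, so the density follows the exponential distribution (2.6), and averaging over $N$ coherence volumes suppresses the fluctuations by $1/\sqrt{N}$. A minimal Monte Carlo sketch (sample sizes and normalization $\bar{\rho}_{a}=1$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Within one coherence volume the field amplitude is a sum of many modes with
# random phases -> a complex Gaussian; the density |A|^2 is then exponential, eq. (2.6)
draws = 500_000
A = (rng.normal(size=draws) + 1j * rng.normal(size=draws)) / np.sqrt(2)
rho = np.abs(A)**2                       # normalized so <rho> = rho_bar = 1

# Exponential distribution: mean and standard deviation both equal rho_bar
assert abs(rho.mean() - 1.0) < 0.01
assert abs(rho.std() - 1.0) < 0.02

# Averaging over N independent coherence volumes suppresses fluctuations by 1/sqrt(N)
N = 100
rho_avg = rho.reshape(-1, N).mean(axis=1)
assert abs(rho_avg.std() - 1.0 / np.sqrt(N)) < 0.01
```

The last assert is the $1/\sqrt{N}\propto\sqrt{L_{c}^{3}/V}$ suppression: the averaged density fluctuates at the $10\%$ level for $N=100$ rather than the $100\%$ level of a single coherence volume.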
Importantly for us, this argument will also apply to the fraction of energy associated with axions with frequencies between $\omega$ and $\omega+{\rm d}\omega$, ${\rm d}\rho(\omega)$, and hence to a measurement of the velocity distribution. As long as the phases of the integrals in eq. (2.5) really are random, the statistics of eq. (2.6) and its suppression as we sum over many coherence volumes will follow, $${\rm d}\bar{\rho}_{a}\propto|a({\bf p})|^{2}{\rm d}\omega\,,$$ (2.7) where the proportionality factor can be read from eq. (2.2). In any case, the coherence time for modes in a small bin of frequencies is much longer, $t_{c}\sim 1/{\rm d}\omega$, and correspondingly longer observations are required for the measured density to be drawn again from the distribution. The fundamental statistical nature of the measurement of an axion DM signal was identified only recently in ref. [84], having been missed in previous work. The argument sketched here agrees in the final statistical distribution of the signal but is derived in a different way. A word of warning is in order with respect to the randomness of the Fourier coefficients, their phases in particular. Even if we do believe that the assumption of homogeneity may adequately reflect axion DM produced in the pre-inflationary scenario, in the post-inflationary scenario there is the issue of miniclusters. They have been shown to form in simulations of the axion field at early cosmological times from density perturbations collapsing and decoupling from the Hubble flow (see e.g. refs. [13, 14, 15, 16, 17]). The characteristic mass of a minicluster is set by the horizon size at formation, typically around the mass of a large asteroid, $M_{\rm mc}\sim 10^{-12}\,M_{\odot}$. The abundance of miniclusters at formation can be quite high, potentially constituting the leading fraction of the DM [17].
Sadly, it is highly unlikely that we will pass through one in our lifetime (though a prediction like this depends on the mass function, density profile, spatial extent and overall abundance of a minicluster population, all of which are being actively investigated [19, 37]). Even if the entirety of the dark matter were in the form of miniclusters and there were on the order of $\sim 10^{19}~{}\textrm{kpc}^{-3}$ locally, a direct encounter would occur less than once every $10^{5}$ years. With the assumption of completely random coefficients, large upwards fluctuations of the density are relatively rare. For instance, for an axion mass of $10\,\mu\rm eV$, with $L_{c}\sim 1/m_{a}\sigma_{v}\sim 20$ m, we expect only $\sim 1$ volume $L_{c}^{3}$ in the entire local kpc${}^{3}$ that would have the phases and amplitudes arranged in such a way as to give a measurement of an overdensity $\sim 100\rho_{0}$. On the other hand, the typical minicluster can easily reach an overdensity many orders of magnitude larger than this, even though the large-scale averaged velocity distribution for miniclusters and a homogeneous axion field should be the same. So how can it be that the same distribution of Fourier amplitudes $|a({\bf p})|$ can describe both a consistently observable smooth population of dark matter, and an almost unobservable sparse distribution of miniclusters? The information about such extreme clumpiness can only be encoded in the correlations of the Fourier phases. For miniclusters the phases are such that only around one particular fine-tuned place do they add coherently. As an example, consider the following model for a Gaussian minicluster of radius $R$.
Taking the Fourier transform of this lump of axions we have, $$a\sim a_{0}e^{-\frac{|\mathbf{x}-\mathbf{x}_{0}|^{2}}{2R^{2}}}\cos(m_{a}t)\quad\to\quad a(\mathbf{p})\propto a_{0}e^{im_{a}t}e^{-\frac{|\mathbf{p}|^{2}R^{2}}{2}}e^{i\mathbf{x}_{0}\cdot\mathbf{p}}$$ (2.8) revealing $|a(\mathbf{p})|\propto a_{0}$ and ${\rm arg}\left[a({\bf p})\right]=\mathbf{x}_{0}\cdot\mathbf{p}$. The Gaussian envelope retains no information about the position of our lump, but the phase does. It is clear from the spatial representation that for $|\mathbf{x}_{0}|\gg R$ our detectors will not see the axion DM lump. In Fourier space this is encoded in the correlated but extremely quickly varying phase if $|\mathbf{x}_{0}|\gg 1/p_{m}$, where $p_{m}$ is the characteristic momentum of the minicluster distribution. So in a sense, if one is far outside of the lump then the phase oscillates so wildly between momenta that each ‘step’ in the random walk cancels the previous one. On the other hand, inside the lump the phase can vary slowly and allow the measured density to build up to a very large value. In this paper we will assume that a smooth distribution of axion DM at kpc scales is still valid at the mpc scales relevant for experiments and that axions are not overwhelmingly bound up in miniclusters. In any case, our study begins from the hypothesis that the axion has already been found in an experiment, so the argument is at the very least self-consistent. 2.2 Detecting axions We explore directional effects in haloscope experiments, i.e. those that exploit the axion coupling to the photon $g_{a\gamma}$, which allows mixing between the axion and EM fields inside static magnetic fields. The QCD axion-photon coupling is related to the axion mass via the relation, $$\frac{g_{a\gamma}}{{\rm GeV}^{-1}}=2.0\times 10^{-16}C_{a\gamma}\frac{m_{a}}{\mu{\rm eV}}\,,$$ (2.9) where the $\mathcal{O}(1)$ number $C_{a\gamma}$ is model dependent (see ref. [74] for a discussion).
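Returning briefly to the minicluster toy model: the statement below eq. (2.8), that the envelope of $a(\mathbf{p})$ forgets the lump position while the phase varies linearly with a slope set by $\mathbf{x}_{0}$, is easy to verify numerically in 1D. This sketch uses the $e^{-ipx}$ Fourier convention, for which the recovered phase slope is $-x_{0}$; all numbers are illustrative.

```python
import numpy as np

# 1D version of eq. (2.8): a Gaussian lump displaced to x0.
N, L, R, x0 = 4096, 400.0, 5.0, 60.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
lump = np.exp(-(x - x0)**2 / (2 * R**2))
centred = np.exp(-x**2 / (2 * R**2))

p = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
# approximate the continuous transform int a(x) e^{-ipx} dx
ft = lambda a: np.fft.fft(a) * (L/N) * np.exp(1j * p * L/2)
ap, ap0 = ft(lump), ft(centred)

# The magnitude is identical for both lumps: no positional information.
print(np.allclose(np.abs(ap), np.abs(ap0)))  # True

# The phase slope recovers the displacement (slope = -x0 in this convention).
order = np.argsort(p)
keep = np.abs(p[order]) < 1 / (2 * R)  # modes where |a(p)| is non-negligible
slope = np.polyfit(p[order][keep], np.unwrap(np.angle(ap[order][keep])), 1)[0]
print(slope)  # ~ -x0
```

The phase must be unwrapped on a momentum-sorted grid before fitting, since it winds rapidly once $|\mathbf{x}_{0}|$ exceeds the inverse momentum spread, exactly the regime the text describes.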
Throughout we make the supposition that the discovered axion turned out to be from the KSVZ model [95, 96], so $|C_{a\gamma}|=1.92$. Since signals in haloscope experiments depend on the coupling as $g_{a\gamma}^{2}$, one should use this fact to rescale our results to match any alternative QCD axion (or indeed ALP) model at the quoted masses (the same is true for the axion density and fraction of axionic dark matter, which would scale the signal linearly; though throughout we assume $\bar{\rho}_{a}=\rho_{0}=0.4\,{\rm GeV\,cm}^{-3}$). The derivation of the effects in question begins with the axion-modified Maxwell’s equations for magnetic and electric fields $\mathbf{B}$ and $\mathbf{E}$, $$\nabla\cdot\mathbf{E}=\rho_{q}-g_{a\gamma}\mathbf{B}\cdot\nabla a\,,$$ (2.10) $$\nabla\times\mathbf{B}-\dot{\mathbf{E}}=\mathbf{J}+g_{a\gamma}(\mathbf{B}\,\dot{a}-\mathbf{E}\times\nabla a)\,,$$ (2.11) $$\nabla\cdot\mathbf{B}=0\,,$$ (2.12) $$\nabla\times\mathbf{E}+\dot{\mathbf{B}}=0\,,$$ (2.13) $$(\Box+m^{2}_{a})a=g_{a\gamma}\mathbf{E}\cdot\mathbf{B}\,,$$ (2.14) where $\rho_{q}$ and $\mathbf{J}$ are the electric charge density and current. In the following we will assume that a static magnetic field $\mathbf{B}_{e}$ is applied over some experimental volume $V$, and the resulting axion-photon oscillations are enhanced through a coupling to a resonant mode, or through the correct spacing of a series of dielectric disks. The dependence on $f(\mathbf{v})$ appears when one offers these equations an axion plane wave $a\sim a_{0}\exp{\left[i(\mathbf{p}\cdot\mathbf{x}-\omega t)\right]}$. The plane wave will have some frequency and momentum selected from the DM velocity distribution that will be reselected over the characteristic coherence length and time.
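The coherence scales $t_{c}=2\pi/(m_{a}v^{2})$ and $L_{c}=\pi/(m_{a}v)$ are easy to evaluate in natural units; the short sketch below (with rounded values of $\hbar$ and $\hbar c$ as stated assumptions) reproduces the benchmark numbers used in the text for $m_{a}=100\,\mu$eV.

```python
import math

# Natural-units evaluation of the axion coherence time and length,
# t_c = 2*pi/(m_a v^2) and L_c = pi/(m_a v), restoring hbar = 6.582e-16 eV s
# and hbar*c = 1.973e-7 eV m (rounded CODATA values).
hbar_eV_s = 6.582e-16
hbar_c_eV_m = 1.973e-7
c_km_s = 2.998e5

def coherence_scales(m_a_eV, v_km_s=300.0):
    v = v_km_s / c_km_s                              # v/c, dimensionless
    t_c = 2 * math.pi * hbar_eV_s / (m_a_eV * v**2)  # seconds
    L_c = math.pi * hbar_c_eV_m / (m_a_eV * v)       # metres
    return t_c, L_c

t_c, L_c = coherence_scales(100e-6)  # m_a = 100 micro-eV
print(t_c * 1e6, L_c)  # roughly 40 microseconds and 6.2 metres
```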
For a typical speed of $300\textrm{ km s}^{-1}$ these are, $$t_{c}=\frac{2\pi}{m_{a}v^{2}}\,=40\,\mu{\rm s}\left(\frac{100\,\mu{\rm eV}}{m_{a}}\right)\,,$$ (2.15) $$L_{c}=\frac{\pi}{m_{a}v}=6.2\,{\rm m}\left(\frac{100\,\mu{\rm eV}}{m_{a}}\right)\,.$$ (2.16) The characteristic coherence time is shorter by many orders of magnitude than the typical integration times of most experiments (even for lower masses than the benchmark used here). So the Fourier transform of the signal collected over many thousands of these durations will approach the speed distribution $f(v)$, up to some exponentially distributed coefficient at each speed/frequency bin coming from the uncorrelated nature of the phases as described earlier. This type of measurement is the focus of refs. [83, 84]. Here we account for an additional effect: if the linear scale of $V$ is larger than the typical $L_{c}$ then the axion will oscillate with a slightly different phase across the dimensions of the experiment. So any measured signal will be modified slightly by how out of phase the oscillation is at one end of the device compared with the other. The size of this effect at some instant will be set by the angle between the axion direction and the preferred axis of the experiment. Accounting for this effect on a power spectrum measurement over some finite time essentially constitutes a correction from a weighted integral of the velocity distribution $f(\mathbf{v})$. This effect was introduced in ref. [85], but how one can exploit it to make a measurement of $f(\mathbf{v})$ in 3D has not been studied in detail before. 2.3 The velocity distribution Most dark matter detection analyses are performed under a simple assumption for the MW halo known as the standard halo model (SHM) [97]. This is a spherically symmetric isothermal halo model. Its $1/r^{2}$ density profile yields a Maxwell-Boltzmann velocity distribution with peak speed $v_{0}$ and dispersion $\sigma_{v}=v_{0}/\sqrt{2}$.
The distribution ought to be truncated at the escape speed of the Galaxy [98], but given the exponential suppression of fast $v$, this has an extremely marginal effect for most axion direct detection signals. The velocity distribution in the galactic frame is given by: $$f(\mathbf{v})=\frac{1}{(2\pi\sigma_{v}^{2})^{3/2}}\,\exp\left(-\frac{|\mathbf{v}|^{2}}{2\sigma_{v}^{2}}\right)\,.$$ (2.17) We may also allow for the velocity distribution to be anisotropic in the galactic frame. We discuss this possibility and prospects for detection in section 5.2. There have been long-standing concerns raised by the results of DM-only N-body simulations that the SHM may be a poor reflection of the real MW halo [99, 100, 101]. Interestingly however, more recent analyses of hydrodynamic simulations have found that the simple Maxwellian distribution of the SHM may, at least functionally, be sufficient to describe the local velocity distribution for the purposes of direct detection [102, 103, 104, 105]. However there are quantitative disagreements about whether the local $f(v)$ should be shifted to higher or lower peak speeds than the SHM value of $220\textrm{ km s}^{-1}$. The explanation suggested by ref. [104] is a correlation between the circular rotation speed (which is related to the peak speed) and the stellar mass of the halo. Despite these quantitative discrepancies, the simulations do agree that the addition of baryons improves the fit to the Maxwellian locally. In the absence of a detection, a narrower speed distribution strengthens constraints on axions since a narrower line shows up more strongly over thermal noise. In the case of a detection (which is the focus of our work) the issue is immaterial since we simply measure the peak and width of the distribution directly.
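Boosting eq. (2.17) into the laboratory frame and integrating over angles yields the observed speed distribution. The closed form below is a standard result not written out in the text, so read this as a hedged sketch; the value of $v_{\rm lab}$ is illustrative.

```python
import numpy as np

# Lab-frame speed distribution obtained by boosting the Maxwellian of
# eq. (2.17) by v_lab and integrating over angles (standard result; the
# numerical values here are illustrative, not fitted).
v0 = 220.0                 # km/s, SHM peak speed
sigma = v0 / np.sqrt(2.0)  # dispersion sigma_v = v0/sqrt(2)
v_lab = 240.0              # km/s, illustrative lab speed

def f_lab(v):
    norm = v / (np.sqrt(2.0 * np.pi) * sigma * v_lab)
    return norm * (np.exp(-(v - v_lab)**2 / (2 * sigma**2))
                   - np.exp(-(v + v_lab)**2 / (2 * sigma**2)))

v = np.linspace(0.0, 2000.0, 200_001)
dv = v[1] - v[0]
print(np.sum(f_lab(v)) * dv)  # ~ 1: normalised (escape-speed cut neglected)
```

Truncating at 2000 km s$^{-1}$ rather than at the escape speed changes the normalisation only at the level of the exponential tail, consistent with the remark above.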
Indeed the comparison between this direct measurement and the aforementioned simulations will be an excellent way to refine the mass model and evolution history of the galaxy, in particular the relationship between the stellar and dark matter halos. Ultimately, though, a measurement from Earth is the most direct way to learn about the structure of dark matter halos on the scales inaccessible to simulations, and about our galaxy in particular. Along these lines, we must mention recent work showing that our DM velocity distribution on slightly larger scales could be determined empirically using astrometric survey data. Reference [106] showed that the kinematics of metal-poor stars, those which populate the stellar part of the halo, can be used as tracers for the velocity distribution of the virialised dark matter part of the halo. A determination was made in ref. [107] by applying this method to stars from RAVE and Gaia. They observe a narrower distribution than the SHM prediction, in agreement with the N-body inspired axion lineshape [105] which is currently used by ADMX [43]. In the future these three complementary methods — simulations, astronomy and direct detection — will comprise a powerful multi-perspective view of the structure and growth of galactic halos on a wide range of scales. 2.4 Streams One of the most interesting questions we can ask of our local population of dark matter concerns the presence of substructure. For instance, streams are seen generically in simulations of Milky Way-like galaxies as smaller subhalos become absorbed by their larger host. In fact they are an inevitable consequence of the hierarchical growth of structure. The early numerical simulations of ref. [108] suggested that there was an $\mathcal{O}(1)$ probability for a stream to make up 1–5% of our local density. We now know of many examples of such substructure in the inner Milky Way [109, 110, 111, 112, 113].
Nearby streams can be identified either by looking for overdensities of individual stars or as phase space structures that have remained kinematically cold. Some have been known for many years, for example the stream from the famous Sagittarius dwarf [114, 115, 116, 117] (a favourite benchmark for direct detection theory papers [87, 118, 119, 84, 83, 88]). Unfortunately, after several years of mapping across the sky with multiple stellar tracer populations, the Sagittarius stream is now known not to pass close to the Sun [120, 121]. Nevertheless our local neighbourhood may not be so bereft of streams after all. Thanks to the transformative data set from Gaia [122], more candidates have been found, including six stream-like or ‘clumpy’ objects which were shown to approach the Solar position [110]. One object in particular, denoted ‘S1’, is certainly a stream and, with a judicious selection of stars in phase space, can be shown to have a mean position consistent with our galactic location [113]. The S1 stream has a galactocentric velocity of around 300 km s${}^{-1}$ and is incoming in the same direction as the dark matter wind. Whilst these velocities can be well-measured, there is still some doubt regarding how much one can assume about the dark component of a stream from its stars. S1 is believed to have infallen over a time of $\gtrsim$ 9 Gyr from a progenitor with a total mass of around $10^{10}\,M_{\odot}$ (around the mass of the largest MW dwarf spheroidal, Fornax), so there is a good case to be made for a sizable dark matter component. Furthermore there may indeed be streams from dark subhalos that never contained stars to begin with. It is expected that $\sim 100$–$200$ streams will be found in the inner halo of the MW over the next few years with the phase space method [111]. For us there is no need to make any assumptions, but these objects are attractive as a first set of benchmarks that are in some way grounded in reality.
Again, the mysteries of the dark hearts of streams ought to be unveiled by detecting dark matter! We discuss the detectability of streams more in section 5.3. If a stream passed through the solar system it would exist as a distinct component of the local dark matter phase space distribution with speeds tightly concentrated around a single velocity $\mathbf{v}_{\mathrm{str}}$. The velocity distribution of a stream can be written similarly, $$f_{\mathrm{str}}(\mathbf{v})=\frac{1}{(2\pi\sigma_{\textrm{str}}^{2})^{3/2}}\,\exp\left(-\frac{(\mathbf{v}-\mathbf{v}_{\textrm{str}})^{2}}{2\sigma_{\textrm{str}}^{2}}\right)\,,$$ (2.18) where $\sigma_{\rm str}$ would be $\mathcal{O}(10)\textrm{ km s}^{-1}$. (In using models like this one should decide whether the stream is an additional contribution to dark matter on top of $\rho_{0}\sim 0.4$ GeV cm${}^{-3}$ or whether it would comprise a fraction of $\rho_{0}$. The former would be best if the substructure is small enough in extent not to affect local determinations of the dark matter density with stars beyond a few parsecs away, i.e. the stream surrounds the Earth but not nearby stars. On the other hand, if the substructure is on the order of a few hundred parsecs in size or larger, as is expected for streams from dwarfs, then it would contribute to the local gravitational potential and hence to determinations of $\rho_{0}$.) We (like others before us [123, 118, 84]) will focus on substructure in the form of streams, since the case for their presence nearby is the most compelling. However other creatures have been suggested variously in the literature such as debris flows [124, 125, 126, 127], shadow bars [128, 129] and dark disks [130, 131, 132, 133, 134]. The latter of these would lead to an enhancement of $f(v)$ at low speeds.
Such a situation would be of no great concern for the detection of axions; in fact, an enhanced low-speed population of dark matter would only increase the signal strength (the reverse is true for WIMPs [135]). In any case the dark disk scenario is believed to be unlikely since such disks usually form only after a significant late merger [133] and can be constrained with astrometric data, as in ref. [134] for example. Finally we comment that the bestiary of substructure roaming our local halo may be enriched by the mechanisms involved in the cosmological production of dark matter. As discussed in the previous subsection, for axions produced in the post-inflation scenario, substructure in the form of miniclusters is expected. We mentioned that it is highly unlikely that we will pass through an individual minicluster in our lifetime, but an interesting prospect for direct detection is if this initial population of axion miniclusters is tidally disrupted by stellar interactions inside a galactic halo over many orbits through the disk and bulge [27, 28]. This could result in a network of streams wrapping the Milky Way, each with a much smaller radius than tidal streams from the stripping of satellites. A journey through this network would be characterised by temporary enhancements in the axion signal over timescales between a few hours and many days, depending on the size of the original minicluster. Clearly if we wish to detect a ministream we need an experiment that can measure signals that tell us its velocity components within this duration. 2.5 Signal modulations We observe the velocity distribution of DM particles in the rest frame of the laboratory, so the $f(\mathbf{v})$ that we use to construct our power spectrum must undergo a Galilean transformation into the lab rest frame via the time-dependent velocity $\mathbf{v}_{\mathrm{lab}}(t)$. In section 3 we describe how we can build experiments that are most sensitive in a particular direction.
So to measure the velocity distribution in 3D it is sensible to arrange three of these experiments orthogonal to one another in a $(\hat{\mathcal{N}},\,\hat{\mathcal{W}},\,\hat{\mathcal{Z}})=({\rm North,\,West,\,Zenith})$ coordinate system. We assume that the experiment is located at latitude and longitude $(\lambda_{\textrm{lab}},\,\phi_{\textrm{lab}})$. The angle between $\mathbf{v}_{\mathrm{lab}}$ and these axes will be diurnally modulated by the rotation of the Earth. We describe the calculation of these three daily modulations in appendix A. For now we skip to the final result, which is the daily modulation of $\mathbf{v}_{\mathrm{lab}}(t)$ projected along each axis, $$v_{\rm lab}^{\mathcal{N}}/v_{\rm lab}=\cos{\theta_{\rm lab}^{\mathcal{N}}}(t)=b_{0}\cos{\lambda_{\rm lab}}-b_{1}\sin{\lambda_{\rm lab}}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi\right)}\,,$$ (2.19) $$v_{\rm lab}^{\mathcal{W}}/v_{\rm lab}=\cos{\theta_{\rm lab}^{\mathcal{W}}}(t)=b_{1}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi-\pi\right)}\,,$$ (2.20) $$v_{\rm lab}^{\mathcal{Z}}/v_{\rm lab}=\cos{\theta_{\rm lab}^{\mathcal{Z}}}(t)=b_{0}\sin{\lambda_{\rm lab}}+b_{1}\cos{\lambda_{\rm lab}}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi\right)}\,,$$ (2.21) where the frequency is $\omega_{d}=2\pi/$(1 sidereal day), and the constants $b_{0}$, $b_{1}$ and $\psi$ vary slowly over the year, but can be taken as approximately constant over a duration of a couple of days. These constants can be inverted to find the three components of the Solar velocity, $\mathbf{v}_{\odot}$, see eq. (A.26).
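A minimal implementation of eqs. (2.19)–(2.21) is sketched below. The constants $b_{0}$, $b_{1}$ and $\psi$ are placeholders rather than the values derived in appendix A; the point is only the structure of the three projections and their common sidereal period.

```python
import numpy as np

SIDEREAL_DAY = 86164.1  # seconds
omega_d = 2 * np.pi / SIDEREAL_DAY
lam, phi = np.radians(48.0), np.radians(12.0)  # Munich, as in the text
b0, b1, psi = 0.5, 0.7, 1.0                    # illustrative placeholder constants

def projections(t):
    """cos(theta_lab) along North, West and Zenith, eqs. (2.19)-(2.21)."""
    c = np.cos(omega_d * t + phi + psi)
    north = b0 * np.cos(lam) - b1 * np.sin(lam) * c
    west = b1 * np.cos(omega_d * t + phi + psi - np.pi)
    zenith = b0 * np.sin(lam) + b1 * np.cos(lam) * c
    return north, west, zenith

# All three projections repeat with the sidereal period.
print(np.allclose(projections(0.0), projections(SIDEREAL_DAY)))  # True
```

Note that the modulation amplitude of the North projection is $b_{1}\sin\lambda_{\rm lab}$, so the latitude of the site directly controls how strongly each axis modulates.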
The angle between the Earth’s equator and $\mathbf{v}_{\mathrm{lab}}(t)$ varies between $41^{\circ}$ and $54^{\circ}$ over the year, so locations between these latitudes would be optimally placed to have a large daily modulation in all three experiments throughout the whole year (the locations of CAST, ADMX, HAYSTAC and ABRACADABRA, as well as the proposed site for MADMAX, already satisfy this condition). An experiment could also be placed on a tilt to mimic the effect of being at a different latitude. Every example we use takes the location of the experiment to be Munich, with coordinates $(\lambda_{\rm lab},\,\phi_{\rm lab})=(48^{\circ},\,12^{\circ})$. The modulations due to the movement of the laboratory with respect to the DM wind are the only ones we consider, since we make the assumption of homogeneity in the smooth component of the axion field on our mpc scales. However we note that there will certainly be inhomogeneities induced even more locally than this by the gravitational field of the Sun [136]. This effect of gravitational focusing was identified as an issue for axion astronomy by the authors of ref. [84], who implement it perturbatively at leading order in $G$ as a correction to the velocity distribution, see ref. [137]. We are behind the Sun with respect to the DM wind during March, so the greatest amount of focusing is observed during Northern Hemisphere spring. The effect is around 1–2% at the level of the distribution and is largest for small values of $v$. This means that measurements of signals which modulate with a period of a year, or of the low-speed part of the distribution, will be biased if this effect is not taken into account. The modification turns out to be at a higher harmonic order than a simple amplitude or phase shift. We neglect gravitational focusing here since the bulk of our analysis involves comparing diurnally modulating signals as well as fast substructure such as streams.
For these, gravitational focusing amounts to an essentially negligible correction that comes with a rather large computational expense. However, as demonstrated in ref. [84], to make an unbiased measurement of the phase and amplitude of annual modulation, the focusing effect should be accounted for. 3 Directional axion haloscopes 3.1 General formalism To have a consistent discussion of directionally sensitive experiments, we need a unified framework onto which we can map specific experimental designs. The following subsections 3.2 and 3.3 will deal with cavity and dielectric experiments respectively. Fortunately, both of these haloscope designs permit an overlap integral formalism. This can be seen either from classical electromagnetic calculations or from the lowest order of perturbation theory in quantum field theory [138, 58, 139]. The latter is useful as we only need to do one calculation to cover both cavities and dielectric haloscopes. The inverse lifetime for a single axion with energy $\omega_{a}$ to convert to a photon is $$\Gamma_{a\to\gamma}=2\pi\sum_{\bf k}|{\cal M}|^{2}\,\delta(\omega_{a}-\omega_{\bf k})\,.$$ (3.1) Here ${\cal M}=\langle{\rm f}|H_{a\gamma}|{\rm i}\rangle$ is the matrix element of the interaction Hamiltonian between the initial and final state, given by $${\cal M}=\frac{g_{a\gamma}}{2\omega V}\int{\rm d}^{3}{\bf x}\,\,{\bf B}_{\rm e}({\bf x})\cdot{\bf E}^{*}_{\bf k}({\bf x})e^{i{\bf p}\cdot{\bf x}}\,,$$ (3.2) for $\omega=\omega_{a}=\omega_{\mathbf{k}}$, where ${\bf E}^{*}_{\bf k}$ is the free photon wave function and ${\bf B}_{\rm e}$ is the external magnetic field. In the dielectric haloscope ${\bf E}_{\bf k}$ is given by a Garibian wave function [139]. Here ${\bf k}$ denotes some general set of quantum numbers that describe the photon wave functions, for example momenta or mode numbers.
Note that the quantum field calculation described above has a limitation: formally one must know the final state to which the axion converts, which to be detectable must be a state that extends outside the cavity. Exactly how the signal leaves the cavity is usually unspecified in the literature when discussing a generic setup. We assume for simplicity that the energy leaves the cavity via photons. The part of the photon wave function outside the cavity will generally be oscillatory and so does not contribute to the integral. Thus the only way the external, measurable part of the photon wave function enters the calculation is in the normalisation of ${\bf E}_{\bf k}$. So for a cavity ${\bf E}_{\bf k}$ is given by the resonant mode up to some normalisation from a quality factor, which describes the transition rate of photons inside the system to energy outside of the system. Thus the power generated for a given axion momentum $\bf p$ is $$P_{\bf p}\propto\left|\int{\rm d}^{3}{\bf x}\,{\bf E_{\bf k}}({\bf x})\cdot{\bf B}_{\rm e}e^{i{\bf p}\cdot{\bf x}}\right|^{2}\,.$$ (3.3) If we write the number of axions inside the device at a time $t$ with velocity $\bf v$ as $N(\mathbf{v};t)$ then we can write the corresponding power in a form more familiar to those conversant with cavity experiments, $$P_{\bf p}=\kappa g^{2}_{a\gamma}B_{\rm e}^{2}C({\bf v})N(\mathbf{v};t)Q_{\rm eff}\,,$$ (3.4) where $\kappa$ is the coupling efficiency, $V$ is the volume of the device, $Q_{\rm eff}$ is some effective “quality factor” and the form factor $C$ is given by $$C({\bf v})=\frac{2\left|\int{\rm d}V\,{\bf E_{\bf k}}({\bf x})\cdot{\bf B}_{\rm e}e^{i{\bf p}\cdot{\bf x}}\right|^{2}}{B_{\rm e}^{2}V\int{\rm d}V\left[\epsilon({\bf x})|{\bf E_{\bf k}}({\bf x})|^{2}+\mu({\bf x})|{\bf B_{\bf k}}({\bf x})|^{2}\right]}\,,$$ (3.5) with $\epsilon({\bf x})$ being the relative permittivity and $\mu({\bf x})$ the permeability (which will generally be set to 1).
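As an illustration of eq. (3.5), the zero-velocity form factor can be evaluated numerically for a simple standing-wave mode in a rectangular cavity, $\mathbf{E}_{\bf k}\propto\hat{y}\sin(\pi x/L_{x})\sin(\pi z/L_{z})$ with $\mathbf{B}_{\rm e}$ along $y$. The mode choice and the resonant-mode normalisation $\int(\epsilon|\mathbf{E}|^{2}+\mu|\mathbf{B}|^{2})\,{\rm d}V=2\int|\mathbf{E}|^{2}\,{\rm d}V$ are our own illustrative assumptions, not taken from the text; within these conventions the analytic answer is $64/\pi^{4}\approx 0.66$.

```python
import numpy as np

# Zero-velocity form factor C0 of eq. (3.5) for an illustrative rectangular
# standing-wave mode E ~ y_hat sin(pi x/Lx) sin(pi z/Lz), with B_e = (0, 1, 0).
Lx = Ly = Lz = 1.0
n = 400
x = (np.arange(n) + 0.5) * Lx / n  # midpoint grid in x
z = (np.arange(n) + 0.5) * Lz / n  # midpoint grid in z
dV = (Lx / n) * Ly * (Lz / n)      # the mode is uniform in y

X, Z = np.meshgrid(x, z, indexing="ij")
E = np.sin(np.pi * X / Lx) * np.sin(np.pi * Z / Lz)

V = Lx * Ly * Lz
numerator = 2 * np.abs(np.sum(E) * dV)**2  # 2 |int E . B_e dV|^2 at p = 0
denominator = V * 2 * np.sum(E**2) * dV    # resonant mode: int = 2 int |E|^2
C0 = numerator / denominator
print(C0, 64 / np.pi**4)  # both ~ 0.657
```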
Formally non-resonant devices do not have a quality factor; however, an analogous quantity can be defined for dielectric haloscopes [56]. In the case of resonant cavities, rather than a general proof, which requires detailed knowledge of the final photon state, we note that the normalisation of the photon wave function is unaffected by the velocity of the axion, and thus must agree with Sikivie’s original calculation [138]. However, one can explicitly show that eq. (3.4) holds for more specific cases where the final state is specified. For example, for open resonators this normalisation was shown in refs. [58, 139] (the former reference shows that in the zero velocity limit, Sikivie’s original calculation agrees with the classical calculation of dielectric haloscopes in a resonant limit, and the latter reference shows that such a calculation is equivalent to a perturbative quantum field calculation as described here). Such an argument can be applied to a rectangular cavity, under the assumption that the leaked power is due to a non-zero transmissivity in the end caps. We will make the assumption here that the DM density measured in the experiment agrees with the average local DM density $\bar{\rho}_{a}$, so the total power is given by $$P=\int{\rm d}^{3}{\bf v}\,P_{\bf p}=\kappa g^{2}_{a\gamma}B_{\rm e}^{2}V\frac{\bar{\rho}_{a}}{m_{a}}Q_{\rm eff}\int{\rm d}^{3}{\bf v}\,f({\bf v})C({\bf v})\,.$$ (3.6) To see how $C$ depends on $\bf v$, we note that there are only two effects from the velocity of the axion: a change in the frequency, and a change of phase. Only the change in phase of the axion can provide a directional sensitivity, but since the velocity is multiplied by the dimensions of the device, such an effect can be very significant.
Expanding the axion phase we have $$C({\bf v})\propto\left|\int{\rm d}^{3}{\bf x}\,{\bf E_{\bf k}}({\bf x})\cdot{\bf B}_{\rm e}\left(1+i{\bf p}\cdot{\bf x}-\frac{({\bf p}\cdot{\bf x})^{2}}{2}+...\right)\right|^{2}\,.$$ (3.7) Note that if ${\bf E_{\bf k}}$ is a standing wave then it has no spatial phase variation so can be treated as real. Then after taking the modulus squared no linear-order terms in the velocity can survive, since they enter purely imaginarily. Cavity haloscopes always satisfy this condition, meaning that they never have a linear dependence on the velocity. However, it is possible to design dielectric haloscopes for which the free photon wave function has travelling-wave behaviour [58]. Thus at lowest order the geometry factor will be either linearly ($\ell$) or quadratically ($q$) dependent on the velocity components, allowing us to define $$C({\bf v})=C_{0}\left(1-\mathcal{G}_{\ell,\,q}({\bf v})\right)\,,$$ (3.8) containing either, $$\mathcal{G}_{\ell}(\mathbf{v})=\sum_{i=1}^{3}\mathpzc{g}_{\ell}^{i}v_{i}\,,$$ (3.9) or, $$\mathcal{G}_{q}(\mathbf{v})=\sum_{i=1}^{3}\sum_{j=1}^{3}\mathpzc{g}_{q}^{ij}v_{i}v_{j}\,,$$ (3.10) where we have pulled out the form factor $C_{0}$ in the 0-velocity limit. Keep in mind that if ${\bf p}\cdot{\bf x}\sim 1$ then one cannot consider just the lowest-order contributions. To gain directional sensitivity, we arrange for the primary effect on the geometry factor to come from a single direction, corresponding to an elongated dimension. While in general eq. (3.10) could contain cross terms, we will see that in our examples there are only factors proportional to $v_{i}^{2}$, so we will drop one of the superscripts and just give $\mathpzc{g}_{q}^{i}$. In these instances there is the unfortunate aspect that the geometry factor is insensitive to the sign of $v_{i}$.
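The standing-wave argument can be checked directly: for real $\mathbf{E}_{\bf k}$ the overlap integral obeys $I(-p)=I(p)^{*}$, so $|I(p)|^{2}$ is even in $p$ and the lowest-order correction is quadratic, while a travelling-wave profile does pick up a linear term. A 1D numerical sketch, with illustrative mode profiles of our own choosing:

```python
import numpy as np

L, n = 1.0, 2000
x = (np.arange(n) + 0.5) * L / n  # midpoint grid on [0, L]
dx = L / n

def power(mode, p):
    """|int E(x) e^{ipx} dx|^2, a 1D analogue of the integral in eq. (3.7)."""
    return np.abs(np.sum(mode * np.exp(1j * p * x)) * dx)**2

standing = np.sin(np.pi * x / L)         # real mode: cavity-like
travelling = np.exp(1j * np.pi * x / L)  # complex mode: dielectric-like

eps = 1e-3 / L
asym = lambda mode: (power(mode, eps) - power(mode, -eps)) / power(mode, 0.0)
print(asym(standing))    # ~ 0: no linear velocity dependence
print(asym(travelling))  # nonzero: a linear dependence survives
```

The asymmetry under $p\to-p$ is exactly the linear term of eq. (3.7): it vanishes for the real standing wave and is of order $p L$ for the travelling wave.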
The type of signal we want to use to measure the velocity distribution is a power spectrum $\mathrm{d}P/\mathrm{d}\omega$, obtained in practice by taking the Fourier transform of whatever EM signal was being tracked. Since a power spectrum is one dimensional, only a distribution of speeds will be measurable in one device. A non-directional device would have access to $f(v)$, whereas a single directionally sensitive one would in addition have access to projections of the velocity distribution that rotate with the Earth. As with the form factor we can write the power spectrum as the sum of a non-directional part and a directional part given by a geometry-weighted speed distribution $f_{\mathcal{G}}(v;t)$, $$\frac{\textrm{d}P}{\textrm{d}\omega}(t)=P_{0}\frac{\textrm{d}v}{\textrm{d}\omega}\left(\int{\rm d}\Omega_{v}\,v^{2}\,f(\mathbf{v};t)-\int{\rm d}\Omega_{v}\,v^{2}\,\mathcal{G}(\mathbf{v})\,f(\mathbf{v};t)\right)\bigg{|}_{|\mathbf{v}|=v(\omega)}$$ (3.11) $$=P_{0}\frac{\textrm{d}v}{\textrm{d}\omega}\bigg{(}f[v(\omega);t]-f_{\mathcal{G}}[v(\omega);t]\bigg{)}\,,$$ (3.12) where $P_{0}$ is the total signal power in the 0-velocity limit (i.e. eq. (3.6) calculated using $C_{0}$ instead of $C({\bf v})$, or with $f(\mathbf{v})=\delta^{3}({\mathbf{v}})$). Since we can focus an experiment to have sensitivity to frequencies only within the axion bandwidth, we can ignore any frequency dependence in $P_{0}$ and simply pick a benchmark value based on some experimental configuration, as we discuss shortly. The derivative $\textrm{d}v/\textrm{d}\omega$ is simply introduced to write the differential power spectrum with frequency in terms of speed distributions, which have dimensions of inverse speed.
Optimal sensitivity to the three dimensions of $f(\mathbf{v})$ would be achieved by constructing an experiment consisting of three devices, each with a non-zero $\mathpzc{g}_{\ell}^{i}$ or $\mathpzc{g}_{q}^{ii}$ in individual linearly independent directions. Depending on how much information on the velocity distribution one wanted, not all of these devices would be necessary, but the signal in each is analogous so we can treat the setup generally to begin with. The power spectrum in each experiment will be influenced by $v_{i}$ or $v_{i}^{2}$ for $\ell$ and $q$-type experiments, so we rearrange the measured directional and non-directional powers by writing down functions of $\omega$ and $t$ which describe only the directional corrections to the power spectra $$\Delta^{i}_{\ell}(\omega,t)\equiv\frac{p^{i}_{\rm dir.}-p_{\rm non-dir.}}{p_{\rm non-dir.}}=-\frac{f_{\mathcal{G}}[v(\omega);t]}{f[v(\omega);t]}=-\frac{\mathpzc{g}_{\ell}\int{\rm d}\Omega_{v}\,v_{i}\,f({\bf v};t)}{\int{\rm d}\Omega_{v}\,\,f({\bf v};t)}\bigg{|}_{|\mathbf{v}|=v(\omega)}=-\mathpzc{g}_{\ell}\langle v_{i}(\omega,t)\rangle_{\Omega_{v}}\,,$$ (3.13) for linearly dependent experiments, or $$\Delta^{i}_{q}(\omega,t)\equiv\frac{p^{i}_{\rm dir.}-p_{\rm non-dir.}}{p_{\rm non-dir.}}=-\frac{f_{\mathcal{G}}[v(\omega);t]}{f[v(\omega);t]}=-\frac{\mathpzc{g}_{q}\int{\rm d}\Omega_{v}\,v^{2}_{i}\,f({\bf v};t)}{\int{\rm d}\Omega_{v}\,\,f({\bf v};t)}\bigg{|}_{|\mathbf{v}|=v(\omega)}=-\mathpzc{g}_{q}\langle v^{2}_{i}(\omega,t)\rangle_{\Omega_{v}}\,,$$ (3.14) for quadratic dependence. For notational convenience we use the labelling $p=\textrm{d}P/\textrm{d}\omega$ here, but in our statistical analysis this will be replaced by the power in one frequency bin. Reducing the power spectrum to a directional correction, we can see that these corrections essentially amount to the angular average of $v_{i}$ or $v_{i}^{2}$ over a shell of radius $v$.
We can evaluate these integrals for the Maxwellian $f(\mathbf{v})$ by first performing a rotation such that $\mathbf{v}_{\mathrm{lab}}$ points along the axis of the experiment. This introduces a dependence on the angle between $\mathbf{v}_{\mathrm{lab}}$ and the axis, $\cos{\theta^{i}_{\rm lab}(t)}$ (refer to eq. (2.19) for the full time dependence). For each direction, and for linear and quadratic experiments, we have $$\ell-{\rm type}:\quad\Delta^{i}_{\ell}(\omega,t)=-\zeta_{\ell}(\omega)\,\cos(\theta^{i}_{\rm lab}(t))\,\mathpzc{g}_{\ell}\,,$$ (3.15) $$q-{\rm type}:\quad\Delta^{i}_{q}(\omega,t)=-\left[\zeta_{q1}(\omega)+\zeta_{q2}(\omega)\,\cos^{2}(\theta^{i}_{\rm lab}(t))\right]\mathpzc{g}_{q}\,,$$ (3.16) where the functions $\zeta_{\ell,q1,q2}$ are determined only by properties of the velocity distribution. Explicitly, $$\zeta_{\ell}(\omega)=v(\omega)\coth\left(\frac{v(\omega)v_{\rm lab}}{\sigma_{v}^{2}}\right)-\frac{\sigma_{v}^{2}}{v_{\rm lab}},$$ (3.17) $$\zeta_{q1}(\omega)=\frac{\sigma_{v}^{2}}{v_{\rm lab}}\,\zeta_{\ell}(\omega),$$ (3.18) $$\zeta_{q2}(\omega)=v(\omega)^{2}-3\zeta_{q1}(\omega).$$ (3.19) These will in principle be functions of time as well, but as long as the experimental duration does not exceed a few tens of days one can treat $v_{\textrm{lab}}$ as constant in time, and hence the $\zeta$'s as functions of $\omega$ only. We reiterate that the case for a stream is identical after replacing $\mathbf{v}_{\mathrm{lab}}\rightarrow\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}$ and $\sigma_{v}\rightarrow\sigma_{\rm str}$. Now with this general formalism in hand, we can turn our attention to specific cases exhibiting a directional sensitivity.
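As a sanity check on these expressions (an editorial sketch; the halo parameters $v_{\rm lab}=230$ km/s and $\sigma_{v}=167$ km/s are illustrative), one can compare the angular averages $\langle v_{\parallel}\rangle$ and $\langle v_{\parallel}^{2}\rangle$ over a shell $|\mathbf{v}|=v$, computed by direct numerical integration, against eqs. (3.17)-(3.19) with the axis aligned with the wind ($\cos\theta_{\rm lab}=1$):

```python
import numpy as np

v_lab, sig = 230.0, 167.0     # km/s; illustrative halo parameters
v = 300.0                     # shell speed |v| = v(omega), km/s

def trapezoid(y, x):
    # simple trapezoidal rule (avoids the np.trapz/np.trapezoid rename)
    return np.sum(0.5*(y[1:] + y[:-1])*np.diff(x))

# on the shell |v| = const the Maxwellian exp(-(v - v_lab)^2 / 2 sigma_v^2)
# weights directions by exp(v v_lab cos(theta) / sigma_v^2)
ct = np.linspace(-1.0, 1.0, 200001)
w = np.exp(v*v_lab*ct/sig**2)

# angular averages with the axis aligned with the wind (cos theta_lab = 1)
v_par = trapezoid(v*ct*w, ct)/trapezoid(w, ct)
v_par2 = trapezoid((v*ct)**2*w, ct)/trapezoid(w, ct)

# analytic results, eqs. (3.17)-(3.19)
a = v*v_lab/sig**2
zeta_ell = v/np.tanh(a) - sig**2/v_lab
zeta_q1 = sig**2/v_lab*zeta_ell
zeta_q2 = v**2 - 3*zeta_q1
```

For these numbers $\zeta_{\ell}\approx 183$ km/s, and the quadratic moment satisfies $\langle v_{\parallel}^{2}\rangle=\zeta_{q1}+\zeta_{q2}$, as expected from eq. (3.16) at $\cos\theta_{\rm lab}=1$.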
In figure 2 we show the shapes of the directionally corrected differential power spectra $p^{\rm dir.}$, as well as the isolated directional corrections $\Delta_{\ell,\,q}$. The distribution corresponds to a Maxwellian halo containing a 10% contribution from the S1 stream. The quantity shown in each panel is a ratio of powers: we multiply each differential power value by a binwidth in frequency $\Delta\omega$ and then divide by the total power $P_{0}$, so that the shape of the effects in frequency and time can be clearly illustrated. As with all our examples, we assume the observation begins on Jan 1 in Munich. We assume a linear geometry factor of $\mathpzc{g}_{\ell}c=71$ and a quadratic one of $\mathpzc{g}_{q}c^{2}=10^{5}$ (see section 3.4 for how we settle on these values). The quadratic correction is always negative, but the linear correction can be either positive or negative depending on the orientation of the DM/stream wind with respect to the experimental axis. Importantly for making measurements, the phases and amplitudes of the modulating part of the power spectrum in each experiment are distinct from one another. We now describe the specific experimental designs that can achieve these directional effects in practice. 3.2 Resonant cavities We define a rectangular cavity in an $(x,y,z)$ coordinate system with dimensions $(L_{x},L_{y},L_{z})$ and a homogeneous magnetic field ${\bf B}_{\rm e}=(0,B_{e},0)$. The electric fields of the modes permitted in the cavity can be separated into time and spatially dependent parts as ${\bf E}(t,{\bf x})=\sum_{i}E_{i}(t){\bf e}_{i}$.
Of interest to us are the transverse electric modes TE$_{l0n}$, whose spatial part is given by (the factor of 2 in this formula comes from the normalisation condition $\int{\rm d}V\,{\bf e}_{i}\cdot{\bf e}_{j}=V\delta_{ij}$ that the modes must satisfy) $${\bf e}_{l0n}=\left(0,2\sin{\left(\frac{\pi lx}{L_{x}}\right)}\sin{\left(\frac{\pi nz}{L_{z}}\right)},0\right)\,.$$ (3.20) The resonant frequency of this mode is given by $$\omega^{2}=\left(\frac{l\pi}{L_{x}}\right)^{2}+\left(\frac{n\pi}{L_{z}}\right)^{2},$$ (3.21) where we assume that $L_{y}\lesssim L_{x},L_{z}$ to isolate the fundamental mode. Plugging this mode geometry into the form factor we get $$C_{l0n}=\left|\frac{2}{V}\int_{V}{\rm d}x\,{\rm d}y\,{\rm d}z\,\left[\sin{\left(\frac{\pi lx}{L_{x}}\right)}\sin{\left(\frac{\pi nz}{L_{z}}\right)}\,e^{i{\bf p}\cdot{\bf x}}\right]\right|^{2}\,,$$ (3.22) $$=\left|2i\pi^{2}ln\frac{((-1)^{l}e^{iq_{x}}-1)(e^{iq_{y}}-1)((-1)^{n}e^{iq_{z}}-1)}{q_{y}(l^{2}\pi^{2}-q_{x}^{2})(n^{2}\pi^{2}-q_{z}^{2})}\right|^{2}\,,$$ (3.23) where $q_{x}=m_{a}L_{x}v_{x}$ etc. Next, we can expand the modulus for small velocities.
Expanding and keeping only terms up to $v^{2}$, we get something that can be written as $$C_{l0n}\simeq\frac{16\left((-1)^{l}-1\right)\left((-1)^{n}-1\right)}{\pi^{4}l^{2}n^{2}}-\frac{4}{3\pi^{6}l^{4}n^{4}}\bigg[6q_{x}^{2}n^{2}\left((-1)^{n}-1\right)\left((-1)^{l}\pi^{2}l^{2}+4+(-1)^{l+1}4\right)+6q_{z}^{2}l^{2}((-1)^{l}-1)\left((-1)^{n}\pi^{2}n^{2}+4+(-1)^{n+1}4\right)+q_{y}^{2}\pi^{2}l^{2}n^{2}((-1)^{l}-1)\left((-1)^{n}-1\right)\bigg]\,.$$ (3.24) If $l,n$ are even this vanishes, but if $l,n$ are odd then it reduces to $$C_{l0n}\simeq\frac{64}{\pi^{4}l^{2}n^{2}}\left(1-\left(\frac{1}{4}-\frac{2}{\pi^{2}l^{2}}\right)q_{x}^{2}-\left(\frac{1}{4}-\frac{2}{\pi^{2}n^{2}}\right)q_{z}^{2}-\frac{q_{y}^{2}}{12}\right)=C_{0}\Big(1-\sum_{i=1}^{3}\mathpzc{g}_{q}^{i}v_{i}^{2}\Big)\,,$$ (3.25) where $$(\mathpzc{g}_{q}^{x},\mathpzc{g}_{q}^{y},\mathpzc{g}_{q}^{z})=m_{a}^{2}\left(L_{x}^{2}\left(\frac{1}{4}-\frac{2}{\pi^{2}l^{2}}\right),\frac{L_{y}^{2}}{12},L_{z}^{2}\left(\frac{1}{4}-\frac{2}{\pi^{2}n^{2}}\right)\right)\,.$$ (3.26) As anticipated by how we set up our formalism, the corrections turn out to be negative; however, if we assume we have sufficient signal to noise to detect the axion, we need only focus on its modulation. With the expression for $C_{l0n}$ written in this way, we see that it contains the usual zero-velocity form factor for the TE$_{l0n}$ mode in a rectangular cavity, and a second term expressed as a velocity-dependent geometry factor. To get directional sensitivity, we want the device to be elongated in one direction. We will leave $L_{z}$ small with $n=1$. We foresee two options for making $L_{x}\gg L_{z,y}$. One would be to use the fundamental mode, leaving $l=1$.
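As a numerical cross-check of this expansion (an editorial sketch, not from the original; the $q_{i}$ values are arbitrary small numbers of order $v/c$), one can compare the closed form of eq. (3.23) against the quadratic approximation of eqs. (3.25)-(3.26):

```python
import numpy as np

def C_exact(l, n, qx, qy, qz):
    # modulus-squared overlap of eq. (3.23) for the TE_l0n mode, q_i = m_a L_i v_i
    num = 2j*np.pi**2*l*n*(((-1)**l)*np.exp(1j*qx) - 1) \
          *(np.exp(1j*qy) - 1)*(((-1)**n)*np.exp(1j*qz) - 1)
    den = qy*(l**2*np.pi**2 - qx**2)*(n**2*np.pi**2 - qz**2)
    return abs(num/den)**2

l, n = 1, 1
C0 = 64/(np.pi**4*l**2*n**2)
gq = lambda m: 0.25 - 2/(np.pi**2*m**2)   # per-direction coefficients of eq. (3.26)

qx, qy, qz = 1.2e-3, 0.8e-3, 1.5e-3       # |q| ~ m_a L v ~ O(v/c) for m_a L ~ 1
C_approx = C0*(1 - gq(l)*qx**2 - qy**2/12 - gq(n)*qz**2)
```

The two agree to the next order in $q^{2}$, and for $l,n$ both even the same closed form vanishes at this order, as stated above.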
In this case, $$\mathpzc{g}_{q}^{x}\simeq\frac{\pi^{2}-8}{4}\left(\frac{L_{x}}{L_{z}}\right)^{2}\gg\mathpzc{g}_{q}^{y,z}\,,$$ (3.27) with $$\omega\sim\frac{\pi}{L_{z}}\left(1+\frac{L_{z}^{2}}{2L_{x}^{2}}\right)\,.$$ (3.28) This gives a potentially excellent velocity dependence for large $L_{x}$; however, one could eventually run into problems of mode crowding. For very large $L_{x}$ the frequency difference between different low $l$ values becomes extremely small, actually being less than the axion line width at $L_{x}/L_{z}={\cal O}(10^{3})$. An alternative approach to extending one dimension would be to use a higher order mode ($l\gg 1$). Ensuring the resonant frequency remains at $m_{a}$ by setting $L_{x}=\sqrt{2}\pi l/m_{a}$, we see that $$\mathpzc{g}_{q}^{x}\simeq\frac{L_{x}^{2}m_{a}^{2}}{4}=\frac{\pi^{2}l^{2}}{2}\,.$$ (3.29) Since $C_{0}\propto l^{-2}$, the total form factor for the velocity dependent terms is constant with increasing $l$. One might worry from this line of thinking that one gains nothing by moving to higher and higher modes; however, such a concern only arises because of the overly complicated way in which the power from resonant cavities is usually expressed. Remembering eq. (3.4), we see that there is also a factor of $QV/m_{a}$. The quality factor of a cavity is defined to be $$Q=\omega\frac{\int{\rm d}V\,\epsilon({\bf x})|{\bf E_{\bf k}}({\bf x})|^{2}}{P_{\text{loss}}},$$ (3.30) where $P_{\text{loss}}$ is the power lost from the cavity. Assuming that $P_{\text{loss}}$ is constant, as one goes to higher mode numbers the quality factor also increases by a factor $l$, i.e., higher order modes have narrower resonances (of course, one is still limited by the axion line width, so for $Q\gtrsim 10^{6}$ only part of the axion spectrum is measured).
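The $L_{x}/L_{z}={\cal O}(10^{3})$ crossover quoted above can be reproduced in a few lines (an illustrative sketch; we take the fractional axion line width to be $\sim v^{2}\sim 10^{-6}$):

```python
import numpy as np

# TE_{l01} resonances in units of pi/L_z, for aspect ratio R = L_x/L_z (eq. 3.21)
def omega(l, R):
    return np.sqrt((l/R)**2 + 1.0)

R = np.logspace(2, 4, 200001)                        # aspect ratios 1e2 .. 1e4
spacing = (omega(2, R) - omega(1, R))/omega(1, R)    # fractional l=1 -> l=2 spacing
# first aspect ratio at which the mode spacing drops below the axion width ~1e-6
R_crit = R[np.argmax(spacing < 1e-6)]
```

Since the spacing scales as $3/(2R^{2})$, the crossover sits near $R\approx 1.2\times 10^{3}$, i.e. ${\cal O}(10^{3})$ as stated.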
Overall we see $$P_{\bf p}\propto\frac{1}{m_{a}^{2}}\left(1-\frac{\pi^{2}l^{2}v_{x}^{2}}{2}\right)=\frac{1}{m_{a}^{2}}-\frac{L_{x}^{2}v_{x}^{2}}{4}.$$ (3.31) Thus if we keep the size of the cavity fixed and go to higher masses, one loses out by a factor of $m_{a}^{2}$ when going to higher order modes. However, for the case we are interested in, keeping $m_{a}$ fixed and looking at high mode cavities, the velocity effects indeed increase with $l^{2}$. To get a large velocity effect from a cavity in this way, one must go to very high modes. However, such an experiment may not be so impractical. For high mode numbers in a cavity, the vast majority of the empty volume does not need to be magnetised. Since the integral of the waves in the middle of the device cancels on average, the central region of the cavity plays no role in signal generation aside from allowing the axion to undergo a change of phase. One would only need to have a magnetic field within approximately a half wavelength of the ends of the device, obtaining the same $\mathpzc{g}_{q}$ as in eq. (3.29). However, the form factor is actually enhanced, $$C_{0}=\frac{256}{l^{2}\pi^{4}}\,.$$ (3.32) This enhancement arises because each magnetised half wavelength adds constructively to the produced power, whereas when the cavity is fully magnetised the field everywhere cancels aside from only one half wavelength. Partial magnetisation would result in a dramatic reduction in magnet costs, though it still requires operation at high cavity modes. Thus higher order modes may be a good way to achieve strong velocity dependence without prohibitive magnet requirements, albeit at the cost of signal power. Unfortunately, the issue of mode crowding is not avoided, with the typical spacing between modes being $\sim\omega/2l$. A similar concept would be to have two cavities separated by some large distance, and to add the signals from each of them together.
Such an idea has some advantages in avoiding higher modes; however, one would have to very precisely match the phases of the two cavities in real time over long distances. From the above discussion we have seen that there are competing problems of mode crowding at low $l$ and redundant cavity volume at high $l$. Fortunately, it may be possible to mitigate both issues by loading the cavity with dielectrics to modify the modes [140], or by combining multiple coupled cavities [141, 55]. One could also use wires carrying currents to modify $B_{e}$ [52], though in this case the issue of mode crowding remains. Each of these methods has the advantage of increasing the form factor significantly, as well as potentially increasing the quality factor. To estimate what we could gain from such setups, we can take the simplest case, placing a series of transparent (phase thickness $\pi$) dielectrics a half wavelength apart. Such a description captures the essential behaviour of all three possible setups, though depending on the realisation the details might differ. We approximate the dielectric loaded cavity mode by simply using $|\sin(\frac{\pi lx}{L_{x}})|$ instead of $\sin(\frac{\pi lx}{L_{x}})$ in the integrand of eq. (3.22), giving $$C_{l01}\sim\frac{64}{\pi^{4}}-\frac{16l^{2}v_{x}^{2}}{3\pi^{2}}\,,$$ (3.33) where we again take the mode with $n=1\ll l$ and $L_{x}=l/m_{a}$. Now not only does the velocity independent part of the form factor not decrease with increasing $l$, the velocity dependent part actually increases. Note that in this case $l$ is not a mode number; rather, $l-1$ gives the number of inserted dielectrics. Thus a dielectric loaded resonator would win over an empty cavity at high modes by a factor of $l^{2}=(L_{x}m_{a})^{2}$. Such a setup would be ideal for gaining a strong directional sensitivity while avoiding mode crossings. Lastly, we remark that unlike in the zero velocity limit, it is possible for one of $l,n$ to be even.
In this case the velocity independent term vanishes, but the velocity dependent terms do not. If only one of $l,n$ is odd then $$C_{l0n}=\frac{4L_{z,x}^{2}m_{a}^{2}v_{z,x}^{2}}{\pi^{4}l^{2}n^{2}}.$$ (3.34) The form factor then actually increases at higher velocities; if the signal-to-noise were high enough, one would have an excellent way of studying the tail of the velocity distribution. By modifying the aspect ratios of cavities, there are thus several ways to gain a strong sensitivity to the axion velocity. In this paper we remain agnostic about the various practical issues, as each realisation has potential advantages and pitfalls which can only be illuminated by in-depth design studies. While any of these devices would be challenging, no frequency scanning is required, so one could devote considerable time and resources to perfecting the performance at a single known frequency. 3.3 Dielectric haloscope We now turn our attention to dielectric haloscopes, which consist of a series of dielectric disks placed parallel to a magnetic field, as illustrated in figure 1. As discussed in ref. [142], it is possible either to increase the separations of the disks, or to add disks, to increase the length of the device to a decent fraction of the axion coherence length. The velocity effects considered there were described using a classical transfer matrix formalism. To make the connection with our universal notation, we must instead extend the formalism developed in ref. [139] to include the axion velocity. As discussed in section 3.1, the produced power is given by an overlap of the axion and photon wave functions. While the axion has a trivial plane wave function, the photon's wave function is distorted by the presence of dielectric media. We will assume that the transverse area of the dielectric disks is large, so that the momenta in the transverse directions are approximately conserved.
Conservation of momentum requires that these momenta are the same as those of the axion. Thus the photon wave function is given by $${\bf E}_{\bf k}({\bf x})={\bf E}_{\bf k}(x)e^{i{\bf p_{||}}\cdot{\bf x}}\,.$$ (3.35) Here ${\bf E}_{\bf k}(x)$ is simply the “Garibian” wave function considered in ref. [139], which consists of an incoming plane wave of unit amplitude that is split by the haloscope into a transmitted and a reflected component. The space is spanned by two such wave functions, depending on the side of the haloscope on which they are incident. The axion velocity only induces a small shift in frequency of $m_{a}v^{2}/2$. We thus see that $${\cal M}=\frac{g_{a\gamma}}{2\omega L}\int{\rm d}x\,{\bf B}_{\rm e}({\bf x})\cdot{\bf E}^{*}_{\bf k}(x)e^{ip_{x}x}\,,$$ (3.36) where $L$ is the length of the haloscope in the $x$ direction. One can define an effective quality factor $Q_{\rm dh}$ by $$Q_{\rm dh}=\frac{m_{a}}{4}\frac{\int{\rm d}V\left[\epsilon({\bf x})|{\bf E_{\bf k}}({\bf x})|^{2}+|{\bf B_{\bf k}}({\bf x})|^{2}\right]}{2A}\,,$$ (3.37) where ${\bf E}_{\bf k},{\bf B}_{\bf k}$ are the $E$ and $B$-fields of the Garibian wave function. The overall power in EM waves $P_{\bf p}$ is still given by eq. (3.4), with the coupling efficiency $\kappa=1$. In general $Q_{\rm dh}$ and $C({\mathbf{v}})$ (and thus $P_{\bf p}$) can be different for photons emerging from either side of the device. We saw in section 3.1 that a cavity haloscope always has a quadratic dependence on the velocity of the axion. However, a linear dependence on $v$ would potentially provide the full directional information of the velocity distribution. To achieve a linear dependence, we know from eq. (3.7) that the free photon wave function must have some travelling wave behaviour (i.e. a spatial variation in the phase). If a dielectric haloscope is strongly resonant, or there is a metallic mirror in the haloscope, the Garibian wave functions will form a standing wave.
A simple example which can obtain a linear dependence would be a series of transparent dielectric disks. If the phase thickness of a disk is $\delta_{\rm t}=\pi$, then each disk is transparent to radiation but still emits photons in the presence of axions [143]. This transparent setup does not use resonances to increase signal power, only constructive interference. To calculate the produced power we must solve for the Garibian wave functions of the system. Photons may be emitted from either side of the device; however, due to the symmetry of the system the only differences in produced power can come from the direction of the axion velocity itself. Thus we will only consider photons emitted in a single direction, and simply use $v\to-v$ to obtain the other direction. In general a dielectric haloscope consists of $m-1$ dielectric regions between positions $x_{1}$ and $x_{m}$, with interfaces at distances $x_{r}$. For simplicity we consider the case where the $E$-field is inserted as a right moving wave on the left hand side of the haloscope, which allows us to compute the power emitted from the left side of the haloscope. In order to evaluate the integral in eq. (3.37), we can break it up into regions of different dielectric material. In each region $r$ we can split the $E$-field into left and right moving parts, $$E_{\bf k}^{r}(x)=R^{r}e^{in_{r}\omega\Delta x_{r}}+L^{r}e^{-in_{r}\omega\Delta x_{r}}\,,$$ (3.38) where $R^{r}$ is the amplitude of the right-moving component and $L^{r}$ of the left-moving component, with $\Delta x_{r}=x-x_{r}$. We take $\Delta x_{0}=\Delta x_{1}$ and follow the same convention as ref. [58], so that the field amplitudes $R^{r}$ and $L^{r}$ of the right and left moving EM waves are defined at the left boundary of every region, except for $R^{0}$ and $L^{0}$ which are defined at $x_{1}$, i.e., the leftmost interface.
The $E$-fields in different regions are connected by the boundary conditions, with $E_{||}$ and $H_{||}$ being conserved. Consider a series of $N$ transparent disks of refractive index $n$ (thickness $\pi/n\omega$) with a distance $\delta_{\rm s}/\omega$ between each disk, where we call $\delta_{\rm s}$ the phase separation. For the $j$th dielectric disk, $R$ and $L$ are given by $$\begin{pmatrix}R^{2j-1}\\ L^{2j-1}\end{pmatrix}=\frac{e^{i(j-1)(\delta_{\rm s}+\pi)}}{2n}\begin{pmatrix}1+n\\ n-1\end{pmatrix}\,,$$ (3.39) and in the $j$th vacuum region by $$\begin{pmatrix}R^{2j}\\ L^{2j}\end{pmatrix}=e^{i[(j-1)\delta_{\rm s}+j\pi]}\begin{pmatrix}1\\ 0\end{pmatrix}\,.$$ (3.40) Using these expressions, we can evaluate eq. (3.37) to get $$Q_{\rm dh}=\frac{m_{a}}{4}\sum_{s=1}^{m-1}n_{s}^{2}(x_{s+1}-x_{s})(|R_{s}|^{2}+|L_{s}|^{2})=\frac{1}{8}\left(n\pi N\left(1+\frac{1}{n^{2}}\right)+2\delta_{\rm s}(N-1)\right)\,.$$ (3.41) To find the effects of the axion velocity, we must evaluate the overlap integral in the form factor $C({\bf v})$. In terms of our left and right moving waves, we can write $$\int{\rm d}x\,E_{\bf k}({x}){B}_{\rm e}e^{i{p_{x}}{x}}\sim\frac{B_{\rm e}}{i\omega}\Bigg[R_{0}-L_{0}-(R_{m}-L_{m})e^{i\omega v_{x}x_{m}}+\sum_{s=1}^{m-1}\frac{e^{i\omega v_{x}x_{s-1}}}{n_{s}}\left[R_{s}\left(e^{i\omega d_{s}(n_{s}+v_{x})}-1\right)-L_{s}\left(e^{-i\omega d_{s}(n_{s}-v_{x})}-1\right)\right]\Bigg]\,,$$ (3.42) where we have neglected some subdominant velocity terms which enter outside of the argument of a phase. Using eqs.
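To make eq. (3.41) concrete, the following sketch (editorial, in units where $m_{a}=\omega=1$) evaluates the sum over regions directly using the amplitude magnitudes of eqs. (3.39) and (3.40), and compares it against the closed form on the right-hand side:

```python
import numpy as np

def Q_dh_sum(N, n, ds):
    # direct sum of eq. (3.41) over the 2N-1 regions (N disks, N-1 gaps)
    Q = 0.0
    RL2_disk = ((1 + n)**2 + (n - 1)**2)/(2.0*n)**2  # |R|^2+|L|^2 in a disk, eq. (3.39)
    for j in range(1, N + 1):
        Q += 0.25*n**2*(np.pi/n)*RL2_disk            # disk width = pi/(n omega)
        if j < N:
            Q += 0.25*ds                             # gap: (R, L) = (1, 0), width ds/omega
    return Q

def Q_dh_closed(N, n, ds):
    # closed form on the right-hand side of eq. (3.41)
    return (n*np.pi*N*(1 + 1/n**2) + 2*ds*(N - 1))/8
```

For the transparent benchmark used below ($N=400$, $n=5$, $\delta_{\rm s}=3.138$) both expressions give $Q_{\rm dh}\approx 1.1\times 10^{3}$.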
(3.39) and (3.40), we then see that $$\int{\rm d}x\,E_{\bf k}({x}){B}_{\rm e}e^{i{p_{x}}{x}}=\frac{B_{\rm e}}{2i\omega}\Bigg(2+\frac{\left(1+e^{\frac{i\pi v_{x}}{n}}\right)\left(-1+e^{\frac{iN(\delta_{\rm s}n(v_{x}+1)+\pi(n+v_{x}))}{n}}\right)}{n^{2}\left(1+e^{i\left(\delta_{\rm s}(1+v_{x})+\frac{\pi v_{x}}{n}\right)}\right)}-\frac{\left(-1+e^{i\delta_{\rm s}(v_{x}+1)}\right)\left((-1)^{N}e^{i\left(\delta_{\rm s}(N-1)(v_{x}+1)+\frac{\pi Nv_{x}}{n}\right)}+e^{\frac{i\pi v_{x}}{n}}\right)}{1+e^{i\left(\delta_{\rm s}(1+v_{x})+\frac{\pi v_{x}}{n}\right)}}\Bigg)\,.$$ (3.43) Unfortunately, this expression is in general too complicated to be reduced to something like eq. (3.25). Depending on the choice of $\delta_{\rm s}$, the velocity dependence can enter either quadratically or linearly. If the disks are arranged for maximal constructive interference at zero velocity ($\delta_{\rm s}=\pi$), the system becomes symmetric with respect to the axion velocity, and so must have a quadratic dependence. Specifically, in this case one can show that [142] $$c\mathpzc{g}_{\ell}=0,\qquad c^{2}\mathpzc{g}_{q}\simeq\frac{N^{2}\pi^{2}}{12}\,.$$ (3.44) An experiment using this transparent setup (with the addition of a mirror) has been proposed to operate in the optical range, using photon counting rather than the linear amplification we consider here [144]. Photon counting experiments generally have insufficient energy resolution to measure the axion lineshape. However, the second phase of this experiment would use a huge number of dielectric layers, up to $N=1000$. In this case the axion velocity has a major impact on the signal power, leading to significant systematic uncertainties due to the unknown velocity distribution.
Of course, if such an experiment had previously discovered ALPs or hidden photons, the power modulation could be used for axion astronomy in a similar way to that described in this paper, albeit without resolving the line width. Due to the different nature of the statistics for a photon counting experiment, we will not consider this scenario further. To achieve a linear velocity dependence, we can imagine placing each disk slightly out of phase with respect to the case where maximal constructive interference occurs at $v_{x}=0$. A velocity in one direction will then increase the constructive interference and a velocity in the other decrease it, giving us a discrimination between the two directions. In figure 2(a) we show the relative form factor corresponding to the power produced from the left and right sides of a device consisting of 400 dielectric disks with $n=5$ and $\delta_{\rm s}=3.138$. The power at zero velocity is given by $$P_{0}^{\rm Transp}=3\times 10^{-21}\,{\rm W}\left(\frac{B_{\rm e}}{15\,{\rm T}}\right)^{2}\left(\frac{A}{1\,{\rm m}^{2}}\right)\left(\frac{\bar{\rho}_{a}}{0.4\,{\rm GeV}\,{\rm cm}^{-3}}\right)\left(\frac{C_{a\gamma}}{1.92}\right)^{2}\,,$$ (3.45) where $A$ is the transverse area of the disks. At lowest order the velocity effects are linear, with $c\mathpzc{g}_{\ell}=383$. Beyond this setup, if we were to use more dielectric disks, the disparity between the power produced in each direction can grow until there is an almost complete discrimination between the two. We show such a case in figure 2(b), which shows the relative form factor corresponding to the power emitted from each side of a device consisting of 1000 dielectric disks with $n=5$ and $\delta_{\rm s}=3.138$. Despite each side being sensitive only to velocities in one direction, together the combination covers the full range of realistic galactic velocities. One would then see a modulation of the signal as a transference of power between the two sides.
The correlation between the two measurements would also serve as a useful systematic check, though one would need to take care with possible reflections between the two detector and antenna setups. We will not consider such a case in full detail, as the approximation of the velocity dependence as linear or quadratic clearly breaks down, which would make the analytic calculations in section 4 significantly more complicated. So far in this discussion we have completely neglected possible resonant enhancements of the signal strength. While, as argued above, strongly resonant behaviour precludes the possibility of a linear velocity dependence, it is actually possible to achieve a stronger absolute linear shift in the power as a function of velocity at the expense of the relative size of the effect compared to the total signal power. For resonant behaviour to occur, the disks must be partially reflecting. To show that such a situation can indeed occur, in figure 2(c) we show a dielectric haloscope consisting of 400 dielectric disks with $n=5$, a phase thickness of $\delta_{\rm t}=3.163$, and $\delta_{\rm s}=3.138$. While the relative size of the linear effects is smaller than in the previous examples, with $c\mathpzc{g}_{\ell}=71$ and $c^{2}\mathpzc{g}_{q}=105000$, the total power is much greater, $$P_{0}^{\rm Res}=2.6\times 10^{-19}\,{\rm W}\left(\frac{B_{\rm e}}{15\,{\rm T}}\right)^{2}\left(\frac{A}{1\,{\rm m}^{2}}\right)\left(\frac{\bar{\rho}_{a}}{0.4\,{\rm GeV}\,{\rm cm}^{-3}}\right)\left(\frac{C_{a\gamma}}{1.92}\right)^{2}\,,$$ (3.46) and we label this setup “resonant”, though the resonant behaviour is relatively mild. To see whether a given setup is more sensitive to the direction of the axion, we can calculate $P_{L}(v_{x})-P_{R}(v_{x})$. Normalised to $P_{0}^{\rm Res}$, we show the comparison between 400 transparent and 400 mildly resonant disks in figure 2(d).
In these terms, the more resonant setup is an order of magnitude more sensitive to the sign of the velocity. Further, one can measure both the linear and quadratic behaviour, giving essentially the first and second moments of the velocity distribution. Thus we will use the more resonant example as our benchmark dielectric haloscope. Dielectric haloscopes have the unique ability to discern the sign of the axion velocity in a specific direction. They also have immense flexibility to enhance the power generated at a specific axion velocity, being tuneable to almost any situation. While only a few examples were shown here, this flexibility would be very beneficial when it comes to the practical design of an experiment, allowing one, for example, to design a device focused exclusively on the tail of $f(\mathbf{v})$ or on the velocity of a stream. However, this flexibility and the ability to obtain a linear velocity dependence can only be achieved by sacrificing strongly resonant behaviour, which will limit the achievable signal power. We assume that the dielectrics are arranged in a left-right symmetric manner as above, such that the difference between the power emitted from each side is simply given by $v_{i}\to-v_{i}$. In terms of our general formalism, this means we can isolate either linear or quadratic effects simply through how the left and right hand side powers are combined. This is a clear advantage offered by the dielectric haloscope setup. We can thus recover the directional corrections introduced in eqs.
(3.13) and (3.14) with only a slight modification to the formulae, $$\Delta^{i}_{\ell}=\frac{p^{i}_{\rm R,dir.}-p^{i}_{\rm L,dir.}}{2p_{\rm non-dir.}}\,,$$ (3.47) and $$\Delta^{i}_{q}=\frac{(p^{i}_{\rm L,dir.}+p^{i}_{\rm R,dir.})/2-p_{\rm non-dir.}}{p_{\rm non-dir.}}\,.$$ (3.48) 3.4 Benchmark experimental parameters Given our three model haloscope designs (a low and a high-$l$ cavity, both with quadratic-$v$ directionality, as well as a dielectric disk haloscope with quadratic and/or linear-$v$ directionality), we now summarise the size requirements for each experiment to achieve some benchmark geometry factors, $\mathpzc{g}_{\ell}$ and $\mathpzc{g}_{q}$, and signal power $P_{0}$. Although the design parameters are challenging, we emphasise that such an experiment would only have to be built and calibrated once. For instance, one would only need to design the cavity for a single resonant frequency, or fix a single set of disk spacings in the case of the dielectric haloscope (we also refer the reader to the excellent prospects for quantum limited noise in higher mass experiments, $m_{a}>40\,\mu$eV, with the use of single photon detectors [145]). As displayed in figure 1, we focus on three axion masses in the range $10$–$100\,\mu$eV. For the lower end we require the cavities: we fix masses of 10 $\mu$eV and 40 $\mu$eV, assigning the former a partially magnetised high-$l$ setup and the latter a fully magnetised setup with $l=1$. For larger masses dielectric haloscopes are preferable; we assign the dielectric haloscope a 100 $\mu$eV axion. We emphasise that our later sections give scaling relations that can be used to reapply our results to any value of $P_{0}$; this section merely highlights that the specific values of $P_{0}$ used for certain figures are experimentally reasonable.
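As a toy illustration of how these combinations isolate the two effects (an editorial sketch; the model $p_{R,L}=p_{0}(1\mp\mathpzc{g}_{\ell}v_{x}-\mathpzc{g}_{q}v_{x}^{2})$, the assignment of signs to the two sides, and all numbers are illustrative):

```python
# toy check that eqs. (3.47)-(3.48) separate the linear and quadratic terms
g_l, g_q = 2.4e-4, 1.1e-6      # km^-1 s and km^-2 s^2 (benchmark values)
p0, vx = 1.0, 250.0            # arbitrary power units; axion velocity in km/s

pR = p0*(1 - g_l*vx - g_q*vx**2)   # power emitted to the right
pL = p0*(1 + g_l*vx - g_q*vx**2)   # power emitted to the left (v_x -> -v_x)

Delta_l = (pR - pL)/(2*p0)          # eq. (3.47): recovers -g_l * vx
Delta_q = ((pL + pR)/2 - p0)/p0     # eq. (3.48): recovers -g_q * vx^2
```

The antisymmetric combination cancels the quadratic term exactly, and the symmetric one cancels the linear term, mirroring the $v_{i}\to-v_{i}$ symmetry assumed above.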
For a reasonable dielectric haloscope setup we showed earlier that we can achieve (now expressed in units of ${\rm km}\,{\rm s}^{-1}$), $$\mathpzc{g}_{\ell}\simeq 2.4\times 10^{-4}\,\,{\rm km}^{-1}\,{\rm s}\,,$$ (3.49a) $$\mathpzc{g}_{q}\simeq 1.1\times 10^{-6}\,\,{\rm km}^{-2}\,{\rm s}^{2}\,,$$ (3.49b) with the total power from the $v=0$ calculation, $$m_{a}=100\,\mu{\rm eV}\,:\quad P_{0}^{\rm dh}=2.6\times 10^{-19}\,{\rm W}\left(\frac{B_{\rm e}}{15\,{\rm T}}\right)^{2}\left(\frac{A}{1\,{\rm m}^{2}}\right)\left(\frac{\bar{\rho}_{a}}{0.4\,{\rm GeV}\,{\rm cm}^{-3}}\right)\left(\frac{C_{a\gamma}}{1.92}\right)^{2}\,.$$ (3.50) Since the dielectric haloscope observes both a linear and a quadratic effect, we use its values of $\mathpzc{g}_{\ell}$ and $\mathpzc{g}_{q}$ as benchmarks. They also give conveniently similar sized directional effects, $$\mathcal{G}_{\ell}=7.1\%\left(\frac{v}{300\textrm{ km s}^{-1}}\right)\,,\quad\mathcal{G}_{q}=10.0\%\left(\frac{v}{300\textrm{ km s}^{-1}}\right)^{2}\,.$$ (3.51) Now we only need to reproduce these values in our cavities. Firstly, for a partially magnetised high-$l$ cavity to achieve the same $\mathpzc{g}_{q}$ we require $l=142$ by eq. (3.29). Then, since we need to fix the frequency to $m_{a}=10\,\mu$eV inside a cavity with one long dimension $L_{x}$ and two shorter dimensions $L_{z}=L_{y}$, we need $L_{z,y}=\sqrt{2}\pi/m_{a}=8.7$ cm and $L_{x}=lL_{z}=12.5$ m. As already discussed, this cavity suffers a reduction in $C_{0}$ by a factor of $l^{2}/4$ with respect to the fully magnetised $l=1$ case. Secondly, for the fully magnetised cavity resonating at $l=1$ with $m_{a}=40\,\mu$eV, we need to set the aspect ratio using $\mathpzc{g}_{q}=(\pi^{2}-8)/2\,(L_{x}/L_{z})^{2}$. After enforcing the resonant frequency this gives values of $L_{x}=7.16$ m and $L_{y,z}=2.20$ cm. Using eq.
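The arithmetic behind these benchmark numbers can be reproduced in a few lines (an editorial check; $\hbar c$ converts natural units to metres, and factors of $c$ convert the dimensionless geometry factors quoted earlier to ${\rm km\,s^{-1}}$ units):

```python
import numpy as np

hbar_c = 1.97327e-7      # eV m
c = 299792.458           # km/s

# benchmark geometry factors re-expressed in km/s units
g_l = 71/c               # ~ 2.4e-4 km^-1 s   (c g_l = 71)
g_q = 1e5/c**2           # ~ 1.1e-6 km^-2 s^2 (c^2 g_q = 1e5)

# partially magnetised high-l cavity at m_a = 10 ueV: c^2 g_q = pi^2 l^2 / 2
l = int(round(np.sqrt(2*1e5)/np.pi))    # ~ 142
m_a = 10e-6                             # eV
L_z = np.sqrt(2)*np.pi*hbar_c/m_a       # ~ 8.7 cm
L_x = l*L_z                             # ~ 12.5 m
```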
(3.4) for the total power in the $v=0$ limit, we get for each of these cavities $$m_{a}=10\,\mu{\rm eV}\,:\quad P^{\rm cav}_{0}=1.6\times 10^{-23}\,{\rm W}\,\left(\frac{B_{\rm e}}{15\,{\rm T}}\right)^{2}\left(\frac{\bar{\rho}_{a}}{0.4\,{\rm GeV\,cm}^{-3}}\right)\left(\frac{C_{a\gamma}}{1.92}\right)^{2},$$ (3.52) $$m_{a}=40\,\mu{\rm eV}\,:\quad P^{\rm cav}_{0}=1.2\times 10^{-20}\,{\rm W}\,\left(\frac{B_{\rm e}}{15\,{\rm T}}\right)^{2}\left(\frac{\bar{\rho}_{a}}{0.4\,{\rm GeV\,cm}^{-3}}\right)\left(\frac{C_{a\gamma}}{1.92}\right)^{2}\,.$$ (3.53) We summarise the inputs to these calculations in table 1. While our dielectric haloscope benchmark produces more power than the cavities, this is due in part to the very large magnetised volume of the experiment, which would come at high cost. Thus the various benchmarks provide the reader with examples of less and more ambitious experiments, ranging from the relatively budget oriented partially magnetised cavity to the more complex and large volume dielectric haloscope. 4 Statistical analysis To estimate the sensitivity required to do axion astronomy with a directional experiment, we utilise a statistical methodology based on the popular profile likelihood ratio test. A related method was used in ref. [83], which performed parameter estimation by first generating mock data using a certain set of input axion and astrophysical parameters and then using the maximum likelihood to reconstruct those parameters. A similar but extended approach was taken in ref. [84], which also made use of mock data but in addition provided analytic relations using the Asimov data set, see ref. [146]. To compare our two different classes of directional experiment more straightforwardly and efficiently, the Asimov method is attractive here as well.
4.1 Profile likelihood ratio test To build a likelihood we must decide on the format that our signal will take, and parameterise the noise level that the measurement of such a signal would suffer. We follow a similar procedure to ref. [44]. Ultimately we desire that our experiments measure a power spectrum, which can be obtained by taking the Fourier transform of some timestream. The frequency resolution of the subsequent power spectrum will be given by the inverse of the duration of the timestream sample, $\Delta\nu=1/\delta t$. The power spectrum would have an extent in frequency equal to the bandwidth of the experiment, which we label $\Delta\Omega$. For a single power spectrum taken in this way, thermal and quantum noise defined by a system temperature $T_{\rm sys}$ would be white and exponentially distributed across many realisations. The expected power in each frequency bin of the resulting spectrum is $P_{N}=k_{B}T_{\rm sys}\Delta\nu$, and since it is exponentially distributed the standard deviation has the same numerical value. Then we imagine that some large number $\mathcal{N}$ of these power spectra are taken and averaged over a time $\Delta t=\mathcal{N}\delta t$, so that in accordance with the central limit theorem the noise approaches a Gaussian distribution with the same expectation value $P_{N}$ in each bin but with an uncertainty suppressed by $\sqrt{\mathcal{N}}$, $$\sigma_{N}=k_{B}T_{\rm sys}\sqrt{\frac{\Delta\nu}{\Delta t}}\,.$$ (4.1) The argument is precisely the same for the statistics of the fluctuations in the signal, which are similarly suppressed by this stacking, except that the mean value in each bin is given by the axion power, which is a function of frequency. In specific examples later on we assume a noise temperature of $T_{\rm sys}=4$ K, but explicitly quote how one would scale the results for other temperatures. This value is realistic for a dielectric haloscope, which needs a large magnetised volume.
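As a rough numerical cross-check of eq. (4.1), the radiometer scaling can be sketched in a few lines of Python; the bin width and averaging time below are illustrative choices of ours, not values fixed by the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def radiometer_sigma(T_sys, delta_nu, delta_t):
    # Eq. (4.1): thermal noise floor of a power spectrum averaged for
    # delta_t, with frequency bins of width delta_nu.
    return K_B * T_sys * math.sqrt(delta_nu / delta_t)

# Illustrative numbers: a 4 K system, ~33 Hz bins (a 0.03 s timestream
# sample) averaged for one hour.
sigma_N = radiometer_sigma(T_sys=4.0, delta_nu=1.0 / 0.03, delta_t=3600.0)
```

Averaging four times longer halves the noise floor, which is exactly the $\sqrt{\mathcal{N}}$ suppression described above.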
For the cavities $T_{\rm sys}=4$ K could be argued to be slightly pessimistic, since there may also be the option of quantum-limited noise; however, the volumes we require here are also larger than those currently in use. Additionally, at this temperature the thermal fluctuations will always have the dominant effect on our signal-to-noise relative to the size of the random fluctuations in the signal. At much lower temperatures they will begin to compete, but since the statistics of both the noise and the signal are the same, the arguments we make here still hold. Now that we can assume we have a Gaussian noise spectrum over some time $\Delta t$, we then iterate this entire process of stacking over an even longer time $t_{\rm obs}$ so that we have a total of $N_{t}=t_{\rm obs}/\Delta t$ grand power spectra, all with Gaussian noise. If $t_{\rm obs}$ is longer than $\mathcal{O}({\rm hours})$ then we also expect our signal to have modulated in this time as well. There are some restrictions on the lengths of the various times at play here. First, our smallest interval of time $\delta t$ must be long enough to resolve the signal lineshape. For example, the minimum duration required to achieve a speed resolution of $\Delta v$ or smaller is, $$\delta t>\frac{2\pi}{m_{a}v\Delta v}=0.03\,{\rm s}\,\left(\frac{40\,\mu{\rm eV}}{m_{a}}\right)\left(\frac{300\,{\rm km\,s}^{-1}}{v}\right)\left(\frac{1\,{\rm km\,s}^{-1}}{\Delta v}\right)\,.$$ (4.2) The next longest interval $\Delta t$ must be long enough that we have enough power spectra $\mathcal{N}$ to stack to justify the assumption of Gaussian noise, e.g. $\Delta t/\delta t\sim 1000$ [76]. Then we must also assume that $\Delta t$ is short enough that the signal does not modulate too much within this time, i.e. that we can approximate the signal within this bin by the signal obtained at the time of the centre of the bin.
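The hierarchy $\delta t\ll\Delta t\ll t_{\rm obs}$ can be checked numerically. A minimal sketch follows; the physical constants and the $\sim 1000$-spectra stacking factor come from the text, while the remaining bookkeeping is our own.

```python
import math

HBAR_EV_S = 6.582119569e-16   # hbar in eV s
C_KM_S = 299792.458           # speed of light in km/s

def min_sample_time(m_a_eV, v_km_s, dv_km_s):
    # Eq. (4.2): minimum timestream duration needed to resolve
    # speeds to dv at axion mass m_a.
    omega = m_a_eV / HBAR_EV_S          # axion angular frequency in rad/s
    v = v_km_s / C_KM_S                 # speeds in units of c
    dv = dv_km_s / C_KM_S
    return 2.0 * math.pi / (omega * v * dv)

dt = min_sample_time(40e-6, 300.0, 1.0)   # ~0.03 s, as quoted in the text
delta_t = 1000 * dt                       # enough spectra for Gaussian noise
t_obs = 4 * 86400.0                       # e.g. a four-day campaign
hierarchy_ok = dt < delta_t < 3600.0 < t_obs
```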
Since most signals will modulate with a period of a day, as long as we have $\Delta t\lesssim 1$ hour this approximation is suitable. Then we must also require that our longest time $t_{\rm obs}$ is long enough to see whatever property of the signal we desire, e.g. a day in the case of a daily modulation. We make these arguments simply to demonstrate the timescales that would be required by a real experiment for the steps taken to derive our analytic formulae to be valid, in particular in approximating our later sums over frequency and time bins as integrals. Fortunately these three durations are sufficiently distinct from one another for all three mass benchmarks that we believe the signal modelling assumptions to be quite safe. We can now write down the likelihood for such a dataset, given some model to describe the signal and noise it contains. To summarise, we have a total of $N_{t}$ power spectra which each have a total of $N_{\omega}=\Delta\Omega/2\pi\Delta\nu$ frequency bins across the bandwidth. In each of these bins the noise is normally distributed with standard deviation $\sigma_{N}$, so we can construct a likelihood from the products of the probabilities of seeing measured powers $P_{ij}\simeq\Delta\omega(\textrm{d}P(\omega_{i},t_{j})/\textrm{d}\omega)$ in each bin, given the expectation $P^{\rm exp}_{ij}(\Theta)$. This will be dependent on some set of model parameters $\Theta$ that are free in the model $\mathcal{M}$. We write the log likelihood as, $$\ln\mathcal{L}(P|\mathcal{M},\Theta)=-\frac{1}{2}\sum_{i=1}^{N_{t}}\sum_{j=1}^{N_{\omega}}\left(\frac{P_{ij}-P^{\rm exp}_{ij}(\Theta)}{\sigma_{N}}\right)^{2}\,,$$ (4.3) where we have left out the constants from the normalisation of each individual probability, which cancel when we take ratios of likelihoods.
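The binned Gaussian log likelihood of eq. (4.3) is simple to transcribe; a minimal sketch with toy numbers of our own:

```python
def log_likelihood(P, P_exp, sigma_N):
    # Eq. (4.3): Gaussian log likelihood over N_t x N_omega binned powers,
    # with the constant normalisation terms dropped.
    return -0.5 * sum(
        ((p - pe) / sigma_N) ** 2
        for row, row_exp in zip(P, P_exp)
        for p, pe in zip(row, row_exp)
    )

# Toy data: two "spectra" with two frequency bins each.
P_exp = [[1.0, 2.0], [1.5, 2.5]]
lnL_match = log_likelihood(P_exp, P_exp, sigma_N=0.1)   # data == expectation
P_off = [[p + 0.1 for p in row] for row in P_exp]       # 1-sigma offsets
lnL_off = log_likelihood(P_off, P_exp, sigma_N=0.1)
```

With the normalisation constants dropped, a perfect match gives $\ln\mathcal{L}=0$ and each $1\sigma$ deviation costs $0.5$.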
Here, in assuming a flat standard deviation $\sigma_{N}$, we have assumed that the dominant statistical fluctuation in the value of the binned power is from thermal noise, neglecting the random fluctuations in the signal. Finally, if we wish to build our observatory by combining the signal from multiple experiments, this essentially constitutes an additional sum over each one. For ease of reading we neglect this sum for now, but reintroduce it implicitly later in our final results once we are armed with our final analytic formulae. The profile likelihood ratio test comprises a hypothesis test of some model $\mathcal{M}_{1}(\Theta)$ (the alternative hypothesis) against the null hypothesis $\mathcal{M}_{0}(\Theta)$. One organises $\mathcal{M}_{0}$ to be a subset of the alternative model, usually by setting some parameter in $\Theta$ to zero. First we define the maximum likelihood ratio $\Lambda$, which is the ratio between the values of the likelihood maximised when $\Theta=\hat{\Theta}$ under model $\mathcal{M}_{1}$ and $\Theta=\hat{\hat{\Theta}}$ under model $\mathcal{M}_{0}$, $$\Lambda=\frac{\mathcal{L}(P|\mathcal{M}_{1},\hat{\Theta})}{\mathcal{L}(P|\mathcal{M}_{0},\hat{\hat{\Theta}})}\,.$$ (4.4) If our null model $\mathcal{M}_{0}$ is recovered after the application of a constraint on the more general $\mathcal{M}_{1}$ then we can define a profile likelihood ratio test statistic $D=-2\ln\Lambda$. According to Wilks’ theorem the test statistic is $\chi^{2}_{\mu_{1}-\mu_{0}}\text{-distributed}$, where the number of degrees of freedom for the $\chi^{2}$-distribution is given by the difference $\mu_{1}-\mu_{0}$ in the number of free parameters between the two models [147]. Consider, for example, a data set that we use to test for the presence of one parameter that separates the null and alternative hypotheses.
Then we would observe some value of $D=D_{\rm obs}$ and calculate the $\chi_{1}^{2}$ cumulative distribution function from this value, which gives the probability of measuring at most $D_{\rm obs}$ if the null hypothesis is true, usually called the significance of the result, e.g. $$\mathcal{S}=1-\int_{D_{\rm obs}}^{\infty}\chi^{2}_{1}(D)\,{\rm d}D=\sqrt{D_{\rm obs}}\,,$$ (4.5) (note that this equates to $\sqrt{D_{\rm obs}}$ only in the case of one parameter). One way to determine how sensitive an experiment must be to test for some property of a model (e.g. daily modulation or a tidal stream) would be to Monte Carlo generate many sets of mock data and compute the test statistic on each one, thus building a distribution of values of $D$. This way one can account for the look-elsewhere effect by quoting the required sensitivity in terms of a statistical power $\mathcal{P}$, defined as the probability of obtaining a given result if the alternative hypothesis is true. In other words, the significance is a measure of rejecting the null hypothesis but the power is a measure of accepting the alternative hypothesis. So we could require that our generated distribution of $D$ was such that a fraction $\mathcal{P}$ of them had a significance greater than $\mathcal{S}$. Say we required $\mathcal{P}=0.9$ and $\mathcal{S}=0.95$; then an experiment that generated a distribution of $D$ under $\mathcal{M}_{1}$ that passed these criteria would be able to successfully measure the effect in question to a 95% significance, 90% of the time. However we will not do this. In fact we can use a much simpler method that does not require us to expensively Monte Carlo many mock datasets, while simultaneously allowing us to obtain analytic relationships between experimental requirements and astronomical goals across wide parameter spaces. First we must define the Asimov data set, i.e.
that in which the data in each bin exactly matches the expectation for that bin given model $\mathcal{M}_{1}$, $P_{ij}=P^{\rm exp}_{ij}$. The log likelihood under $\mathcal{M}_{1}$ will then be correctly maximised, and thus equal to zero, but the likelihood under $\mathcal{M}_{0}$ will be left with a piece that corresponds to the difference between the two models. As long as the number of observations (bins in this case) is high, the Asimov data set will give an excellent estimate of the median value of $D$ one would expect if one were to Monte Carlo the problem. This method is advantageous as it saves significant computational expense and in our case allows more enlightening analytic formulae to be obtained. 4.2 Measuring modulations We search for modulations in the directional correction to the power spectrum, defined in terms of our general formalism in eqs. (3.14) and (3.13). Under a discretisation in frequency and time we construct the likelihood function from the data set, $$P_{ij}=P_{0}\Delta\omega\bigg{[}f(\omega_{i})-f_{\mathcal{G}}(\omega_{i},\,t_{j})\bigg{]}\,,$$ (4.6) where we have written the distributions as functions of $\omega$ using $f(\omega)=\frac{\textrm{d}v}{\textrm{d}\omega}f(v)$ etc. For daily modulations we can treat the function $f(\omega)$ as the sole contribution under the null hypothesis and assume that only the directional correction modulates sinusoidally with time. This adds three parameters: the mean value, amplitude and phase of the modulation, $\{c_{0},\,c_{1},\,\phi\}$ respectively. Recalling eq.
(3.15) we have, $$\ell{\rm-type}:\quad f_{\mathcal{G}}(\omega_{i},t_{j})=\mathpzc{g}_{\ell}f(\omega_{i})\,\zeta_{\ell}(\omega_{i})\left[c_{0}+c_{1}\cos(\omega_{d}t_{j}+\phi)\right]$$ (4.7) $$q{\rm-type}:\quad f_{\mathcal{G}}(\omega_{i},t_{j})=\mathpzc{g}_{q}f(\omega_{i})\left[\zeta_{q1}(\omega_{i})+\zeta_{q2}(\omega_{i})\left(c_{0}+c_{1}\cos(\omega_{d}t_{j}+\phi)\right)^{2}\right]\,,$$ (4.8) for linear and quadratic experiments respectively. We then want to compare the unmodulated and modulated powers, $$\mathcal{M}_{0}:\quad P_{ij}=P_{0}\Delta\omega f(\omega_{i}),\quad\Theta=\{P_{0},v_{\mathrm{lab}},\sigma_{v}\}\,,$$ (4.9) $$\mathcal{M}_{1}:\quad P_{ij}=P_{0}\Delta\omega[f(\omega_{i})-f_{\mathcal{G}}(\omega_{i},t_{j})],\quad\Theta=\{P_{0},v_{\mathrm{lab}},\sigma_{v},c_{0},c_{1},\phi\}\,.$$ (4.10) Computing $D$ using the Asimov data set reduces to a sum over the directional corrections, $$D=\sum_{i,j}\left(\frac{\int_{\rm bin}{\rm d}\omega\,P_{0}f_{\mathcal{G}}(\omega_{i},t_{j})}{\sigma_{N}}\right)^{2}\,.$$ (4.11) Assuming that the bin size is small enough and our data contain most of the signal, we can approximate the sum with an integral, $$D\approx\frac{\Delta\omega}{\Delta t}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{m_{a}}^{\infty}{\rm d}\omega~{}\frac{\left(P_{0}f_{\mathcal{G}}(\omega,t)\right)^{2}}{\sigma_{N}^{2}}\,.$$ (4.12) Notice that while in the linear-type case the correction only modulates, in the quadratic case we have a modulation as well as an overall offset that persists over time.
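A sketch of the two directional corrections of eqs. (4.7)–(4.8), averaged over one day, makes this last remark concrete: the $\ell$-type term averages away when $c_{0}=0$, while the $q$-type term retains an offset. All numerical values below are illustrative placeholders of ours.

```python
import math

def f_G_linear(g_l, f_w, zeta_l, c0, c1, phi, omega_d, t):
    # Eq. (4.7): l-type directional correction at frequency weight f_w.
    return g_l * f_w * zeta_l * (c0 + c1 * math.cos(omega_d * t + phi))

def f_G_quadratic(g_q, f_w, zeta_q1, zeta_q2, c0, c1, phi, omega_d, t):
    # Eq. (4.8): q-type correction, an offset plus a squared modulation.
    mod = c0 + c1 * math.cos(omega_d * t + phi)
    return g_q * f_w * (zeta_q1 + zeta_q2 * mod ** 2)

omega_d = 2.0 * math.pi / 86400.0             # daily angular frequency
N = 1000
ts = [86400.0 * k / N for k in range(N)]      # one full day, sampled uniformly
mean_l = sum(f_G_linear(1.0, 1.0, 1.0, 0.0, 1.0, 0.3, omega_d, t)
             for t in ts) / N                 # -> 0 when c0 = 0
mean_q = sum(f_G_quadratic(1.0, 1.0, 0.2, 1.0, 0.0, 1.0, 0.3, omega_d, t)
             for t in ts) / N                 # -> zeta_q1 + zeta_q2*c1**2/2
```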
We in fact only have to calculate parts of the test statistic for these three cases individually: $$\ell{\rm-type:}\quad D_{\ell}\approx\frac{\Delta\omega}{\Delta t}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{m_{a}}^{\infty}{\rm d}\omega~{}\left[\frac{\mathpzc{g}_{\ell}\,P_{0}f(\omega)\,\zeta_{\ell}(\omega)\,\cos\theta_{\rm lab}(t)}{\sigma_{N}}\right]^{2}$$ $$q{\rm-type\,(offset):}\quad D_{q1}\approx\frac{\Delta\omega}{\Delta t}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{m_{a}}^{\infty}{\rm d}\omega~{}\left[\frac{\mathpzc{g}_{q}\,P_{0}f(\omega)\,\zeta_{q1}(\omega)}{\sigma_{N}}\right]^{2}$$ $$q{\rm-type\,(modulation):}\quad D_{q2}\approx\frac{\Delta\omega}{\Delta t}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{m_{a}}^{\infty}{\rm d}\omega~{}\left[\frac{\mathpzc{g}_{q}\,P_{0}f(\omega)\,\zeta_{q2}(\omega)\,\cos^{2}\theta_{\rm lab}(t)}{\sigma_{N}}\right]^{2}\,.$$ The integrals over time and frequency can be separated in each case. After replacing $\sigma_{N}$ using eq. (4.1) we find we can write the $\ell$-type test statistic as, $$D_{\ell}=2\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{\ell}^{2}\,\mathcal{I}^{\ell}_{\omega}\,\mathcal{I}^{\ell}_{t}\,,$$ (4.13) where $\mathcal{I}^{\ell}_{\omega}$ and $\mathcal{I}^{\ell}_{t}$ encode the integrals over $\omega$ and $t$ respectively. The quadratic experiments need to include both the offset and the modulation term, $$D_{q}=2\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{q}^{2}\,\left(\mathcal{I}^{q1}_{\omega}\,\mathcal{I}^{q1}_{t}+\mathcal{I}^{q2}_{\omega}\,\mathcal{I}^{q2}_{t}+\mathcal{I}^{q12}_{\omega}\,\mathcal{I}^{q12}_{t}\right)\,.$$ (4.14) We use the label ‘$q1$’ for the integrals of the offset term and ‘$q2$’ for the integrals of the modulation term.
Since we integrate over the square of the directional correction (which contains both) we need to include the mixing term, which we label ‘$q12$’. All the integrals in the above formulae can be written analytically for both the SHM and a stream. The integrals over $\omega$ contain the dependence on the shape of the linewidth, i.e. $\sigma_{v}$, $v_{\rm lab}$, and therefore scale $\propto{m_{a}}^{-1}$. The integrals over $t$ encode the information gained from the modulation of the signal, i.e. $c_{0},c_{1},\phi$, and thus scale $\propto t_{\rm obs}$. We list these in full in Appendix B. 4.3 Parameter constraints To estimate the uncertainty on some parameter measurement, we look towards the unmaximised likelihood ratio $$d(\Theta)=2\ln\frac{\mathcal{L}(P|\mathcal{M}_{1},\,\Theta)}{\mathcal{L}(P|\mathcal{M}_{0},\,\Theta)}\,,$$ (4.15) where, if $\Theta$ only contains the parameters of interest, $d(\hat{\Theta})=D$. The uncertainty on a model parameter $\vartheta\in\Theta$ can be estimated from the curvature of $d$ around the value $\hat{\vartheta}$ that maximises the likelihood under $\mathcal{M}_{1}$, $$\sigma_{\vartheta}^{-2}=-\frac{1}{2}\frac{\partial^{2}}{\partial\vartheta^{2}}d\big{|}_{\vartheta=\hat{\vartheta}}\,\,.$$ (4.16) For a daily modulation we are interested in the set $\Theta=\left\{c_{0},c_{1},\phi\right\}$.
If the true values are at $\Theta_{\rm true}$, we may compute $d$ for the Asimov data set giving, $$d(\Theta)=-\sum_{i,j}\chi_{ij,\,1}^{2}(\Theta)+\sum_{i,j}\chi_{ij,\,0}^{2}\,,$$ (4.17) where, $$\chi_{ij,\,1}^{2}(\Theta)\equiv\frac{1}{\sigma^{2}_{N}}\left[\int_{\rm bin}\textrm{d}\omega\,P_{0}\big{(}f_{\mathcal{G}}(\omega_{i},\,t_{j}\,|\,\Theta)-f_{\mathcal{G}}(\omega_{i},\,t_{j}\,|\,\Theta_{\rm true})\big{)}\right]^{2}$$ (4.18) $$\chi_{ij,\,0}^{2}\equiv\frac{1}{\sigma^{2}_{N}}\left[\int_{\rm bin}\textrm{d}\omega\,P_{0}f_{\mathcal{G}}(\omega_{i},\,t_{j}\,|\,\Theta_{\rm true})\right]^{2}\,.$$ (4.19) Here only the first sum depends on $\Theta$, so taking the derivative with respect to any $\vartheta\in\Theta$ sees the second one vanish. Taking the $\ell$-type case first, and dropping the constant second sum, we in fact just have the expression, $$d(\Theta)\approx-2\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{\ell}^{2}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{0}^{\infty}{\rm d}\omega~{}\left[f(\omega)\,\zeta_{\ell}(\omega)\,(\cos(\theta_{\rm lab})|_{\Theta}-\cos(\theta_{\rm lab})|_{\Theta_{\rm true}})\,\right]^{2}\,,$$ (4.20) which involves the same integrals that have already been introduced in computing $D$.
For the $\ell$-type case this leads to, $$\sigma_{\vartheta}^{-2}=-\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{\ell}^{2}\,\mathcal{I}^{\ell}_{\omega}\,\frac{\partial^{2}d_{t}}{\partial\vartheta^{2}}\bigg{|}_{\Theta={\Theta}_{\rm true}}\,,$$ (4.21) where the derivatives of the time integral $d_{t}$ for each parameter are, $$-\frac{\partial^{2}d_{t}}{\partial c_{0}^{2}}\bigg{|}_{{\Theta}_{\rm true}}=2t_{\rm obs}$$ (4.22) $$-\frac{\partial^{2}d_{t}}{\partial c_{1}^{2}}\bigg{|}_{{\Theta}_{\rm true}}=t_{\rm obs}-\frac{\sin(2\phi_{\rm true})}{2\omega_{d}}+\frac{\sin(2(t_{\rm obs}\omega_{d}+\phi_{\rm true}))}{2\omega_{d}}$$ (4.23) $$-\frac{\partial^{2}d_{t}}{\partial\phi^{2}}\bigg{|}_{{\Theta}_{\rm true}}={c_{1,{\rm true}}}^{2}\left(t_{\rm obs}+\frac{\sin(2\phi_{\rm true})}{2\omega_{d}}-\frac{\sin(2(t_{\rm obs}\omega_{d}+\phi_{\rm true}))}{\omega_{d}}+\frac{\sin(2t_{\rm obs}\omega_{d}+2\phi_{\rm true})}{2\omega_{d}}\right)\,.$$ (4.24) The $q$-type offset is insensitive to any of $\{c_{0},c_{1},\phi\}$, but the $q$-type modulation leads to more lengthy terms, analogous to eqs.
(4.22)–(4.24), so we make an approximation for large times $t_{\rm obs}$, in which the $\sin$ terms are negligible, and get $$-\frac{\partial^{2}d_{t}}{\partial c_{0}^{2}}\bigg{|}_{{\Theta}_{\rm true}}\approx(8c_{0,{\rm true}}^{2}+4c_{1,{\rm true}}^{2})t_{\rm obs}\,,$$ (4.25) $$-\frac{\partial^{2}d_{t}}{\partial c_{1}^{2}}\bigg{|}_{{\Theta}_{\rm true}}\approx(4c_{0,{\rm true}}^{2}+3c_{1,{\rm true}}^{2})t_{\rm obs}\,,$$ (4.26) $$-\frac{\partial^{2}d_{t}}{\partial\phi^{2}}\bigg{|}_{{\Theta}_{\rm true}}\approx(4c_{0,{\rm true}}^{2}c_{1,{\rm true}}^{2}+c_{1,{\rm true}}^{4})t_{\rm obs}\,.$$ (4.27) Including the offset term leaves the result exactly as above, since it does not depend on $\Theta$. This is as expected, since a known offset should not alter the level at which the parameters of an oscillation can be inferred. As one would expect from a Gaussian likelihood, the uncertainty on each modulation parameter scales as $t_{\rm obs}^{-1/2}$ and inversely with the signal-to-noise, $(P_{0}/k_{B}T_{\rm sys})^{-1}$. We also notice that the uncertainty on the phase $\phi$ scales with the amplitude of the modulation as $c_{1}^{-1}$, which is also to be expected, since if a signal does not modulate ($c_{1}=0$) then the phase is undefined and unmeasurable. 5 Results 5.1 Measuring the daily modulation Employing the machinery described in the previous section we now estimate the general scale of signal required in a directional experiment to measure the daily modulation. For measuring a modulation controlled by three parameters to a 3$\sigma$ discrimination against an unmodulated hypothesis, we need a test statistic of $D=13.93$.
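As a quick cross-check of such thresholds, the $\chi^{2}$ survival function for one and three degrees of freedom has a closed form in terms of the complementary error function; the following sketch is ours, not from the text.

```python
import math

def chi2_sf(x, k):
    # Survival function P(chi^2_k > x) for k = 1 or 3 degrees of freedom,
    # via closed forms in the complementary error function.
    if k == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if k == 3:
        return (math.erfc(math.sqrt(x / 2.0))
                + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0))
    raise ValueError("only k = 1 or 3 implemented here")

# One free parameter: D_obs = 9 is 3 sigma (two-sided p = 0.0027),
# consistent with the S = sqrt(D_obs) relation quoted for eq. (4.5).
p1 = chi2_sf(9.0, 1)
# Three modulation parameters: the threshold D = 13.93 quoted above.
p3 = chi2_sf(13.93, 3)
```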
The power required in $\ell$- and $q$-type experiments, following the formulae detailed in the previous section, leads us to the general sizes, $$P_{\ell}>1.3\times 10^{-21}\,{\rm W}\,\left(\frac{T_{\rm sys}}{4\,{\rm K}}\right)\left(\frac{\mathpzc{g}_{\ell}}{2.4\times 10^{-4}\,{\rm km}^{-1}\,{\rm s}}\right)^{-1}\left(\frac{t_{\rm obs}}{4\,{\rm days}}\right)^{-\frac{1}{2}}\left(\frac{m_{a}}{100\,\mu{\rm eV}}\right)^{\frac{1}{2}}\,,$$ (5.1) $$P_{q}>8.6\times 10^{-22}\,{\rm W}\,\left(\frac{T_{\rm sys}}{4\,{\rm K}}\right)\left(\frac{\mathpzc{g}_{q}}{1.1\times 10^{-6}\,{\rm km}^{-2}\,{\rm s}^{2}}\right)^{-1}\left(\frac{t_{\rm obs}}{4\,{\rm days}}\right)^{-\frac{1}{2}}\left(\frac{m_{a}}{100\,\mu{\rm eV}}\right)^{\frac{1}{2}}\,,$$ (5.2) where we assume that three identical experiments pointing along the north, west and zenith axes have been combined. These values of power are in line with the three benchmarks of section 3.4, so we expect to be able to measure the daily modulation in the experiments as they are described. The implication of detecting the daily modulation will be the ability to infer the 3-dimensional components of the Solar velocity in the galactic rest frame. We expressed the constraint one can place on the modulation parameters in analytic form in the previous section, but now we can translate it into astrophysical language by computing the constraints on the Solar velocity, $\mathbf{v}_{\odot}$, from the likelihood directly. This way we can account for both the daily and annual modulations. We now write the velocity in galactocentric coordinates as $\mathbf{v}_{\odot}=(v_{\odot}^{r},\,v_{\odot}^{\phi},\,v_{\odot}^{z})$. The second component of this velocity (which includes the rotation speed of the local standard of rest) still possesses sizable systematic uncertainties and is very sensitive to the modelling of the Milky Way rotation curve [148].
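The scalings of eqs. (5.1)–(5.2) are easy to package as a helper for other experimental parameters; a sketch using the $\ell$-type reference point (the function name and argument conventions are ours):

```python
def power_threshold(P_ref, T_sys_K, g_ratio, t_obs_days, m_a_ueV):
    # Scale a reference power threshold (quoted at T_sys = 4 K, the
    # benchmark coupling, t_obs = 4 days and m_a = 100 micro-eV) to
    # other parameters, following eqs. (5.1)-(5.2).
    return (P_ref * (T_sys_K / 4.0) / g_ratio
            * (t_obs_days / 4.0) ** -0.5
            * (m_a_ueV / 100.0) ** 0.5)

# l-type benchmark of eq. (5.1) at the reference point:
P_l = power_threshold(1.3e-21, 4.0, 1.0, 4.0, 100.0)
# Observing 16 times longer relaxes the required power by a factor 4:
P_l_long = power_threshold(1.3e-21, 4.0, 1.0, 64.0, 100.0)
```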
We show the constraints on the components of the Solar velocity as a function of total experimental duration $t_{\rm obs}$ in figure 4. In this result and in subsequent results we will be comparing a benchmark quadratic velocity experiment with a linear velocity one. We will also be comparing experiments pointing along each of our three laboratory axes, as well as an experiment using signals from all three experiments combined. Here we assume that the experiments have a 4 K noise temperature and total powers of $P_{\ell}=2.6\times 10^{-21}$ W and $P_{q}=1.7\times 10^{-21}$ W, which are chosen so that both the combined $\ell$- and $q$-type experiments measure the modulation to the same significance in 4 days. As anticipated when we wrote down our analytic formulae, the constraints on our modulation parameters, and subsequently on the components of the Solar velocity, decrease with total time $\propto t_{\rm obs}^{-1/2}$. The uncertainty on $v^{\phi}_{\odot}$ and the upper boundary of the constraints on $v^{r}_{\odot}$ and $v^{z}_{\odot}$ in fact exhibit two scaling regimes, both $\propto t_{\rm obs}^{-1/2}$ but with different gradients for short and long times. We associate these with the daily modulation at short times and the annual modulation at longer times. One very noticeable feature for the lower limits of the $r$ and $z$ components is the multiple solutions at short durations. This is most pronounced in the non-directional limit. Even though the full annual modulation signal is sufficient to discriminate between these solutions (the lower solution for these velocity components does eventually disappear), this requires $t_{\rm obs}\gtrsim 40$–$60$ days. In particular, for the power used in the $q$-type experiment we require even longer times before the uncertainty on $v^{z}_{\odot}$ reaches below 10 $\textrm{ km s}^{-1}$. The impact of the multiple solutions for $\mathbf{v}_{\odot}$ is dampened significantly with the inclusion of directional information.
Since we have normalised the values of power so that the daily modulation is detected to the same significance, the evolution for small $t_{\rm obs}$ is very similar. However, towards larger durations the uncertainty bands decrease slightly faster for the $\ell$-type experiments, where the dominant influence is the fact that the $\ell$-type power is slightly higher. In the transition between these two regimes, the incorrect solution for $\mathbf{v}_{\odot}$ vanishes slightly faster in the $\ell$-type experiment, since that solution cannot reproduce the modulation signals as well when sign information is present. Ultimately the prospects for measuring the Solar velocity are very good in axion experiments generally. Additionally, here we are beginning to see that the directional information makes marked improvements to the discovery reach, especially for short-duration experiments. In all experiments we are able to get good constraints on $v_{\odot}^{\phi}$, since this parameter is also largely involved in setting the shape of the power spectrum, which our likelihood function integrates over as well as the modulation. 5.2 Measuring the anisotropy of the DM halo It is predicted that the smooth component of the velocity distribution of a dark matter halo cannot be perfectly isotropic. It may be possible for an axion experiment with directional sensitivity to detect some anisotropy in the velocity ellipsoid of our own halo, even if it is present only at a low level. Milky Way analogues in N-body simulations generically exhibit halos with some level of anisotropy, see e.g. refs. [149, 150, 151], and indeed models for the real MW halo share this prediction [152, 153, 154]. For galaxies forming from radial infall this usually results in a larger velocity dispersion in the radial direction. Indeed, in our own Milky Way a significantly larger velocity dispersion in the radial direction has been observed in the kinematics of halo stars [106].
Such an anisotropy would likely be difficult to observe with the frequency dependence of the power spectrum alone. But one would expect a velocity distribution that was slightly hotter in one direction to alter the phases and amplitudes of daily modulations in a more complicated way than simply being controlled by $\cos{\theta_{\rm lab}}$. Detecting this anisotropy would be one of the key benefits of a directional experiment, so estimating the sensitivity to it is a useful exercise. The degree of anisotropy in the velocity ellipsoid of some component of a galactic halo is usually parameterised with $\beta(r)$ (if the halo model is allowed to possess triaxiality, the anisotropy parameter can depend on other galactic coordinates as well as radius), $$\beta(r)=1-\frac{\sigma^{2}_{t}}{2\sigma^{2}_{r}}\,,$$ (5.3) where $\sigma_{r,t}$ are the velocity dispersions in the radial and tangential directions. If at a given radius $\sigma^{2}_{t}=2\sigma^{2}_{r}$ then $\beta=0$ and the distribution is isotropic. N-body halos typically have anisotropy parameters that are zero for $r\rightarrow 0$ and then grow to values $\beta(r>8\,{\rm kpc})\sim 0.2$–$0.4$ [149, 150, 151], although it has been suggested that the inclusion of baryons may make the local distribution less anisotropic [104]. We can model a velocity distribution with some anisotropy by generalising the isotropic Maxwellian introduced earlier, $$f(\mathbf{v})=\frac{1}{(8\pi^{3}\det{\bm{\sigma}^{2}})^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{v}+\mathbf{v}_{\mathrm{lab}})^{T}\bm{\sigma}^{-2}(\mathbf{v}+\mathbf{v}_{\mathrm{lab}})\right)\,,$$ (5.4) where $\sigma^{2}_{t}=\sigma^{2}_{\phi}+\sigma^{2}_{z}$ at our position.
If we assume that the dispersion tensor is diagonal, $\bm{\sigma}^{2}=\textrm{diag}(\sigma^{2}_{r},\sigma^{2}_{\phi},\sigma^{2}_{z})$, then this is, $$f(\mathbf{v})=\frac{1}{(2\pi)^{3/2}\sigma_{r}\sigma_{\phi}\sigma_{z}}\exp\left(-\frac{(v_{r}+v_{\textrm{lab}}^{r})^{2}}{2\sigma^{2}_{r}}-\frac{(v_{\phi}+v_{\textrm{lab}}^{\phi})^{2}}{2\sigma^{2}_{\phi}}-\frac{(v_{z}+v_{\textrm{lab}}^{z})^{2}}{2\sigma^{2}_{z}}\right)\,.$$ (5.5) One can allow for correlations between the dispersions in different directions with off-diagonal elements; however, for simplicity we neglect this possibility. Reference [107] does observe a slight tilt in the velocity ellipsoid of their halo stars, but mostly only due to one correlation (the $\sigma_{r}\sigma_{z}$ element). Starting from the isotropic case, we increase the dispersion velocity slightly in the radial direction and decrease it in the tangential directions. We then attempt to measure the resulting anisotropy by placing the above velocity distribution into our statistical analysis as before (our directional integrals do not yield analytic results here, so the analysis in this section is purely numerical). For this example we choose as a benchmark the best fit values of the dispersion components of the distribution of metal-poor halo stars from ref. [107]; we use the set with metallicities [Fe/H]$<-1.8$. These values are $\sigma_{r}=178\textrm{ km s}^{-1}$, $\sigma_{\phi}=121\textrm{ km s}^{-1}$ and $\sigma_{z}=96.5\textrm{ km s}^{-1}$, giving an anisotropy parameter of $\beta=0.62$, which is a relatively high value. We sample the posterior distribution generated by our Asimov likelihood over linear priors in all parameters (we use the MultiNest nested sampling algorithm [155, 156, 157] with 5000 live points to do this). In figure 5 we display the one- and two-dimensional posterior distributions.
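A minimal numerical check of eq. (5.3) with the benchmark dispersions quoted above (the helper name is ours):

```python
def beta_anisotropy(sigma_r, sigma_phi, sigma_z):
    # Eq. (5.3) with the tangential dispersion split as
    # sigma_t^2 = sigma_phi^2 + sigma_z^2.
    sigma_t2 = sigma_phi ** 2 + sigma_z ** 2
    return 1.0 - sigma_t2 / (2.0 * sigma_r ** 2)

# Benchmark dispersions (km/s) for the metal-poor halo stars of ref. [107]:
beta = beta_anisotropy(178.0, 121.0, 96.5)       # ~0.62, as quoted
# Isotropy check: sigma_t^2 = 2 sigma_r^2 gives beta = 0.
beta_iso = beta_anisotropy(100.0, 100.0, 100.0)
```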
We also set the components of $\mathbf{v}_{\odot}$ as free parameters but marginalise over them, since their resulting uncertainties are essentially the same as the results of the previous section. In addition to our two directionally sensitive experiments we include the result from the same analysis in an equivalent experiment but with no directional effect, i.e. setting $\mathpzc{g}_{\ell}$ or $\mathpzc{g}_{q}$ to zero. The total power $P_{0}$ is assumed to be identical in all experiments. As expected, the non-directional signal has poor sensitivity to the anisotropy, with large bands of viable values of the three dispersion components able to reproduce the shape of $f(v)$ (in this case the experimental duration is insufficient to measure any modulation in frequency). In particular the measurement of $\sigma_{r}$ and $\sigma_{z}$ is very poor, with the constraints consistent with 0 at the 95% level for both parameters. This is because $\sigma_{\phi}$ corresponds to the direction along which $f(v)$ is primarily boosted, so it ends up having the greatest impact on the shape. With directional sensitivity, on the other hand, we gain considerable sensitivity to the velocity anisotropy. Peculiarly though, the $q$-type case performs much better here. In fact the $\ell$-type experiment exhibits a multimodal solution for the values of $\sigma_{r}$ and $\sigma_{z}$, where it seems to struggle to distinguish between the numerical values of the two parameters. This is perhaps at first counter-intuitive, since one would expect that the $\ell$-type experiment would always be more sensitive by not discarding the sign information on $\mathbf{v}$. However this is only true when searching for individual directions, for instance the direction of $\mathbf{v}_{\mathrm{lab}}$. Here we are trying to constrain parameters which control the shape of the distribution. Note that the dispersion parameters only ever enter the signal squared; there is no sign information there to measure.
The $q$-type experiments turn out to be more sensitive because they receive larger (albeit negative) directional corrections over the span of frequencies where the dispersion values play the greatest role. Moreover, the directional correction has a persistent offset, analogous to the $\zeta_{q1}(\omega)$ term in eq. (3.15). In the $\ell$-type experiments, by contrast, since the correction can be both positive and negative over one day, there are times when it disappears (or becomes very small) and the signal becomes essentially non-directional. Again, for measuring individual velocities the vanishing of the directional correction can only happen when the velocity lines up with the axis of the experiment correctly. Here, however, this effect is a hindrance, since without any directional correction at all the signal cannot distinguish between shape parameters controlling the widths of the distribution in different directions. Nevertheless we should emphasise the excellent reconstruction shown in the $q$-type experiment, which is able to distinguish each dispersion component from the others at over the 2$\sigma$ level. 5.3 Measuring a stream The treatment of the daily modulation due to $\mathbf{v}_{\mathrm{lab}}$ is entirely analogous to that of the daily modulation induced by a stream. As mentioned previously, one only needs to make the substitutions $\mathbf{v}_{\mathrm{lab}}\rightarrow\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}$, $\sigma_{v}\rightarrow\sigma_{\rm str}$ and $\bar{\rho}_{a}\rightarrow\rho_{\rm str}$. Streams could in principle appear to originate from any galactocentric velocity, so our prior on the stream direction can only be the whole sky. We write the stream direction using the usual galactic longitude and latitude $(l_{\rm str},b_{\rm str})$ (the conversion to galactocentric cylindrical coordinates is defined as $(v^{r},\,v^{\phi},\,v^{z})=v(\cos{b}\cos{l},\,\cos{b}\sin{l},\,\sin{b})$).
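The conversion from stream direction to galactocentric velocity components can be sketched directly from the convention just quoted (the function name is ours):

```python
import math

def stream_velocity(v, l_deg, b_deg):
    # Convention from the text: a stream pointing towards galactic
    # longitude/latitude (l, b) has components
    # (v_r, v_phi, v_z) = v (cos b cos l, cos b sin l, sin b).
    l = math.radians(l_deg)
    b = math.radians(b_deg)
    return (v * math.cos(b) * math.cos(l),
            v * math.cos(b) * math.sin(l),
            v * math.sin(b))

# A stream towards the galactic north pole (b = 90 deg) is purely v_z:
vr, vphi, vz = stream_velocity(300.0, 0.0, 90.0)
# A stream co-rotating with the disc points along +v_phi, i.e. (l, b) = (90, 0):
cr = stream_velocity(300.0, 90.0, 0.0)
```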
We show how the values of the daily modulation parameters for a stream vary with this direction in figure 6. For clarity we show the parameters $\{c_{0},\,c_{1},\,\phi\}$ for a north-pointing experiment at Munich, measuring a stream with a galactic frame speed of $300\,\textrm{ km s}^{-1}$ on January 1. The modulation parameters are defined precisely as before, $$\cos{\theta^{i}_{\rm str}}(t)=\frac{\hat{\mathbf{x}}^{i}\cdot(\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}})}{|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|}=c_{0}+c_{1}\cos(\omega_{d}t+\phi)$$ (5.6) where $\hat{\mathbf{x}}^{i}$ is one of our axes (north, west, zenith). Here and for several figures to come we display functions of the full sky by mapping the galactic $(l,\,b)$ with a Mollweide projection. As is convention we put the galactic centre at the origin. The longitude $l$ is read horizontally and the latitude (which is also labelled numerically) is read vertically. One should interpret a position on this projection as the direction that a given stream points towards. So at the position of the white star is a stream that is co-rotating with us. Symmetrically opposite would be a stream that is counter-rotating, e.g. S1. The most noticeable feature in figure 6 is that there appear to be two regions of the sky where the modulation amplitude $c_{1}$ vanishes and subsequently the modulation phase becomes undefined. We associate these two points with the rotation axis of the Earth: naturally, if the Earth-frame stream direction happens to coincide with our axis of rotation, no daily modulation will occur. The location of these “blind spots” in $\hat{\mathbf{v}}_{\rm str}$ varies with $v_{\rm str}$ and over the course of the year as the Earth’s rotation axis moves relative to the halo. But on any given day there will always be certain streams that will not induce a modulation. 
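The decomposition in eq. (5.6) can be recovered numerically from a sampled time series of $\cos\theta^{i}_{\rm str}(t)$, since writing $c_{1}\cos(\omega_{d}t+\phi)=A\cos(\omega_{d}t)+B\sin(\omega_{d}t)$ makes the fit linear. A minimal sketch (the function name and the synthetic parameter values are ours, not from the text):

```python
import numpy as np

OMEGA_D = 2 * np.pi / 0.9973          # sidereal daily frequency [rad/day]

def modulation_params(t, cos_theta):
    """Fit cos(theta) = c0 + c1*cos(OMEGA_D*t + phi) by linear least squares.

    The model is linear in (c0, A, B) with A = c1*cos(phi), B = -c1*sin(phi),
    so an ordinary least-squares solve recovers all three parameters.
    """
    M = np.column_stack([np.ones_like(t),
                         np.cos(OMEGA_D * t),
                         np.sin(OMEGA_D * t)])
    c0, A, B = np.linalg.lstsq(M, cos_theta, rcond=None)[0]
    c1 = np.hypot(A, B)
    phi = np.arctan2(-B, A)
    return c0, c1, phi

# synthetic check: one day of noiseless data with known parameters
t = np.linspace(0.0, 1.0, 200)        # time in days
c0, c1, phi = modulation_params(t, 0.3 + 0.5 * np.cos(OMEGA_D * t + 1.2))
```

On noiseless input the true parameters are recovered exactly; on real spectra one would fit the same model to the estimated $\cos\theta$ per time bin.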
Such a blind spot would not, of course, prevent the stream from being observed via the frequency of the feature. However in the case of quadratic-type experiments, which can only measure $|c_{0}|$, we remark that roughly half of the stream ‘sky’ is degenerate with the other half. As noted, the location of the poles varies over the year as well as with the value of $v_{\rm str}$, but the skymaps remain qualitatively similar. Next we show how the measurability of a stream via its daily modulation depends on the direction of the stream. In figure 7 we show the significance achievable in measuring the daily modulation of a stream, comparing against the model in which the stream is unmodulated (i.e. a non-directional experiment). The modulation adds three parameters, so we compute the significance from the value of the test statistic and the $\chi^{2}_{3}$ distribution, and then convert to a “Gaussian $\sigma$”, i.e. $68\%\rightarrow 1\sigma$ etc. We again display the result for both linear and quadratic experiments along each axis separately. The significance is displayed as a function of stream direction $(l_{\rm str},\,b_{\rm str})$ projected using the same Mollweide mapping as in the previous figure. Following this we also show the total test statistic (all three experiments combined) in figure 8. We see that especially in the west-pointing experiment the significance vanishes along the same directions as highlighted earlier: those that align with the rotation axis of the Earth. A stream is undetectable via its daily modulation in this particular experiment; comparing the same point in the north- and zenith-pointing experiments, however, shows that it is indeed measurable in those. 
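The conversion from the test statistic to a “Gaussian $\sigma$” can be done with standard distribution functions. A sketch under the two-sided-coverage convention quoted above ($68\%\rightarrow 1\sigma$); the function name is ours:

```python
from scipy.stats import chi2, norm

def significance_sigma(test_statistic, n_params=3):
    """Convert a test statistic to an equivalent Gaussian significance.

    The modulation model adds n_params parameters, so (by Wilks' theorem)
    the test statistic follows a chi^2 distribution with n_params degrees
    of freedom.  Its cumulative probability is mapped onto the two-sided
    Gaussian convention (68.27% -> 1 sigma, 95.45% -> 2 sigma, ...).
    """
    coverage = chi2.cdf(test_statistic, df=n_params)
    return norm.ppf(0.5 * (1.0 + coverage))

# by construction the 68.27% quantile of chi^2_3 maps to 1 sigma
sigma = significance_sigma(chi2.ppf(0.682689, df=3))
```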
Whilst the quadratic experiments observe a greater maximum significance value for head-on streams, the linear experiments observe a consistently large significance over the full sky (keeping in mind that the dielectric haloscope can be both a linear and a quadratic experiment when using different combinations of signals from the left and right hand sides of the device). The head-on streams are the most well-measured when looking for modulations because faster features give greater deviations away from the non-directional power, cf. $\mathcal{G}_{\ell}\propto v$ and $\mathcal{G}_{q}\propto v^{2}$. This is also the reason why quadratic experiments require smaller overall powers to measure the faster features to the same significance. So a stream originating from the opposite direction to the one we are moving (e.g. S1) will always be the most well-measured directionally. In figure 8 we also mark the directions of the six nearby substructures reported in refs. [110, 113]. As mentioned in section 2.3 the first of these objects labelled ‘S1’ has been confidently claimed to be a stream that intersects our position. S1 arrives head-on with respect to our galactic orbit, placing it in prime orientation for detection. If present this should be easily picked up by an axion search, and subsequently fully measurable in a directional experiment. 
Computing the test statistic for the S1 stream we find that measuring the velocity components of the stream from its daily modulation requires powers in $\ell$ and $q$-type experiments of, $$P_{\ell}\gtrsim 8.9\times 10^{-21}\,{\rm W}\,\left(\frac{\rho_{\rm str}}{0.05\bar{\rho}_{a}}\right)^{-1}\left(\frac{T_{\rm sys}}{4\,{\rm K}}\right)\left(\frac{\mathpzc{g}_{\ell}}{2.4\times 10^{-4}\,{\rm km}^{-1}\,{\rm s}}\right)^{-1}\left(\frac{t_{\rm obs}}{4\,{\rm days}}\right)^{-\frac{1}{2}}\left(\frac{m_{a}}{100\,\mu{\rm eV}}\right)^{\frac{1}{2}}\,,$$ (5.7) $$P_{q}\gtrsim 4.5\times 10^{-21}\,{\rm W}\,\left(\frac{\rho_{\rm str}}{0.05\bar{\rho}_{a}}\right)^{-1}\left(\frac{T_{\rm sys}}{4\,{\rm K}}\right)\left(\frac{\mathpzc{g}_{q}}{1.1\times 10^{-6}\,{\rm km}^{-2}\,{\rm s}^{2}}\right)^{-1}\left(\frac{t_{\rm obs}}{4\,{\rm days}}\right)^{-\frac{1}{2}}\left(\frac{m_{a}}{100\,\mu{\rm eV}}\right)^{\frac{1}{2}}\,.$$ (5.8) As with our daily modulation estimates, these are essentially within the scope of the benchmark experiments detailed in table 1. As a final word on the topic of tidal streams we would like to display how well a directional experiment can measure all the properties of the stream in conjunction. We take the aforementioned case of the S1 stream and perform a maximum likelihood fit, taking the threshold powers required to measure the modulation, eqs. (5.7) and (5.8). We explore the posterior distribution generated by sampling our likelihood function over linear priors in the five parameters defining the stream: the velocity, the dispersion (which is $\sim 50$ km s${}^{-1}$ when written as a single-variate Maxwellian [113]) and the density. We make the assumption that the density of dark matter in S1 comprises 5% of $\bar{\rho}_{a}$, although we stress that this parameter is completely unknown. 
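The scalings in eqs. (5.7) and (5.8) are straightforward to evaluate for other stream or experimental parameters. A sketch for the $\ell$-type case (the function name and keyword arguments are ours):

```python
def required_power_ell(rho_frac=0.05, T_sys=4.0, g_ell=2.4e-4,
                       t_obs=4.0, m_a=100.0):
    """Threshold power [W] from eq. (5.7) for an l-type experiment.

    rho_frac : stream density as a fraction of the mean axion density
    T_sys    : system noise temperature [K]
    g_ell    : linear directional coupling [km^-1 s]
    t_obs    : observation time [days]
    m_a      : axion mass [micro-eV]
    """
    return (8.9e-21
            * (rho_frac / 0.05) ** -1
            * (T_sys / 4.0)
            * (g_ell / 2.4e-4) ** -1
            * (t_obs / 4.0) ** -0.5
            * (m_a / 100.0) ** 0.5)

p_ref = required_power_ell()   # reference values reproduce the prefactor
```

The $q$-type case, eq. (5.8), is identical apart from the prefactor $4.5\times 10^{-21}$ W and the coupling normalisation $1.1\times 10^{-6}\,{\rm km}^{-2}\,{\rm s}^{2}$.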
The marginalised posterior distributions are shown in figures 9 and 10 for $\ell$ and $q$-type experiments respectively, again performing four separate analyses in each case. The first three use data from each (north, west and zenith) experiment separately, and the fourth combines all three. As expected, with all three experiments combined the stream is very well-measured. Individually we can see that in the $\ell$-type case the west-pointing experiment appears to constrain the stream most successfully, but in the $q$-type experiment it is the worst. An intuition for this result can be gleaned by looking back at figure 2, which shows the actual signal for this stream. In the $\ell$-type spectra the west-pointing experiment has a very large modulation amplitude since it has both the sensitivity to the sign of $\cos{\theta_{\rm lab}^{\mathcal{W}}}$ and a value of $c_{1}$ slightly larger than the other two directions (which have their amplitudes suppressed by factors of $\cos{\lambda_{\rm lab}}$ or $\sin{\lambda_{\rm lab}}$). On the other hand, in the $q$-type experiment this large modulation gets folded into purely negative values. So the west-pointing experiment becomes much less useful for measuring S1 when quadratic effects are considered. However, as already discussed, the effect in all three $q$-type experiments is enhanced by a large extra factor of $|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|$, meaning they need less power to reach an equal significance. Our measurements are mostly nicely Gaussian, with the exception of $v_{\rm str}$ which looks to be approximately one-sided for speeds larger than the true speed of S1 $\approx 300\,\textrm{ km s}^{-1}$. This is because S1 is incoming head-on, so for all directions other than the true direction a faster stream is needed to reproduce the correct peak frequency. As before, we reiterate that our signal powers are reasonable based on the experimental setups summarised in table 1. 
The signal requirements can be rescaled according to eqs. (5.7) and (5.8), whereas the experimental requirements to reach those signals can be rescaled using eqs. (3.50) and (3.52). 5.4 Prospects for minicluster streams A scenario that has been gaining interest in the last couple of years is the possibility that a decent chunk of an axionic dark matter halo could be bound up in miniclusters (see refs. [18, 30, 37, 36, 19, 26, 158] for the most recent progress on the topic). Miniclusters have intriguing signatures for indirect detection, but a punishingly small direct encounter rate on Earth. It was suggested however in refs. [27, 28] that over time, and after many passages through the disk and bulge, miniclusters may become appreciably tidally disrupted by stellar interactions. Crossings through their trailing ministreams would likely be more frequent. Even if the density of a given stream was diluted over many Gyr, the initial density of a minicluster is so high that the detection prospects are not completely unfathomable. Remaining agnostic with regard to how often such a passage could occur (estimating this requires much more in-depth numerical analysis accounting for the initial mass function and abundance of miniclusters, in turn needing a full simulation of the axion field through the QCD phase transition), we can nevertheless describe how the signal from the crossing of a ministream could be used to measure the properties of its progenitor. We assume the simplest model for a minicluster [16], that of a sphere with density $\rho_{\rm mc}$ and mass $M_{\rm mc}$. The densities are very large, typically labelled by some contrast $\delta$, $$\rho_{\rm mc}=7\times 10^{6}\,{\rm GeV\,cm}^{-3}\,\delta^{3}(1+\delta)\,.$$ (5.9) Miniclusters have a characteristic mass given by the horizon enclosure at matter-radiation equality, around $M_{\rm mc}\simeq 10^{-12}\,M_{\odot}$. The precise spectrum and mass function of miniclusters is the subject of much ongoing work. 
Here we focus only on heuristic arguments regarding their detection, and suggest that for now one resorts to the scaling relations detailed below if concerned about the specificities of some particular minicluster. Without directional sensitivity one can extract the density $\rho_{\rm mstr}$, dispersion $\sigma_{\rm mc}$ and the lab frame speed $|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\rm mstr}|$ from the power spectrum. These are related to the properties of the minicluster as well as the age of the stream $t_{\rm mstr}$. We have, $$\rho_{\rm str}\simeq\rho_{\rm mc}\frac{R_{\rm mc}}{\sigma_{\rm mc}t_{\rm mstr}}\simeq 19.8\,{\rm GeV\,cm}^{-3}\,\,\delta^{3/2}(1+\delta)^{1/2}\left(\frac{1\,{\rm Gyr}}{t_{\rm mstr}}\right)\,,$$ (5.10) for the stream density (which is diluted linearly since the moment of disruption), and for the virial velocity dispersion, $$\sigma_{\rm mc}=\sqrt{\frac{GM_{\rm mc}}{R_{\rm mc}}}=6.28\times 10^{-5}\,{\rm km\,s}^{-1}\,\delta^{1/2}(1+\delta)^{1/6}\left(\frac{M_{\rm mc}}{10^{-12}\,M_{\odot}}\right)^{1/3}\,.$$ (5.11) We assume that the stream retains the original temperature of its progenitor. In principle this is only a lower limit on the dispersion, since tidal effects will likely heat the minicluster by some amount proportional to the timescale of disruption, see e.g. ref. [90]. 
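The minicluster scaling relations above are simple to bundle into a helper for exploring parameter space. A sketch (the function name and defaults are ours; units follow the text):

```python
def minicluster_stream(delta, M_mc=1e-12, t_mstr=1.0):
    """Evaluate the minicluster/ministream scaling relations.

    delta  : density contrast
    M_mc   : minicluster mass [solar masses]
    t_mstr : age of the ministream [Gyr]
    Returns (rho_mc [GeV/cm^3], rho_str [GeV/cm^3], sigma_mc [km/s]).
    """
    # progenitor density, eq. (5.9)
    rho_mc = 7e6 * delta**3 * (1 + delta)
    # stream density, diluted linearly with stream age
    rho_str = 19.8 * delta**1.5 * (1 + delta)**0.5 / t_mstr
    # virial velocity dispersion, eq. (5.11)
    sigma_mc = (6.28e-5 * delta**0.5 * (1 + delta)**(1 / 6)
                * (M_mc / 1e-12)**(1 / 3))
    return rho_mc, rho_str, sigma_mc

rho_mc, rho_str, sigma_mc = minicluster_stream(delta=1.0)
```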
We also have an additional observable, the minicluster stream crossing time, which depends on the radius of the stream and its orientation relative to our trajectory, $$T_{\textrm{mstr-x}}=\frac{2R_{\rm mc}}{v_{\textrm{lab}}\sin{(\vartheta_{\rm mstr})}}\approx\frac{4\,{\rm days}}{\delta(1+\delta)^{1/3}}\left(\frac{M_{\textrm{mc}}}{10^{-12}M_{\odot}}\right)^{1/3}\left(\frac{\sin{(60^{\circ})}}{\sin{(\vartheta_{\rm mstr})}}\right)\,.$$ (5.12) We denote the angle between the ministream velocity and the lab velocity by $$\sin{(\vartheta_{\rm mstr})}=\sqrt{1-\left(\frac{\textbf{v}_{\textrm{lab}}\cdot\mathbf{v}_{\rm mstr}}{v_{\textrm{lab}}v_{\rm mstr}}\right)^{2}}\,.$$ (5.13) Notice that we have six unknown parameters $\{\delta,\,M_{\rm mc},\,t_{\rm mstr},\,\mathbf{v}_{\rm mstr}\}$ but only four equations with which to determine them (the stream density relation, eqs. (5.11) and (5.12), and the frequency of the stream, which provides $|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\rm mstr}|$). In a more sophisticated model we may also wish to describe the density profile of the minicluster. So we are going to require additional information, or so it would seem. In fact, the situation is slightly more complicated than the cases considered before. So far we have ignored the daily modulation in frequency due to the rotation speed of the Earth, $\sim$0.47 km s${}^{-1}$, which is negligible when considering the full axion power spectrum with a width of $\sim$ 300$\textrm{ km s}^{-1}$. But here we are dealing with features that have characteristic linewidths four orders of magnitude smaller than even this smallest correction. This means that one could in fact extract two additional pieces of information from a non-directional signal: the phase and amplitude of the daily modulation in frequency. We illustrate a signal in figure 11, showing a single day’s worth of modulation in the power spectrum. 
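Eq. (5.12) is simple to evaluate for arbitrary minicluster parameters and stream orientations. A sketch (the function name and defaults are ours):

```python
import numpy as np

def crossing_time_days(delta, M_mc=1e-12, angle_deg=60.0):
    """Ministream crossing time from eq. (5.12), in days.

    delta     : density contrast
    M_mc      : minicluster mass [solar masses]
    angle_deg : angle between the ministream and lab velocities [deg],
                normalised so that 60 degrees reproduces the prefactor
    """
    return (4.0 / (delta * (1 + delta)**(1 / 3))
            * (M_mc / 1e-12)**(1 / 3)
            * np.sin(np.radians(60.0)) / np.sin(np.radians(angle_deg)))

t_x = crossing_time_days(1.0, angle_deg=60.0)
```

More head-on geometries (smaller $\vartheta_{\rm mstr}$) lengthen the crossing, while denser or lighter miniclusters shorten it.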
Since the minicluster linewidth is much smaller than the variation in $\mathbf{v}_{\mathrm{lab}}$, integrating the spectrum over one hour produces a signal that has swept out a segment of frequencies. At times when the Earth rotates along the direction of the stream the modulation turns over, leading to a very large enhancement in power (note that the signal is plotted with a log scale). The modulation in frequency is not a perfect sinusoid with a period of one day, because the revolution speed of the Earth at the end of the day is slightly different from that at the start. This becomes visible only at such fine resolution. The full six parameters of this very simple model would be measurable to high accuracy, as long as the experiment could achieve this spectral resolution (which only requires an increase in timestream sample duration; see the note on noise statistics below). So it seems here that we have no need of directionality. But consider those minicluster streams that would give rise to signals with no daily modulation in frequency. This can happen in two ways. Firstly, if the stream crossing time is much smaller than one day then the modulation parameters of the signal, and thus the three components of the stream velocity, will not be measurable (as in figure 4 for very small durations). Complementary to this, if the stream dispersion is wider than the frequency shift the feature undergoes during the crossing time, then the modulation will also be poorly measured. Miniclusters that produce wide frequency band streams with very short radii are those with very high values of $\delta$, or an $M_{\rm mc}$ much smaller than $10^{-12}$ $M_{\odot}$. In figure 12 we show the shift in speed over the crossing time (labelled $\Delta v_{\rm mstr-x}$) as a function of the stream direction for a particular minicluster input with $\delta=10$ and $M_{\rm mc}=10^{-13}\,M_{\odot}$. 
For this minicluster the velocity dispersion is $\sigma_{\rm mc}=1.3\times 10^{-4}$ km s${}^{-1}$ which is used as the lower limit of the colour scale. Ministream directions in the light band that stretches across the image give frequency shifts during the crossing time that are smaller than the linewidth. We also exclude stream directions which have crossing times above 1/4 of a day, which as one would expect are roughly collinear with our trajectory $\pm\mathbf{v}_{\mathrm{lab}}$. Here we assume that each ministream crossing began on January 1. The skymaps for other times look qualitatively similar however the light band of stream directions will move across the sky with the rotation axis of the Earth. Clearly this is a rather fine tuned region of minicluster parameter space, but we should mention again that the estimate for the stream dispersion used here can only be a lower limit. One could expect hotter ministreams to be possible if heating during tidal disruption was accounted for. This issue along with the many other mysteries surrounding miniclusters we leave for future work. A note on noise statistics for miniclusters Miniclusters have extremely small velocity dispersions which means one would need to make some modifications to the binning as described in section 4. To gain a sufficient frequency resolution to measure a feature with a typical minicluster width centred around $v\sim 300\textrm{ km s}^{-1}$ we need to have a single power spectrum constructed from $\delta t>180$ seconds of integration time (for $m_{a}=100\,\mu$eV). For our analytic treatment this is not a problem; our formulae are independent of the choice of $\delta t$ since we make the assumption that the sum over power spectra bins can be approximated by an integral. It does mean though that we are forced to consider the case where the frequency binning is small enough to pick up the shape of the feature in the first place. 
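The quoted requirement of $\delta t>180$ seconds can be checked against the Doppler linewidth of a typical ministream; a rough estimate, under our simplifying assumption that the kinetic-energy broadening is $\delta f\sim f_{a}\,v\,\sigma_{\rm mc}/c^{2}$:

```python
C_KM_S = 2.998e5                       # speed of light [km/s]

def min_integration_time(m_a_ueV=100.0, v=300.0, sigma=6.28e-5):
    """Minimum single-spectrum duration [s] to resolve a ministream.

    The photon frequency is f_a ~ m_a (1 micro-eV <-> 241.8 MHz), and a
    feature at lab speed v [km/s] with dispersion sigma [km/s] has a
    kinetic-energy linewidth delta_f ~ f_a * v * sigma / c^2; resolving
    it requires an integration time delta_t > 1/delta_f.
    """
    f_a = 241.8e6 * m_a_ueV            # Hz
    delta_f = f_a * v * sigma / C_KM_S**2
    return 1.0 / delta_f

dt_min = min_integration_time()        # ~200 s for the typical minicluster
```

For $m_{a}=100\,\mu$eV, $v=300\textrm{ km s}^{-1}$ and the typical minicluster dispersion, this reproduces the order of the $\delta t>180$ s figure quoted in the text.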
This leads to a different problem regarding the statistics of the noise and the randomness of the signal. Since we wish our larger daily modulation bins to have durations of $\lesssim$ 1 hour (so that our time sum can be approximated by an integral), we will only be able to construct them from at most $\mathcal{N}\sim 20$ power spectra. This is potentially worrying since we made the assumption that the central limit theorem was making our noise and signal fluctuations Gaussian. The average of $\mathcal{N}$ exponentially distributed numbers with an expectation value of $P_{N}$ follows a gamma distribution with a shape parameter of $\mathcal{N}$ and a scale parameter of $P_{N}/\mathcal{N}$. For $\mathcal{N}\sim 20$ the discrepancy should not be important (given our other approximations). For our lower mass benchmarks however we would need to consider values $\mathcal{N}\sim 2-7$ for the typical minicluster (for more information on this particular statistical issue we refer the reader to ref. [76], which searched for cold flows of axions in ADMX). This would cause the noise to be noticeably non-Gaussian and the observed signal much more influenced by random correlations in the phases of the axion field. Moreover it may be that the assumption of completely uncorrelated phases is not the ideal description for a disrupted minicluster, which could in fact retain some of the highly correlated nature that is characteristic of a minicluster. Saying much more on this issue, though, would require an in-depth study. 6 Summary In developing a general formalism to describe directional effects in axion detection we have settled on three designs that would be able to implement them in reality. The first two we discussed consist of modifications to the conventional resonant cavity. 
In cavities, and in any experiment whose electric fields have standing-wave behaviour, one can only gain sensitivity to the projection of the square of the axion velocity along the elongated axis of the device. We have described how one could construct such cavities that are large enough to approach the de Broglie wavelength of the axion. For masses between $10\,\mu$eV and $40\,\mu$eV we can set up cavities at high and low mode numbers respectively. The low mass end with high mode numbers requires a rather lengthy cavity, but it turns out that only the ends of the cavity need to be magnetised to achieve a usable directional effect. For an effect of the same magnitude at the higher end of this mass range the cavity needs to be fully magnetised, but can have a very thin aspect ratio while using only the lowest mode. At higher masses still, we have developed a way to extend the dielectric haloscope concept employed by MADMAX to exploit phase differences across the device, as suggested in ref. [142]. In this latter case we have devised a setup where the disks are spaced symmetrically just out of phase with respect to perfect constructive interference at $v=0$. Adding or subtracting the signals from either side of the experiment can then give quadratic or linear dependence on the axion velocity. For all the experiments discussed, the various real-world requirements for measuring a $\sim 10$% directional effect are summarised in table 1. These benchmarks informed our assessment of the feasibility of axion astronomy, but one need not be more optimistic than we have been. Even if parameters such as our benchmark magnetic field or noise temperature are not achievable, much of the astronomy can still be done, just over slightly longer times than the (already very brief) benchmark duration of $t_{\rm obs}=4$ days. Directional experiments pose excellent prospects for the post-discovery era. 
Signals that exhibit pronounced daily modulations give us access to the full three-dimensional velocity distribution in a much shorter time than is required for non-directional experiments. We find that these experiments can straightforwardly pick up this daily modulation and use it to infer the components of the Solar velocity. Directional experiments are particularly novel in that they are able to measure the anisotropy of the velocity ellipsoid of the Milky Way dark matter halo. Due to higher-order alterations to the daily modulation, the signals can distinguish increased or decreased velocity dispersions in different galactocentric directions. Such a fine sensitivity to the multidimensional structure of the velocity distribution is not possible in non-directional experiments. We also find that substructure in the form of streams is measurable in very short periods of time, for almost any orientation across the sky, in our linearly sensitive experiments. The local S1 stream, which has been shown to pass directly through the Solar position, is in fact in a prime position for detection since it is incoming almost head-on with a very fast laboratory frame speed. This would lead to a large directional correction in quadratic and linear experiments even if the dark matter content were scarce, at 5% of $\rho_{0}$ or less. We also showed that all the properties of the S1 stream can be reconstructed from the daily modulation signal alone. This is especially interesting if one considers the possibility of very small scale substructure due to the disruption of miniclusters by stars in the Milky Way, which would give rise to enhancements in the signal over timescales of a day or less. We have found a small range of hot dense miniclusters with streams that are roughly orthogonal to our trajectory through the galaxy that would require a directional experiment to measure. 
For all other miniclusters the full set of properties could be reconstructed thanks to the non-directional daily modulation due to the rotation speed of the Earth. In addition to the concepts detailed here we expect that there will be many more extensions one could devise to cover the remaining axion windows. For instance in a cavity resonating at lower frequencies (e.g. 2 $\mu$eV), we have checked that under quantum limited noise it would be possible to directly measure the electric field with $S/N>1$ in a single coherence time. Tracking the phase of the electric field in several of these experiments (which would need to be separated by $\sim$km) and combining them in real time would allow this array of cavities to measure the instantaneous axion velocity and populate $f(\mathbf{v})$ “mode by mode”. At larger masses it may be possible to extend the dish antenna method to gain directional sensitivity to axion dark matter [62]. Beyond even these masses, at the upper end of the unexcluded axion window, a dielectric haloscope for optical frequencies has been suggested [144]. Since it is analogous to the dielectric haloscope concept used by MADMAX it could be extended in precisely the same way as we have described. The only difference would be present in the statistical treatment of the background necessary when using bolometers to do photon counting as proposed by the authors. Whatever new experiments enter the stage, we hope that the general formalism we have developed here will be of use. Acknowledgements We thank N. W. Evans for discussion and further information on the local streams. We also thank E. Vitagliano for enlightening discussions. CAJO is very grateful for the benevolent hospitality of the Max Planck Institute for Physics in Munich, where much of this work took place. CAJO is spoilt by the grant FPA2015-65745-P from the Spanish MINECO and European FEDER. AJM acknowledges partial support by the Deutsche Forschungsgemeinschaft through Grant No. 
EXC 153 (Excellence Cluster “Universe”) and Grant No. SFB 1258 (Collaborative Research Center “Neutrinos, Dark Matter, Messengers”), as well as the European Union through Grant No. H2020-MSCA-RISE-2015/690575 (Research and Innovation Staff Exchange project “Invisibles Plus”). JR is supported by the Ramon y Cajal Fellowship 2012-10597, the grant FPA2015-65745-P (MINECO/FEDER), the EU through the ITN “Elusives” H2020-MSCA-ITN-2015/674896 and the Deutsche Forschungsgemeinschaft under grant SFB-1258 as a Mercator Fellow. Appendix A Lab velocity and modulation parameters This appendix deals with the computation of the lab velocity, in particular its three dimensional components in our laboratory coordinate system and the derivation of our definition of the daily modulation parameters $\{c_{0},\,c_{1},\,\phi\}$. The lab velocity $\mathbf{v}_{\mathrm{lab}}(t)$ is annually and diurnally modulated by the revolution and rotation of the Earth. To compute $\mathbf{v}_{\mathrm{lab}}(t)$ we need to first define the galactic coordinate system $(\hat{\textbf{x}}_{g},\hat{\textbf{y}}_{g},\hat{\textbf{z}}_{g})$ with axes in directions pointing to the galactic centre, galactic rotation (at the position of the Solar system), and the galactic north pole. 
We can transform vectors from the galactic to the laboratory system with the following transformation, $$\begin{pmatrix}\hat{\mathcal{N}}\\ \hat{\mathcal{W}}\\ \hat{\mathcal{Z}}\end{pmatrix}=R_{\rm lab}(t)\left(R_{\rm gal}\begin{pmatrix}\hat{\textbf{x}}_{g}\\ \hat{\textbf{y}}_{g}\\ \hat{\textbf{z}}_{g}\end{pmatrix}\right)\,,$$ (A.1) where the transformation from the galactic to the intermediate equatorial system is given by the matrix, $$R_{\rm gal}=\begin{pmatrix}-0.05487556&+0.49410943&-0.86766615\\ -0.87343709&-0.44482963&-0.19807637\\ -0.48383502&+0.74698225&+0.45598378\end{pmatrix}\,,$$ (A.2) with values assuming the International Celestial Reference System convention for the right ascension and declination of the North Galactic Pole, $(\alpha_{\rm GP},\delta_{\rm GP})=(192^{\circ}.85948,\,+27^{\circ}.12825)$, as well as the longitude of the North Celestial Pole $l_{\rm CP}=122^{\circ}.932$ [159]. Then, from the equatorial to the laboratory system at latitude $\lambda_{\rm lab}$ we use the matrix, $$R_{\rm lab}(t)=\begin{pmatrix}-\sin(\lambda_{\textrm{lab}})\cos(\tau_{d})&-\sin(\lambda_{\textrm{lab}})\sin(\tau_{d})&\cos(\lambda_{\textrm{lab}})\\ \sin(\tau_{d})&-\cos(\tau_{d})&0\\ \cos(\lambda_{\textrm{lab}})\cos(\tau_{d})&\cos(\lambda_{\textrm{lab}})\sin(\tau_{d})&\sin(\lambda_{\textrm{lab}})\end{pmatrix}\,.$$ (A.3) The Local Apparent Sidereal Time, $\tau_{d}$, is expressed as an angle for convenience, $$\tau_{d}=\omega_{d}(t-t_{d})+\phi_{\rm lab}\,,$$ (A.4) where $\omega_{d}=2\pi/(0.9973\,{\rm days})$ and $t_{d}=0.721$ days (making sure to measure $t$ in days since January 1). Recall that $\phi_{\textrm{lab}}$ is the longitude of the laboratory location, so it naturally sets the phase of the diurnal modulation. The frequency should be one sidereal day, but since we will use the definition of the Solar day when we construct the Earth orbital velocity, the frequency here is slightly faster than once per day. 
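Both matrices in eqs. (A.2) and (A.3) are rotations, which provides a quick numerical sanity check on any implementation. A sketch (the example latitude is an assumption, roughly that of Munich):

```python
import numpy as np

# galactic -> equatorial rotation, eq. (A.2)
R_GAL = np.array([[-0.05487556, +0.49410943, -0.86766615],
                  [-0.87343709, -0.44482963, -0.19807637],
                  [-0.48383502, +0.74698225, +0.45598378]])

def R_lab(lat, tau_d):
    """Equatorial -> laboratory rotation, eq. (A.3).

    lat   : laboratory latitude [rad]
    tau_d : Local Apparent Sidereal Time as an angle [rad]
    """
    sl, cl = np.sin(lat), np.cos(lat)
    st, ct = np.sin(tau_d), np.cos(tau_d)
    return np.array([[-sl * ct, -sl * st,  cl],
                     [      st,      -ct, 0.0],
                     [ cl * ct,  cl * st,  sl]])

# rotation matrices must be orthogonal: R @ R.T = identity
M = R_lab(np.radians(48.1), 1.0)
```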
This distinction is mostly unimportant, but can be used as a useful cross-check, ensuring that the value of the daily modulation does not drift anomalously over the course of the year. The lab velocity in total is, $$\textbf{v}_{\textrm{lab}}={\bf v}_{\rm LSR}+{\bf v}_{\rm pec}+{\bf v}_{\oplus}+{\bf v}_{\rm rot}\,.$$ (A.5) The galactic rotation velocity $\mathbf{v}_{\textrm{LSR}}$ and the Solar peculiar velocity $\mathbf{v}_{\textrm{pec}}$ are both fixed in galactic coordinates. The velocity of the local standard of rest (LSR) is defined in galactic coordinates as $(0,v_{0},0)$, where $v_{0}$ is the circular rotation speed of the Milky Way. The standard value tends to be $v_{0}\sim 220$ km s${}^{-1}$ [160], but astronomical determinations of this speed are heavily dependent on the model used for the MW rotation curve; e.g. ref. [148] quotes values of $v_{0}$ from $200\pm 20$ km s${}^{-1}$ to $279\pm 33$ km s${}^{-1}$. The Solar peculiar velocity can also be measured with kinematic data; we use the value from ref. [161] of $\textbf{v}_{\rm pec}=(11.1^{+0.69}_{-0.75},12.24^{+0.47}_{-0.47},7.25^{+0.37}_{-0.30})$ km s${}^{-1}$, with additional $\sim$ 0.5 - 2 $\textrm{ km s}^{-1}$ sized systematic uncertainties. In a direct detection experiment on Earth only the combination of these first two velocities is measurable, $${\bf v}_{\rm LSR}+{\bf v}_{\rm pec}\equiv\mathbf{v}_{\odot}=v_{\odot}(0.0477,0.9984,0.0312)\,,$$ (A.6) where $v_{\odot}=232.6$ km s${}^{-1}$. The Earth revolution velocity is calculable in galactic coordinates to be [162], $$\mathbf{v}_{\oplus}=v_{\oplus}\left(\cos[\omega_{y}(t-t_{y})]\hat{\bm{\epsilon}}_{1}+\sin[\omega_{y}(t-t_{y})]\hat{\bm{\epsilon}}_{2}\right)\,,$$ (A.7) where $\omega_{y}=2\pi/(365\,{\rm days})$, $t_{y}=$ March 20 and $v_{\oplus}=29.79$ km s${}^{-1}$. 
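The numbers in eq. (A.6) follow directly from the quoted LSR and peculiar velocities; a quick check (using the central value $v_{0}=220$ km s${}^{-1}$, as in the text):

```python
import numpy as np

v_lsr = np.array([0.0, 220.0, 0.0])     # local standard of rest [km/s]
v_pec = np.array([11.1, 12.24, 7.25])   # Solar peculiar velocity [km/s]

# fixed (non-modulating) part of eq. (A.5), in galactic coordinates
v_sun = v_lsr + v_pec
v_sun_mag = np.linalg.norm(v_sun)       # should reproduce 232.6 km/s
v_sun_hat = v_sun / v_sun_mag           # should reproduce (0.0477, 0.9984, 0.0312)
```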
The vectors are, $$\displaystyle\hat{\bm{\epsilon}}_{1}$$ $$\displaystyle=$$ $$\displaystyle(0.9940,0.1095,0.0031)\,,$$ (A.8) $$\displaystyle\hat{\bm{\epsilon}}_{2}$$ $$\displaystyle=$$ $$\displaystyle(-0.0517,0.4945,-0.8677)\,.$$ (A.9) Since we have two separate modulations: a daily one with frequency $\omega_{d}$ and an annual one with frequency $\omega_{y}$, to compress our formulae we again write both times as angles, $$\tau_{d}=\omega_{d}(t-t_{d})+\phi_{\textrm{lab}}\,,\quad\tau_{y}=\omega_{y}(t-% t_{y})\,.$$ (A.10) Finally, we have the rotational velocity of the Earth which always points east171717Apart from at the poles when it is 0. in laboratory coordinates $$\textbf{v}_{\textrm{rot}}=v_{\rm rot}\cos{\lambda_{\rm lab}}\begin{pmatrix}0\\ -1\\ 0\end{pmatrix}\,,$$ (A.11) where $v_{\rm rot}=0.47{\rm\,km\,s}^{-1}$. Putting everything together we find that we need to calculate, $$\mathbf{v}_{\mathrm{lab}}(t)=R_{\rm lab}(\tau_{d})R_{\rm gal}(\mathbf{v}_{% \odot}+\mathbf{v}_{\oplus}(\tau_{y}))+\textbf{v}_{\textrm{rot}}\,.$$ (A.12) Focusing on a particular axis in turn we can write down, $$\displaystyle v_{\rm lab}^{\mathcal{N}}$$ $$\displaystyle\equiv$$ $$\displaystyle\mathbf{v}_{\mathrm{lab}}\cdot\hat{\mathcal{N}}=\sigma_{3}\cos{% \lambda_{\textrm{lab}}}-\sin{\lambda_{\textrm{lab}}}(\sigma_{1}\cos{\tau_{d}}+% \sigma_{2}\sin{\tau_{d}})\,,$$ (A.13) $$\displaystyle v_{\rm lab}^{\mathcal{W}}$$ $$\displaystyle\equiv$$ $$\displaystyle\mathbf{v}_{\mathrm{lab}}\cdot\hat{\mathcal{W}}=-\sigma_{2}\cos{% \tau_{d}}+\sigma_{1}\sin{\tau_{d}}-v_{\textrm{rot}}\cos{\lambda_{\textrm{lab}}% }\,,$$ (A.14) $$\displaystyle v_{\rm lab}^{\mathcal{Z}}$$ $$\displaystyle\equiv$$ $$\displaystyle\mathbf{v}_{\mathrm{lab}}\cdot\hat{\mathcal{Z}}=\sigma_{3}\sin{% \lambda_{\textrm{lab}}}-\cos{\lambda_{\textrm{lab}}}(\sigma_{1}\cos{\tau_{d}}+% \sigma_{2}\sin{\tau_{d}})\,,$$ (A.15) and we have defined, $$\displaystyle\sigma_{1}(\tau_{y})$$ $$\displaystyle=$$ 
$$\displaystyle\begin{pmatrix}-0.05487556\\ +0.49410943\\ -0.86766615\end{pmatrix}\cdot\left(\mathbf{v}_{\odot}+v_{\oplus}\cos(\tau_{y})\hat{\bm{\epsilon}}_{1}+v_{\oplus}\sin(\tau_{y})\hat{\bm{\epsilon}}_{2}\right)\,,$$ (A.16) $$\displaystyle\sigma_{2}(\tau_{y})$$ $$\displaystyle=$$ $$\displaystyle\begin{pmatrix}-0.87343709\\ -0.44482963\\ -0.19807637\end{pmatrix}\cdot\left(\mathbf{v}_{\odot}+v_{\oplus}\cos(\tau_{y})\hat{\bm{\epsilon}}_{1}+v_{\oplus}\sin(\tau_{y})\hat{\bm{\epsilon}}_{2}\right)\,,$$ (A.17) $$\displaystyle\sigma_{3}(\tau_{y})$$ $$\displaystyle=$$ $$\displaystyle\begin{pmatrix}-0.48383502\\ +0.74698225\\ +0.45598378\end{pmatrix}\cdot\left(\mathbf{v}_{\odot}+v_{\oplus}\cos(\tau_{y})\hat{\bm{\epsilon}}_{1}+v_{\oplus}\sin(\tau_{y})\hat{\bm{\epsilon}}_{2}\right)\,.$$ (A.18) If we have a cavity that is primarily sensitive to only one direction ($\hat{\mathcal{N}}$, $\hat{\mathcal{W}}$, or $\hat{\mathcal{Z}}$), then the signal correction depends on the magnitude of the lab velocity and on the angle between the preferred direction and $\mathbf{v}_{\textrm{lab}}(t)$, which we write as $\cos{\theta^{\mathcal{N},\mathcal{W},\mathcal{Z}}_{\rm lab}}(t)$. Therefore to estimate the significance of a modulation in these signals we need only know the size of the modulation in $v_{\textrm{lab}}(t)$ and $\cos{\theta^{\mathcal{N},\mathcal{W},\mathcal{Z}}_{\rm lab}}(t)$ over a day or a year. Firstly, the lab speed can be computed in any coordinate system, so we do so in galactic coordinates (for simplicity we ignore the 0.2% contribution from the Earth's rotation here). It can be written as, $$v_{\rm lab}(t)=\sqrt{v_{\odot}^{2}+v_{\oplus}^{2}+2\alpha v_{\odot}v_{\oplus}\cos(\tau_{y}-\omega_{y}\bar{t}\,)}\,,$$ (A.19) where $\alpha=0.491$ and $\bar{t}=72.4$ days. Then for each axis we have $\cos{\theta^{i}_{\rm lab}}=v^{i}/v_{\rm lab}$.
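The rows appearing in eqs. (A.16)-(A.18) form, together with $\hat{\bm{\epsilon}}_{1,2}$, orthonormal sets. This is what guarantees $\sigma_{1}^{2}+\sigma_{2}^{2}+\sigma_{3}^{2}=v_{\rm lab}^{2}$ (neglecting $\mathbf{v}_{\rm rot}$) and lets the annual speed modulation collapse to the single-cosine form of eq. (A.19), whose constants $\alpha$ and $\bar{t}$ follow from projecting $\hat{\bm{\epsilon}}_{1,2}$ onto the $\mathbf{v}_{\odot}$ direction. A quick numerical sanity check, using only the numbers quoted above:

```python
import math

eps1 = (0.9940, 0.1095, 0.0031)
eps2 = (-0.0517, 0.4945, -0.8677)
rows = [(-0.05487556, 0.49410943, -0.86766615),   # sigma_1 row
        (-0.87343709, -0.44482963, -0.19807637),  # sigma_2 row
        (-0.48383502, 0.74698225, 0.45598378)]    # sigma_3 row

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# The epsilon vectors and the three sigma rows are (approximately) orthonormal
ortho_ok = (abs(dot(eps1, eps2)) < 1e-3
            and all(abs(norm(v) - 1.0) < 1e-3 for v in rows + [eps1, eps2])
            and all(abs(dot(rows[i], rows[j])) < 1e-3
                    for i in range(3) for j in range(i + 1, 3)))

# alpha and t-bar in eq. (A.19) from the projections of eps_1,2 on v_sun_hat
v_sun_hat = (0.0477, 0.9984, 0.0312)   # eq. (A.6)
alpha = math.hypot(dot(eps1, v_sun_hat), dot(eps2, v_sun_hat))
tbar = math.atan2(dot(eps2, v_sun_hat), dot(eps1, v_sun_hat)) / (2 * math.pi / 365)
print(ortho_ok, round(alpha, 2), round(tbar, 1))   # alpha ~ 0.49, tbar ~ 72 days
```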
The full formulae for these are long-winded if both daily and annual modulation are included, but we can use the fact that the daily modulation is much faster than the annual one to write a simplified description that aids our analytic treatment of the test statistic. Looking at the daily modulation first, we take $\sigma_{1,2,3}$ and $v_{\textrm{lab}}$ as constant, and we reduce each angle to the form $\cos{\theta}=c_{0}+c_{1}\cos{\left(\omega_{d}t+\phi\right)}$ in the following way, $$\displaystyle\cos{\theta_{\rm lab}^{\mathcal{N}}}$$ $$\displaystyle=$$ $$\displaystyle b_{0}\cos{\lambda_{\rm lab}}-b_{1}\sin{\lambda_{\rm lab}}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi\right)}\,,$$ (A.20) $$\displaystyle\cos{\theta_{\rm lab}^{\mathcal{W}}}$$ $$\displaystyle=$$ $$\displaystyle b_{1}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi-\pi\right)}\,,$$ (A.21) $$\displaystyle\cos{\theta_{\rm lab}^{\mathcal{Z}}}$$ $$\displaystyle=$$ $$\displaystyle b_{0}\sin{\lambda_{\rm lab}}+b_{1}\cos{\lambda_{\rm lab}}\cos{\left(\omega_{d}t+\phi_{\rm lab}+\psi\right)}\,,$$ (A.22) where for our directional experiments the only unknowns regarding the daily modulation are, $$\displaystyle b_{0}$$ $$\displaystyle=$$ $$\displaystyle\sigma_{3}/v_{\textrm{lab}}\,,$$ (A.23) $$\displaystyle b_{1}$$ $$\displaystyle=$$ $$\displaystyle\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}/v_{\textrm{lab}}\,,$$ (A.24) $$\displaystyle\psi$$ $$\displaystyle=$$ $$\displaystyle\tan^{-1}\left(\sigma_{1}/\sigma_{2}\right)-\omega_{d}t_{d}-\pi/2\,.$$ (A.25) For example, on January 1 we have $\{b_{0},b_{1},\psi\}=\{0.7589,0.6512,-3.5336\}$. Then when we use the definition $\cos{\theta}=c_{0}+c_{1}\cos{\left(\omega_{d}t+\phi\right)}$ we simply absorb the laboratory location into the experiment-specific constants $\{c_{0},\,c_{1},\,\phi\}$. The ranges of these experiment-specific values over the full year are displayed in figure 13.
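Since $b_{0}=\sigma_{3}/v_{\rm lab}$ and $b_{1}=\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}/v_{\rm lab}$ with $v_{\rm lab}^{2}\simeq\sigma_{1}^{2}+\sigma_{2}^{2}+\sigma_{3}^{2}$ (the Earth-rotation term being negligible), the two amplitudes satisfy $b_{0}^{2}+b_{1}^{2}\simeq 1$ on any day of the year. A sketch evaluating them from the quantities above, where the choice of $-78$ days from March 20 as "roughly January 1" is our own bookkeeping assumption:

```python
import math

eps1 = (0.9940, 0.1095, 0.0031)
eps2 = (-0.0517, 0.4945, -0.8677)
rows = [(-0.05487556, 0.49410943, -0.86766615),
        (-0.87343709, -0.44482963, -0.19807637),
        (-0.48383502, 0.74698225, 0.45598378)]
v_sun = (11.1, 232.24, 7.25)   # v_LSR + v_pec, km/s
v_e = 29.79                    # Earth orbital speed, km/s

def b_params(days_from_march20):
    """Daily-modulation amplitudes b0, b1 of eqs. (A.23)-(A.24)."""
    tau_y = 2 * math.pi * days_from_march20 / 365.0
    v_tot = tuple(s + v_e * (math.cos(tau_y) * a + math.sin(tau_y) * b)
                  for s, a, b in zip(v_sun, eps1, eps2))
    s1, s2, s3 = (sum(r * v for r, v in zip(row, v_tot)) for row in rows)
    v_lab = math.sqrt(s1 * s1 + s2 * s2 + s3 * s3)
    return s3 / v_lab, math.hypot(s1, s2) / v_lab

b0, b1 = b_params(-78)   # roughly January 1
# b0 ~ 0.76 and b1 ~ 0.65, close to the quoted {0.7589, 0.6512}
print(round(b0, 2), round(b1, 2), round(b0**2 + b1**2, 6))
```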
Assuming we have knowledge of $v_{\textrm{lab}}$ on a given day from the frequency dependence of the power spectrum, the daily modulation can be inverted, $$\mathbf{v}_{\odot}=v_{\textrm{lab}}R_{\textrm{gal}}^{T}\begin{pmatrix}b_{1}\sin{\left(\psi+\omega_{d}t_{d}+\pi/2\right)}\\ b_{1}\cos{\left(\psi+\omega_{d}t_{d}+\pi/2\right)}\\ b_{0}\end{pmatrix}-\mathbf{v}_{\oplus}\,.$$ (A.26) The uncertainties on each component of the velocity then depend linearly on the uncertainties of the constants. Notice that the daily modulation in only one of the cavities ($\mathcal{N}$ or $\mathcal{Z}$) is in fact needed to measure all three constants. The west-pointing experiment cannot measure $b_{0}$: the lab always rotates in the same direction that the experiment points, so the modulation in the angle is always centred on zero (the west-pointing experiment is still useful, however, since its modulation amplitude is larger than in the other two directions). For a $q$-type experiment we can only measure the square of the cosine of each angle, so there are degenerate solutions $\{b_{0},\,b_{1}\}$ and $\{-b_{0},\,-b_{1}\}$; but given that we know that the second component of $\mathbf{v}_{\odot}$ is $\sim 220$ km s${}^{-1}$, only one solution is consistent with galactic rotation. For streams we do not necessarily have a prior expectation for the velocity, so this will not be possible and there will always be multiple solutions. However the procedure is the same, $$\mathbf{v}_{\rm str}=\mathbf{v}_{\odot}+\mathbf{v}_{\oplus}-|\mathbf{v}_{\textrm{lab}}-\mathbf{v}_{\textrm{str}}|R_{\textrm{gal}}^{T}\begin{pmatrix}b_{1}\sin{\left(\psi+\omega_{d}t_{d}+\pi/2\right)}\\ b_{1}\cos{\left(\psi+\omega_{d}t_{d}+\pi/2\right)}\\ b_{0}\end{pmatrix}\,,$$ (A.27) where again the value of $|\mathbf{v}_{\textrm{lab}}-\mathbf{v}_{\textrm{str}}|$ can be independently inferred from the frequency of the feature.
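Because the components of the column vector in eq. (A.26) are just $(\sigma_{1},\sigma_{2},\sigma_{3})/v_{\rm lab}$ and $R_{\rm gal}$ is orthogonal, the inversion is an exact round trip when the daily-modulation constants are known perfectly. A sketch, assuming $R_{\rm gal}$ is the matrix whose rows are the three vectors of eqs. (A.16)-(A.18) and using an arbitrary illustrative value for $\mathbf{v}_{\oplus}$:

```python
import math

R_gal = [(-0.05487556, 0.49410943, -0.86766615),
         (-0.87343709, -0.44482963, -0.19807637),
         (-0.48383502, 0.74698225, 0.45598378)]
v_sun = (11.1, 232.24, 7.25)    # km/s
v_earth = (8.2, -13.6, 25.2)    # arbitrary illustrative v_earth(tau_y), km/s

# Forward step: sigma_i = row_i . (v_sun + v_earth), eqs. (A.16)-(A.18)
v_tot = tuple(s + e for s, e in zip(v_sun, v_earth))
sigma = [sum(r * v for r, v in zip(row, v_tot)) for row in R_gal]

# Inversion, eq. (A.26): with perfect constants the column vector is just
# sigma / v_lab, so v_lab * R_gal^T (sigma / v_lab) - v_earth = v_sun
recovered = tuple(sum(R_gal[i][j] * sigma[i] for i in range(3)) - v_earth[j]
                  for j in range(3))
print(recovered)   # recovers v_sun up to the rounding of the R_gal entries
```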
In this study we focus on daily modulations, but it will still be possible to search for annual modulations, and indeed this will always improve statistics. However this does not require directional sensitivity and was covered extensively in previous work [83, 84]. We can show the size of annually modulating features in our directional haloscopes by coarse graining over the daily modulation to consider only its range on each day of the year. We do this by defining $\cos{\theta_{\pm}}$, the maximum and minimum values of $\cos{\theta_{\rm lab}}$ within one day, where the parameters that modulate annually are taken to be constant over that day (which they approximately are), $$\displaystyle v_{\rm lab}\cos{\theta}_{\pm}^{\mathcal{N}}$$ $$\displaystyle=$$ $$\displaystyle\sigma_{3}\cos{\lambda_{\textrm{lab}}}\pm\sqrt{\sigma^{2}_{1}+\sigma^{2}_{2}}\sin{\lambda_{\textrm{lab}}}\,,$$ (A.28) $$\displaystyle v_{\rm lab}\cos{\theta}_{\pm}^{\mathcal{W}}$$ $$\displaystyle=$$ $$\displaystyle\pm\sqrt{\sigma^{2}_{1}+\sigma^{2}_{2}}\,,$$ (A.29) $$\displaystyle v_{\rm lab}\cos{\theta}_{\pm}^{\mathcal{Z}}$$ $$\displaystyle=$$ $$\displaystyle\sigma_{3}\sin{\lambda_{\textrm{lab}}}\pm\sqrt{\sigma^{2}_{1}+\sigma^{2}_{2}}\cos{\lambda_{\textrm{lab}}}\,.$$ (A.30) When focusing on the daily modulation it is sufficient to say that on a given day of the year each angle oscillates sinusoidally between $\cos\theta_{-}$ and $\cos\theta_{+}$. Each term in the above formulae is time dependent with a frequency of 1 year. We claimed that in our experiments the daily modulation is the more important effect. We display how the size of the daily modulation varies on top of the annual modulation in figure 13. We see that the amplitude of each daily modulation is larger by up to a factor of 4 for each experiment, while also varying a factor of 365 more quickly.
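The daily envelope of eqs. (A.28)-(A.30) can be traced over a year with the same ingredients as before. A sketch for the $\hat{\mathcal{Z}}$ cavity, where the laboratory latitude of 45° is purely illustrative and not taken from the text:

```python
import math

eps1 = (0.9940, 0.1095, 0.0031)
eps2 = (-0.0517, 0.4945, -0.8677)
rows = [(-0.05487556, 0.49410943, -0.86766615),
        (-0.87343709, -0.44482963, -0.19807637),
        (-0.48383502, 0.74698225, 0.45598378)]
v_sun = (11.1, 232.24, 7.25)   # km/s
v_e = 29.79
lam = math.radians(45.0)       # illustrative laboratory latitude

def envelope_z(day):
    """(cos theta_-, cos theta_+) for the Z cavity on a given day, eq. (A.30)."""
    tau_y = 2 * math.pi * day / 365.0
    v_tot = tuple(s + v_e * (math.cos(tau_y) * a + math.sin(tau_y) * b)
                  for s, a, b in zip(v_sun, eps1, eps2))
    s1, s2, s3 = (sum(r * v for r, v in zip(row, v_tot)) for row in rows)
    v_lab = math.sqrt(s1 * s1 + s2 * s2 + s3 * s3)
    amp = math.hypot(s1, s2)
    return ((s3 * math.sin(lam) - amp * math.cos(lam)) / v_lab,
            (s3 * math.sin(lam) + amp * math.cos(lam)) / v_lab)

env = [envelope_z(d) for d in range(365)]
daily_amp = max(hi - lo for lo, hi in env) / 2   # largest daily amplitude
```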
One can also observe that the ranges of the modulation parameters are rather small and would only induce an error of 25% if left constant over the whole year (and we generally only use times shorter than a few days). For times much longer than this (e.g. figure 4) we account for the full calculation, including diurnal and annual modulations. Appendix B Analytic formulae for the test statistic In eq. (4.13) we claimed that the integrals over frequency and time in the test statistic can be written analytically for a Maxwellian distribution (describing the SHM or a stream). In the interest of readability we have collected them here. B.1 Linear experiments First, for the $\ell$-type experiments we need to evaluate $$\displaystyle D_{\ell}\approx$$ $$\displaystyle\,\frac{\Delta\omega}{\Delta t}\int_{0}^{t_{\rm obs}}{\rm d}t\int_{m_{a}}^{\infty}{\rm d}\omega~{}\left(\frac{P_{0}f(\omega)\,\zeta_{\ell}(\omega)\,\cos[\theta_{\rm lab}(t)]\,\mathpzc{g}_{\ell}}{\sigma_{N}}\right)^{2}$$ (B.1) $$\displaystyle=$$ $$\displaystyle\,2\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{\ell}^{2}\,\mathcal{I}^{\ell}_{\omega}\,\mathcal{I}^{\ell}_{t}\,,$$ (B.2) where we have substituted $\sigma_{N}=k_{B}T_{\rm sys}\sqrt{\Delta\omega/2\pi\Delta t}$. We simplify the notation here (and for similar expressions later) by separating the time and frequency integrals.
Recalling that we write the daily modulation of the lab velocity angle as $\cos{\theta_{\rm lab}(t)}=c_{0}+c_{1}\cos{(\omega_{d}t+\phi)}$, we can integrate over this as well as over a Maxwellian $f(\omega)$ to give the following, $$\displaystyle\mathcal{I}^{\ell}_{\omega}=$$ $$\displaystyle\frac{\sqrt{\pi}\left(2v_{\mathrm{lab}}^{2}-\sigma_{v}^{2}\right)\text{erf}\left(\frac{v_{\mathrm{lab}}}{\sigma_{v}}\right)+2\sigma_{v}v_{\mathrm{lab}}e^{-\frac{v_{\mathrm{lab}}^{2}}{\sigma_{v}^{2}}}}{4\pi v_{\mathrm{lab}}\sigma_{v}}\cdot\frac{1}{m_{a}}\,,$$ (B.3) $$\displaystyle\mathcal{I}^{\ell}_{t}=$$ $$\displaystyle\Bigl{[}{c_{0}}^{2}t+\frac{{c_{1}}^{2}t}{2}+\frac{2{c_{0}}{c_{1}}\sin(\omega_{d}t+\phi)}{\omega_{d}}+\frac{{c_{1}}^{2}\sin(\omega_{d}t+\phi)\cos(\omega_{d}t+\phi)}{2\omega_{d}}+\frac{2{c_{0}}^{2}\phi+{c_{1}}^{2}\phi}{2\omega_{d}}\Bigr{]}_{0}^{t_{\rm obs}}\,.$$ The factor $1/m_{a}$ comes from the fact that we integrate the square of the distribution $f(\omega)$; to see this, consider $\int_{m_{a}}^{\infty}{\rm d}\omega f(\omega)^{2}=\int_{m_{a}}^{\infty}{\rm d}\omega\left(\frac{{\rm d}v}{{\rm d}\omega}f(v)\right)^{2}=\frac{1}{m_{a}}\int_{0}^{\infty}{\rm d}v\,\frac{f(v)^{2}}{v}$. In order to write the integral over time starting at 0 we need to ensure that the phase is defined according to the same definition of time. Throughout we assume our origin is January 1, where $\psi=-3.5336$ (see eq. (A.25)).
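The closed form for $\mathcal{I}^{\ell}_{t}$ is simply the antiderivative of $(c_{0}+c_{1}\cos(\omega_{d}t+\phi))^{2}$; the $\phi$-only terms are constants that cancel between the limits. A quick check against brute-force quadrature, with arbitrary illustrative values of $c_{0}$, $c_{1}$, $\phi$ and $t_{\rm obs}$:

```python
import math

w_d = 2 * math.pi / 86164.0   # sidereal day, rad/s
c0, c1, phi = 0.3, 0.6, 1.1   # illustrative daily-modulation constants
t_obs = 2.5 * 86164.0         # illustrative observation time, s

def I_t_closed(t):
    """Antiderivative of (c0 + c1*cos(w_d*t + phi))**2."""
    x = w_d * t + phi
    return (c0**2 * t + c1**2 * t / 2
            + 2 * c0 * c1 * math.sin(x) / w_d
            + c1**2 * math.sin(x) * math.cos(x) / (2 * w_d))

closed = I_t_closed(t_obs) - I_t_closed(0.0)

# brute-force quadrature (midpoint rule)
n = 200000
dt = t_obs / n
numeric = sum((c0 + c1 * math.cos(w_d * (i + 0.5) * dt + phi))**2
              for i in range(n)) * dt

print(abs(closed - numeric) / closed < 1e-6)   # True
```

The same numbers also illustrate the multi-day approximation quoted below eq. (B.3): `closed / t_obs` sits close to $c_{0}^{2}+c_{1}^{2}/2$ once $t_{\rm obs}$ spans a few days.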
For daily modulations with a total measurement time $t_{\rm obs}$ spanning several days we can simplify the time integral to, $$\displaystyle\mathcal{I}^{\ell}_{t}$$ $$\displaystyle\approx\left({c_{0}}^{2}+\frac{1}{2}{c_{1}}^{2}\right)t_{\rm obs}\,.$$ (B.4) Also, if we have a stream we can approximate further, as long as $\sigma_{v}\equiv\sigma_{\rm str}\ll|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|$, giving instead $$\displaystyle\mathcal{I}^{\ell}_{\omega}$$ $$\displaystyle\approx\frac{\left(2|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|^{2}-\sigma_{\rm str}^{2}\right)}{4\sqrt{\pi}|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|\sigma_{\rm str}}\cdot\frac{1}{m_{a}}\,.$$ (B.5) B.2 Quadratic experiments Next, we consider the quadratic case. We can write the test statistic in the same way, except that the directional correction has an unmodulated ‘offset’ term and a modulated term, which means the full test statistic has to be written as, $$\displaystyle D_{q}=2\pi\left(\frac{P_{0}}{k_{B}T_{\rm sys}}\right)^{2}\mathpzc{g}_{q}^{2}\,\left(\mathcal{I}^{q1}_{\omega}\,\mathcal{I}^{q1}_{t}+\mathcal{I}^{q2}_{\omega}\,\mathcal{I}^{q2}_{t}+\mathcal{I}^{q12}_{\omega}\,\mathcal{I}^{q12}_{t}\right)\,.$$ (B.6) We use the label ‘$q1$’ for the integrals of the offset term $\zeta_{q1}(\omega)$ and ‘$q2$’ for the integrals of the modulation term $\zeta_{q2}(\omega)\cos{\theta_{\rm lab}}$. Since we integrate over the square of the directional correction we need to include the mixing term, which we label ‘$q12$’.
First, for the offset we have, $$\displaystyle\mathcal{I}_{\omega}^{q1}$$ $$\displaystyle=\left(\frac{\sigma_{v}^{2}}{v_{\rm lab}}\right)^{2}\mathcal{I}_{\omega}^{\ell}\,,$$ (B.7) $$\displaystyle\mathcal{I}_{t}^{q1}$$ $$\displaystyle=t_{\rm obs}\,.$$ (B.8) Then for the modulation, $$\displaystyle\mathcal{I}^{q2}_{\omega}=$$ $$\displaystyle\frac{\sqrt{\pi}\left(-4\sigma_{v}^{2}v_{\mathrm{lab}}^{2}+4v_{\mathrm{lab}}^{4}+3\sigma_{v}^{4}\right)\text{erf}\left(\frac{v_{\mathrm{lab}}}{\sigma_{v}}\right)+e^{-\frac{v_{\mathrm{lab}}^{2}}{\sigma_{v}^{2}}}\left(4\sigma_{v}v_{\mathrm{lab}}^{3}-6\sigma_{v}^{3}v_{\mathrm{lab}}\right)}{8\pi\sigma_{v}v_{\mathrm{lab}}}\cdot\frac{1}{m_{a}}\,,$$ (B.9) $$\displaystyle\mathcal{I}^{q2}_{t}=$$ $$\displaystyle\Bigg{[}\left(\frac{3c_{1}^{4}}{8}+3c_{1}^{2}c_{0}^{2}+c_{0}^{4}\right)t$$ (B.10) $$\displaystyle+\left(\frac{4c_{1}c_{0}^{3}}{\omega_{d}}+\frac{3c_{1}^{3}c_{0}}{\omega_{d}}\right)\sin(t\omega_{d}+\phi)+\left(\frac{3c_{1}^{2}c_{0}^{2}}{2\omega_{d}}+\frac{c_{1}^{4}}{4\omega_{d}}\right)\sin(2(t\omega_{d}+\phi))$$ (B.11) $$\displaystyle+\frac{c_{1}^{3}c_{0}}{3\omega_{d}}\sin(3(t\omega_{d}+\phi))+\frac{c_{1}^{4}}{32\omega_{d}}\sin(4(t\omega_{d}+\phi))$$ (B.12) $$\displaystyle+\frac{c_{0}^{4}\phi}{\omega_{d}}+\frac{3c_{1}^{2}c_{0}^{2}\phi}{\omega_{d}}+\frac{3c_{1}^{4}\phi}{8\omega_{d}}\Bigg{]}_{0}^{t_{\rm obs}}\,.$$ (B.13) Then finally the mixing term can be written in terms of integrals already calculated, $$\displaystyle\mathcal{I}_{\omega}^{q12}\mathcal{I}_{t}^{q12}=$$ $$\displaystyle\,2\int_{0}^{t_{\rm obs}}\textrm{d}t\,\int_{m_{a}}^{\infty}\textrm{d}\omega~{}\zeta_{q1}(\omega)\zeta_{q2}(\omega)\left(f(\omega)\cos(\theta_{\rm lab})\right)^{2}$$ (B.14) $$\displaystyle=$$ $$\displaystyle~{}2\frac{\sigma_{v}^{2}}{v_{\mathrm{lab}}^{2}}~{}\mathcal{I}_{\omega}^{q2}~{}\mathcal{I}^{\ell}_{t}\,.$$ (B.15) Two of these integrals can be simplified in a similar way to before.
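The secular (linear-in-$t$) part of $\mathcal{I}^{q2}_{t}$ in eq. (B.10) follows from the period average $\langle(c_{0}+c_{1}\cos x)^{4}\rangle=c_{0}^{4}+3c_{0}^{2}c_{1}^{2}+\tfrac{3}{8}c_{1}^{4}$, which is easy to confirm numerically (the values of $c_{0}$ and $c_{1}$ are illustrative):

```python
import math

c0, c1 = 0.3, 0.6   # illustrative daily-modulation constants

# Numerical average of (c0 + c1*cos(x))^4 over one full period
n = 100000
avg = sum((c0 + c1 * math.cos(2 * math.pi * (i + 0.5) / n))**4
          for i in range(n)) / n

# Closed-form average: the coefficient of t_obs in eqs. (B.10) and (B.16)
closed = c0**4 + 3 * c0**2 * c1**2 + 3 * c1**4 / 8
print(abs(avg - closed) < 1e-9)   # True
```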
If we approximate over several days we can write down, $$\displaystyle\mathcal{I}^{q2}_{t}\approx$$ $$\displaystyle\left(\frac{3c_{1}^{4}}{8}+3c_{1}^{2}c_{0}^{2}+c_{0}^{4}\right)t_{\rm obs}\,.$$ (B.16) And if we have a low-dispersion stream we can use, $$\displaystyle\mathcal{I}^{q2}_{\omega}\approx$$ $$\displaystyle\,\frac{\sqrt{\pi}\left(-4\sigma_{\rm str}^{2}|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|^{2}+4|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|^{4}+3\sigma_{\rm str}^{4}\right)}{8\pi\sigma_{\rm str}|\mathbf{v}_{\mathrm{lab}}-\mathbf{v}_{\mathrm{str}}|}\,.$$ (B.17) References [1] R. D. Peccei and H. R. Quinn, CP Conservation in the presence of instantons, Phys. Rev. Lett. 38 (1977) 1440–1443. [2] J. E. Kim and G. Carosi, Axions and the Strong CP Problem, Rev. Mod. Phys. 82 (2010) 557–602, [0807.3125]. [3] D. J. E. Marsh, Axion cosmology, Phys. Rept. 643 (2016) 1–79, [1510.07633]. [4] J. Preskill, M. B. Wise and F. Wilczek, Cosmology of the invisible axion, Phys. Lett. B 120 (1983) 127–132. [5] M. Dine and W. Fischler, The not so harmless axion, Phys. Lett. B 120 (1983) 137–141. [6] L. F. Abbott and P. Sikivie, A cosmological bound on the invisible axion, Phys. Lett. B 120 (1983) 133–136. [7] O. Wantz and E. P. S. Shellard, Axion cosmology revisited, Phys. Rev. D 82 (2010) 123508, [0910.1066]. [8] R. L. Davis, Cosmic axions from cosmic strings, Phys. Lett. B 180 (1986) 225–230. [9] T. Hiramatsu, M. Kawasaki, K. Saikawa and T. Sekiguchi, Production of dark matter axions from collapse of string-wall systems, Phys. Rev. D 85 (2012) 105020, [1202.5851]. [10] T. Hiramatsu, M. Kawasaki, K. Saikawa and T. Sekiguchi, Axion cosmology with long-lived domain walls, JCAP 1301 (2013) 001, [1207.3166]. [11] M. Kawasaki, K. Saikawa and T. Sekiguchi, Axion dark matter from topological defects, Phys. Rev. D 91 (2015) 065014, [1412.0789]. [12] M. Gorghetto, E. Hardy and G. Villadoro, Axions from Strings: the Attractive Solution, 1806.04677. [13] C. 
J. Hogan and M. J. Rees, Axion miniclusters, Phys. Lett. B 205 (1988) 228–230. [14] E. W. Kolb and I. I. Tkachev, Axion miniclusters and Bose stars, Phys. Rev. Lett. 71 (1993) 3051–3054, [hep-ph/9303313]. [15] E. W. Kolb and I. I. Tkachev, Nonlinear axion dynamics and formation of cosmological pseudosolitons, Phys. Rev. D 49 (1994) 5040–5051, [astro-ph/9311037]. [16] E. W. Kolb and I. I. Tkachev, Large amplitude isothermal fluctuations and high density dark matter clumps, Phys. Rev. D 50 (1994) 769–773, [astro-ph/9403011]. [17] E. W. Kolb and I. I. Tkachev, Femtolensing and picolensing by axion miniclusters, Astrophys. J. 460 (1996) L25–L28, [astro-ph/9510043]. [18] V. S. Berezinsky, V. I. Dokuchaev and Yu. N. Eroshenko, Formation and internal structure of superdense dark matter clumps and ultracompact minihaloes, JCAP 1311 (2013) 059, [1308.6742]. [19] J. Enander, A. Pargner and T. Schwetz, Axion minicluster power spectrum and mass function, 1708.04466. [20] E. Braaten, A. Mohapatra and H. Zhang, Dense axion stars, Phys. Rev. Lett. 117 (2016) 121801, [1512.00108]. [21] J. Eby, P. Suranyi and L. C. R. Wijewardhana, The lifetime of axion stars, Mod. Phys. Lett. A 31 (2016) 1650090, [1512.01709]. [22] P.-H. Chavanis, Phase transitions between dilute and dense axion stars, 1710.06268. [23] L. Visinelli, S. Baum, J. Redondo, K. Freese and F. Wilczek, Dilute and dense axion stars, Phys. Lett. B 777 (2018) 64–72, [1710.08910]. [24] M. Colpi, S. L. Shapiro and I. Wasserman, Boson stars: gravitational equilibria of selfinteracting scalar fields, Phys. Rev. Lett. 57 (1986) 2485–2488. [25] D. G. Levkov, A. G. Panin and I. I. Tkachev, Relativistic axions from collapsing Bose stars, Phys. Rev. Lett. 118 (2017) 011301, [1609.03611]. [26] S. Davidson and T. Schwetz, Rotating drops of axion dark matter, Phys. Rev. D 93 (2016) 123509, [1603.04249]. [27] P. Tinyakov, I. Tkachev and K. 
Zioutas, Tidal streams from axion miniclusters and direct axion searches, JCAP 1601 (2016) 035, [1512.02884]. [28] V. I. Dokuchaev, Yu. N. Eroshenko and I. I. Tkachev, Destruction of axion miniclusters in the Galaxy, J. Exp. Theor. Phys. 125 (2017) 434–442, [1710.09586]. [29] A. Iwazaki, Axion stars and fast radio bursts, Phys. Rev. D 91 (2015) 023008, [1410.4323]. [30] I. I. Tkachev, Fast radio bursts and axion miniclusters, JETP Lett. 101 (2015) 1–6, [1411.3900]. [31] A. Iwazaki, Fast radio bursts from axion stars, 1412.7825. [32] S. Raby, Axion star collisions with Neutron stars and Fast Radio Bursts, Phys. Rev. D94 (2016) 103004, [1609.01694]. [33] J. B. Muñoz, E. D. Kovetz, L. Dai and M. Kamionkowski, Lensing of fast radio bursts as a probe of compact dark matter, Phys. Rev. Lett. 117 (2016) 091301, [1605.00008]. [34] A. Iwazaki, Axion stars and repeating fast radio bursts with finite bandwidths, 1707.04827. [35] M. S. Pshirkov, May axion clusters be sources of fast radio bursts?, Int. J. Mod. Phys. D26 (2017) 1750068, [1609.09658]. [36] M. Fairbairn, D. J. E. Marsh and J. Quevillon, Searching for the QCD axion with gravitational microlensing, Phys. Rev. Lett. 119 (2017) 021101, [1701.04787]. [37] M. Fairbairn, D. J. E. Marsh, J. Quevillon and S. Rozier, Structure formation and microlensing with axion miniclusters, Phys. Rev. D 97 (2018) 083502, [1707.03310]. [38] CAST collaboration, K. Zioutas et al., First results from the CERN Axion Solar Telescope (CAST), Phys. Rev. Lett. 94 (2005) 121301, [hep-ex/0411033]. [39] E. Armengaud et al., Conceptual design of the International Axion Observatory (IAXO), JINST 9 (2014) T05002, [1401.3233]. [40] K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman and H. N. Nelson, Proposed experiment to produce and detect light pseudoscalars, Phys. Rev. Lett. 59 (1987) 759–762. [41] R. Bähre et al., Any Light Particle Search II — Technical design report, JINST 8 (2013) T09001, [1302.5647]. [42] ADMX collaboration, S. J. 
Asztalos et al., A SQUID-based microwave cavity search for dark-matter axions, Phys. Rev. Lett. 104 (2010) 041301, [0910.5914]. [43] ADMX collaboration, N. Du et al., A search for invisible axion dark matter with the Axion Dark Matter Experiment, Phys. Rev. Lett. 120 (2018) 151301, [1804.05750]. [44] B. M. Brubaker et al., First results from a microwave cavity axion search at 24 $\mu$eV, Phys. Rev. Lett. 118 (2017) 061302, [1610.02580]. [45] N. M. Rapidis, Application of the bead perturbation technique to a study of a tunable 5 GHz annular cavity, 2017, 1708.04276. [46] B. M. Brubaker, L. Zhong, S. K. Lamoreaux, K. W. Lehnert and K. A. van Bibber, HAYSTAC axion search analysis procedure, Phys. Rev. D 96 (2017) 123008, [1706.08388]. [47] L. Zhong, B. M. Brubaker, S. B. Cahn and S. K. Lamoreaux, Recent technical improvements to the HAYSTAC experiment, 2017, 1706.03676. [48] B. M. Brubaker, First results from the HAYSTAC axion search, Ph.D. thesis, 2018. 1801.00835. [49] W. Chung, CULTASK, The Coldest Axion Experiment at CAPP/IBS in Korea, PoS CORFU2015 (2016) 047. [50] S. Lee, Development of a data acquisition software for the CULTASK experiment, J. Phys. Conf. Ser. 898 (2017) 032035. [51] W. Chung, Launching axion experiment at CAPP/IBS in Korea, in Proceedings, 12th Patras Workshop on Axions, WIMPs and WISPs (PATRAS 2016): Jeju Island, South Korea, June 20-24, 2016, pp. 30–34, 2017, DOI. [52] G. Rybka, A. Wagner, A. Brill, K. Ramos, R. Percival and K. Patel, Search for dark matter axions with the Orpheus experiment, Phys. Rev. D91 (2015) 011701, [1403.3121]. [53] B. T. McAllister, G. Flower, E. N. Ivanov, M. Goryachev, J. Bourhill and M. E. Tobar, The ORGAN Experiment: An axion haloscope above 15 GHz, Phys. Dark Univ. 18 (2017) 67–72, [1706.00209]. [54] B. T. McAllister, G. Flower, L. E. Tobar and M. E. Tobar, Tunable supermode dielectric resonators for axion dark-matter haloscopes, Phys. Rev. Applied 9 (2018) 014028, [1705.06028]. [55] A. A. 
Melcon et al., Axion searches with microwave filters: the RADES project, 1803.01243. [56] MADMAX Working Group collaboration, A. Caldwell, G. Dvali, B. Majorovits, A. Millar, G. Raffelt, J. Redondo et al., Dielectric haloscopes: a new way to detect axion dark matter, Phys. Rev. Lett. 118 (2017) 091801, [1611.05865]. [57] MADMAX interest Group collaboration, P. Brun et al., “A new experimental approach to probe QCD axion dark matter in the mass range above 40 micro eV.” https://www.mpp.mpg.de/fileadmin/user_upload/Forschung/MADMAX/madmax_white_paper.pdf, 2017. [58] A. J. Millar, G. G. Raffelt, J. Redondo and F. D. Steffen, Dielectric haloscopes to search for axion dark matter: theoretical foundations, JCAP 1701 (2017) 061, [1612.07057]. [59] DESY, “BRASS: Broadband Radiometric Axion SearcheS.” http://www.iexp.uni-hamburg.de/groups/astroparticle/brass/brassweb.htm, 2018. [60] D. Horns, J. Jaeckel, A. Lindner, A. Lobanov, J. Redondo and A. Ringwald, Searching for WISPy cold dark matter with a dish antenna, JCAP 1304 (2013) 016, [1212.2970]. [61] J. Suzuki, T. Horie, Y. Inoue and M. Minowa, Experimental search for hidden photon CDM in the eV mass range with a dish antenna, JCAP 1509 (2015) 042, [1504.00118]. [62] J. Jaeckel and S. Knirck, Directional resolution of dish antenna experiments to search for WISPy dark matter, JCAP 1601 (2016) 005, [1509.00371]. [63] S. Knirck, T. Yamazaki, Y. Okesaku, S. Asai, T. Idehara and T. Inada, First results from a hidden photon dark matter search in the meV sector using a plane-parabolic mirror system, 1806.05120. [64] P. Sikivie, N. Sullivan and D. B. Tanner, Proposal for axion dark matter detection using an LC circuit, Phys. Rev. Lett. 112 (2014) 131301, [1310.8545]. [65] Y. Kahn, B. R. Safdi and J. Thaler, Broadband and resonant approaches to axion dark matter detection, Phys. Rev. Lett. 117 (2016) 141801, [1602.01086]. [66] M. Silva-Feaver et al., Design overview of DM Radio Pathfinder experiment, IEEE Trans. Appl. Supercond. 
27 (2016) 1400204, [1610.09344]. [67] B. T. McAllister, M. Goryachev, J. Bourhill, E. N. Ivanov and M. E. Tobar, Broadband axion dark matter haloscopes via electric field sensing, 1803.07755. [68] D. Budker, P. W. Graham, M. Ledbetter, S. Rajendran and A. Sushkov, Proposal for a Cosmic Axion Spin Precession Experiment (CASPEr), Phys. Rev. X 4 (2014) 021030, [1306.6089]. [69] P. W. Graham and S. Rajendran, New observables for direct detection of axion dark matter, Phys. Rev. D 88 (2013) 035023, [1306.6088]. [70] R. Barbieri, C. Braggio, G. Carugno, C. S. Gallo, A. Lombardi, A. Ortolan et al., Searching for galactic axions through magnetized media: the QUAX proposal, Phys. Dark Univ. 15 (2017) 135–141, [1606.02201]. [71] N. Crescini, C. Braggio, G. Carugno, P. Falferi, A. Ortolan and G. Ruoso, The QUAX-g${}_{p}$ g${}_{s}$ experiment to search for monopole-dipole axion interaction, Nucl. Instrum. Meth. A842 (2017) 109–113, [1606.04751]. [72] G. Ruoso, A. Lombardi, A. Ortolan, R. Pengo, C. Braggio, G. Carugno et al., The QUAX proposal: a search of galactic axion with magnetic materials, J. Phys. Conf. Ser. 718 (2016) 042051, [1511.09461]. [73] N. Crescini et al., Operation of a ferromagnetic axion haloscope at $m_{a}=58\,\mu$eV, 1806.00310. [74] I. G. Irastorza and J. Redondo, New experimental approaches in the search for axion-like particles, 1801.08127. [75] L. Krauss, J. Moody, F. Wilczek and D. E. Morris, Calculations for cosmic axion detection, Phys. Rev. Lett. 55 (1985) 1797. [76] ADMX collaboration, L. D. Duffy, P. Sikivie, D. B. Tanner, S. J. Asztalos, C. Hagmann, D. Kinion et al., A high resolution search for dark-matter axions, Phys. Rev. D 74 (2006) 012006, [astro-ph/0603108]. [77] J. Hoskins et al., Modulation sensitive search for nonvirialized dark-matter axions, Phys. Rev. D 94 (2016) 082001. [78] J. Hoskins et al., A search for non-virialized axionic dark matter, Phys. Rev. D 84 (2011) 121302, [1109.4128]. [79] J. V. 
Sloan et al., Limits on axion–photon coupling or on local axion density: Dependence on models of the Milky Way’s dark halo, Phys. Dark Univ. 14 (2016) 95–102. [80] J. D. Vergados and Y. Semertzidis, Axionic dark matter signatures in various halo models, Nucl. Phys. B 915 (2017) 10–18, [1601.04765]. [81] M. S. Turner, Periodic signatures for the detection of cosmic axions, Phys. Rev. D 42 (1990) 3572–3575. [82] F.-S. Ling, P. Sikivie and S. Wick, Diurnal and annual modulation of cold dark matter signals, Phys. Rev. D 70 (2004) 123503, [astro-ph/0405231]. [83] C. A. J. O’Hare and A. M. Green, Axion astronomy with microwave cavity experiments, Phys. Rev. D95 (2017) 063017, [1701.03118]. [84] J. W. Foster, N. L. Rodd and B. R. Safdi, Revealing the dark matter halo with axion direct detection, 1711.10489. [85] I. G. Irastorza and J. A. Garcia, Direct detection of dark matter axions with directional sensitivity, JCAP 1210 (2012) 022, [1207.6129]. [86] F. Mayet et al., A review of the discovery reach of directional dark matter detection, Phys. Rept. 627 (2016) 1–49, [1602.03781]. [87] S. K. Lee and A. H. G. Peter, Probing the local velocity distribution of WIMP dark matter with directional detectors, JCAP 1204 (2012) 029, [1202.5035]. [88] B. J. Kavanagh and C. A. J. O’Hare, Reconstructing the three-dimensional local dark matter velocity distribution, Phys. Rev. D 94 (2016) 123009, [1609.08630]. [89] M. Vogelsberger and S. D. M. White, Streams and caustics: the fine-grained structure of LCDM haloes, Mon. Not. Roy. Astron. Soc. 413 (2011) 1419, [1002.3162]. [90] A. Schneider, L. Krauss and B. Moore, Impact of dark matter microhalos on signatures for direct and indirect detection, Phys. Rev. D 82 (2010) 063525, [1004.5432]. [91] S. Hofmann, D. J. Schwarz and H. Stoecker, Damping scales of neutralino cold dark matter, Phys. Rev. D 64 (2001) 083507, [astro-ph/0104173]. [92] A. M. Green, S. Hofmann and D. J. 
Schwarz, The power spectrum of SUSY - CDM on sub-galactic scales, Mon. Not. Roy. Astron. Soc. 353 (2004) L23, [astro-ph/0309621]. [93] A. M. Green, S. Hofmann and D. J. Schwarz, The first wimpy halos, JCAP 0508 (2005) 003, [astro-ph/0503387]. [94] J. Diemand, B. Moore and J. Stadel, Earth-mass dark-matter haloes as the first structures in the early Universe, Nature 433 (2005) 389–391, [astro-ph/0501589]. [95] J. E. Kim, Weak interaction singlet and Strong CP invariance, Phys. Rev. Lett. 43 (1979) 103. [96] M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Instantons in nonperturbative QCD vacuum, Nucl. Phys. B 165 (1980) 45. [97] A. M. Green, Astrophysical uncertainties on direct detection experiments, Mod. Phys. Lett. A 27 (2012) 1230004, [1112.0524]. [98] T. Piffl et al., The RAVE survey: the Galactic escape speed and the mass of the Milky Way, Astron. Astrophys. 562 (2014) A91, [1309.4293]. [99] M. Vogelsberger, A. Helmi, V. Springel, S. D. M. White, J. Wang, C. S. Frenk et al., Phase-space structure in the local dark matter distribution and its signature in direct detection experiments, Mon. Not. Roy. Astron. Soc. 395 (2009) 797–811, [0812.0362]. [100] M. Maciejewski, M. Vogelsberger, S. D. M. White and V. Springel, Bound and unbound substructures in Galaxy-scale dark matter haloes, Mon. Not. Roy. Astron. Soc. 415 (2011) 2475, [1010.2491]. [101] Y.-Y. Mao, L. E. Strigari, R. H. Wechsler, H.-Y. Wu and O. Hahn, Halo-to-halo similarity and scatter in the velocity distribution of dark matter, Astrophys. J. 764 (2013) 35, [1210.2721]. [102] N. Bozorgnia, F. Calore, M. Schaller, M. Lovell, G. Bertone, C. S. Frenk et al., Simulated Milky Way analogues: implications for dark matter direct searches, JCAP 1605 (2016) 024, [1601.04707]. [103] J. D. Sloane, M. R. Buckley, A. M. Brooks and F. Governato, Assessing astrophysical uncertainties in direct detection with Galaxy simulations, Astrophys. J. 831 (2016) 93, [1601.05402]. [104] C. Kelso, C. Savage, M. Valluri, K. 
[orcid=0000-0002-8157-3534] \cormark[1] \creditConceptualization, Software \& Investigation, synthesis \& characterization, Writing [orcid=0000-0003-3719-8535] \creditConceptualization, Software \& Investigation [orcid=0000-0002-1775-6598] \creditInvestigation, Writing - review \& editing [orcid=0000-0002-8572-1485] \creditInvestigation and Supervision [orcid=0000-0003-0418-5375] \creditConceptualization, Funding acquisition, Investigation, Supervision, Writing - review \& editing [orcid=0000-0003-2168-853X] \fnmark[1] \creditConceptualization, Funding acquisition, Investigation, Methodology, Supervision, Writing - review \& editing \cortext [cor1]Corresponding author \fntext[fn1]On sabbatical leave at ICF-UNAM from CIICAP-UAEM Optimization of wide-band quasi-omnidirectional 1-D photonic structures V. Castillo-Gallardo ing.victorcg@gmail.com Centro de Investigación en Ingeniería y Ciencias Aplicadas, Universidad Autónoma del Estado de Morelos, Av. Universidad 1001, Col. Chamilpa, Cuernavaca, Morelos 62209, México.    Luis Eduardo Puente-Díaz fmatpuente@gmail.com Instituto de Ciencias Físicas, Universidad Nacional Autónoma de México, Av. Universidad S/N, Col. Chamilpa, 62210 Cuernavaca, Morelos, México. Facultad de Ciencias Físico Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Av. Francisco J. Múgica S/N 58030, Morelia, Mich., México.    D. Ariza-Flores david1cool@gmail.com CONACyT-Universidad Autónoma de San Luis Potosí, Karakorum 1470, Lomas 4ta Secc, San Luis Potosí, S.L.P., 78210, México.    Héctor Pérez-Aguilar hiperezag@yahoo.com    W. Luis Mochán mochan@fis.unam.mx    V. Agarwal vagarwal@uaem.mx Abstract We have designed, optimized, fabricated and characterized highly reflective quasi-omnidirectional (angular range of $0-60^{\circ}$) multilayered structures with a wide spectral range. 
Two techniques, chirping (a continuous change in thicknesses) and stacking of Bragg-type sub-structures, have been used to enhance the reflectance with minimum thickness for a given pair of refractive indices. Numerical calculations were carried out employing the transfer matrix method and we optimized the design parameters to obtain maximal reflectance averaged over different spectral ranges for all angles. We fabricated some of the optimized structures with porous silicon dielectric multilayers with low refractive index contrast and compared their measured optical properties with the calculations. Two chirped structures with thicknesses 21.6 $\mu$m and 60.4 $\mu$m, resulting in quasi-omnidirectional mirrors with bandwidths of 360 nm and 1800 nm, centered at 1160 nm and 1925 nm, respectively, have been demonstrated. In addition, we fabricated a stacked sub-Bragg mirror structure with a quasi-omnidirectional bandwidth of 1800 nm (centered at 1850 nm) and a thickness of 41.5 $\mu$m, which is almost two thirds of the thickness of the chirped structure. Thus, our techniques allowed us to obtain relatively thin quasi-omnidirectional mirrors with wide bands over different wavelength ranges. Our analysis techniques can be used for the optimization of the reflectance not only in multilayered PS systems with different refractive index contrasts but also for systems with other types of materials with low refractive index contrast. The present study could also be useful for obtaining omnidirectional dielectric mirrors in large spectral regions using different materials, flat focusing reflectors, thermal regulators, or, if defects are included, filters or remote chemical/biosensors with a wide angle-independent response. keywords: Optimized reflectance \sepOmnidirectional structures \sepChirped structures \sepStacked Bragg mirror structures 1 Introduction Photonic crystal (PC)-based structures have been extensively investigated due to their light-controlling properties [1, 2]. 
Photons entering a PC interact with its periodically varying dielectric constant and their energies are consequently organized into photonic bands. In analogy to electronic bands within a crystal, propagation is forbidden if the energy lies within certain regions known as photonic band gaps (PBG’s). Thus, the PC behaves as a mirror if illuminated by light with frequencies within a PBG. For several applications it is useful to fabricate omnidirectional mirrors with an absolute band gap for all angles of incidence. The use of metallic mirrors is limited due to their relatively large absorption at the visible (Vis) and near infrared (NIR) frequencies. An attractive alternative consists of low-absorption dielectric structures, which may be designed for specific frequency ranges. One-dimensional (1D) and two-dimensional (2D) PCs have applications in optoelectronics, optical telecommunications and computing, laser technology [3, 4], and radiative cooling applications [5]. The simplest 1D-PC is composed of a finite number of periodically alternating layers of high $n_{H}$ and low $n_{L}$ refractive indices, with corresponding thicknesses $d_{H}$ and $d_{L}$ chosen so that their optical thicknesses correspond to a quarter of a wavelength, $n_{H}d_{H}=n_{L}d_{L}=\lambda_{0}/4$, at a nominal wavelength $\lambda_{0}$. This yields a Bragg mirror (BM) which might have a large PBG, including an omnidirectional gap, if the contrast $n_{H}/n_{L}$ and the number of periods are large enough. Mirrors with high reflectance within a wider frequency range may be engineered by using different nominal wavelengths for different layers. One way to obtain such structures is to gradually change the width of the layers as a function of their depth, producing chirped-type structures [6]. Another alternative is to stack BMs in such a way that their reflection bands partially overlap at their edges, resulting in a wider high-reflection band [7]. 
On the other hand, different fabrication techniques have been used to obtain omnidirectional mirrors (ODM) that operate in the visible and/or near infrared (Vis-NIR) range of the electromagnetic spectrum. For example, Chen et al. [8] reported the fabrication of a structure made of six pairs of TiO${}_{2}$/SiO${}_{2}$ layers with an omnidirectional band (ODB) of about 70 nm in the NIR range, employing a sol-gel deposition method. Park et al. [9] used molecular beam epitaxy to grow a stack of four pairs of GaAs/AlAs layers, followed by its conversion to GaAs/Al${}_{2}$O${}_{3}$ by selective oxidation of the AlAs layers, to obtain an ODB from 710 to 950 nm. Similarly, DeCorby et al. [10] fabricated mirrors by coupling multiple layers of Ge${}_{33}$As${}_{12}$Se${}_{55}$ chalcogenide glass and polyamide-imide, deposited by thermal evaporation and spin-casting respectively, to obtain a 150 nm wide omnidirectional band centered at 1750 nm. Furthermore, Jena et al. [11] used sequential asymmetric bipolar pulsed DC magnetron sputtering for TiO${}_{2}$ layers and radio frequency magnetron sputtering for SiO${}_{2}$ layers to generate TiO${}_{2}$/SiO${}_{2}$ 1D-PC’s and achieved an ODB from 592 to 668 nm. However, these techniques are expensive and require sophisticated equipment and long fabrication times. Use of porous silicon (PS) is an attractive alternative, since it can be easily manufactured by electrochemically etching crystalline Si in a hydrofluoric acid based electrolyte to obtain a sponge-like nanostructure composed of Si and air. By modulating the supplied current and its application time during the electrochemical reaction, it is possible to obtain multilayered structures, allowing for the fabrication of ODM’s [12, 13]. Although optical filters [14, 15] are the most common application of PS PC’s, they have also been widely used as chemical sensors [16, 17], waveguides [18] and for photoluminescence control [19]. 
Recently, the study of chirped multilayer structures has increased due to their possible application as flat focusing mirrors [20, 21, 22]. PS based dielectric optical filters, quasi-ODM’s and ODM’s have been extensively studied in different regions of the electromagnetic spectrum, such as the ultraviolet (UV) [23], visible [24], and NIR [25] regions. However, to obtain the desired high refractive index contrast, layers of high porosity have to be employed, which makes the resulting structures relatively fragile. In addition, the use of high porosity makes the fabrication of large structures, with a very large number of layers, unfeasible. For this reason, in this work we study different strategies to produce multilayered mirrors with a high-reflectance omnidirectional band and relatively small thickness and refractive index contrast. This paper is organized as follows. In Sec. 2 we develop the methods used to design and calculate the reflectance spectra of PS multilayered structures, such as chirped structures and mirror stacks. In Sec. 3 we provide details about the fabrication of these structures. In Sec. 4 we present the numerical and experimental results corresponding to our proposed structures and compare them to previous reports. We obtained significantly large bandwidths with relatively thinner structures than previously reported. Finally, we discuss our conclusions in Sec. 5. 2 Theory For the analysis of the propagation of electromagnetic fields through multilayered systems it is usual to employ the transfer matrix method [26, 27]. 
Assuming the system is 1D, with variations only along the $z$ direction, and that the polarization of light is either transverse electric (TE) or transverse magnetic (TM), the transfer matrix $M\left(z_{2},z_{1}\right)$ is a $2\times 2$ matrix that relates the components of the electric $E_{\|}$ and magnetic $H_{\|}$ fields parallel to the $xy$ plane evaluated at $z_{2}$ to their values at $z_{1}$, $$\left(\begin{array}[]{c}E_{\|}\\ H_{\|}\end{array}\right)_{z_{2}}=M\left(z_{2},z_{1}\right)\left(\begin{array}[]{c}E_{\|}\\ H_{\|}\end{array}\right)_{z_{1}}.$$ Many equivalent formulations have been proposed to obtain and use $M$ [28, 29, 30]. Here we use a recently developed formalism for the numerically stable calculation of the reflectance of large multilayer systems, proposed by Puente-Díaz et al. [31] and summarized in the supplementary information. The explicit expressions for the optical coefficients are: $$r=\sigma\frac{Z_{0}M_{11}+M_{12}-Z_{0}Z_{s}M_{21}-Z_{s}M_{22}}{Z_{0}M_{11}-M_{12}-Z_{0}Z_{s}M_{21}+Z_{s}M_{22}},$$ (1) and $$t=\frac{2\zeta}{Z_{0}M_{11}-M_{12}-Z_{0}Z_{s}M_{21}+Z_{s}M_{22}},$$ (2) where $M_{ij}$ are the elements of the transfer matrix that transfers the fields from the surface interface with vacuum at $z_{0}$ towards the substrate at $z_{s}$, $Z_{0}$ is the surface impedance of vacuum, $Z_{s}$ is the surface impedance of the substrate, and we define $r=F_{r}/F_{i}$ and $t=F_{t}/F_{i}$ in terms of the incident, reflected and transmitted fields, where we choose the $F$’s as electric fields for TE polarization and as magnetic fields for TM polarization. Here, $\sigma=-1$ and $\zeta=Z_{s}$ for TE polarization, while $\sigma=1$ and $\zeta=Z_{0}$ for TM polarization. The reflectance is given by $R=|r|^{2}$ and the transmittance by $T=\beta\left|t\right|^{2}$, with $\beta=\text{Re}Z_{0}/\text{Re}Z_{s}$ for TE polarization and $\beta=\text{Re}Z_{s}/\text{Re}Z_{0}$ for TM polarization. 
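As a numerical illustration of this formalism, the following sketch (in Python rather than the authors' PDL code; the indices, thicknesses and substrate are illustrative values, not those of the fabricated samples) builds the $2\times 2$ matrix of each layer for TE polarization, chains the matrices from the surface towards the substrate, and evaluates $R=|r|^{2}$. It uses the equivalent admittance convention ($Y=1/Z$), so its $r$ agrees with Eq. (1):

```python
import numpy as np

def layer_matrix(n, d, lam, theta0=0.0):
    """2x2 characteristic matrix of one layer (TE), admittance convention."""
    cos_t = np.sqrt(1 - (np.sin(theta0) / n) ** 2 + 0j)  # Snell, incidence from vacuum
    delta = 2 * np.pi * n * d * cos_t / lam              # phase thickness of the layer
    eta = n * cos_t                                      # TE admittance (in units of Y0)
    return np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                     [1j * eta * np.sin(delta), np.cos(delta)]])

def reflectance(layers, lam, n_sub, theta0=0.0):
    """R of a stack [(n, d), ...] listed from the surface towards the substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam, theta0)
    eta0 = np.cos(theta0)                                # vacuum admittance, TE
    eta_s = n_sub * np.sqrt(1 - (np.sin(theta0) / n_sub) ** 2 + 0j)
    B, C = M @ np.array([1.0, eta_s])                    # fields transferred to the surface
    r = (eta0 * B - C) / (eta0 * B + C)                  # equivalent to Eq. (1) with Z = 1/eta
    return abs(r) ** 2

# quarter-wave Bragg mirror: 10 periods tuned to lam0 (illustrative indices)
nH, nL, lam0 = 2.12, 1.41, 1500.0
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 10
print(reflectance(stack, lam0, n_sub=3.5))   # close to 1 at the design wavelength
```

At the design wavelength even this modest 10-period quarter-wave stack reflects almost perfectly, while off the band gap the reflectance drops and oscillates; scanning `lam` and `theta0` over the target ranges gives the averaged reflectance used as the figure of merit in the following sections.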
The complex refractive indices of the porous layers were obtained through Bruggeman’s effective medium theory, which has been reported to adequately reproduce the optical parameters of PS [16, 32, 33]. 3 Experimental details Some of the proposed photonic structures were synthesized through anodic etching of a (100)-oriented, $p$-type boron-doped crystalline Si wafer with resistivity 0.002-0.005 $\Omega\cdot$cm, under galvanostatic conditions [34, 35]. The electrochemical anodization process was performed at room temperature, with an electrolyte of aqueous hydrofluoric acid (HF) (48 wt.$\%$) and ethanol (99.9 wt.$\%$) in 1:1 volumetric proportion. However, as it is not desirable to use very high porosity contrasts, due to structure fragility and electrolyte diffusivity problems [36], the current densities were chosen as 35 and 305 $\text{mA}/\text{cm}^{2}$, with corresponding porosities of 51% and 76%, respectively. These porosities were determined through previously obtained calibration curves using a gravimetric technique [32]. The etching rate of PS was obtained by synthesizing single layers under similar conditions and measuring their thicknesses through Scanning Electron Microscopy (SEM). Absolute reflectivity measurements were carried out with a Perkin Elmer Lambda 950 UV/Visible spectrophotometer with a variable-angle universal reflectance accessory (URA) at incident angles $\theta_{i}=10^{\circ}$, $20^{\circ}$, $30^{\circ}$, $40^{\circ}$, $50^{\circ}$ and $60^{\circ}$ using non-polarized light. The maximum and minimum values of $\theta_{i}$ were constrained by the angular range of the URA. 4 Results and discussion In this section we present optimized calculations of the reflectance of different multilayered omnidirectional wide-band mirrors and we compare the results with experimental spectra taken from the corresponding samples at different angles of incidence. Sec. 4.1 is devoted to chirped-type Bragg mirrors while Sec. 
4.2 is devoted to structures made of stacked sub-mirrors. We develop two techniques to design multilayer photonic structures. In the first, the wavelength ($\lambda_{j}$) to which the $j$-th pair of layers is tuned is determined by a given function of $j$. Several functions are proposed and their parameters are optimized to maximize the reflectance $\braket{R}$ averaged over a given spectral and angular range. The second technique consists of stacking sub-mirrors tuned to different wavelengths, chosen so that their spectral ranges overlap. For each sub-mirror, the spectral range is obtained from the dispersion relation of a corresponding periodic structure. The degree of overlap is optimized to maximize $\braket{R}$. The optimizations were done employing the Minuit module [37] of the Perl Data Language (PDL) [38]. 4.1 Chirped-type Bragg mirrors Here we study multilayered structures where the thicknesses of each pair $j=0\ldots N_{p}-1$ of layers are tuned to a wavelength $\lambda_{j}$ which changes gradually with the depth of the layer, according to $$\lambda_{j}=\lambda_{\min}+\left(\lambda_{\max}-\lambda_{\min}\right)f\left(\frac{j}{N_{p}-1}\right),$$ (3) where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum design wavelengths respectively, $N_{p}$ is the number of periods in the structure, and $f(\xi)$ is a smooth function that goes from 0 to 1 as its argument goes from the surface ($\xi=0$) to the substrate ($\xi=1$). We only consider increasing functions $f$ because of the high absorption of PS in the ultraviolet region, which decreases in the visible and becomes negligible in the near infrared, so that the first periods are tuned to the UV-Vis regions. 
We consider the following classes of functions: $$f_{1}\left(\xi\right)=\xi^{\alpha},$$ (4) $$f_{2}\left(\xi\right)=\frac{1}{2}\left(\xi^{\alpha}+\xi^{\beta}\right)$$ (5) and $$f_{3}\left(\xi\right)=A\xi^{\alpha}\left(1-\xi\right)+\xi^{\beta},$$ (6) where $\alpha$, $\beta$ and $A$ are parameters to be optimized in order to maximize the average reflectance $\braket{R}$ over given spectral and angular ranges. The class $f_{1}$ (Eq. (4)) with $\alpha>0$ corresponds to simple increasing profiles, $f_{2}$ is the arithmetic mean of two functions of the type $f_{1}$ with different powers $\alpha$ and $\beta$, and $f_{3}$ (Eq. (6)) is designed so that the first term dominates near the surface and the second near the substrate, to allow different behaviors at the two edges of the spectrum. The parameters $\alpha$, $\beta$, $A$ and $N_{p}$ were optimized to maximize the reflectance for non-polarized light $\braket{R}$ averaged over the angular range of $0^{\circ}$ to $90^{\circ}$ and either over the spectral region $\mathcal{R}_{1}$ from 350 to 1400 nm or $\mathcal{R}_{2}$ from 400 to 3000 nm. For $\mathcal{R}_{1}$ we took $\lambda_{\min}$ as another parameter to be optimized while $\lambda_{\max}$ was fixed at 1400 nm. For $\mathcal{R}_{2}$ we fixed both $\lambda_{\min}=850$ nm and $\lambda_{\max}=3000$ nm. In the first case, we decided to optimize $\lambda_{\min}$ to obtain the best design in the high-absorption zone of the PS. Approximately $90\%$ of the solar radiation that reaches the earth’s surface is contained in the spectral region $\mathcal{R}_{1}$, and $98\%$ in $\mathcal{R}_{2}$ [39]. A direct application of structures optimized in these regions is solar reflectors. The optimized parameters corresponding to the choice $p_{l}=0.51$ and $p_{h}=0.76$ for the low and high porosities are shown in Table 1. In the supplementary information, we include the parameters used to calculate the reflectance of structures corresponding to the porosities $p_{l}=0.30$ and $p_{h}=0.76$. 
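A minimal sketch of Eqs. (3)-(6) follows. The parameter values below are illustrative, not the optimized ones of Table 1, and the effective indices assumed for the two porosities are rough estimates rather than the Bruggeman values used in our calculations:

```python
import numpy as np

# chirp profiles of Eqs. (4)-(6); alpha, beta and A are the parameters to optimize
def f1(xi, alpha):          return xi ** alpha
def f2(xi, alpha, beta):    return 0.5 * (xi ** alpha + xi ** beta)
def f3(xi, alpha, beta, A): return A * xi ** alpha * (1 - xi) + xi ** beta

def chirp_wavelengths(lam_min, lam_max, Np, f, **params):
    """Design wavelength lambda_j of each period j = 0..Np-1, Eq. (3)."""
    xi = np.arange(Np) / (Np - 1)          # depth coordinate, 0 at surface, 1 at substrate
    return lam_min + (lam_max - lam_min) * f(xi, **params)

# illustrative design: 40 periods spanning 850-3000 nm with a nearly linear profile
lams = chirp_wavelengths(850.0, 3000.0, 40, f1, alpha=1.2)
nH, nL = 2.12, 1.41                        # assumed effective indices of the two porosities
dH, dL = lams / (4 * nH), lams / (4 * nL)  # quarter-wave thicknesses of the j-th pair
print(lams[0], lams[-1])                   # endpoints are lam_min and lam_max
```

Feeding the resulting thickness list into a transfer-matrix reflectance calculation and averaging $R$ over the target spectral and angular ranges yields the objective function that the optimizer maximizes over $\alpha$, $\beta$, $A$ and $N_{p}$.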
We also add the theoretical results obtained for $p_{l}=0.42$ and $p_{h}=0.76$. After optimization, the function $f_{3}$ yields the highest $\braket{R}$ in the region $\mathcal{R}_{1}$, with the thinnest structure. Furthermore, this structure has an omnidirectional width (ODW) of 250 nm centered (ODC) at 1125 nm, as shown in Fig. 1(b). Various photonic structures have been reported with ODB’s [40, 41, 42] defined as those spectral regions for which $R>90\%$ for all angles. Here we adopt the same criterion. We synthesized a structure, designated sample $S_{1}$, corresponding to these results. For the region $\mathcal{R}_{2}$ the resulting structures have similar thicknesses and attain similar average reflectances, though the one with the smallest thickness and the maximum $\braket{R}$ is that obtained from the family $f_{1}$. We also synthesized the corresponding structure and designated it sample $S_{2}$. The calculated optimized reflectance spectrum is shown in Fig. 1(d). This structure has a very wide region where $R(\lambda,\theta)>0.90$, which goes from 850 to 3000 nm, but in a restricted angular range from $0^{\circ}$ to $70^{\circ}$. Thus, it has a quasi-ODW of 2150 nm centered at 1925 nm. The functions $f_{n}$ optimized for the region $\mathcal{R}_{1}$ are shown in Fig. 1(a), and those for $\mathcal{R}_{2}$ in Fig. 1(c). The profiles $f_{n}$ are qualitatively similar. For example, for $\mathcal{R}_{1}$ the optimal photonic structures have a small number of periods tuned to the wavelengths where silicon has a large absorption, i.e., the first periods of the structure. As within $\mathcal{R}_{2}$ the absorption of porous silicon is negligible, the optimized $f_{n}$ are not too far from a linear profile. In Fig. 
2(a) we present the calculated and measured reflectance for non-polarized light corresponding to sample $S_{1}$ in the spectral range from 350 to 1400 nm for different angles of incidence $\theta=10^{\circ}$, $20^{\circ}$, $30^{\circ}$, $40^{\circ}$, $50^{\circ}$, and $60^{\circ}$. The spectra are qualitatively similar, and for $\lambda>700$ nm and $10^{\circ}<\theta<40^{\circ}$ they agree quantitatively, with a difference smaller than $5\%$. The measured reflectance is lower than the calculated one, more so at short wavelengths and large angles. This difference may be partially attributed to the scattering of light due to the roughness present in the sample [43, 30, 29], which we have not taken into account in our theory. Due to the limitations of our spectrophotometer, we could not perform measurements for $\theta>60^{\circ}$ and thus check the ODB. Nevertheless, our results allow us to conclude that the structure has a quasi-ODB from 980 to 1340 nm. In Fig. 2(b) we compare the calculated and measured spectra of sample $S_{2}$ in the range from 400 to 3000 nm. Both spectra are very similar, with some differences at long wavelengths which can be attributed to an increase of imperfections, such as wavy layers and greater roughness for deeper layers, due to a more restricted diffusivity of the electrolyte as the thickness of the structure is increased [44, 45]. According to our results, this structure has a wide quasi-ODW of 1800 nm centered at 1880 nm. 4.2 Photonic structure with sub-mirror stacking Another class of techniques for designing good reflectors across a wide spectral range is to stack sub-mirrors tuned to different wavelengths chosen with different criteria. For example, Agarwal et al. 
[46] proposed a photonic structure in which 54 sub-mirrors were stacked as follows: the first sub-mirror consisted of 2.5 periods tuned to $\lambda_{1}=700$ nm; the following sub-mirrors were tuned to wavelengths $\lambda_{j}$ according to the sequence $\lambda_{j+1}=\lambda_{j}+\left(2+j\right)$ nm, with 4 periods each. Furthermore, Estrada-Wiese et al. [47] developed a method that combines three stochastic optimization algorithms together with a narrow space-search methodology to obtain a customized and optimized PC configuration. We recall that for a periodic structure, the propagation of electromagnetic waves is described by the dispersion relation $$\cos KD=\frac{1}{2}\text{Tr}\bm{M},$$ (7) where $K$ is Bloch’s vector and $\bm{M}$ the transfer matrix of a single period of thickness $D$ [28, 48, 49]. Thus, whenever $\left|\text{Tr}\bm{M}\right|>2$ electromagnetic energy cannot propagate in the periodic system. This condition corresponds to a photonic band gap within which we expect a high reflectance, even for a finite system made of only a few periods. This suggests the following design strategy: we stack sub-mirrors, each made of a few periods of a periodic system, such that its photonic band gap (PBG) overlaps the PBGs of the previous and the next sub-mirrors. The number of periods of each sub-mirror is an important parameter. If it is too small, the sub-mirror would be partially transparent within the PBG. If it is too large, the resulting structure would be too thick. This is particularly important for sub-mirrors tuned to shorter wavelengths, for which there is some absorption. We thus followed two design strategies: In the first, we considered sub-mirrors each with the same number $p$ of periods. 
In the second, a different number of periods was assigned to each sub-mirror according to the center of its PBG: we chose a single period for those sub-mirrors $j$ tuned to $\lambda_{j}<500$ nm, two periods for $500<\lambda_{j}<650$ nm, three periods for $650<\lambda_{j}<800$ nm, and for $\lambda_{j}\geq 800$ nm the number of periods $p$ was varied between 4 and 10. In both designs, the degree of overlap of consecutive PBGs was taken as the parameter to be optimized in order to maximize the reflectance $\braket{R}$ averaged within the spectral and angular ranges $400\text{ nm}\leq\lambda\leq 3000\text{ nm}$ and $0^{\circ}\leq\theta\leq 90^{\circ}$. We define the percentage of overlap in terms of the portion of the PBG of the $j$-th mirror that lies within the PBG of the $(j+1)$-th mirror. Table 2 shows the optimized values of the average reflectance $\braket{R}$ for different choices of the number of periods $p$ of each sub-mirror, as discussed above, including the resulting number $N_{m}$ of stacked sub-mirrors, the thickness $d$ of the structure, and the optimal overlap of consecutive PBGs. The first and second blocks correspond to the first and second strategies discussed above. According to Table 2, the structure with the highest reflectance is in the second block, with an optimal overlap between consecutive PBGs of $78\%$, a thickness of 41.5 $\mu$m, and an average reflectance $\braket{R}=0.954$. It is made up of 18 sub-mirrors tuned to the wavelengths 400, 460, 510, 570, 640, 720, 810, 910, 1020, 1150, 1290, 1450, 1630, 1830, 2060, 2320, 2610, and 2860 nm. We synthesized this structure and designated it sample $S_{3}$. Its calculated and measured reflectance spectra are shown in Fig. 3. Fig. 3(a) reveals a quasi-ODM in the range from 950 to 2900 nm, within an angular range between $0^{\circ}$ and $60^{\circ}$. 
It is also observed that at incidence angles $\theta>70^{\circ}$ and long wavelengths the reflectance decreases and oscillates, which we attribute to the greater penetration of the electromagnetic field within the structure, as shown by Puente-Díaz et al. [31]. In Fig. 3(b) we compare the calculated and measured reflectance for the structure $S_{3}$ at several angles of incidence. Although the quasi-omnidirectional wavelength range of the synthesized structure, from 950 to 2750 nm, is slightly smaller than the calculated one, the measured PBG is almost twice as wide as that of a recently reported PS chirped structure [30], which had the largest previously reported ODW. Finally, Table 3 shows data from some porous silicon based ODM’s designed to operate in the NIR region, and their ODB’s are compared with those of the structures analyzed in the present study. The ODW of the structure $S_{1}$ is 20 nm wider than that reported by Bruyant et al. [25] and is located closer to the visible range. Furthermore, the ODB of the structure $S_{3}$ begins at 950 nm, like the one synthesized by Estevez et al. [14]; nevertheless, its width extends to 2750 nm. In other words, $S_{3}$ has an ODW 3.55 times wider. Moreover, the ODW of the structure reported by Xifre-Perez et al. [50] represents only 17.6% of the ODW of our $S_{2}$ and $S_{3}$ structures. Additionally, the table reveals that the porous silicon based structures $S_{2}$ and $S_{3}$ have the widest gaps, 1.36 times wider than the widest reported to date in the NIR region [20]. As compared to previous studies [25, 14, 50, 20, 51], this work opens the possibility of generating dielectric multilayered structures with large omnidirectional bands using materials with relatively low refractive index contrast. 
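The band-gap criterion $\left|\text{Tr}\bm{M}\right|>2$ of Eq. (7), which fixes the spectral range of each sub-mirror in the stacking designs above, can be sketched as follows (normal incidence only and illustrative indices; our actual optimization additionally scans angle and polarization):

```python
import numpy as np

def half_trace(lam, lam0, nH, nL):
    """cos(K D) = Tr(M)/2 for one quarter-wave period tuned to lam0 (normal incidence)."""
    def m(n):
        delta = 2 * np.pi * n * (lam0 / (4 * n)) / lam   # = (pi/2) * lam0 / lam
        return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                         [1j * n * np.sin(delta), np.cos(delta)]])
    return 0.5 * np.trace(m(nH) @ m(nL)).real

def gap_edges(lam0, nH, nL, lams):
    """First-order photonic band gap: wavelengths where |Tr(M)/2| > 1."""
    in_gap = np.array([abs(half_trace(l, lam0, nH, nL)) > 1 for l in lams])
    gap = lams[in_gap]
    return gap.min(), gap.max()

lams = np.linspace(800.0, 2000.0, 2001)
lo, hi = gap_edges(1400.0, 2.12, 1.41, lams)
print(lo, hi)   # the gap brackets the design wavelength, 1400 nm
```

Tuning each successive sub-mirror so that its gap covers a prescribed fraction of the previous interval [lo, hi] reproduces the overlap parameter that was optimized in Table 2.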
5 Conclusion We have designed, optimized, fabricated and characterized highly reflective quasi-omnidirectional (angular range of $0-60^{\circ}$) multilayered structures with a wide spectral range. Two techniques, chirping (a continuous change in thicknesses) and stacking of Bragg-type sub-structures, have been used to enhance the reflectance with minimum thickness for a given pair of refractive indices. Numerical calculations were carried out employing the transfer matrix method, and we optimized the design parameters to obtain maximal reflectance averaged over different spectral ranges for all angles. We fabricated some of the optimized structures with porous silicon dielectric multilayers with low refractive index contrast and compared their measured optical properties with the calculations. Two chirped structures with thicknesses 21.6 $\mu$m and 60.4 $\mu$m, resulting in quasi-omnidirectional mirrors with bandwidths of 360 nm and 1800 nm, centered at 1160 nm and 1925 nm, respectively, have been demonstrated. In addition, we fabricated a stacked sub-Bragg mirror structure with a quasi-omnidirectional bandwidth of 1800 nm (centered at 1850 nm) and a thickness of 41.5 $\mu$m, which is almost two thirds of the thickness of the chirped structure. Thus, our techniques allowed us to obtain relatively thin quasi-omnidirectional mirrors with wide bands over different wavelength ranges. Our analysis techniques can be used for the optimization of the reflectance not only in multilayered PS systems with different refractive index contrasts but also in systems made of other types of materials with low refractive index contrast. The present study could also be useful for obtaining omnidirectional dielectric mirrors in large spectral regions using different materials, flat focusing reflectors, thermal regulators, or, if defects are included, filters or remote chemical/biosensors with a wide angle-independent response. 
Acknowledgments This project is supported by Consejo Nacional de Ciencia y Tecnología through grant No. A-S1-30393 (VA), 256243 (DA), Cátedras Conacyt program 1577 (DA), and by DGAPA-PAPIIT UNAM under grant IN111119 (LM). VA acknowledges PRODEP (Sabbatical grant), and is grateful to ICF where she spent the sabbatical year. HPA also expresses his gratitude to the Coordinación de la Investigación Científica de la Universidad Michoacana de San Nicolás de Hidalgo. VCG is grateful to CIICAp and ICF where he did this work. References [1] John D. Joannopoulos, Steven G. Johnson, Joshua N. Winn, and Robert D. Meade. Photonic Crystals: Molding the Flow of Light. Princeton University Press, New Jersey, 2nd edition, 2008. [2] Amit Kumar Goyal, Hemant Sankar Dutta, and Suchandan Pal. Porous photonic crystal structure for sensing applications. J. Nanophotonics, 12(4):1–7, 2018. [3] C. López. Materials Aspects of Photonic Crystals. Adv. Mat., 15(20):1679–1704, 2003. [4] Masaya Notomi. Manipulating light with strongly modulated photonic crystals. Rep. Prog. Phys., 73(9):096501, 2010. [5] Amit Kumar Goyal and Ajay Kumar. Recent advances and progresses in photonic devices for passive radiative cooling application: A review. J. Nanophotonics, 14(3):1–20, 2020. [6] R. Szipöcs and A. Koházi-Kis. Theory and design of chirped dielectric laser mirrors. Appl. Phys. B, 65(2):115–135, 1997. [7] E. Xifré-Pérez, L. F. Marsal, J. Ferré-Borrull, and J. Pallarés. Low refractive index contrast porous silicon omnidirectional reflectors. Appl. Phys. B, 95(1):169–172, 2009. [8] Kevin M. Chen, Andrew W. Sparks, Hsin-Chiao Luan, Desmond R. Lim, Kazumi Wada, and Lionel C. Kimerling. SiO${}_{2}$/TiO${}_{2}$ omnidirectional reflector and microcavity resonator via the sol-gel method. Appl. Phys. Lett., 75(24):3805–3807, 1999. [9] Yeonsang Park, Young-Geun Roh, Chi-O Cho, Heonsu Jeon, Min Gyu Sung, and J. C. Woo. 
GaAs-based near-infrared omnidirectional reflector. Appl. Phys. Lett., 82(17):2770–2772, 2003. [10] R. G. DeCorby, H. T. Nguyen, P. K. Dwivedi, and T. J. Clement. Planar omnidirectional reflectors in chalcogenide glass and polymer. Opt. Express, 13(16):6228–6233, 2005. [11] S. Jena, R. B. Tokas, S. Tripathi, K. D. Rao, D. V. Udupa, S. Thakur, and N. K. Sahoo. Influence of oxygen partial pressure on microstructure, optical properties, residual stress and laser induced damage threshold of amorphous HfO${}_{2}$ thin films. J. Alloy Compd., 771:373–381, 2019. [12] Elisabet Xifré-Pérez, Josep Ferré-Borrull, Josep Pallarés, and Lluís F. Marsal. Methods, Properties and Applications of Porous Silicon. In Dusan Losic and Abel Santos, editors, Electrochemically Engineered Nanoporous Materials: Methods, Properties and Applications, Springer Series in Materials Science, pages 37–63. Springer International Publishing, Cham, 2015. [13] O. Bisi, Stefano Ossicini, and L. Pavesi. Porous silicon: a quantum sponge structure for silicon based optoelectronics. Surf. Sci. Rep., 38(1):1 – 126, 2000. [14] J. O. Estevez, J. Arriaga, A. Méndez Blas, and V. Agarwal. Enlargement of omnidirectional photonic bandgap in porous silicon dielectric mirrors with a gaussian profile refractive index. Appl. Phys. Lett., 94(6):061914, 2009. [15] D. Ariza-Flores, J.S. Pérez-Huerta, Yogesh Kumar, A. Encinas, and V. Agarwal. Design and optimization of antireflecting coatings from nanostructured porous silicon dielectric multilayers. Sol. Energ. Mat. Sol. C., 123:144 – 149, 2014. [16] Anne M. Ruminski, Giuseppe Barillaro, Charles Chaffin, and Michael J. Sailor. Internally referenced remote sensors for HF and Cl${}_{2}$ using reactive porous silicon photonic crystals. Adv. Funct. Mat., 21(8):1511–1525, 2011. [17] T.V.K. Karthik, L. Martinez, and V. Agarwal. Porous silicon ZnO/SnO${}_{2}$ structures for CO${}_{2}$ detection. J. Alloy Compd., 731:853 – 863, 2018. [18] C. P. Hussell and R. V. Ramaswamy. 
High-index overlay for high reflectance DBR gratings in LiNbO${}_{3}$ channel waveguides. IEEE Photonic Tech. L., 9(5):636–638, 1997. [19] E.E. Antunez, J.O. Estevez, J. Campos, M.A. Basurto-Pensado, and V. Agarwal. Formation of photoluminescent n-type macroporous silicon: Effect of magnetic field and lateral electric potential. Physica B, 453:34–39, 2014. Low-Dimensional Semiconductor Structures - A part of the XXII International Material Research Congress (IMRC 2013). [20] F. Wu, K. Lyu, S. Hu, M. Yao, and S. Xiao. Ultra-large omnidirectional photonic band gaps in one-dimensional ternary photonic crystals composed of plasma, dielectric and hyperbolic metamaterial. Opt. Mater., 111:110680, 2021. [21] Anatoliy Volokh, Sergey Marchenko, and Pavel Shestakov. Focusing and defocusing of reflected light beams from chirped dielectric layered structure. In 2017 Days on Diffraction (DD), pages 200–204, 2017. [22] Yu-Chieh Cheng and Kestutis Staliunas. Near-field flat focusing mirrors. Appl. Phys. Rev., 5(1):011101, 2018. [23] María Jimenez Vivanco, Godofredo García, Jesús Carrillo, Vivechana Agarwal, Tomás Díaz-Becerril, Rafael Doti, Jocelyn Faubert, and Eduardo Lugo. Porous Si-SiO${}_{2}$ based UV Microcavities. Sci. Rep.-UK, 10:2220, 2020. [24] A. David Ariza-Flores, L. M. Gaggero-Sager, and V. Agarwal. White metal-like omnidirectional mirror from porous silicon dielectric multilayers. Appl. Phys. Lett., 101(3):031119, 2012. [25] A. Bruyant, G. Lérondel, P. J. Reece, and M. Gal. All-silicon omnidirectional mirrors based on one-dimensional photonic crystals. Appl. Phys. Lett., 82(19):3227–3229, 2003. [26] Rolando Perez-Alvarez and F. García-Moliner. Transfer matrix, Green Function and related techniques: Tools for the study of multilayer heterostructures. Universitat Jaume I, España, 2004. [27] Pochi Yeh. Optical Waves in Layered Media. Wiley, USA, 2nd edition, 2005. [28] W. Luis Mochán, Marcelo del Castillo-Mussot, and Rubén G. Barrera. 
Effect of plasma waves on the optical properties of metal-insulator superlattices. Phys. Rev. B, 35(3):1088–1098, 1987. [29] Leandro L. Missoni, Guillermo P. Ortiz, María Luz Martínez Ricci, Victor J. Toranzos, and W. Luis Mochán. Rough 1D photonic crystals: A transfer matrix approach. Opt. Mater., 109:110012, 2020. [30] B. A. Chavez-Castillo, J. S. Pérez-Huerta, J. Madrigal-Melchor, S. Amador-Alvarado, I. A. Sustaita-Torres, V. Agarwal, and D. Ariza-Flores. A wide band porous silicon omnidirectional mirror for the near infrared range. J. Appl. Phys., 127(20):203106, 2020. [31] Luis Eduardo Puente-Díaz, Victor Castillo-Gallardo, Guillermo P. Ortiz, José Samuel Pérez-Huerta, Héctor Pérez-Aguilar, Vivechana Agarwal, and W. Luis Mochán. Stable calculation of optical properties of large non-periodic dissipative multilayered systems. Superlattice Microst., 145:106629, 2020. [32] Andrea Edit Pap, Krisztián Kordás, Jouko Vähäkangas, Antti Uusimäki, Seppo Leppävuori, Laurent Pilon, and Sándor Szatmári. Optical properties of porous silicon. Part III: Comparison of experimental and theoretical results. Opt. Mater., 28(5):506–513, 2006. [33] D. Estrada-Wiese and J.A. del Río. Refractive index evaluation of porous silicon using Bragg reflectors. Rev. Mex. Fís., 64:72–81, 2018. [34] L. T. Canham. Silicon quantum wire array fabrication by electrochemical and chemical dissolution of wafers. Appl. Phys. Lett., 57(10):1046–1048, 1990. [35] J. Escorcia and V. Agarwal. Effect of duty cycle and frequency on the morphology of porous silicon formed by alternating square pulse anodic etching. Phys. Stat. Sol., 4(6):2039–2043, 2007. [36] A. David Ariza-Flores, L. M. Gaggero-Sager, and V. Agarwal. Effect of interface gradient on the optical properties of multilayered porous silicon photonic structures. J. Phys. D: Appl. Phys., 44(15):155102, 2011. [37] F. James. MINUIT Function Minimization and Error Analysis: Reference Manual Version 94.1. 
CERN Program Library Long Writeup D506, 1994. [38] K. Glazebrook, J. Brinchmann, J. Cerney, C. DeForest, D. Hunt, T. Jenness, T. Lukka, R. Schwebel, and C. Soeller. Perl Data Language (PDL), 1997. [39] R.E. Bird, R.L. Hulstrom, and L.J. Lewis. Terrestrial solar spectral data sets. Solar Energy, 30(6):563–573, 1983. [40] J. Xu, Q. Sun, Z. Wu, L. Guo, S. Xie, Q. Huang, and Q. Peng. Development of broad-band high-reflectivity multilayer film for positron emission tomography system. Journal of Instrumentation, 13(09):P09016–P09016, 2018. [41] Kamal Ahmed, A. Khan, A. Rauf, and Amir Gul. Multilayer dielectric narrow band Mangin mirror. IOP Conference Series: Materials Science and Engineering, 60:012016, 2014. [42] Reiner Dietsch, Stefan Braun, Thomas Holz, Hermann Mai, Roland Scholz, and Lutz Bruegemann. Multilayer x-ray optics for energies E > 8 keV and their application in x-ray analysis. In Carolyn A. MacDonald and Ali M. Khounsary, editors, Advances in Laboratory-based X-Ray Sources and Optics, volume 4144, pages 137–147. International Society for Optics and Photonics, SPIE, 2000. [43] W. Theiß, R. Arens-Fischer, M. Arntzen, M.G. Berger, S. Frohnhoff, S. Hilbrich, and M. Wernke. Probing Optical Transitions in Porous Silicon by Reflectance Spectroscopy in the Near Infrared, Visible and UV. MRS Proceedings, 358:435, 1994. [44] Luca Dal Negro, Claudio J. Oton, Zeno Gaburro, Lorenzo Pavesi, Patrick Johnson, Ad Lagendijk, Roberto Righini, Marcello Colocci, and Diederik S. Wiersma. Light Transport through the Band-Edge States of Fibonacci Quasicrystals. Phys. Rev. Lett., 90:055501, 2003. [45] G. Vincent. Optical properties of porous silicon superlattices. Appl. Phys. Lett., 64(18):2367–2369, 1994. [46] V. Agarwal and J. A. del Río. Tailoring the photonic band gap of a porous silicon dielectric mirror. Appl. Phys. Lett., 82(10):1512–1514, 2003. [47] D. Estrada-Wiese, Jesus Del Río, and Ehecatl del Rio-Chanona. 
Stochastic optimization of broadband reflecting photonic structures. Sci. Rep.-UK, 8:1193, 2018. [48] W. Luis Mochán and Marcelo del Castillo Mussot. Optics of multilayered conducting systems: Normal modes of periodic superlattices. Phys. Rev. B, 37:6763–6771, 1988. [49] J. S. Pérez-Huerta, D. Ariza-Flores, R. Castro-García, W. L. Mochán, G. P. Ortiz, and V. Agarwal. Reflectivity of 1D photonic crystals: A comparison of computational schemes with experimental results. Int. J. Mod. Phys. B, 32(11):1850136, 2018. [50] E. Xifré-Pérez, L. F. Marsal, J. Pallarés, and J. Ferré-Borrull. Porous silicon mirrors with enlarged omnidirectional band gap. J. Appl. Phys., 97(6):064503, 2005. [51] Yoel Fink, Joshua N. Winn, Shanhui Fan, Chiping Chen, Jurgen Michel, John D. Joannopoulos, and Edwin L. Thomas. A dielectric omnidirectional reflector. Science, 282(5394):1679–1682, 1998.
Electrochemical integration of graphene with light absorbing copper-based thin films Medini Padmanabhan,${}^{1}$ Kallol Roy,${}^{1}$ Gopalakrishnan Ramalingam,${}^{2}$ Srinivasan Raghavan,${}^{2}$ and Arindam Ghosh${}^{1}$ ${}^{1}$Department of Physics, Indian Institute of Science, Bangalore 560012, India ${}^{2}$Materials Research Center, Indian Institute of Science, Bangalore 560012, India (December 7, 2020) Abstract We present an electrochemical route for the integration of graphene with light sensitive copper-based alloys used in optoelectronic applications. Graphene grown using chemical vapor deposition (CVD) and transferred to glass is found to be a robust substrate on which photoconductive Cu${}_{x}$S films of 1-2 $\mu$m thickness can be deposited. The effect of growth parameters on the morphology and photoconductivity of Cu${}_{x}$S films is presented. Current-voltage characterization and photoconductivity decay experiments are performed with graphene as one contact and silver epoxy as the other. I Introduction Carbon-based nanostructures such as nanotubes (CNTs) and graphene are starting to carve out a niche for themselves in the field of energy related materials. Their applications range from hydrogen storage architectures to the design of ultracapacitors Dimitrakakis et al. (2008); Yu and Dai (2010). In the field of optoelectronics, both CNTs and graphene are being regularly incorporated in a wide variety of solar photovoltaic designs. For CNTs, the strategy has been twofold: (1) direct employment as the photosensitive material Kongkanand et al. (2007); Dissanayake and Zhong (2011), and (2) use as transparent conducting films for front electrodes Wu et al. (2004); Pasquier et al. (2005); Barnes et al. (2007). Both strategies have been investigated for graphene as well. Functionalized (organic solution processed) graphene has been demonstrated to act as an efficient electron acceptor in organic bulk heterojunction photovoltaic devices Li et al. (2011). 
Solar cell designs which have successfully integrated graphene as the transparent electrode, including silicon Schottky junctions Li et al. (2010); Chen et al. (2011), dye-sensitized cells Wang et al. (2008); Hong et al. (2008), as well as thin film organic Wu et al. (2008) and inorganic Bi et al. (2011) cells, seem to have generated particular interest. The current market for transparent electrodes is dominated by indium tin oxide (ITO), thanks to a rather rare combination of high transparency and conductivity. However, material scarcity, brittleness, and operational deterioration due to ion diffusion have been some of the outstanding problems Schlatmann et al. (1996); Chen et al. (2002). Graphene is an especially attractive alternative for such applications due to its compatibility with planar technology, superior mechanical qualities, transparency and reasonably high conductivity. The development of chemical vapor deposition (CVD) of graphene on transition metals, such as copper and nickel Li et al. (2009), has made large-area scalability plausible Bae and Kim (2010). There are however several challenges to this, in particular, (1) creating a high specific conductance comparable to that of ITO, and (2) implementing a robust and non-invasive technique to transfer and integrate graphene to the light absorbing components of the solar cell. While significant progress has been made to produce high-conductivity CVD graphene Wang et al. (2011); Li et al. (2011), its integration with photosensitive components is still at a rather primitive stage. The conventional way of incorporating graphene into solar cell architectures is to lay a sheet of graphene on the active layers Li et al. (2010); Chen et al. (2011). This can trap significant quantities of impurities, water/gas molecules, acrylic residues etc. 
Thus a clean and non-invasive method of integrating graphene to photovoltaic designs may not only enhance the yield and efficiency, but also create a platform to realize a larger class of hybrid energy-harvesting structures. Here we demonstrate a new room-temperature integration scheme (Fig. 1(a)) of single-layer CVD graphene to copper-based inorganic photosensitive alloys through electrochemical means. We show that graphene can be used as a cathode on which copper-based alloys, in this case copper sulfide (Cu${}_{x}$S), can be directly grown from an electrochemical bath of appropriate salts. Preliminary electro-optical characterization of the devices is reported and compared with identically grown ITO-Cu${}_{x}$S devices. Our results suggest that graphene can be an excellent host in electrochemical coupling to a wide variety of alloys and materials. II Results and discussion The starting point for the preparation of our samples is graphene synthesized by low-pressure CVD (base pressure of 1 Torr) Pal et al. (2011). 25 $\mu$m thick copper foils are annealed at 1000 ${}^{\circ}$C for 5 minutes under a H${}_{2}$ flow of 50 sccm (standard cubic centimeters per minute) to reclaim the pure metal surface. CH${}_{4}$ and H${}_{2}$ are then introduced at rates of 35 sccm and 2 sccm for a growth time of 30 s. The reactor is cooled down to room temperature at a cooling rate of 8 ${}^{\circ}$C/minute under a 1000 sccm flow of H${}_{2}$. Scanning electron microscope (SEM) images of the as-grown graphene show complete coverage of the copper substrate. The SEM images also suggest that the growth is predominantly single-layer Kochat et al. (2011). Small pieces (5 mm $\times$ 5 mm) are cut out of these copper foils and then transferred to a petri-dish containing acidic FeCl${}_{3}$ solution which etches away the copper. The graphene which floats on the surface of the FeCl${}_{3}$ is then scooped and transferred into another petri-dish containing de-ionised water. 
It is allowed to float for about 2$-$3 minutes before being scooped on to a piece of clean glass. Note that no PMMA (poly(methyl methacrylate)) is used in our transfer procedure. This is because electrodeposited Cu${}_{x}$S is observed to have poor adhesion on graphene films transferred with the help of PMMA. We believe that PMMA residues on the surface of graphene are responsible for this. Ar-H${}_{2}$ annealing may be helpful in improving the interface properties of graphene transferred using PMMA. In Fig. 1(b), we show SEM images of graphene transferred to glass substrates using the above procedure. Note that large-area transfer is possible, albeit with a few tears. In Fig. 1(c), we show representative Raman spectra taken on our samples with 514 nm radiation. In most cases, the $D$-peak is stronger than or comparable to the $G$ peak, indicating a high level of disorder. Note that our transfer procedure can cause ripples and tears resulting in lattice distortions, which might be the reason for the prominent $D$-peak. Our $2D$ peaks are well-fitted by a single Lorentzian, thereby indicating that our substrates are predominantly single layer Ferrari (2007). Measurement of the sheet resistance of our transferred graphene sheets using the van der Pauw technique yields values between 5$-$15 k$\Omega$/sq. The presence of micro-tears in our sample makes van der Pauw measurements difficult van der Pauw (1958). More careful Hall bar measurements done using similar graphene samples transferred to clean Si/SiO${}_{2}$ substrates yield resistivity values in the range 5$-$50 k$\Omega$. Although these values are rather high, they can be brought down by various methods including chemical doping and gating Wadhwa et al. (2010); Jia et al. (2011). We use graphene transferred onto glass as the substrate for electrochemical deposition. 
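The sheet resistance quoted above is extracted from the van der Pauw relation $\exp(-\pi R_{A}/R_{s})+\exp(-\pi R_{B}/R_{s})=1$, which has no closed-form solution for $R_{s}$ and must be solved numerically from the two four-probe resistances $R_{A}$ and $R_{B}$. A minimal sketch (illustrative only, not the analysis code used here):

```python
import math

def sheet_resistance(r_a, r_b):
    """Solve the van der Pauw equation
       exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1
    for the sheet resistance R_s (ohm/sq) by bisection.
    r_a, r_b: the two four-probe resistances V/I in ohms."""
    def f(rs):
        # Monotonically increasing in rs: -1 as rs -> 0, +1 as rs -> infinity
        return math.exp(-math.pi * r_a / rs) + math.exp(-math.pi * r_b / rs) - 1.0
    lo = 1e-6 * (r_a + r_b)
    hi = 1e6 * (r_a + r_b)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a symmetric sample ($R_{A}=R_{B}=R$) this reduces to the familiar $R_{s}=\pi R/\ln 2\approx 4.53\,R$.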
We choose copper sulfide as our light absorbing material because it has an indirect bandgap of about 1.2 eV (for Cu${}_{2}$S) and the component elements are earth abundant and non-toxic Wu et al. (2008). Electrochemical routes for the synthesis of copper sulfide on metal substrates are also well-researched Yukawa et al. (1996); Mane and Lokhande (2000); Anuar et al. (2002). Solar cells with efficiencies of around 10$\%$ have been reported for the Cu${}_{2}$S-CdS system Hall and Meakin (1979); Bragagnolo et al. (1980). Recently there has been a revival of interest in the electrodeposition of related materials such as CuInSe${}_{2}$ and Cu(In,Ga)Se${}_{2}$ Valdés and Vázquez (2011); Ribeaucourt et al. (2011). We now describe our electrochemical growth procedure. The graphene transferred to glass is contacted using silver epoxy paste. For electrochemical deposition, it is desirable that only a pre-defined area is exposed to the growth solution. This is achieved by masking the graphene with non-conducting epoxy. This epoxy is also found to be helpful in clamping the graphene firmly to the glass. The electrochemical bath consists of 10 mM CuSO${}_{4}\cdot$5H${}_{2}$O (10 mmol of solute per litre of solution), 400 mM Na${}_{2}$S${}_{2}$O${}_{3}\cdot$5H${}_{2}$O and 24 mM EDTA (dihydrate) disodium salt. The freshly prepared solution is aged for about 12 hours while maintaining a pH of around 3.0 $\pm$ 0.2. The pH is lowered to $\sim$2.8 right before the growth starts and no further pH maintenance is done during growth. We observe that the pH slowly rises to about 3.5 over the course of 7 hours, which is our typical growth time. Graphite is used as the counter electrode in our two-electrode set-up. Typical current densities and voltages for deposition are 3$-$5 $\mu$A/mm${}^{2}$ and 0.9$-$1.1 V respectively. In the inset of Fig. 2(a) we show the $I-V$ characteristics of the electrode at a voltage sweep rate of 5.5 mV/s. Data for three sweeps are given. 
From the third sweep onwards, a peak starts developing around -0.25 V during downsweeps, which we believe is indicative of a reaction involving copper. We observe that even after repeated sweeps ($\sim$50) the graphene substrate is found to be unaffected (under an optical microscope), indicating that the deposition process is reversible. For electrodeposition, a constant negative bias is applied to the graphene electrode (cathode). In the electrolyte, Cu${}^{2+}$ ions are present due to the dissociation of CuSO${}_{4}$. Both thiosulfate and EDTA act as complexing agents for copper. The thiosulfate ion is known to reduce Cu${}^{2+}$ to Cu${}^{+}$. The sulfur source is the thiosulfate ion, which releases colloidal sulfur in acidic medium Yukawa et al. (1996); Anuar et al. (2002). A photograph of the grown film is shown in Fig. 2(b). We observe that the adhesion of the copper sulfide film to graphene is strong in spite of the underlying tears and disorder in the graphene. It is known that copper forms a series of sulfides with varying ratios of Cu:S. For photovoltaic applications, copper rich phases, mainly Cu${}_{2}$S, are the preferred ones Bragagnolo et al. (1980); Hall and Meakin (1979); B G Caswell and Woods (1977). In this study, we typically grow films with Cu:S ratios of 1.3$-$1.7. The composition of our films is determined by energy dispersive spectroscopy (EDS). Figure 2(a) shows measured Cu:S ratios as a function of growth current density. At lower current densities (0.1$-$1 $\mu$A/mm${}^{2}$), the coverage on the substrate is sparse and the Cu:S ratio is typically low ($\lesssim$1.4). Photoconductivity is not generally observed in these films. This is expected since many of the copper-poor phases are known to be metallic. As the growth current density is raised, the coverage on the substrate improves and complete coverage is achieved at current densities above $\sim$3 $\mu$A/mm${}^{2}$. In Fig. 2(c) we show a film grown near the optimum current density. 
An underlayer of Cu${}_{x}$S is visible, a magnified image of which is shown in Fig. 2(d). A copper-rich overlayer growth is also typically seen, which becomes more pronounced at higher current densities ($\sim$10 $\mu$A/mm${}^{2}$). At high current densities, a co-deposition of elemental copper is also observed in the overlayer. However, we believe that it is the underlayer that contributes to the measured photoconductivity. As can be inferred from Fig. 2(a), a copper-rich underlayer is conducive to the observation of photoconductivity. Further studies have indicated that a copper-rich underlayer can be obtained by tuning various other parameters including the relative concentration of the bath elements, pH, the presence of complexing agents and deposition voltage. Ideally, during electrodeposition, we expect the copper sulfide film to grow vertically on top of graphene. However, in our experiments we observe that the film also creeps up laterally on top of the underlying non-conducting epoxy mask. In many cases, the microstructure and composition of the lateral growth are observed to be comparable to the underlayer of the main film. For electrical measurements we exploit this lateral growth to make contacts to the sample. Attempts to directly contact the thin film from above resulted in shorts through to the underlying graphene in some cases. In Fig. 3(a) we show a schematic of the contact configuration in our devices. We take silver epoxy contacts from the underlying graphene and the laterally grown copper sulfide. Examples of two-point $I-V$ characteristics are shown in Fig. 3(b). The two traces correspond to two different silver epoxy contacts (A and B) on the same sample. These two contacts differ from each other in terms of their areas and, possibly, Schottky barrier heights. The graphene contact used is the same for both traces. Note that the shape is linear in one case, whereas it is rectifying in the other. 
In general, the range of currents allowed through the different contact pairs also varies widely. We try to understand the shape of these curves and the magnitude of the currents by using a back-to-back diode model as shown in Fig. 3(c). The ideal diode equation is used, $I=I_{sat}[\exp(eV/nkT)-1]$, where $I$ is the current through the diode, $V$ is the voltage drop across the diode, $I_{sat}$ is the reverse saturation current, $kT/e$ is the thermal voltage and $n$ is the non-ideality factor, assumed to be 1.1 in our case. In Fig. 3(d), we show examples of computed curves corresponding to this back-to-back diode model. Note that results comparable to our experimental curves can be reproduced by choosing appropriate values of $I_{sat}$ and $R$. We compare curves taken under various contact configurations and conclude that our results are best explained by assuming that the Cu${}_{x}$S layer is n-doped. We repeat $I-V$ measurements using two Ag/Cu${}_{x}$S junctions as contacts and conclude that they are non-linear in character. The nature of the graphene/Cu${}_{x}$S junctions, however, is unclear. Within our model, it is difficult to distinguish between an ohmic contact and a Schottky contact with a high reverse saturation current. Note that the area of the graphene junction is much larger than that of the silver junction, which will lead to a high $I_{sat}$. We have devices where as much as 1 $\mu$A of current flows at 0.1 V when the graphene junction is reverse biased. This can indicate a Schottky barrier with $I_{sat}\gtrsim$ 1 $\mu$A or an ohmic contact with a resistance of $\sim$100 k$\Omega$. However, in most cases, the resistance of the Cu${}_{x}$S film is present in series with the junction resistances. These series resistances are typically large, probably due to the fact that the laterally grown material has uneven substrate coverage and is prone to micro-cracks. In Fig. 4(a) we show $I-V$ characteristics of our device under dark and light (AM1.5, 1000 W/m${}^{2}$) conditions. 
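The back-to-back diode model described above can be evaluated numerically: at a given applied bias the same current flows through both opposed diodes and the series resistor, the current is pinned between $-I_{sat,1}$ and $+I_{sat,2}$, and the total voltage is monotone in the current, so bisection suffices. A sketch with illustrative (not fitted) parameter values:

```python
import math

def back_to_back_iv(v, isat1, isat2, r_series, n=1.1, vt=0.0259):
    """Current through two opposed ideal diodes D1, D2 in series with a
    resistor, at applied bias v (volts). Uses the ideal diode equation
    I = Isat*[exp(eV/nkT) - 1]; vt = kT/e at room temperature.
    Solved by bisection on the monotone voltage balance
    v = v_D1(i) + v_D2(i) + i*r_series."""
    def total_v(i):
        v1 = n * vt * math.log(1.0 + i / isat1)    # D1 forward-biased for i > 0
        v2 = -n * vt * math.log(1.0 - i / isat2)   # D2 reverse-biased for i > 0
        return v1 + v2 + i * r_series
    # The current is confined to (-Isat1, +Isat2) by the reverse-biased diode
    lo = -isat1 * (1.0 - 1e-12)
    hi = isat2 * (1.0 - 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if total_v(mid) > v:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Sweeping `v` with unequal $I_{sat}$ values reproduces the rectifying shapes, while comparable $I_{sat}$ values with a large `r_series` give the quasi-linear traces, consistent with the qualitative discussion above.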
An anti-clockwise rotation of the curve is observed in response to light, which indicates a decrease in resistance. In this trace one contact is graphene and the other one is Ag-epoxy on laterally grown Cu${}_{x}$S. Two-point measurements with a different contact configuration, where both contacts are taken from laterally grown Cu${}_{x}$S, also show a similar light response. Since the contact area of graphene is much larger than that of the Ag-epoxy, we conclude that the magnitude of the rotation of the $I-V$ curves in response to light is mostly independent of the contact area. Hence, we believe that the major contribution to the observed photoconductivity comes from the bulk of the sample and not just the region surrounding the contacts. In Fig. 4(b), similar data is shown for a film grown on ITO coated glass. This film was grown with a current density of 3 $\mu$A/mm${}^{2}$. Two-point measurements are done with ITO as one contact and Ag-epoxy contacted to laterally grown Cu${}_{x}$S as the other. Note that the responses of both films to light are comparable in spite of the fact that the sheet resistance of ITO is only $\sim$10 $\Omega/sq$. However, the difference between ITO and graphene may be masked by the fact that the overall resistance has a significant contribution from the thin film itself. In Figs. 4(c) and 4(d) we monitor the current through the films grown on graphene as a function of time. This data is shown for sample CuS1, whose $I-V$ characteristics are shown in Fig. 3(b) (graphene/Cu${}_{x}$S/contact A). A constant bias of $\pm$0.1 V is maintained. A white light emitting diode (LED) is turned on and off periodically and the response is recorded. In the inset of Fig. 4(b), we show the photoconductive build-up and decay measured for the Cu${}_{x}$S film grown on ITO substrates. We observe that the response to light evolves over timescales of tens of seconds. 
It is known that traps present in the bulk and at interfaces cause a slow decay of photoconductivity by trapping the minority carriers for long timescales, thereby delaying their recombination with the majority carriers. In some cases, the shape of our photoconductive decay is well-fitted by a stretched exponential. The presence of DX centers and local-potential fluctuations due to compositional inhomogeneities are possible factors which can influence the decay Lin and Jiang (1990); Lin et al. (1990). Note that in Figs. 4(c) and 4(d) the only difference is the sign of the applied bias. The shapes of the decay curves are, however, markedly different. The general observation is that the photoresponse curves are dominated by a fast time scale when certain contacts with low values of $I_{sat}$ are reverse biased. We try to understand the bias dependence of the light response timescales by considering a model in which copper sulfide is assumed to be an n-type semiconductor with two unequal Schottky barriers at the two contacts (Fig. 4(e)). The junction at the left (D1) is assumed to have a lower value of $I_{sat}$ compared to the junction at the right (D2). When a negative bias is applied to D1, most of the external voltage is dropped across the depletion region of D1. The timescales of the photoresponse are thus dictated by the processes in the depletion region. However, when the sign of the bias is reversed, the voltage drops across D1 and D2 are comparable, due to the forward bias and the large value of $I_{sat,D2}$ respectively. The timescales in this case can have a significant contribution from the processes in the bulk. However, if the two junctions have comparable values of $I_{sat}$, the device becomes symmetric. In such cases, our experiments show that the light response curves at positive and negative biases have qualitatively similar features. 
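The stretched exponential form mentioned above, $I(t)=I_{0}\exp[-(t/\tau)^{\beta}]$, can be fitted by a simple linearization rather than a full non-linear fit. A sketch using synthetic data (the $\tau$ and $\beta$ values in the example are illustrative, not those of our films):

```python
import numpy as np

def fit_stretched_exponential(t, i, i0):
    """Fit I(t) = I0 * exp(-(t/tau)**beta) via the linearization
       ln(-ln(I/I0)) = beta*ln(t) - beta*ln(tau),
    valid for 0 < I < I0 and t > 0. Returns (tau, beta)."""
    t = np.asarray(t, dtype=float)
    i = np.asarray(i, dtype=float)
    y = np.log(-np.log(i / i0))
    x = np.log(t)
    # Straight-line fit in the transformed coordinates
    beta, intercept = np.polyfit(x, y, 1)
    tau = np.exp(-intercept / beta)
    return tau, beta
```

Applied to a noiseless synthetic decay generated with known parameters, the linearized fit recovers $\tau$ and $\beta$ exactly; on measured traces it gives starting values that can be refined by a least-squares fit if needed.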
Further studies, including temperature and bias dependent transport measurements, are needed before we can fully understand the processes in our system. Local current density variations during electrochemical growth, owing to the high sheet resistance of graphene as well as the presence of impurities, can cause compositional inhomogeneities in the film which can also influence transport data in the present device configuration. For optoelectronic applications, it is also desirable to anneal the films before characterizing them since electrodeposition is known to introduce considerable disorder. III Conclusion In conclusion, we present a novel method of integrating graphene with photovoltaic device architectures. We employ an electrochemical route to grow a thin film of Cu${}_{x}$S on CVD graphene and investigate its electrical and optical properties. Cu(In,Ga)(S,Se)${}_{2}$ and Cu${}_{2}$ZnSn(S,Se)${}_{4}$, which are known to be near-ideal materials for solar cell applications, also fall into the same family of chalcogenides Bhattacharya and Fernandez (2003); Lincot et al. (2004); Fernández and Bhattacharya (2005); Katagiri et al. (2009); Todorov et al. (2010). Recently, electrochemical growth of CuInSe${}_{2}$ on CNT-based nanocomposite membranes has been reported Kou et al. (2011). The device architecture outlined in our work has many advantages including simplicity, low cost and scalability, thereby opening up new avenues for the integration of graphene with various optoelectronic devices including solar cells. IV Acknowledgement We thank the Korean Institute of Science and Technology for funding and Prof. V. Venkataraman for illuminating discussions. We acknowledge the Department of Science and Technology (DST) for a funded project. S.R. acknowledges support under Grant No. SR/S2/CMP-02/2007. References Dimitrakakis et al. (2008) Dimitrakakis, G. K.; Tylianakis, E.; Froudakis, G. E. Nano Lett. 2008, 8, 3166–3170. Yu and Dai (2010) Yu, D.; Dai, L. J. Phys. Chem. Lett. 
2010, 1, 467–470. Kongkanand et al. (2007) Kongkanand, A.; Martínez Domínguez, R.; Kamat, P. V. Nano Lett. 2007, 7, 676–680. Dissanayake and Zhong (2011) Dissanayake, N. M.; Zhong, Z. Nano Lett. 2011, 11, 286–290. Wu et al. (2004) Wu, Z.; Chen, Z.; Du, X.; Logan, J. M.; Sippel, J.; Nikolou, M.; Kamaras, K.; Reynolds, J. R.; Tanner, D. B.; Hebard, A. F.; Rinzler, A. G. Science 2004, 305, 1273–1276. Pasquier et al. (2005) Pasquier, A. D.; Unalan, H. E.; Kanwal, A.; Miller, S.; Chhowalla, M. Appl. Phys. Lett. 2005, 87, 203511. Barnes et al. (2007) Barnes, T. M.; Wu, X.; Zhou, J.; Duda, A.; van de Lagemaat, J.; Coutts, T. J.; Weeks, C. L.; Britz, D. A.; Glatkowski, P. Appl. Phys. Lett. 2007, 90, 243503. Li et al. (2011) Li, Y.; Hu, Y.; Zhao, Y.; Shi, G.; Deng, L.; Hou, Y.; Qu, L. Adv. Mater. 2011, 23, 776–780. Li et al. (2010) Li, X.; Zhu, H.; Wang, K.; Cao, A.; Wei, J.; Li, C.; Jia, Y.; Li, Z.; Li, X.; Wu, D. Adv. Mater. 2010, 22, 2743–2748. Chen et al. (2011) Chen, C.-C.; Aykol, M.; Chang, C.-C.; Levi, A. F. J.; Cronin, S. B. Nano Lett. 2011, 11, 1863–1867. Wang et al. (2008) Wang, X.; Zhi, L.; Mullen, K. Nano Lett. 2008, 8, 323–327. Hong et al. (2008) Hong, W.; Xu, Y.; Lu, G.; Li, C.; Shi, G. Electrochem. Commun. 2008, 10, 1555–1558. Wu et al. (2008) Wu, J.; Becerril, H. A.; Bao, Z.; Liu, Z.; Chen, Y.; Peumans, P. Appl. Phys. Lett. 2008, 92, 263302. Bi et al. (2011) Bi, H.; Huang, F.; Liang, J.; Xie, X.; Jiang, M. Adv. Mater. 2011, n/a–n/a, doi: 10.1002/adma.201100645. Schlatmann et al. (1996) Schlatmann, A. R.; Floet, D. W.; Hilberer, A.; Garten, F.; Smulders, P. J. M.; Klapwijk, T. M.; Hadziioannou, G. Appl. Phys. Lett. 1996, 69, 1764–1766. Chen et al. (2002) Chen, Z.; Cotterell, B.; Wang, W. Eng. Fract. Mech. 2002, 69, 597–603. Li et al. (2009) Li, X.; Cai, W.; An, J.; Kim, S.; Nah, J.; Yang, D.; Piner, R.; Velamakanni, A.; Jung, I.; Tutuc, E.; Banerjee, S. K.; Colombo, L.; Ruoff, R. S. Science 2009, 324, 1312–1314. Bae and Kim (2010) Bae, S.; Kim, H. Nat. 
Nanotechnol. 2010, 5, 574–578. Wang et al. (2011) Wang, Y.; Tong, S. W.; Xu, X. F.; Özyilmaz, B.; Loh, K. P. Adv. Mater. 2011, 23, 1475–1475. Li et al. (2011) Li, X.; Magnuson, C. W.; Venugopal, A.; Tromp, R. M.; Hannon, J. B.; Vogel, E. M.; Colombo, L.; Ruoff, R. S. J. Am. Chem. Soc. 2011, 133, 2816–2819. Pal et al. (2011) Pal, A. N.; Ghatak, S.; Kochat, V.; Sneha, E. S.; Sampathkumar, A.; Raghavan, S.; Ghosh, A. ACS Nano 2011, 5, 2075–2081. Kochat et al. (2011) Kochat, V.; Pal, A. N.; Sneha, E. S.; Sampathkumar, A.; Gairola, A.; Shivashankar, S. A.; Raghavan, S.; Ghosh, A. J. Appl. Phys. 2011, 110, 014315–014319. Ferrari (2007) Ferrari, A. C. Solid State Commun. 2007, 143, 47 – 57. van der Pauw (1958) van der Pauw, L. Philips Research Reports 1958, 13, 1–9. Wadhwa et al. (2010) Wadhwa, P.; Liu, B.; McCarthy, M. A.; Wu, Z.; Rinzler, A. G. Nano Lett. 2010, 10, 5001–5005. Jia et al. (2011) Jia, Y.; Cao, A.; Bai, X.; Li, Z.; Zhang, L.; Guo, N.; Wei, J.; Wang, K.; Zhu, H.; Wu, D.; Ajayan, P. M. Nano Lett. 2011, 11, 1901–1905. Wu et al. (2008) Wu, Y.; Wadia, C.; Ma, W.; Sadtler, B.; Alivisatos, A. P. Nano Lett. 2008, 8, 2551–2555. Yukawa et al. (1996) Yukawa, T.; Kuwabara, K.; Koumoto, K. Thin Solid Films 1996, 280, 160 – 162. Mane and Lokhande (2000) Mane, R. S.; Lokhande, C. D. Mater. Chem. Phys. 2000, 65, 1 – 31. Anuar et al. (2002) Anuar, K.; Zainal, Z.; Hussein, M. Z.; Saravanan, N.; Haslina, I. Sol. Energy Mater. Sol. Cells 2002, 73, 351 – 365. Hall and Meakin (1979) Hall, R.; Meakin, J. Thin Solid Films 1979, 63, 203 – 211. Bragagnolo et al. (1980) Bragagnolo, J.; Barnett, A.; Phillips, J.; Hall, R.; Rothwarf, A.; Meakin, J. IEEE Trans. Electron Devices 1980, 27, 645 – 651. Valdés and Vázquez (2011) Valdés, M.; Vázquez, M. Electrochim. Acta 2011, 56, 6866 – 6873. Ribeaucourt et al. (2011) Ribeaucourt, L.; Savidand, G.; Lincot, D.; Chassaing, E. Electrochim. Acta 2011, 56, 6628 – 6637. B G Caswell and Woods (1977) B G Caswell, G. J. R.; Woods, J. J. Phys. 
D: Appl. Phys. 1977, 10, 1345–1350. Lin and Jiang (1990) Lin, J. Y.; Jiang, H. X. Phys. Rev. B 1990, 41, 5178–5187. Lin et al. (1990) Lin, J. Y.; Dissanayake, A.; Brown, G.; Jiang, H. X. Phys. Rev. B 1990, 42, 5855–5858. Bhattacharya and Fernandez (2003) Bhattacharya, R. N.; Fernandez, A. M. Sol. Energy Mater. Sol. Cells 2003, 76, 331 – 337. Lincot et al. (2004) Lincot, D. et al. Sol. Energy 2004, 77, 725 – 737. Fernández and Bhattacharya (2005) Fernández, A. M.; Bhattacharya, R. N. Thin Solid Films 2005, 474, 10 – 13. Katagiri et al. (2009) Katagiri, H.; Jimbo, K.; Maw, W. S.; Oishi, K.; Yamazaki, M.; Araki, H.; Takeuchi, A. Thin Solid Films 2009, 517, 2455 – 2460. Todorov et al. (2010) Todorov, T. K.; Reuter, K. B.; Mitzi, D. B. Adv. Mater. 2010, 22, E156–E159. Kou et al. (2011) Kou, H.; Zhang, X.; Jiang, Y.; Li, J.; Yu, S.; Zheng, Z.; Wang, C. Electrochim. Acta 2011, 56, 5575 – 5581.
Spontaneous Symmetry Breaking in Superfluid Helium-4 Junpei Harada E-mail:jharada@apctp.org Asia Pacific Center for Theoretical Physics, Pohang, 790-784, Korea Physics division, History of Science, Kyoto University, Kyoto, 606-8501, Japan (January 16, 2007) Abstract We derive an analytical expression for the critical temperature of spontaneous symmetry breaking in a repulsive hard-core interacting Bose system. We show that this critical temperature is determined by three physical parameters: the density of the Bose liquid at absolute zero ($\rho_{0}$), the mass ($m$) and the hard sphere diameter ($\sigma$) of a boson. The formula that we have derived is $T_{c}=\rho_{0}\pi\hbar^{2}\sigma/m^{2}k_{B}$. We report that the predicted $T_{c}$ of liquid helium-4 is 2.194 K, which is remarkably close to the $\lambda$-temperature of 2.1768 K. The deviation between the predicted and experimental values of the $\lambda$-temperature is less than 1$\%$. pacs: 67.40.-w 1. The year 2008 marks the 100th anniversary of the “Liquid Helium Year”, in which Heike Kamerlingh Onnes first liquefied helium. We are at a truly memorable moment in the history of low temperature physics. Liquid helium has been studied for the past 100 years, and there is no question that it is of central importance in this field. (I would like to draw readers’ attention to Ref. Laesecke:2002 . It presents the first complete English translation of the inaugural speech of Heike Kamerlingh Onnes at the University of Leiden in 1882. Although his speech is not related to helium, it is quite interesting and instructive, and I believe it will attract the interest of readers.) In spite of great efforts, an important problem still remains to be solved. The problem is as follows. Liquid helium-4 undergoes a phase transition, known as the $\lambda$-transition, to a superfluid phase at the $\lambda$-temperature.
The experimental value of the $\lambda$-temperature $T_{\lambda}$ is approximately 2.2 K. The problem is quite simple. Why is $T_{\lambda}\simeq 2.2$ K? What physical parameters determine the value of the $\lambda$-temperature? This is the “Why 2.2 K?” problem, which is the subject of the present paper. It is a long-standing dream in low temperature physics to derive a formula for the liquid helium-4 $\lambda$-temperature. In 1938, Fritz London proposed that the $\lambda$-transition of liquid helium-4 probably has to be regarded as the Bose-Einstein condensation (BEC) London:1938_1 . He calculated the critical temperature of the BEC in an ideal Bose gas: $$\displaystyle T_{BE}=\frac{2\pi\hbar^{2}}{mk_{B}}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},$$ (1) where $m$ is the boson mass, $n$ is the number density of the Bose gas and $\zeta(3/2)=2.612375\cdots$. Thus, the Bose-Einstein temperature $T_{BE}$ is determined by two physical parameters, $m$ and $n$. London reported that $T_{BE}$ for helium is 3.13 K London:1938_2 , which is the same order of magnitude as the $\lambda$-temperature of 2.18 K. His proposal is quite important, because it was the first suggestion that the $\lambda$-transition was intrinsically related to the Bose-Einstein condensation. Recall that it was not clear in the 1930s whether the BEC was an actual physical phenomenon London:1938_2 . Although London’s work was a revolutionary advance, there is a difference of approximately 0.95 K between his theoretical prediction and the experimental value. This deviation is approximately 44$\%$, and it is not negligible. It is widely believed that this discrepancy arises from the neglect of interactions between helium atoms in his theoretical calculation. This conjecture is quite reasonable, because liquid helium is not an ideal Bose gas but a repulsive interacting Bose system. However, in general, it is difficult to include interactions between atoms in analytical calculations of the BEC.
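London's estimate from Eq. (1) is easy to reproduce numerically. The sketch below is illustrative and not part of the original paper; the SI constants and the helium number density (computed from the zero-temperature liquid density $\rho_{0}=0.1451$ g/cm${}^{3}$ quoted later in this paper) are assumed inputs, so the result agrees with London's 3.13 K only at the percent level.

```python
import math

# Physical constants (SI); assumed values, not quoted in the paper.
hbar = 1.054571817e-34   # reduced Planck constant, J s
k_B = 1.380649e-23       # Boltzmann constant, J/K
zeta_3_2 = 2.612375      # Riemann zeta(3/2), quoted in the text

# Helium-4 inputs (assumed): atomic mass and liquid density at absolute zero.
m = 6.6465e-27           # kg
rho0 = 145.1             # kg/m^3  (0.1451 g/cm^3)
n = rho0 / m             # number density, m^-3

# Eq. (1): ideal-gas Bose-Einstein condensation temperature.
T_BE = (2 * math.pi * hbar**2 / (m * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)
print(T_BE)  # ~3.1 K, in line with London's 3.13 K
```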
Although this problem has been discussed Williams:1995 , it remains very important. In this paper, we study the “Why 2.2 K?” problem via an alternative approach. The key concept in our approach is spontaneous symmetry breaking. We derive an analytical formula for the critical temperature of spontaneous symmetry breaking in a repulsive hard-core interacting Bose system. We show that the critical temperature is determined by three physical parameters: the density of the Bose liquid at absolute zero ($\rho_{0}$), the mass ($m$) and the hard sphere diameter ($\sigma$) of a boson. Our formula predicts that the critical temperature of liquid helium-4 is 2.194 K, which is remarkably close to the experimental value of the $\lambda$-temperature ($T_{\lambda}$ = 2.1768 K Donnelly:1998 ). 2. Spontaneous symmetry breaking is the breaking of a symmetry by the ground state of a system. The symmetry can be discrete or continuous, and local or global. Spontaneous symmetry breaking is a phenomenon that occurs in many systems. In condensed matter physics, ferromagnetism is a primary example. For superconductivity, the importance of spontaneous symmetry breaking was first emphasized by Yoichiro Nambu Nambu:1960tm . In particle physics, the electroweak symmetry breaking is established in the Standard Model. In cosmology, a number of spontaneous symmetry breakings play an important role in the history of the early universe. Furthermore, the recently proposed ghost condensation Arkani-Hamed:2003uy in the accelerating universe can be regarded as a kind of spontaneous symmetry breaking. Thus, spontaneous symmetry breaking is a very important concept in modern physics. Spontaneous symmetry breaking is crucial in liquid helium-4. In short, the $\lambda$-transition is a spontaneous symmetry breaking. This is the basis of our approach. We derive a formula for the critical temperature of spontaneous symmetry breaking in the framework of effective field theory described by the order parameters.
In the case of liquid helium-4, the order parameter is a macroscopic wavefunction $\varphi$, which is a one-component complex scalar field with both amplitude and phase. We begin with the theory described by this scalar field $\varphi$. 3. The Lagrangian density ${\cal L}(={\cal K}-{\cal V})$ of the system is given by the nonrelativistic Goldstone model of the form $$\displaystyle{\cal V}=-\mu\varphi^{*}\varphi+\frac{\lambda}{2}(\varphi^{*}\varphi)^{2},\qquad\lambda=\frac{2\pi\hbar^{2}\sigma}{m}.$$ (2) The kinetic part ${\cal K}=i\hbar\varphi^{*}\partial_{t}\varphi+\hbar^{2}\varphi^{*}\nabla^{2}\varphi/2m$ is not important here, and therefore we concentrate on the potential ${\cal V}$. In Eq. (2), $\mu$ is the chemical potential, $\lambda$ is the coupling constant for helium interactions and $\varphi^{*}$ is the complex conjugate of $\varphi$. The interaction between helium atoms is repulsive at short distances (hard-core interactions). Therefore, the coupling constant $\lambda=2\pi\hbar^{2}\sigma/m$ is positive, where $m$ and $\sigma$ are the mass and the hard sphere diameter of a helium atom, respectively. We first consider the physical dimensions of the parameters of the potential (2). In units of $\hbar=1$ and $k_{B}=1$ ($\hbar$ is the reduced Planck’s constant and $k_{B}$ is the Boltzmann constant), for any physical quantity $Q$, its physical dimension $[Q]$ is written as the product of the dimensions of length and temperature: $$\displaystyle[Q]=[\mbox{Length}]^{\alpha}[\mbox{Temperature}]^{\beta},$$ (3) where $\alpha$ and $\beta$ are numerical constants. The physical dimension of the Lagrangian density is $[{\cal L}]=[\mbox{L}]^{-3}[\mbox{T}]$, and that of the scalar field $\varphi$ is $[\varphi]=[\mbox{L}]^{-3/2}$. (Here $[\mbox{L}]$ and $[\mbox{T}]$ represent $[\mbox{Length}]$ and $[\mbox{Temperature}]$, respectively.) Hence it is straightforward to derive the physical dimensions of the parameters in Eq.
(2): $$\displaystyle[\mu]=[\mbox{T}],\ [\lambda]=[\mbox{L}]^{3}[\mbox{T}],\ [m]=[\mbox{L}]^{-2}[\mbox{T}]^{-1},\ [\sigma]=[\mbox{L}].$$ (4) Note here that in the present case the coupling constant $\lambda$ is a dimensionful parameter, in contrast to the relativistic case, in which $\lambda$ is dimensionless. These physical dimensions help us to understand the physics of liquid helium-4. We next consider the symmetry of the Lagrangian. The potential ${\cal V}$ of Eq. (2) is invariant under the U(1) phase transformation of the field $\varphi$: $\varphi\rightarrow e^{i\theta}\varphi$, $\varphi^{*}\rightarrow e^{-i\theta}\varphi^{*}$. Furthermore, if this is a global transformation, in which the transformation parameter $\theta$ does not depend on the space and time coordinates, the kinetic part ${\cal K}$ is also invariant under the U(1) phase transformation. Hence, the system described by the Lagrangian density ${\cal L}$ has a global U(1)$\simeq$O(2) symmetry. This global U(1) symmetry is spontaneously broken at low temperature, as we will see below. 4. The ground state of a system is given by solving the condition $\partial{\cal V}/\partial\varphi=0$. First, we consider the case that the chemical potential $\mu$ is non-positive ($\mu\leq 0$). In this case, there is only the trivial solution $\varphi=0$. This solution is the stable ground state, because $\partial^{2}{\cal V}/\partial\varphi^{*}\partial\varphi|_{\varphi=0}\geq 0$. Hence the global U(1) symmetry is unbroken in this case. Second, we consider the case that $\mu$ is positive ($\mu>0$). In this case, in contrast to the case $\mu\leq 0$, there is a nontrivial solution in addition to the trivial solution $\varphi=0$: $$\displaystyle|\varphi|=\sqrt{\frac{\mu}{\lambda}}.$$ (5) Fig. 1 shows the potential ${\cal V}$ in the case $\mu>0$. In the present case, the trivial solution $\varphi=0$ is unstable and is not the ground state of the system.
The nontrivial solution $|\varphi|=\sqrt{\mu/\lambda}$ is the stable ground state, because the curvature of the potential at the two solutions satisfies $$\displaystyle\frac{\partial^{2}{\cal V}}{\partial\varphi^{*}\partial\varphi}\Big{|}_{\varphi=0}<0,\quad\frac{\partial^{2}{\cal V}}{\partial\varphi^{*}\partial\varphi}\Big{|}_{|\varphi|=\sqrt{\mu/\lambda}}>0.$$ (6) Therefore, if the chemical potential $\mu$ is positive ($\mu>0$), the global U(1) symmetry is spontaneously broken. From the above arguments, we arrive at the following picture. A physical system undergoes a phase transition at a critical temperature $T_{c}$ if the chemical potential satisfies the conditions: $\mu\leq 0\ (T\geq T_{c})$, $\mu>0\ (T<T_{c})$. Spontaneous symmetry breaking occurs only if the chemical potential becomes positive. Therefore, in contrast to the Bose-Einstein condensation, spontaneous symmetry breaking never occurs in an ideal Bose gas, in which the chemical potential is always non-positive ($\mu\leq 0$). This is a crucial difference between spontaneous symmetry breaking and the standard Bose-Einstein condensation. 5. Before deriving a formula for the critical temperature, we consider the temperature dependence of the chemical potential $\mu$. Although the temperature dependence of $\mu$ is not necessary for deriving the formula, it is useful for understanding the phase transition. For this reason, we consider it in the following. The quantity $|\varphi|^{2}$ is equivalent to the number density $n$ of the superfluid. Therefore, the following relation is satisfied below the $\lambda$-temperature: $$\displaystyle|\varphi|=\sqrt{\frac{\mu}{\lambda}}=\sqrt{n}=\sqrt{\frac{\rho_{s}}{m}},$$ (7) where $\rho_{s}$ is the superfluid density and $m$ is the boson mass. From this relation, the superfluid density is given by $$\displaystyle\rho_{s}=\frac{m\mu}{\lambda}.$$ (8) This expression indicates that the temperature dependence of $\mu$ is obtained from that of $\rho_{s}$.
Here we take a minimal model: $$\displaystyle\frac{\rho_{s}}{\rho}=1-\left(\frac{T}{T_{\lambda}}\right)^{6},$$ (9) where $\rho(=\rho_{n}+\rho_{s})$ is the total density of liquid helium, and $\rho_{n}$, $\rho_{s}$ are the normal fluid and superfluid densities, respectively. It should be emphasized that this model does not affect the formula for the critical temperature. Fig. 2 shows the superfluid density ratio as a function of temperature. Two functions, $1-(T/T_{\lambda})^{5}$ and $1-(T/T_{\lambda})^{7}$, are plotted for comparison. Fig. 2 shows that Eq. (9) and the experimental values are consistent except near the $\lambda$-transition, at which the superfluid density ratio is of the form $(1-T/T_{\lambda})^{2/3}$. The form of Eq. (9) is the same as that of the Bose-Einstein condensation in an ideal Bose gas, $1-(T/T_{c})^{3/2}$. Although the exponent “6” has not yet been derived, we do not consider this problem here. From Eqs. (8) and (9), we obtain the expression $$\displaystyle\mu=\mu_{0}\left(1-\left(\frac{T}{T_{\lambda}}\right)^{6}\right),\quad\mu_{0}=\frac{\lambda\rho_{0}}{m},$$ (10) where $\mu_{0}$ is the chemical potential at absolute zero and $\rho_{0}$ is the total density at absolute zero. We have used the approximation $\rho\simeq\rho_{0}$ in Eq. (10), because the total density $\rho$ is approximately constant below the $\lambda$-temperature. Therefore, Eq. (10) is a good approximation at low temperature ($T<1.7$ K). Fig. 3 shows the schematic temperature dependence of the potential ${\cal V}$. For $T>T_{\lambda}$, the potential has only one minimum, at $\varphi=0$, and the curvature at $\varphi=0$ is positive. When $T=T_{\lambda}$, the curvature at the minimum is zero, $\partial^{2}{\cal V}/\partial\varphi^{*}\partial\varphi|_{\varphi=0}=0$. For $T<T_{\lambda}$, the curvature at $\varphi=0$ is negative and the scalar field rolls down to the $|\varphi|\not=0$ minimum.
Thus, there is no potential barrier between the minima at $\varphi=0$ and $|\varphi|\not=0$, indicating a second-order phase transition. 6. We now derive a formula for the critical temperature of spontaneous symmetry breaking. When $T=0$, the field $\varphi$ is located at the nonzero minimum $|\varphi|\not=0$. Hence the global U(1) symmetry is broken in this phase. As the temperature increases above the critical temperature, the potential has only one minimum, at $\varphi=0$, and the global U(1) symmetry is recovered. From these observations, we reach the following conclusion: the depth of the potential well at absolute zero determines the critical temperature. When $T=0$, the depth of the potential well is given by $$\displaystyle-{\cal V}(|\varphi|=\sqrt{\mu_{0}/\lambda})=\frac{\mu_{0}n_{0}}{2},$$ (11) where $n_{0}=\rho_{0}/m$ is the number density of the Bose liquid at absolute zero. This quantity $\mu_{0}n_{0}/2$ represents the energy density required to recover the global U(1) symmetry of the system. Therefore, the corresponding thermal energy density $k_{B}T_{c}n_{0}$ is equivalent to $\mu_{0}n_{0}/2$. Consequently, we obtain the following relation: $$\displaystyle k_{B}T_{c}=\frac{\mu_{0}}{2}.$$ (12) Fig. 4 shows the present situation. From the above arguments, we have the following relations: $$\displaystyle k_{B}T_{c}=\frac{\mu_{0}}{2},\quad\frac{\rho_{0}}{m}=\frac{\mu_{0}}{\lambda},\quad\lambda=\frac{2\pi\hbar^{2}\sigma}{m}.$$ (13) Therefore, the critical temperature of spontaneous symmetry breaking in a repulsive interacting Bose system is $$\displaystyle T_{c}=\frac{\rho_{0}\pi\hbar^{2}\sigma}{m^{2}k_{B}},$$ (14) where $\rho_{0}$ is the density of the Bose liquid at absolute zero, and $m$ and $\sigma$ are the mass and the hard sphere diameter of a boson, respectively. Substituting the values of liquid helium-4 into Eq.
(14), $\rho_{0}=0.1451$ g/cm${}^{3}$ Donnelly:1998 , $m=6.6465\times 10^{-24}$ g, and $\sigma=2.639$ Å Hurly:2000 ; Janzen:1997 , we obtain the value $T_{c}=2.194$ K. This prediction is remarkably close to the experimental value of the $\lambda$-temperature, 2.1768 K Donnelly:1998 . The deviation $\Delta$ between the predicted and experimental values of the $\lambda$-temperature is less than $1\%$: $$\displaystyle\Delta\equiv 100\times\frac{T_{c}^{(theory)}-T_{\lambda}^{(exp)}}{T_{\lambda}^{(exp)}}\simeq 0.8\%<1\%.$$ (15) Thus, our prediction is much closer to the experimental value than London’s. This is because the present approach includes the repulsive interactions between helium atoms in the derivation of the formula. While the Bose-Einstein temperature $T_{BE}$ in an ideal Bose gas is determined by two physical parameters, our formula for the critical temperature, Eq. (14), is determined by three physical parameters: the density of the Bose liquid at absolute zero ($\rho_{0}$), and the mass ($m$) and the hard sphere diameter ($\sigma$) of a boson. The additional physical parameter $\sigma$ carries information about the interactions, and it significantly improves the theoretical prediction for the critical temperature. The results are summarized in Table 1. Finally, we comment on the hard sphere diameter $\sigma$. Although $\rho_{0}$ and $m$ have been determined precisely, the uncertainty in $\sigma$ is relatively large. The hard sphere diameter $\sigma$ is the point at which the interatomic potential $V(r)$ is zero ($V(\sigma)=0$), and the distance $r_{m}$ is the point at which the potential $V(r)$ is minimum ($\partial V(r)/\partial r|_{r=r_{m}}=0$). Although the value of $r_{m}$ has been reported by many groups Janzen:1995 , values of $\sigma$ have been reported less often.
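The substitution into Eq. (14) and the deviation (15) can be reproduced with a few lines. The sketch below is illustrative and not part of the original paper; the SI values of $\hbar$ and $k_{B}$ are assumed, while $\rho_{0}$, $m$, $\sigma$ and $T_{\lambda}$ are the inputs quoted above.

```python
import math

# Physical constants (SI); assumed values, not quoted in the paper.
hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J/K

# Helium-4 inputs quoted in the text.
rho0 = 145.1             # kg/m^3  (0.1451 g/cm^3)
m = 6.6465e-27           # kg      (6.6465e-24 g)
sigma = 2.639e-10        # m       (2.639 angstrom)
T_lambda = 2.1768        # K       (experimental lambda-temperature)

# Eq. (14): T_c = rho0 * pi * hbar^2 * sigma / (m^2 * k_B)
T_c = rho0 * math.pi * hbar**2 * sigma / (m**2 * k_B)

# Eq. (15): percentage deviation from experiment.
delta = 100 * (T_c - T_lambda) / T_lambda
print(T_c, delta)  # ~2.194 K and a deviation of ~0.8%
```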
Furthermore, the relation $r_{m}=2^{1/6}\sigma$ of the (12,6) Lennard-Jones potential model is not applicable to the present study, because it is not a sufficiently good approximation. For these reasons, we plot the $\lambda$-temperature as a function of $\sigma$. Fig. 5 shows that the deviation between the predicted and experimental values is less than $\pm 1\%$ for the region $2.592$ Å$\ \leq\sigma\leq 2.644$ Å. 7. In conclusion, we have derived an analytical formula for the critical temperature of spontaneous symmetry breaking in a repulsive interacting Bose system. The formula that we have derived, Eq. (14), is a simple analytical expression that predicts $T_{\lambda}\simeq 2.2$ K. We hope that this work contributes to the progress of low temperature physics. References (1) A. Laesecke, “Through measurement to knowledge: the inaugural lecture of Heike Kamerlingh Onnes (1882),” J. Res. Natl. Inst. Stand. Technol. 107(3), 261 (2002). (2) F. London, “The $\lambda$-phenomenon of liquid helium and the Bose-Einstein degeneracy,” Nature, 141, 643 (1938). (3) F. London, “On the Bose-Einstein condensation,” Phys. Rev. 54, 947 (1938). (4) G. A. Williams, “Specific Heat and Superfluid Density of Bulk and Confined ${}^{4}$He Near the $\lambda$-transition,” J. Low Temp. Phys. 101, 421 (1995). (5) R. J. Donnelly and C. F. Barenghi, “The Observed Properties of Liquid Helium at the Saturated Vapor Pressure,” J. Phys. Chem. Ref. Data, Vol. 27, No. 6 (1998). (6) Y. Nambu, “Quasi-particles and gauge invariance in the theory of superconductivity,” Phys. Rev. 117, 648 (1960). (7) N. Arkani-Hamed, H. C. Cheng, M. A. Luty and S. Mukohyama, “Ghost condensation and a consistent infrared modification of gravity,” JHEP 0405, 074 (2004). (8) J. J. Hurly and M. R. Moldover, “Ab Initio Values of the Thermophysical Properties of Helium as Standards,” J. Res. Natl. Inst. Stand. Technol. 105(5), 667 (2000). (9) A. R. Janzen and R. A.
Aziz, “An accurate potential energy curve for helium based on ab initio calculations” J. Chem. Phys. 107, 914 (1997). (10) A. R. Janzen and R. A. Aziz, “Modern He-He potentials: Another look at binding energy, effective range theory, retardation and Efimov states” J. Chem. Phys. 103, 9626 (1995).
The Kerr-Schild Double Copy in Lifshitz Spacetime Gökhan Alkaç gokhanalkac@hacettepe.edu.tr Physics Engineering Department, Faculty of Engineering, Hacettepe University, 06800, Ankara, Turkey    Mehmet Kemal Gümüş kemal.gumus@metu.edu.tr Department of Physics, Faculty of Arts and Sciences, Middle East Technical University, 06800, Ankara, Turkey    Mustafa Tek mustafa.tek@medeniyet.edu.tr Department of Physics Engineering, Faculty of Engineering and Natural Sciences, Istanbul Medeniyet University, 34000, Istanbul, Turkey Abstract The Kerr-Schild double copy is a map between exact solutions of general relativity and Maxwell’s theory, where the nonlinear nature of general relativity is circumvented by considering solutions in the Kerr-Schild form. In this paper, we give a general formulation, where no simplifying assumption about the background metric is made, and show that the gauge theory source is affected by a curvature term that characterizes the deviation of the background spacetime from a constant curvature spacetime. We demonstrate this effect explicitly by studying gravitational solutions with non-zero cosmological constant. We show that, when the background is flat, the constant charge density filling all space in the gauge theory that has been observed in previous works is a consequence of this curvature term. As an example of a solution with a curved background, we study the Lifshitz black hole with two different matter couplings. The curvature of the background, i.e., the Lifshitz spacetime, again yields a constant charge density; however, unlike the previous examples, it is canceled by the contribution from the matter fields. For one of the matter couplings, there remains no additional non-localized source term, providing an example for a non-vacuum gravity solution corresponding to a vacuum gauge theory solution in arbitrary dimensions.
Contents 1 Introduction 2 General Formulation 3 Maximally Symmetric Background Spacetime 3.1 AdS${}_{4}$ Spacetime around Minkowski Background 3.2 Banados-Teitelboim-Zanelli (BTZ) Black Hole 3.3 Three-dimensional Rotating Black hole 3.4 Schwarzschild-AdS${}_{4}$ Black Hole 3.5 Reissner-Nordström-AdS${}_{4}$ Black Hole 4 Lifshitz Black Holes 4.1 Lifshitz Black Hole from a Massless Scalar and a Gauge Field 4.2 Lifshitz Black Hole from a Massive Vector and a Gauge Field 5 Summary and Discussions A Maximally Symmetric Spacetimes and the Deviation Tensor 1   Introduction The classical double copy is an extension of ideas discovered in the study of scattering amplitudes Bern:2010ue ; Bern:2010yg to classical solutions of general relativity and gauge theories. The data from the amplitudes suggest a $\text{gravity}=(\text{Yang-Mills})^{2}$-type relationship where the graviton amplitudes form a double copy of the gluon amplitudes of two Yang-Mills theories that are called single copies. In general, it is possible to relate the classical solutions at a fixed order in perturbation theory Luna:2016hge ; Goldberger:2016iau ; Goldberger:2017frp ; Goldberger:2017vcg ; Goldberger:2017ogt ; Shen:2018ebu ; Carrillo-Gonzalez:2018pjk ; Plefka:2018dpa ; Plefka:2019hmz ; Goldberger:2019xef ; PV:2019uuv ; Anastasiou:2014qba ; Borsten:2015pla ; Anastasiou:2016csv ; Cardoso:2016ngt ; Borsten:2017jpt ; Anastasiou:2017taf ; Anastasiou:2018rdx ; LopesCardoso:2018xes ; Luna:2020adi ; Borsten:2020xbt ; Borsten:2020zgj ; Luna:2017dtq ; Kosower:2018adc ; Maybee:2019jus ; Bautista:2019evw ; Bautista:2019tdr ; Cheung:2018wkq ; Bern:2019crd ; Bern:2019nnu ; Bern:2020buy ; Kalin:2019rwq ; Kalin:2020mvi ; Almeida:2020mrg ; Godazgar:2020zbv ; Chacon:2020fmr ; however, for a certain class of spacetimes, a simpler form can be achieved where exact solutions of general relativity are mapped to gauge theory solutions Monteiro:2014cda ; Luna:2015paa ; Luna:2016due ; Carrillo-Gonzalez:2017iyj ; 
Bahjat-Abbas:2017htu ; Berman:2018hwd ; Bah:2019sda ; CarrilloGonzalez:2019gof ; Banerjee:2019saj ; Ilderton:2018lsf ; Monteiro:2018xev ; Luna:2018dpt ; Lee:2018gxc ; Cho:2019ype ; Kim:2019jwm ; Alfonsi:2020lub ; White:2016jzc ; DeSmet:2017rve ; Bahjat-Abbas:2020cyb ; Elor:2020nqe ; Gumus:2020hbb ; Keeler:2020rcv ; Arkani-Hamed:2019ymq ; Huang:2019cja ; Alawadhi:2019urr ; Moynihan:2019bor ; Alawadhi:2020jrv ; Easson:2020esh ; Casali:2020vuy ; Cristofoli:2020hnk . Recently, a particular version, the so-called Weyl Double Copy Luna:2018dpt , was derived through the ideas from twistor theory White:2020sfn , implying a much deeper and general relation than previously thought. In the pioneering work Monteiro:2014cda , the map was obtained by considering solutions of general relativity which can be written in the Kerr-Schild (KS) form with the flat background metric. The fact that the Ricci tensor with mixed indices becomes linear in the perturbation for such solutions provides a natural way to map them to solutions of Maxwell’s theory defined on the flat spacetime. A natural extension is to consider spacetimes with non-flat background metrics, which was first studied in Luna:2015paa . Later, a more systematic analysis was given in Bahjat-Abbas:2017htu and it was shown that there exist two different ways to realize the double copy structure when the background metric is curved, called Type-A and Type-B double copies. In the Type-A double copy, one maps both the background and the perturbation by using the flat metric as the base. Alternatively, in the Type-B double copy, only the perturbation is mapped by taking the base metric as that of the background spacetime, yielding solutions of Maxwell’s theory defined on the curved background. A wide range of examples with constant curvature background was presented in Carrillo-Gonzalez:2017iyj where the authors showed the crucial role played by the Killing vectors in the construction. 
For the stationary solutions, the contraction of the gravity equations with the time-like Killing vector was used, which is essentially checking the $\mu 0$-components of the trace-reversed equations as done previously. More non-trivial evidence was obtained from the wave solutions where the contraction with the null Killing vector yielded a reasonable single copy. The linearity of the Ricci tensor in the perturbation, which is the crucial property that makes the whole construction work, holds in the case of a generic curved background spacetime. Motivated by this, in Section 2, we will give a general formulation of the KS double copy without any simplifying assumption about the background metric. With the assumption that some redundant terms vanish, one obtains Maxwell’s equation defined on the curved background where the source term gets a contribution from the curvature of the background, which vanishes for a constant curvature spacetime, in addition to the energy-momentum tensor in the gravity side. In order to see the implications, we will study different solutions of general relativity with a cosmological constant. In Section 3, solutions with a maximally symmetric background will be examined. When the background is chosen to be of constant curvature, there is no effect on the source. However, choosing a flat background leads to a constant charge density filling all space. While it has been observed before, our formalism explicitly demonstrates that this is due to the deviation of the background from a constant curvature spacetime. In Section 4, in order to exhibit the effect of a curved background, we will consider the Lifshitz black hole with two different matter couplings. 2   General Formulation In this section, we give a general formulation of the KS double copy in curved spacetime. 
For that, we will consider classical solutions of cosmological general relativity minimally coupled to matter, which is described by the action $$S=\frac{1}{16\pi G_{d}}\int\text{d}^{d}x\,\sqrt{-g}\,\left[R-2\Lambda+\mathcal{L}_{m}\right],$$ (2.1) where $G_{d}$ is the $d$-dimensional Newton’s constant, $\Lambda$ is the cosmological constant and $\mathcal{L}_{m}$ is the matter part of the Lagrangian density. The field equations arising from the action (2.1) are $$G_{\mu\nu}+\Lambda\,g_{\mu\nu}=T_{\mu\nu}.$$ (2.2) For the KS double copy, one needs the trace-reversed equations with mixed indices $$R^{\mu}_{\ \nu}-\frac{2\,\Lambda}{d-2}\,\delta^{\mu}_{\ \nu}=\widetilde{T}^{\mu}_{\ \nu},$$ (2.3) where the matter contribution is given by $$\widetilde{T}^{\mu}_{\ \nu}=T^{\mu}_{\ \nu}-\frac{1}{d-2}\,\delta^{\mu}_{\ \nu}\,T.$$ (2.4) For a metric in the KS form, $$g_{\mu\nu}=\bar{g}_{\mu\nu}+\phi\,k_{\mu}k_{\nu},$$ (2.5) where the vector $k_{\mu}$ is null and geodesic with respect to both the background and the full metric as $$\bar{g}^{\mu\nu}k_{\mu}k_{\nu}=g^{\mu\nu}k_{\mu}k_{\nu}=0,\qquad\qquad k^{\nu}\bar{\nabla}_{\nu}k^{\mu}=k^{\nu}\nabla_{\nu}k^{\mu}=0,$$ (2.6) the Ricci tensor with mixed indices becomes linear in the perturbation as follows Stephani:2003tm $$R^{\mu}_{\ \nu}=\bar{R}^{\mu}_{\ \nu}-\phi\,k^{\mu}k^{\alpha}\bar{R}_{\alpha\nu}+\frac{1}{2}\left[\bar{\nabla}^{\alpha}\bar{\nabla}^{\mu}\left(\phi\,k_{\alpha}k_{\nu}\right)+\bar{\nabla}^{\alpha}\bar{\nabla}_{\nu}\left(\phi\,k^{\mu}k_{\alpha}\right)-\bar{\nabla}^{2}\left(\phi\,k^{\mu}k_{\nu}\right)\right].$$ (2.7) Since the aim is to obtain Maxwell’s equations in the background spacetime, we rewrite the Ricci tensor in the KS coordinates (2.7) by using the gauge field $A_{\mu}\equiv\phi\,k_{\mu}$ as $$R^{\mu}_{\ \nu}=\bar{R}^{\mu}_{\ \nu}-\frac{1}{2}\left[\bar{\nabla}_{\alpha}F^{\alpha\mu}k_{\nu}+E^{\mu}_{\ \nu}\right],$$ (2.8) where $F_{\mu\nu}=2\,\bar{\nabla}_{[\mu}A_{\nu]}$ is the field strength tensor and
$$E^{\mu}_{\ \nu}=X^{\mu}_{\ \nu}+Y^{\mu}_{\ \nu}-\bar{R}^{\mu}_{\ \alpha\beta\nu}A^{\alpha}k^{\beta}+\bar{R}_{\alpha\nu}A^{\alpha}k^{\mu},$$ (2.9) with $X^{\mu}_{\ \nu}$ and $Y^{\mu}_{\ \nu}$ given by $$X^{\mu}_{\ \nu}=-\bar{\nabla}_{\nu}\left[A^{\mu}\left(\bar{\nabla}_{\alpha}k^{\alpha}+\frac{k^{\alpha}\bar{\nabla}_{\alpha}\phi}{\phi}\right)\right]\,,$$ (2.10) $$Y^{\mu}_{\ \nu}=F^{\alpha\mu}\bar{\nabla}_{\alpha}k_{\nu}-\bar{\nabla}_{\alpha}\left(A^{\alpha}\bar{\nabla}^{\mu}k_{\nu}-A^{\mu}\bar{\nabla}^{\alpha}k_{\nu}\right)\,.$$ (2.11) Using this form of the Ricci tensor (2.8) in the trace-reversed equations (2.3) gives $$\Delta^{\mu}_{\ \nu}-\frac{1}{2}\left[\bar{\nabla}_{\alpha}F^{\alpha\mu}k_{\nu}+E^{\mu}_{\ \nu}\right]=\widetilde{T}^{\mu}_{\ \nu},$$ (2.12) where we introduce the deviation tensor $$\Delta^{\mu}_{\ \nu}=\bar{R}^{\mu}_{\ \nu}-\frac{2\,\Lambda}{d-2}\,\delta^{\mu}_{\ \nu},$$ (2.13) which vanishes for a constant curvature spacetime if the cosmological constant $\Lambda$ is appropriately chosen, and therefore, characterizes the deviation of the background spacetime from a spacetime with constant curvature (see Appendix for more explanation). In order to solve for the field strength term, we consider the contraction of this equation (2.12) with a Killing vector $V^{\nu}$ of both the background and the full metric, i.e., $$\nabla_{(\mu}V_{\nu)}=\bar{\nabla}_{(\mu}V_{\nu)}=0,$$ (2.14) which gives the single copy equation as $$\bar{\nabla}_{\nu}F^{\nu\mu}+E^{\mu}=J^{\mu},$$ (2.15) where the extra part is $$E^{\mu}=\frac{1}{V\cdot k}\,E^{\mu}_{\ \nu}\,V^{\nu},$$ (2.16) and the gauge theory source is given by $$J^{\mu}=2\left[\Delta^{\mu}-\widetilde{T}^{\mu}\right],$$ (2.17) with $$\Delta^{\mu}=\frac{1}{V\cdot k}\,\Delta^{\mu}_{\ \nu}V^{\nu},\qquad\qquad\widetilde{T}^{\mu}=\frac{1}{V\cdot k}\,\widetilde{T}^{\mu}_{\ \nu}V^{\nu},$$ (2.18) which are the contributions from the background spacetime and the matter part of the Lagrangian respectively. 
Contracting the single copy equation (2.15) with the Killing vector $V^{\mu}$, one obtains the zeroth copy equation as $$\displaystyle\bar{\nabla}^{2}\phi+\mathcal{Z}+\mathcal{E}=j,$$ (2.19) where $$\mathcal{Z}=\frac{V\cdot Z}{V\cdot k},\qquad\qquad\qquad\mathcal{E}=\frac{V\cdot E}{V\cdot k},\qquad\qquad\qquad j=\frac{V\cdot J}{V\cdot k},$$ (2.20) with vectors $E^{\mu}$ and $J^{\mu}$ given in (2.16 - 2.17) and $$Z^{\mu}=\bar{\nabla}_{\alpha}k^{\mu}\,\bar{\nabla}^{\alpha}\phi+\bar{\nabla}_{\alpha}\left[2\phi\bar{\nabla}^{[\alpha}k^{\mu]}-k^{\alpha}\bar{\nabla}^{\mu}\phi\right].$$ (2.21) For any solution of the gravitational field equations (2.2) that can be written in the KS form (2.5), the gauge field $A_{\mu}=\phi\,k_{\mu}$ solves the single copy equation (2.15) and the scalar $\phi$ solves the zeroth copy equation (2.19). In this paper, we will study black hole solutions in the KS form by using the time-like Killing vector111In Carrillo-Gonzalez:2017iyj , it was shown that the wave-type solutions with maximally symmetric background metrics can be studied by choosing a null Killing vector. $V^{\mu}=\delta^{\mu}_{\ 0}$. For the examples that we will consider in this paper, one has $$V\cdot k=1,\qquad E^{\mu}=E^{\mu}_{\ 0}=0,\qquad\mathcal{E}=E^{0}=0,\qquad\Delta^{\mu}=\Delta^{\mu}_{\ 0},\qquad\widetilde{T}^{\mu}=\widetilde{T}^{\mu}_{\ 0},$$ (2.22) and the single copy and the zeroth copy equations become Maxwell’s and Poisson’s equations $$\displaystyle\bar{\nabla}_{\nu}F^{\nu\mu}$$ $$\displaystyle=$$ $$\displaystyle J^{\mu},$$ $$\displaystyle\bar{\nabla}^{2}\phi+\mathcal{Z}$$ $$\displaystyle=$$ $$\displaystyle j,$$ (2.23) where the source terms are given by $$J^{\mu}=2\left[\Delta^{\mu}-\widetilde{T}^{\mu}\right],\qquad j=J_{0}=\bar{g}_{0\mu}J^{\mu},$$ (2.24) and $$\mathcal{Z}=Z_{0}=\bar{g}_{0\mu}Z^{\mu},$$ (2.25) with $Z^{\mu}$ given in (2.21). 
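As a small self-consistency exercise (ours, not part of the paper's derivation), the trace-reversed form (2.3)-(2.4) of the field equations (2.2) can be verified numerically with random diagonal data; the dimension, cosmological constant and Ricci eigenvalues below are arbitrary sample values.

```python
import random

# Numerical check (ours) that G^mu_nu + Lam*delta^mu_nu = T^mu_nu is
# equivalent to R^mu_nu - 2*Lam/(d-2)*delta^mu_nu = T^mu_nu - T/(d-2)*delta^mu_nu,
# i.e. Eqs. (2.3)-(2.4), evaluated on random diagonal (mixed-index) data.
random.seed(0)
for _ in range(100):
    d = random.randint(3, 10)
    Lam = random.uniform(-2.0, 2.0)
    R = [random.uniform(-5.0, 5.0) for _ in range(d)]  # eigenvalues of R^mu_nu
    trR = sum(R)
    # mixed-index Einstein equations define T^mu_nu
    T = [R[i] - 0.5 * trR + Lam for i in range(d)]
    trT = sum(T)
    lhs = [R[i] - 2.0 * Lam / (d - 2) for i in range(d)]   # Eq. (2.3)
    rhs = [T[i] - trT / (d - 2) for i in range(d)]         # Eq. (2.4)
    assert all(abs(a - b) < 1e-10 for a, b in zip(lhs, rhs))
ok = True
```

The check is pure linear algebra on the eigenvalues, so diagonal data suffices; it confirms that the coefficient of the $\delta^{\mu}_{\ \nu}$ term is exactly $-2\Lambda/(d-2)$.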
The $\mathcal{Z}$-term in Poisson’s equation vanishes when the background metric is flat and takes a different form depending on the background spacetime. The principal result of our analysis is that the deviation of the background metric from a constant curvature spacetime, which is characterized by the deviation tensor defined in (2.13), nontrivially affects the gauge theory source as described in (2.17) for an arbitrary Killing vector and in (2.28) for the time-like Killing vector. Previously, this has been observed as a constant charge distribution filling all space when the background is taken to be flat. In Section 4, we will show that this remains true when the background is the Lifshitz spacetime. Therefore, we write the contribution from the background spacetime as $$\Delta^{\mu}=\frac{1}{2}\rho_{c}\,\delta^{\mu}_{\ 0},$$ (2.26) where $\rho_{c}$ is the constant charge density. The matter contribution can also be written in the following form $$\widetilde{T}^{\mu}=-\frac{1}{2}\rho_{m}v^{\mu},$$ (2.27) where $\rho_{m}$ is the charge density due to the matter in the gravitational theory and $v^{\mu}$ is the velocity of the charge distribution. These lead to the following form of the gauge theory source $$J^{\mu}=\rho_{c}\,\delta^{\mu}_{\ 0}+\rho_{m}v^{\mu},$$ (2.28) which we will use throughout this paper222As discussed in Carrillo-Gonzalez:2017iyj , for black hole solutions, one has localized sources describing a point charge at the origin. Since our main aim is to study the effect of the background spacetime, we will only give the non-localized part of the gauge theory source. In the next section, we will review some previously studied examples through our general formalism. 3   Maximally Symmetric Background Spacetime In this section, we focus on solutions of theories described by the action (2.1) with the corresponding field equations (2.2) which can be written in the KS form (2.5) around a maximally symmetric background spacetime. 
For a non-zero cosmological constant ($\Lambda\neq 0$), the background spacetime can be chosen to be Minkowski or AdS spacetimes. In the former case, the gauge theory copy is defined on Minkowski spacetime and the deviation tensor defined in (2.13) takes the form $$\Delta^{\mu}_{\ \nu}(\text{Minkowski})=-\frac{2\Lambda}{d-2}\delta^{\mu}_{\ \nu},$$ (3.1) since $\bar{R}^{\mu}_{\ \nu}=0$, and the constant charge density in the gauge theory source for a time-like Killing vector (2.28) is determined by the cosmological constant as $$\rho_{c}=-\frac{4\Lambda}{d-2}.$$ (3.2) Since the modification to Poisson’s equation given in (2.25) vanishes when the background is Minkowski spacetime, the single copy and the zeroth copy equations become $$\displaystyle\bar{\nabla}_{\nu}F^{\nu\mu}$$ $$\displaystyle=$$ $$\displaystyle J^{\mu},$$ $$\displaystyle\bar{\nabla}^{2}\phi$$ $$\displaystyle=$$ $$\displaystyle j,$$ (3.3) where the general form of the sources is given by $$J^{\mu}=\rho_{c}\,\delta^{\mu}_{\ 0}+\rho_{m}v^{\mu},\qquad\qquad j=-(\rho_{c}+\rho_{m}).$$ (3.4) Here, $\rho_{m}$ is the charge density due to the matter fields and the velocity vector $v^{\mu}$ can be read from (2.27). For static solutions, one has a static charge distribution, and therefore, $v^{\mu}=\delta^{\mu}_{\ 0}$. For stationary solutions, one obtains a rotating charge distribution and the velocity vector changes accordingly. In the latter case, the gauge theory copy is Maxwell’s theory on AdS spacetime and the deviation tensor vanishes $$\Delta^{\mu}_{\ \nu}(\text{AdS})=0,$$ (3.5) which implies that there is no constant charge density in the gauge theory source ($\rho_{c}=0$). Poisson’s equation is modified due to the curvature of the background as described in (2.23). 
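As a quick numerical aside (ours), the general expression (3.2) already fixes the constant charge densities of the explicit examples of this section; the cosmological constant below is an arbitrary sample value.

```python
# Evaluate rho_c = -4*Lam/(d-2) of Eq. (3.2): d = 4 should give -2*Lam and
# d = 3 should give -4*Lam, the values appearing in the AdS_4 and BTZ
# examples below; "Lam" is an arbitrary sample cosmological constant.
Lam = -1.25
rho_c = {d: -4.0 * Lam / (d - 2) for d in (3, 4)}
```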
In what follows, we will give examples in $d=4$, for which the equations take the following form $$\displaystyle\bar{\nabla}_{\nu}F^{\nu\mu}$$ $$\displaystyle=$$ $$\displaystyle J^{\mu},$$ $$\displaystyle\bar{\nabla}^{2}\phi-\frac{1}{6}\bar{R}\,\phi$$ $$\displaystyle=$$ $$\displaystyle j,$$ (3.6) and the sources are fixed by only the matter contribution as $$J^{\mu}=\rho_{m}v^{\mu},\qquad\qquad j=-\rho_{m}.$$ (3.7) In the remainder of this section, we will elaborate on this by applying our formalism to some examples that were investigated previously in the literature, with a special focus on the sources, and make a comparison between Minkowski and AdS backgrounds whenever possible. 3.1   AdS${}_{4}$ Spacetime around Minkowski Background As the simplest example, we consider the AdS${}_{4}$ spacetime Luna:2015paa , which is a solution when $d=4$ and $\mathcal{L}_{m}=0$ in (2.1). It can be written in the KS form (2.5) around the Minkowski metric $$\bar{g}_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-\mathrm{d}t^{2}+\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\right),$$ (3.8) where the null vector and the scalar function are given by $$k_{\mu}\mathrm{d}x^{\mu}=\mathrm{d}t+\mathrm{d}r,\qquad\qquad\phi(r)=\frac{\Lambda r^{2}}{3}.$$ (3.9) As a result, the gauge field takes the form $$A_{\mu}\mathrm{d}x^{\mu}=\frac{\Lambda r^{2}}{3}\,\left(\mathrm{d}t+\mathrm{d}r\right),$$ (3.10) and the only non-zero component of the field strength tensor is $$F_{rt}=\frac{2\Lambda r}{3}.$$ (3.11) The effect of the cosmological constant on the sources shows itself as a constant charge density as follows $$\rho_{c}=-2\Lambda.$$ (3.12) 3.2   Banados-Teitelboim-Zanelli (BTZ) Black Hole An interesting example in three dimensions is the BTZ black hole Carrillo-Gonzalez:2017iyj , which is a solution when $d=3$ and $\mathcal{L}_{m}=0$ in (2.1) for $\Lambda<0$ Banados:1992wn . 
This black hole solution can be obtained by identifying points of AdS${}_{3}$ spacetime by a discrete subgroup of SO(2,2) Banados:1992gq and its gauge theory copy possesses the same characteristics as AdS${}_{d}$ spacetime with $d\geq 4$. Its KS form Kim:1998iw is given around the Minkowski spacetime in spheroidal coordinates $$\bar{g}_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-\mathrm{d}t^{2}+\frac{r^{2}}{r^{2}+a^{2}}\mathrm{d}r^{2}+(r^{2}+a^{2})\mathrm{d}\theta^{2},$$ (3.13) where $a$ is the rotation parameter. The null vector $k_{\mu}$ is parametrized as $$k_{\mu}\mathrm{d}x^{\mu}=\mathrm{d}t+\frac{r^{2}}{r^{2}+a^{2}}\mathrm{d}r+a\mathrm{d}\theta,$$ (3.14) and the scalar is given by $$\phi(r)=1+8GM+\Lambda r^{2}.$$ (3.15) The corresponding gauge field is given by $$A_{\mu}\mathrm{d}x^{\mu}=\left(1+8GM+\Lambda r^{2}\right)\left[\mathrm{d}t+\frac{r^{2}}{r^{2}+a^{2}}\mathrm{d}r+a\mathrm{d}\theta\right].$$ (3.16) Due to the rotation, there is also a magnetic field and the independent components of the field strength tensor are $$F_{rt}=2\Lambda r,\qquad\qquad F_{r\theta}=aF_{rt}=2\Lambda ar.$$ (3.17) The constant charge density corresponding to the BTZ black hole reads $$\rho_{c}=-4\Lambda.$$ (3.18) Here, we content ourselves with showing that the constant charge density term appears due to the general property of the deviation tensor (3.1) and refer the reader to CarrilloGonzalez:2019gof for a more detailed discussion. 3.3   Three-dimensional Rotating Black hole Another interesting example from three dimensions is the rotating black hole constructed in Gumus:2020hbb . 
While the background metric and the null vector are the same as in our previous example, given in (3.13) and (3.14), the scalar is given by $$\phi(r)=-2M\log r+\Lambda r^{2},$$ (3.19) which leads to the gauge field $$A_{\mu}\mathrm{d}x^{\mu}=\left(-2M\log r+\Lambda r^{2}\right)\left[\mathrm{d}t+\frac{r^{2}}{r^{2}+a^{2}}\mathrm{d}r+a\mathrm{d}\theta\right],$$ (3.20) with the following non-zero components of the field strength tensor $$F_{rt}=2\left[-\frac{M}{r}+\Lambda r\right],\qquad F_{r\theta}=aF_{rt}=2a\left[-\frac{M}{r}+\Lambda r\right].$$ (3.21) This solution should be sourced by a space-like fluid333The static version can be obtained from a free scalar field as the source Gumus:2020hbb . with the following energy momentum tensor $$T_{\mu\nu}=(\rho+P)u_{\mu}u_{\nu}+Pg_{\mu\nu},$$ (3.22) where $$P=\frac{M}{r^{2}}=-\frac{1}{3}\rho,\qquad u_{\mu}=\left[\frac{a}{r},0,\frac{a^{2}+r^{2}}{r}\right],\qquad u^{2}=+1,$$ (3.23) whose contribution to the trace-reversed Einstein equations is $$\widetilde{T}_{\mu\nu}=-2Pu_{\mu}u_{\nu}.$$ (3.24) Having a non-zero energy momentum tensor, we get a rotating charge distribution in addition to the usual constant charge density as follows $$\rho_{c}=-4\Lambda,\qquad\qquad\rho_{m}=-\frac{4Ma^{2}}{r^{4}},\qquad\qquad v^{\mu}=\left(1,0,-\frac{1}{a}\right).$$ (3.25) 3.4   Schwarzschild-AdS${}_{4}$ Black Hole Our next example is the Schwarzschild-AdS${}_{4}$ black hole, which is a solution with $d=4$ and $\mathcal{L}_{m}=0$ in (2.1). 
In Carrillo-Gonzalez:2017iyj , it was studied around AdS${}_{4}$ spacetime whose metric in global static coordinates reads $$\bar{g}_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-\left[1-\frac{\Lambda r^{2}}{3}\right]\mathrm{d}t^{2}+\left[1-\frac{\Lambda r^{2}}{3}\right]^{-1}\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\right),$$ (3.26) and the null vector and the scalar are given by $$k_{\mu}\mathrm{d}x^{\mu}=\mathrm{d}t+\left[1-\frac{\Lambda r^{2}}{3}\right]^{-1}\mathrm{d}r,\qquad\qquad\phi(r)=\frac{2M}{r}.$$ (3.27) The gauge field $$A_{\mu}\mathrm{d}x^{\mu}=\frac{2M}{r}\left[\mathrm{d}t+\left[1-\frac{\Lambda r^{2}}{3}\right]^{-1}\mathrm{d}r\right],$$ (3.28) has the field strength tensor with the following non-zero component $$F_{rt}=-\frac{2M}{r^{2}}.$$ (3.29) We obtain vacuum solutions of (3.6) since the background is chosen to be of constant curvature, which implies $\rho_{c}=0$, and there is no contribution from the matter fields ($\rho_{m}=0$). The solution can also be written around the Minkowski spacetime $$\bar{g}_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-\mathrm{d}t^{2}+\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\right),$$ (3.30) with the null vector and the scalar defined as $$k_{\mu}\mathrm{d}x^{\mu}=\mathrm{d}t+\mathrm{d}r,\qquad\qquad\phi(r)=\frac{2M}{r}+\frac{\Lambda r^{2}}{3}.$$ (3.31) The gauge field now becomes $$A_{\mu}\mathrm{d}x^{\mu}=\left[\frac{2M}{r}+\frac{\Lambda r^{2}}{3}\right]\left(\mathrm{d}t+\mathrm{d}r\right),$$ (3.32) with the field strength tensor $$F_{rt}=-\frac{2M}{r^{2}}+\frac{2\Lambda r}{3}.$$ (3.33) This time, in the gauge theory source, the only contribution comes from the cosmological constant as $$\rho_{c}=-2\Lambda.$$ (3.34) 3.5   Reissner-Nordström-AdS${}_{4}$ Black Hole In order to see the effect of the matter coupling, we now consider the Reissner-Nordström-AdS${}_{4}$ black hole. 
The matter part of the action is $$\mathcal{L}_{m}=-\frac{1}{4}f_{\mu\nu}f^{\mu\nu},$$ (3.35) whose contribution to the trace-reversed equations is $$\tilde{T}_{\mu\nu}=\frac{1}{2}f_{\mu\alpha}f_{\nu}^{\ \alpha}-\frac{1}{8}g_{\mu\nu}f_{\alpha\beta}f^{\alpha\beta}.$$ (3.36) When the metric is written in the KS form around AdS${}_{4}$ spacetime (3.26) with the null vector given in (3.27), the scalar function reads Carrillo-Gonzalez:2017iyj $$\phi(r)=\frac{2M}{r}-\frac{Q^{2}}{4r^{2}},$$ (3.37) where $M$ and $Q$ are the mass and the charge of the black hole respectively. The gauge field becomes $$A_{\mu}\mathrm{d}x^{\mu}=\left[\frac{2M}{r}-\frac{Q^{2}}{4r^{2}}\right]\left[\mathrm{d}t+\left[1-\frac{\Lambda r^{2}}{3}\right]^{-1}\mathrm{d}r\right],$$ (3.38) which leads to the field strength tensor $$F_{rt}=-\frac{2M}{r^{2}}+\frac{Q^{2}}{2r^{3}}.$$ (3.39) While the constant curvature background implies no constant charge density ($\rho_{c}=0$), the matter field produces the following static charge density $$\rho_{m}=\frac{Q^{2}}{2r^{4}},\qquad\qquad v^{\mu}=\delta^{\mu}_{\ 0}.$$ (3.40) One should note that our formalism gives the modification to Poisson’s equation and the source as $$\displaystyle\mathcal{Z}$$ $$\displaystyle=$$ $$\displaystyle\Lambda\,\frac{Q^{2}-4Mr^{2}}{3r^{2}},$$ (3.41) $$\displaystyle j$$ $$\displaystyle=$$ $$\displaystyle\frac{Q^{2}\left(\Lambda r^{2}-3\right)}{6r^{2}},$$ (3.42) and one obtains the standard form given in (3.6-3.7) only after simplifications. 
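The source (3.40) can be cross-checked numerically (our sketch, not from the text): in the static coordinates (3.26) one has $\sqrt{-\bar{g}}=r^{2}\sin\theta$ and $F^{rt}=-F_{rt}$, so the $t$-component of the single copy reduces to $(1/r^{2})\,\mathrm{d}(r^{2}F^{rt})/\mathrm{d}r$, which should equal $\rho_{m}=Q^{2}/(2r^{4})$ away from the origin; $M$ and $Q$ below are arbitrary sample values.

```python
# Numerical check (ours): with F_rt = -2M/r^2 + Q^2/(2 r^3) from (3.39),
# the divergence (1/r^2) d/dr (r^2 F^{rt}) must reproduce rho_m = Q^2/(2 r^4)
# of (3.40); the point-mass term drops out away from r = 0 (its source is the
# localized point charge mentioned in the text).
M, Q = 1.5, 0.9   # arbitrary sample mass and charge

def F_up_rt(r):
    # raising the indices with the metric (3.26) gives F^{rt} = -F_rt
    return 2.0 * M / r**2 - Q**2 / (2.0 * r**3)

def div_F_t(r, h=1e-6):
    # central finite difference of r^2 F^{rt}, divided by r^2
    fp = (r + h)**2 * F_up_rt(r + h)
    fm = (r - h)**2 * F_up_rt(r - h)
    return (fp - fm) / (2.0 * h) / r**2

err = max(abs(div_F_t(r) - Q**2 / (2.0 * r**4)) for r in (0.7, 1.2, 3.0))
```

Note that the factors of $1-\Lambda r^{2}/3$ cancel between $\bar{g}^{rr}$ and $\bar{g}^{tt}$ in these coordinates, which is why the flat-looking divergence formula applies.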
When written around the Minkowski spacetime (3.30) with the null vector (3.31), the scalar function is given by $$\phi(r)=\frac{2M}{r}-\frac{Q^{2}}{4r^{2}}+\frac{\Lambda r^{2}}{3},$$ (3.43) and the gauge field is $$A_{\mu}\mathrm{d}x^{\mu}=\left[\frac{2M}{r}-\frac{Q^{2}}{4r^{2}}+\frac{\Lambda r^{2}}{3}\right]\left(\mathrm{d}t+\mathrm{d}r\right),$$ (3.44) with the field strength tensor $$F_{rt}=-\frac{2M}{r^{2}}+\frac{Q^{2}}{2r^{3}}+\frac{2\Lambda r}{3}.$$ (3.45) In addition to the static charge density $\rho_{m}$ due to the matter part of the Lagrangian, the constant charge density is produced by the non-zero deviation of the Minkowski background (3.1); they are given by $$\rho_{c}=-2\Lambda,\qquad\qquad\rho_{m}=\frac{Q^{2}}{2r^{4}}.$$ (3.46) 4   Lifshitz Black Holes So far, we have studied metrics that can be written in the KS form around a maximally symmetric background and presented the differences that arise due to the deviation of the Minkowski spacetime from a constant curvature spacetime, namely a constant charge density in the source and, correspondingly, electric and magnetic (if the black hole rotates) fields that increase linearly with the radial coordinate $r$. As an example of a solution with a curved background, in this section, we will consider the Lifshitz black hole in $d$-dimensions, whose metric reads $$ds^{2}=L^{2}\left[-r^{2z}h(r)\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{r^{2}h(r)}+r^{2}\sum_{i=1}^{d-2}\mathrm{d}x_{i}^{2}\right],$$ (4.1) where the function $h(r)$ has a single zero at a finite value of $r$ and $h(r\rightarrow\infty)=1$. Asymptotically, the metric takes the form $$\left.\mathrm{d}s^{2}\right|_{r\rightarrow\infty}=L^{2}\left[-r^{2z}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{r^{2}}+r^{2}\sum_{i=1}^{d-2}\mathrm{d}x_{i}^{2}\right],$$ (4.2) which is the Lifshitz spacetime. In this form, it is apparent that it describes an asymptotically Lifshitz black hole with a planar horizon. 
With the following coordinate transformation444We were informed that the KS form of the Lifshitz black hole was first obtained through this transformation in Ayon-Beato:2014wla ., $$\displaystyle\mathrm{d}t\rightarrow\mathrm{d}t+\alpha\,\mathrm{d}r,\qquad\qquad\alpha=\frac{h(r)-1}{h(r)}\,r^{-(z+1)},$$ (4.3) one can write the metric in the KS form where the background is the Lifshitz spacetime with the metric $$\bar{g}_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=L^{2}\left[-r^{2z}\mathrm{d}t^{2}+\frac{\mathrm{d}r^{2}}{r^{2}}+r^{2}\sum_{i=1}^{d-2}\mathrm{d}x_{i}^{2}\right].$$ (4.4) The null vector and the scalar are given by $$k_{\mu}\mathrm{d}x^{\mu}=\mathrm{d}t+{\frac{1}{r^{z+1}}}\mathrm{d}r,\qquad\qquad\phi(r)=L^{2}\,\left[1-h(r)\right]r^{2z}.$$ (4.5) Note that, for $z=1$, the background metric becomes the AdS spacetime in Poincaré coordinates. For $z>1$, the background metric is not maximally symmetric and the deviation tensor will give a non-trivial contribution. The Ricci tensor for the background metric reads $$\bar{R}^{\mu}_{\ \nu}=\text{diag}\left[-\frac{z\left(z+d-2\right)}{L^{2}},-\frac{z^{2}+d-2}{L^{2}},-\frac{z+d-2}{L^{2}},-\frac{z+d-2}{L^{2}}\right],$$ (4.6) which reduces to that of AdS spacetime (A.5) when $z=1$. The relevant part is still a constant given by $$\bar{R}^{\mu}_{\ 0}=-\frac{z\left(z+d-2\right)}{L^{2}}\delta^{\mu}_{\ 0},$$ (4.7) which leads to the following background contribution to the gauge theory source $$\Delta^{\mu}\text{(Lifshitz)}=-\left[\frac{z\left(z+d-2\right)}{L^{2}}+\frac{2\Lambda}{d-2}\right]\delta^{\mu}_{\ 0},$$ (4.8) and, as a result, the following constant charge density $$\rho_{c}=-2\left[\frac{z\left(z+d-2\right)}{L^{2}}+\frac{2\Lambda}{d-2}\right].$$ (4.9) After this general discussion, we will study two different realizations of the Lifshitz black hole with different matter couplings. 
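Before turning to the explicit realizations, a quick consistency check of (4.9) (ours, not from the text): for $z=1$ the Lifshitz background reduces to AdS, so the constant charge density must vanish once the cosmological constant takes the AdS value $\Lambda=-(d-1)(d-2)/(2L^{2})$ of (A.4); the radius $L$ below is an arbitrary sample value.

```python
# For z = 1 and the AdS value of the cosmological constant, the constant
# charge density rho_c of Eq. (4.9) vanishes in any dimension.
L = 1.7            # arbitrary sample AdS/Lifshitz radius
z = 1.0
max_rho = 0.0
for d in (3, 4, 5, 10):
    Lam_AdS = -(d - 1) * (d - 2) / (2.0 * L**2)
    rho_c = -2.0 * (z * (z + d - 2) / L**2 + 2.0 * Lam_AdS / (d - 2))
    max_rho = max(max_rho, abs(rho_c))
```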
4.1   Lifshitz Black Hole from a Massless Scalar and a Gauge Field The first solution that we consider is obtained by the following coupling of a massless scalar to a gauge field Taylor:2008tg $$\mathcal{L}_{m}=\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\frac{1}{4}e^{\lambda\varphi}f_{\mu\nu}f^{\mu\nu},$$ (4.10) whose contribution to the trace-reversed equations is $$\widetilde{T}_{\mu\nu}=\frac{1}{2}\partial_{\mu}\varphi\partial_{\nu}\varphi+\frac{1}{2}e^{\lambda\varphi}f_{\mu\alpha}f_{\nu}^{\ \alpha}-\frac{1}{4(d-2)}g_{\mu\nu}e^{\lambda\varphi}f_{\alpha\beta}f^{\alpha\beta}.$$ (4.11) The equations for the matter fields are $$\displaystyle\partial_{\mu}\left(\sqrt{-g}e^{\lambda\varphi}f^{\mu\nu}\right)=0,$$ (4.12) $$\displaystyle\partial_{\mu}\left(\sqrt{-g}\partial^{\mu}\varphi\right)-\frac{\lambda}{4}\sqrt{-g}e^{\lambda\varphi}f_{\mu\nu}f^{\mu\nu}=0.$$ (4.13) The Lifshitz black hole is a solution in this theory with the following metric function Pang:2009ad $$h(r)=1-\frac{r_{+}^{z+d-2}}{r^{z+d-2}},\qquad\qquad\qquad z\geq 1,$$ (4.14) provided that the matter fields and the cosmological constant are given by $$\displaystyle f_{rt}$$ $$\displaystyle=$$ $$\displaystyle q\,e^{-\lambda\varphi}r^{z-d+1},\qquad\qquad e^{\lambda\varphi}=r^{\lambda\sqrt{2(z-1)(d-2)}},$$ $$\displaystyle\lambda^{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{2(d-2)}{z-1},\qquad\qquad\qquad\,q^{2}=2L^{2}(z-1)(z+d-2),$$ $$\displaystyle\Lambda$$ $$\displaystyle=$$ $$\displaystyle-\frac{(z+d-3)(z+d-2)}{2L^{2}}.$$ (4.15) For $z=1$, the matter fields vanish and one obtains the Schwarzschild-AdS black hole with a planar horizon. By using the coordinate transformation (4.3), the metric can be put in the KS form around the Lifshitz background (4.4) with the null vector and the scalar given in (4.5). 
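The statement that the matter fields vanish for $z=1$ can be checked directly (our numerical sketch, with $L$ an arbitrary sample value): the charge $q^{2}$ in (4.15) is proportional to $(z-1)$, and the cosmological constant then reduces to its AdS value, consistent with the Schwarzschild-AdS limit quoted above.

```python
# At z = 1, q^2 = 2 L^2 (z-1)(z+d-2) of (4.15) vanishes and
# Lam = -(z+d-3)(z+d-2)/(2 L^2) becomes the AdS value -(d-1)(d-2)/(2 L^2).
L = 1.3            # arbitrary sample radius
z = 1.0
checks = []
for d in (4, 5, 7):
    q2 = 2.0 * L**2 * (z - 1.0) * (z + d - 2)
    Lam = -(z + d - 3) * (z + d - 2) / (2.0 * L**2)
    Lam_AdS = -(d - 1) * (d - 2) / (2.0 * L**2)
    checks.append(abs(q2) < 1e-12 and abs(Lam - Lam_AdS) < 1e-12)
```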
The explicit form of the scalar for the metric function (4.14) reads $$\phi(r)=\frac{L^{2}r_{+}^{z+d-2}}{r^{d-z-2}}.$$ (4.16) The corresponding gauge field is $$A_{\mu}\mathrm{d}x^{\mu}=\frac{L^{2}r_{+}^{z+d-2}}{r^{d-z-2}}\left[\mathrm{d}t+{\frac{1}{r^{z+1}}}\mathrm{d}r\right],$$ (4.17) with the following non-zero component of the field strength tensor $$F_{rt}=-\frac{(d-z-2)L^{2}r_{+}^{z+d-2}}{r^{d-z-1}}.$$ (4.18) Since the matter configuration (4.15) does not change under the coordinate transformation, it can be directly used in the rest of the calculations. It turns out that the contributions from the deviation tensor and the energy-momentum tensor to the gauge theory source are equal to each other and given by $$\Delta^{\mu}=\widetilde{T}^{\mu}=-\frac{\left(d-3\right)\left(z-1\right)\left(z+d-2\right)}{\left(d-2\right)L^{2}}\,\delta^{\mu}_{\ 0},$$ (4.19) and therefore, the single copy is $$\bar{\nabla}_{\nu}F^{\nu\mu}=0.$$ (4.20) Although we started from a non-vacuum solution, the gauge field given in (4.17) is a vacuum solution of the gauge theory. The modification to Poisson’s equation can again be written in terms of the background Ricci scalar and the KS scalar as $$\mathcal{Z}=\frac{z(z-d+2)}{z^{2}+(d-2)z+\frac{1}{2}(d-1)(d-2)}\bar{R}\,\phi,$$ (4.21) which leads to the following zeroth copy $$\bar{\nabla}^{2}\phi+\frac{z(z-d+2)}{z^{2}+(d-2)z+\frac{1}{2}(d-1)(d-2)}\bar{R}\,\phi=0.$$ (4.22) 4.2   Lifshitz Black Hole from a Massive Vector and a Gauge Field The second solution that we consider is a charged Lifshitz black hole obtained through the following matter coupling Pang:2009pd $$\mathcal{L}_{m}=-\frac{1}{4}f_{\mu\nu}f^{\mu\nu}-\frac{1}{2}m^{2}a_{\mu}a^{\mu}-\frac{1}{4}\mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu},$$ (4.23) where $a_{\mu}$ is a massive vector field with the field strength $f_{\mu\nu}=2\,\partial_{[\mu}a_{\nu]}$ and $\mathcal{F}_{\mu\nu}$ is the field strength of the gauge field. 
The matter field equations are $$\displaystyle\partial_{\mu}\left(\sqrt{-g}f^{\mu\nu}\right)$$ $$\displaystyle=$$ $$\displaystyle m^{2}\sqrt{-g}\,a^{\nu},$$ (4.24) $$\displaystyle\partial_{\mu}\left(\sqrt{-g}\mathcal{F}^{\mu\nu}\right)$$ $$\displaystyle=$$ $$\displaystyle 0,$$ (4.25) and the contribution to the trace-reversed equations is given by $$\widetilde{T}_{\mu\nu}=\frac{1}{2}f_{\mu\alpha}f_{\nu}^{\ \alpha}-\frac{1}{4(d-2)}g_{\mu\nu}f_{\alpha\beta}f^{\alpha\beta}+\frac{1}{2}m^{2}a_{\mu}a_{\nu}+\frac{1}{2}\mathcal{F}_{\mu\alpha}\mathcal{F}_{\nu}^{\ \alpha}-\frac{1}{4(d-2)}g_{\mu\nu}\mathcal{F}_{\alpha\beta}\mathcal{F}^{\alpha\beta}.$$ (4.26) The charged Lifshitz black hole is a solution with the metric function $$h(r)=1-\frac{q^{2}}{2(d-2)^{2}r^{z}},$$ (4.27) for the matter configuration $$a_{t}=L\sqrt{\frac{2(z-1)}{z}}h(r)r^{z},\qquad\qquad\mathcal{F}_{rt}=qLr^{z-d-1}.$$ (4.28) The mass of the vector field, the cosmological constant and the Lifshitz exponent should also be fixed as follows $$m=\sqrt{\frac{(d-2)z}{L^{2}}},\qquad\Lambda=-\frac{(d-3)z+(d-2)^{2}+z^{2}}{2L^{2}},\qquad z=2\left(d-2\right).$$ (4.29) The metric can be put in the KS form through the coordinate transformation (4.3) around the Lifshitz background (4.4) with the null vector and the scalar given in (4.5). 
The explicit form of the scalar for the metric function (4.27) reads $$\phi(r)=\frac{L^{2}q^{2}r^{z}}{2\left(d-2\right)^{2}}.$$ (4.30) The single copy gauge field and the non-zero component of the field strength tensor are $$\displaystyle A_{\mu}\mathrm{d}x^{\mu}$$ $$\displaystyle=$$ $$\displaystyle\frac{L^{2}q^{2}r^{z}}{2\left(d-2\right)^{2}}\left[\mathrm{d}t+{\frac{1}{r^{z+1}}}\mathrm{d}r\right],$$ (4.31) $$\displaystyle F_{rt}$$ $$\displaystyle=$$ $$\displaystyle\frac{zL^{2}q^{2}r^{z-1}}{2\left(d-2\right)^{2}}.$$ (4.32) This time, the coordinate transformation (4.3) affects the matter configuration (4.28) non-trivially, yielding an additional radial component of the massive vector as follows $$a_{r}=\alpha\,a_{t}.$$ (4.33) The contributions from the deviation tensor and the energy-momentum tensor to the gauge theory source are this time given by $$\displaystyle\Delta^{\mu}$$ $$\displaystyle=$$ $$\displaystyle-\frac{\left(z-1\right)\left[\left(d-3\right)z+\left(d-2\right)^{2}\,\right]}{\left(d-2\right)L^{2}}\delta^{\mu}_{\ 0},$$ (4.34) $$\displaystyle\widetilde{T}^{\mu}$$ $$\displaystyle=$$ $$\displaystyle\Delta^{\mu}+\frac{q^{2}}{2L^{2}r^{z}}\delta^{\mu}_{\ 0}.$$ (4.35) Similar to the previous example, the constant charge density contribution from the deviation tensor (4.34) again cancels; however, this time the contribution from the energy-momentum tensor (4.35) contains an additional term, which leads to a non-vacuum solution. 
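The reduction leading to (4.34) is pure algebra and can be verified numerically (our sketch): inserting $\Lambda$ from (4.29) into the general Lifshitz expression (4.9), $\rho_{c}/2$ collapses to the quoted constant; the identity holds for any $z$, in particular for $z=2(d-2)$ required by the solution. The radius $L$ below is an arbitrary sample value.

```python
import random

# Check that -[z(z+d-2)/L^2 + 2*Lam/(d-2)] with Lam from (4.29) equals
# -(z-1)[(d-3)z+(d-2)^2]/((d-2) L^2), i.e. the coefficient in (4.34).
random.seed(2)
L = 0.9            # arbitrary sample radius
worst = 0.0
for _ in range(200):
    d = random.randint(3, 8)
    z = random.choice([2.0 * (d - 2), random.uniform(1.0, 5.0)])
    Lam = -((d - 3) * z + (d - 2)**2 + z**2) / (2.0 * L**2)
    half_rho_c = -(z * (z + d - 2) / L**2 + 2.0 * Lam / (d - 2))
    Delta0 = -(z - 1.0) * ((d - 3) * z + (d - 2)**2) / ((d - 2) * L**2)
    worst = max(worst, abs(half_rho_c - Delta0))
```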
The single copy is $$\bar{\nabla}_{\nu}F^{\nu\mu}=J^{\mu},\qquad\qquad J^{\mu}=-\frac{q^{2}}{L^{2}r^{z}}\delta^{\mu}_{\ 0}.$$ (4.36) The modification to Poisson’s equation in this case can be written as $$\mathcal{Z}=\frac{z^{2}}{z^{2}+(d-2)z+\frac{1}{2}(d-1)(d-2)}\bar{R}\,\phi,$$ (4.37) which yields $$\bar{\nabla}^{2}\phi+\frac{z^{2}}{z^{2}+(d-2)z+\frac{1}{2}(d-1)(d-2)}\bar{R}\,\phi=j,\qquad\qquad j=q^{2}r^{z}.$$ (4.38) 5   Summary and Discussions In this paper, extending the construction of Carrillo-Gonzalez:2017iyj , we gave a formulation of the classical double copy with a generic, curved background spacetime. Apart from obtaining solutions of Maxwell’s theory defined on curved backgrounds, our formulation makes the effect of the background spacetime on the gauge theory source much more transparent through the deviation tensor that we defined in (2.13). For an arbitrary Killing vector of the background and the full metric, the result is given in (2.17-2.18). Choosing a flat background for a solution with a non-zero cosmological constant yields a constant charge density filling all space in the gauge theory due to the general property presented in (3.1). The effect disappears when the background is chosen to be a constant curvature spacetime, which can be explained by the vanishing of the deviation tensor for a suitably chosen cosmological constant (3.5). Furthermore, we studied two different realizations of the Lifshitz black hole, whose background is not maximally symmetric. While the contribution to the gauge theory source again turns out to be a constant as described in (4.8-4.9), it is removed by the matter fields on the gravity side, yielding a vacuum solution in one case. In light of our results, there are several directions to pursue. Although the extra part in the single copy equation (2.16) was shown to vanish for all the examples in the literature, a general proof or, at least, the conditions under which it is true are still lacking. 
The resolution of this might lead to a better understanding of the classical double copy. The study of wave-type solutions with a curved background, as done in Carrillo-Gonzalez:2017iyj for constant curvature backgrounds, might also be interesting. In addition to the simplifying assumptions about the background metric, the assumption of minimal matter coupling can also be relaxed for certain types of theories. We will return to this elsewhere. Acknowledgements. M. K. G. is supported by TÜBİTAK Grant No 118F091. Appendix A Maximally Symmetric Spacetimes and the Deviation Tensor In this appendix, we review some important properties of maximally symmetric spacetimes by following Natsuume:2014sfa , which will lead us to the definition of the deviation tensor discussed in the main text. For a maximally symmetric spacetime, the Riemann tensor is given by555We use barred quantities since, in this work, we consider the possibility of a background metric being that of a maximally symmetric spacetime. $$\bar{R}_{\mu\alpha\nu\beta}=\frac{\epsilon}{L^{2}}\left(\bar{g}_{\mu\nu}\bar{g}_{\alpha\beta}-\bar{g}_{\mu\beta}\bar{g}_{\nu\alpha}\right),$$ (A.1) where $\epsilon=+1,0,-1$ correspond to de Sitter (dS), Minkowski and Anti-de Sitter (AdS) spacetimes and $L$ is the dS/AdS radius when $\epsilon\neq 0$. 
Taking the trace yields the Ricci tensor and the Ricci scalar as $$\displaystyle\bar{R}_{\mu\nu}$$ $$\displaystyle=$$ $$\displaystyle\epsilon\,\frac{d-1}{L^{2}}\,\bar{g}_{\mu\nu},$$ (A.2) $$\displaystyle\bar{R}$$ $$\displaystyle=$$ $$\displaystyle\epsilon\,\frac{d(d-1)}{L^{2}}.$$ (A.3) Using (A.2), one can show that the spacetime is a solution of the vacuum Einstein equations if the cosmological constant is chosen as $$\Lambda=\epsilon\,\frac{(d-1)(d-2)}{2\,L^{2}},$$ (A.4) and, therefore, the Ricci tensor becomes $$\bar{R}_{\mu\nu}=\frac{2\,\Lambda}{d-2}\,\bar{g}_{\mu\nu}.$$ (A.5) This motivates us to define the deviation tensor from a maximally symmetric spacetime as $$\Delta_{\mu\nu}=\bar{R}_{\mu\nu}-\frac{2\,\Lambda}{d-2}\,\bar{g}_{\mu\nu},$$ (A.6) which vanishes for maximally symmetric spacetimes provided that the cosmological constant is given by (A.4). When the background is Minkowski spacetime, one has $\bar{R}_{\mu\nu}=0$ and $$\Delta_{\mu\nu}(\text{Minkowski})=-\frac{2\,\Lambda}{d-2}\,\bar{g}_{\mu\nu},$$ (A.7) which is the origin of the constant charge density in the gauge theory source discussed in the main text. References (1) Z. Bern, J. J. M. Carrasco and H. Johansson, Phys. Rev. Lett. 105, 061602 (2010) [arXiv:1004.0476 [hep-th]]. (2) Z. Bern, T. Dennen, Y. t. Huang and M. Kiermaier, Phys. Rev. D 82, 065003 (2010) [arXiv:1004.0693 [hep-th]]. (3) A. Luna, R. Monteiro, I. Nicholson, A. Ochirov, D. O’Connell, N. Westerberg and C. D. White, JHEP 04, 069 (2017) [arXiv:1611.07508 [hep-th]]. (4) W. D. Goldberger and A. K. Ridgway, Phys. Rev. D 95, no.12, 125010 (2017) [arXiv:1611.03493 [hep-th]]. (5) W. D. Goldberger, S. G. Prabhu and J. O. Thompson, Phys. Rev. D 96, no.6, 065009 (2017) [arXiv:1705.09263 [hep-th]]. (6) W. D. Goldberger and A. K. Ridgway, Phys. Rev. D 97, no.8, 085019 (2018) [arXiv:1711.09493 [hep-th]]. (7) W. D. Goldberger, J. Li and S. G. Prabhu, Phys. Rev. D 97, no.10, 105018 (2018) [arXiv:1712.09250 [hep-th]]. (8) C. H. 
Shen, JHEP 11, 162 (2018) [arXiv:1806.07388 [hep-th]]. (9) M. Carrillo González, R. Penco and M. Trodden, JHEP 11, 065 (2018) [arXiv:1809.04611 [hep-th]]. (10) J. Plefka, J. Steinhoff and W. Wormsbecher, Phys. Rev. D 99, no.2, 024021 (2019) [arXiv:1807.09859 [hep-th]]. (11) J. Plefka, C. Shi, J. Steinhoff and T. Wang, Phys. Rev. D 100, no.8, 086006 (2019) [arXiv:1906.05875 [hep-th]]. (12) W. D. Goldberger and J. Li, JHEP 02, 092 (2020) [arXiv:1912.01650 [hep-th]]. (13) A. P.V. and A. Manu, Phys. Rev. D 101, no.4, 046014 (2020) [arXiv:1907.10021 [hep-th]]. (14) A. Anastasiou, L. Borsten, M. J. Duff, L. J. Hughes and S. Nagy, Phys. Rev. Lett. 113, no.23, 231606 (2014) [arXiv:1408.4434 [hep-th]]. (15) L. Borsten and M. J. Duff, Phys. Scripta 90, 108012 (2015) [arXiv:1602.08267 [hep-th]]. (16) A. Anastasiou, L. Borsten, M. J. Duff, M. J. Hughes, A. Marrani, S. Nagy and M. Zoccali, Phys. Rev. D 96, no.2, 026013 (2017) [arXiv:1610.07192 [hep-th]]. (17) G. L. Cardoso, S. Nagy and S. Nampuri, JHEP 10, 127 (2016) [arXiv:1609.05022 [hep-th]]. (18) L. Borsten, Phys. Rev. D 97, no.6, 066014 (2018) [arXiv:1708.02573 [hep-th]]. (19) A. Anastasiou, L. Borsten, M. J. Duff, A. Marrani, S. Nagy and M. Zoccali, Contemp. Math. 721, 1-27 (2019) [arXiv:1711.08476 [hep-th]]. (20) A. Anastasiou, L. Borsten, M. J. Duff, S. Nagy and M. Zoccali, Phys. Rev. Lett. 121, no.21, 211601 (2018) [arXiv:1807.02486 [hep-th]]. (21) G. Lopes Cardoso, G. Inverso, S. Nagy and S. Nampuri, PoS CORFU2017, 177 (2018) [arXiv:1803.07670 [hep-th]]. (22) A. Luna, S. Nagy and C. White, JHEP 09, 062 (2020) [arXiv:2004.11254 [hep-th]]. (23) L. Borsten and S. Nagy, JHEP 07, 093 (2020) [arXiv:2004.14945 [hep-th]]. (24) L. Borsten, B. Jurčo, H. Kim, T. Macrelli, C. Saemann and M. Wolf, [arXiv:2007.13803 [hep-th]]. (25) A. Luna, I. Nicholson, D. O’Connell and C. D. White, JHEP 03, 044 (2018) [arXiv:1711.03901 [hep-th]]. (26) D. A. Kosower, B. Maybee and D. O’Connell, JHEP 02, 137 (2019) [arXiv:1811.10950 [hep-th]]. 
(27) B. Maybee, D. O’Connell and J. Vines, JHEP 12, 156 (2019) [arXiv:1906.09260 [hep-th]]. (28) Y. F. Bautista and A. Guevara, [arXiv:1908.11349 [hep-th]]. (29) Y. F. Bautista and A. Guevara, [arXiv:1903.12419 [hep-th]]. (30) C. Cheung, I. Z. Rothstein and M. P. Solon, Phys. Rev. Lett. 121, no.25, 251101 (2018) [arXiv:1808.02489 [hep-th]]. (31) Z. Bern, C. Cheung, R. Roiban, C. H. Shen, M. P. Solon and M. Zeng, JHEP 10, 206 (2019) [arXiv:1908.01493 [hep-th]]. (32) Z. Bern, C. Cheung, R. Roiban, C. H. Shen, M. P. Solon and M. Zeng, Phys. Rev. Lett. 122, no.20, 201603 (2019) [arXiv:1901.04424 [hep-th]]. (33) Z. Bern, A. Luna, R. Roiban, C. H. Shen and M. Zeng, [arXiv:2005.03071 [hep-th]]. (34) G. Kälin and R. A. Porto, JHEP 01, 072 (2020) [arXiv:1910.03008 [hep-th]]. (35) G. Kälin and R. A. Porto, JHEP 11, 106 (2020) [arXiv:2006.01184 [hep-th]]. (36) G. L. Almeida, S. Foffa and R. Sturani, JHEP 11, 165 (2020) [arXiv:2008.06195 [gr-qc]]. (37) H. Godazgar, M. Godazgar, R. Monteiro, D. Peinador Veiga and C. N. Pope, [arXiv:2010.02925 [hep-th]]. (38) E. Chacón, H. García-Compeán, A. Luna, R. Monteiro and C. D. White, [arXiv:2008.09603 [hep-th]]. (39) R. Monteiro, D. O’Connell and C. D. White, JHEP 12, 056 (2014) [arXiv:1410.0239 [hep-th]]. (40) A. Luna, R. Monteiro, D. O’Connell and C. D. White, Phys. Lett. B 750, 272-277 (2015) [arXiv:1507.01869 [hep-th]]. (41) N. Bahjat-Abbas, A. Luna and C. D. White, JHEP 12, 004 (2017) [arXiv:1710.01953 [hep-th]]. (42) M. Carrillo-González, R. Penco and M. Trodden, JHEP 04, 028 (2018) [arXiv:1711.01296 [hep-th]]. (43) A. Luna, R. Monteiro, I. Nicholson, D. O’Connell and C. D. White, JHEP 06, 023 (2016) [arXiv:1603.05737 [hep-th]]. (44) D. S. Berman, E. Chacón, A. Luna and C. D. White, JHEP 01, 107 (2019) [arXiv:1809.04063 [hep-th]]. (45) I. Bah, R. Dempsey and P. Weck, JHEP 02, 180 (2020) [arXiv:1910.04197 [hep-th]]. (46) M. Carrillo González, B. Melcher, K. Ratliff, S. Watson and C. D. 
White, JHEP 07, 167 (2019) [arXiv:1904.11001 [hep-th]]. (47) A. Banerjee, E. Ó. Colgáin, J. A. Rosabal and H. Yavartanoo, Phys. Rev. D 102, 126017 (2020) [arXiv:1912.02597 [hep-th]]. (48) A. Ilderton, Phys. Lett. B 782, 22-27 (2018) [arXiv:1804.07290 [gr-qc]]. (49) R. Monteiro, I. Nicholson and D. O’Connell, Class. Quant. Grav. 36, 065006 (2019) [arXiv:1809.03906 [gr-qc]]. (50) A. Luna, R. Monteiro, I. Nicholson and D. O’Connell, Class. Quant. Grav. 36, 065003 (2019) [arXiv:1810.08183 [hep-th]]. (51) K. Lee, JHEP 10, 027 (2018) [arXiv:1807.08443 [hep-th]]. (52) W. Cho and K. Lee, JHEP 07, 030 (2019) [arXiv:1904.11650 [hep-th]]. (53) K. Kim, K. Lee, R. Monteiro, I. Nicholson and D. Peinador Veiga, JHEP 02, 046 (2020) [arXiv:1912.02177 [hep-th]]. (54) L. Alfonsi, C. D. White and S. Wikeley, JHEP 07, 091 (2020) [arXiv:2004.07181 [hep-th]]. (55) N. Bahjat-Abbas, R. Stark-Muchão and C. D. White, JHEP 04, 102 (2020) [arXiv:2001.09918 [hep-th]]. (56) C. D. White, Phys. Lett. B 763, 365-369 (2016) [arXiv:1606.04724 [hep-th]]. (57) P. J. De Smet and C. D. White, Phys. Lett. B 775, 163-167 (2017) [arXiv:1708.01103 [hep-th]]. (58) G. Elor, K. Farnsworth, M. L. Graesser and G. Herczeg, JHEP 12, 121 (2020) [arXiv:2006.08630 [hep-th]]. (59) M. K. Gumus and G. Alkac, Phys. Rev. D 102, no.2, 024074 (2020) [arXiv:2006.00552 [hep-th]]. (60) C. Keeler, T. Manton and N. Monga, JHEP 08, 147 (2020) [arXiv:2005.04242 [hep-th]]. (61) N. Arkani-Hamed, Y. t. Huang and D. O’Connell, JHEP 01, 046 (2020) [arXiv:1906.10100 [hep-th]]. (62) Y. T. Huang, U. Kol and D. O’Connell, Phys. Rev. D 102, no.4, 046005 (2020) [arXiv:1911.06318 [hep-th]]. (63) R. Alawadhi, D. S. Berman, B. Spence and D. Peinador Veiga, JHEP 03, 059 (2020) [arXiv:1911.06797 [hep-th]]. (64) N. Moynihan, JHEP 01, 014 (2020) [arXiv:1909.05217 [hep-th]]. (65) R. Alawadhi, D. S. Berman and B. Spence, JHEP 09, 127 (2020) [arXiv:2007.03264 [hep-th]]. (66) D. A. Easson, C. Keeler and T. Manton, Phys. Rev. 
D 102, no.8, 086015 (2020) [arXiv:2007.16186 [gr-qc]]. (67) E. Casali and A. Puhm, [arXiv:2007.15027 [hep-th]]. (68) A. Cristofoli, JHEP 11, 160 (2020) [arXiv:2006.08283 [hep-th]]. (69) C. D. White, Phys. Rev. Lett. 126, no.6, 061602 (2021) [arXiv:2012.02479 [hep-th]]. (70) H. Stephani, D. Kramer, M. A. MacCallum, C. Hoenselaers and E. Herlt, “Exact solutions of Einstein’s field equations,” (71) M. Banados, C. Teitelboim and J. Zanelli, Phys. Rev. Lett. 69, 1849-1851 (1992) [arXiv:hep-th/9204099 [hep-th]]. (72) M. Banados, M. Henneaux, C. Teitelboim and J. Zanelli, Phys. Rev. D 48, 1506-1525 (1993) [erratum: Phys. Rev. D 88, 069902 (2013)] [arXiv:gr-qc/9302012 [gr-qc]]. (73) H. Kim, Phys. Rev. D 59, 064002 (1999) [arXiv:gr-qc/9809047 [gr-qc]]. (74) E. Ayón-Beato, M. Hassaïne and M. M. Juárez-Aubry, Phys. Rev. D 90, no.4, 044026 (2014) [arXiv:1406.1588 [hep-th]]. (75) M. Taylor, “Non-relativistic holography,” [arXiv:0812.0530 [hep-th]]. (76) D. W. Pang, Commun. Theor. Phys. 62, 265-271 (2014) [arXiv:0905.2678 [hep-th]]. (77) D. W. Pang, JHEP 01, 116 (2010) [arXiv:0911.2777 [hep-th]]. (78) M. Natsuume, Lect. Notes Phys. 903, pp.97-98 (2015) [arXiv:1409.3575 [hep-th]].
SDP Relaxation with Randomized Rounding for Energy Disaggregation Kiarash Shaloudegi (Imperial College London, k.shaloudegi16@imperial.ac.uk), András György (Imperial College London, a.gyorgy@imperial.ac.uk), Csaba Szepesvári (University of Alberta, szepesva@ualberta.ca), and Wilsun Xu (University of Alberta, wxu@ualberta.ca) Abstract We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining convex semidefinite relaxations and randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results on both synthetic and real-world datasets demonstrate the superiority of our method. 1 Introduction Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant savings in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers
to identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improves safety. Energy disaggregation or non-intrusive load monitoring (NILM) uses data from utility smart meters to separate individual load consumptions (i.e., a load signal) from the total measured power (i.e., the mixture of the signals) in households. The bulk of the research in NILM has mostly concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in total power measurements. Several techniques, such as artificial neural networks (ANN) (Prudenzi, 2002; Chang et al., 2012; Liang et al., 2010), deep neural networks (Kelly and Knottenbelt, 2015), $k$-nearest neighbor (k-NN) (Figueiredo et al., 2012; Weiss et al., 2012), sparse coding (Kolter et al., 2010), or ad-hoc heuristic methods (Dong et al., 2012) have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data (Zia et al., 2011; Kolter and Jaakkola, 2012; Kim et al., 2011; Zhong et al., 2014; Egarter et al., 2015; Guo et al., 2015), resulting in state-of-the-art performance (Kolter and Jaakkola, 2012). These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption. FHMMs, introduced by Ghahramani and Jordan (1997), are powerful tools for modeling time series generated from multiple independent sources, and are well suited for modeling speech with multiple people simultaneously talking (Rennie et al., 2009), or energy monitoring, which we consider here (Kim et al., 2011).
Doing exact inference in FHMMs is NP hard; therefore, computationally efficient approximate methods have been the subject of study. Classic approaches include sampling methods, such as MCMC or particle filtering (Koller and Friedman, 2009), and variational Bayes methods (Wainwright and Jordan, 2007; Ghahramani and Jordan, 1997). In practice, both methods are nontrivial to make work, and we are not aware of any works that have demonstrated good results in our application domain, at practical scales, with the type of FHMMs we need to work with. In this paper we follow the work of Kolter and Jaakkola (2012) to model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the outputs of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions is small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex-relaxation-based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations to the integer programming formulation of Kolter and Jaakkola (2012). In particular, we replace their quadratic programming relaxation with a relaxation to a semidefinite program (SDP), which, based on the literature on relaxations, is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time (Malick et al., 2009), IP methods scale poorly with the size of the problem and are thus unsuitable for our large-scale problem, which may involve as many as a million variables.
To address this problem, capitalizing on the structure of our relaxation coming from our FHMM model, we develop a novel variant of ADMM (Boyd et al., 2011) that uses Moreau-Yosida regularization, and combine it with a version of randomized rounding that is inspired by the recent work of Park and Boyd (2015). Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find applications in other FHMM inference problems, too. 1.1 Notation Throughout the paper, we use the following notation: $\mathbb{R}$ denotes the set of real numbers, $\mathbb{S}_{+}^{n}$ denotes the set of $n\times n$ positive semidefinite matrices, $\mathbb{I}_{\{E\}}$ denotes the indicator function of an event $E$ (that is, it is $1$ if the event is true and zero otherwise), and $\mathbf{1}$ denotes a vector of appropriate dimension whose entries are all $1$. For an integer $K$, $[K]$ denotes the set $\{1,2,\ldots,K\}$. $\mathcal{N}(\mu,\Sigma)$ denotes the Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. For a matrix $A$, $\textbf{trace}(A)$ denotes its trace and $\textbf{diag}(A)$ denotes the vector formed by the diagonal entries of $A$. 2 System Model Following Kolter and Jaakkola (2012), the energy usage of the household is modeled using an additive factorial HMM (Ghahramani and Jordan, 1997). Suppose there are $M$ appliances in a household. Each of them is modeled via an HMM: let $P_{i}\in\mathbb{R}^{K_{i}\times K_{i}}$ denote the transition-probability matrix of appliance $i\in[M]$, and assume that for each state $s\in[K_{i}]$, the energy consumption of the appliance is a constant $\mu_{i,s}$ ($\mu_{i}$ denotes the corresponding $K_{i}$-dimensional column vector $(\mu_{i,1},\ldots,\mu_{i,K_{i}})^{\top}$).
Denoting by $x_{t,i}\in\{0,1\}^{K_{i}}$ the indicator vector of the state $s_{t,i}$ of appliance $i$ at time $t$ (i.e., $x_{t,i,s}=\mathbb{I}_{\{s_{t,i}=s\}}$), the total power consumption at time $t$ is $\sum_{i\in[M]}\mu^{\top}_{i}x_{t,i}$, which we assume is observed with some additive zero-mean Gaussian noise of variance $\sigma^{2}$: $y_{t}\sim\mathcal{N}(\sum_{i\in[M]}\mu^{\top}_{i}x_{t,i},\sigma^{2})$.111Alternatively, we can assume that the power consumption $y_{t,i}$ of each appliance is normally distributed with mean $\mu^{\top}_{i}x_{t,i}$ and variance $\sigma_{i}^{2}$, where $\sigma^{2}=\sum_{i\in[M]}\sigma_{i}^{2}$, and $y_{t}=\sum_{i\in[M]}y_{t,i}$. Given this model, the maximum likelihood estimate of the appliance state vector sequence can be obtained by minimizing the log-posterior function $$\begin{split}&\arg\min_{x_{t,i}}\qquad\sum_{t=1}^{T}\frac{(y_{t}-\sum_{i=1}^{M}x^{\top}_{t,i}\mu_{i})^{2}}{2\sigma^{2}}-\sum_{t=1}^{T-1}\sum_{i=1}^{M}x^{\top}_{t,i}(\log P_{i})x_{t+1,i}\\ &\text{subject to}\qquad x_{t,i}\in\{0,1\}^{K_{i}},\;\mathbf{1}^{\top}x_{t,i}=1,\;i\in[M]\;\text{and}\;t\in[T],\end{split}$$ (1) where $\log P_{i}$ denotes a matrix obtained from $P_{i}$ by taking the logarithm of each entry. In our particular application, in addition to the signal's temporal structure, large changes in total power (in comparison to signal noise) contain valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013; Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola (2012) to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes.
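For concreteness, the objective in (1) can be evaluated directly for a candidate state sequence. The following numpy sketch is purely illustrative (the function name and data layout are assumptions, not the authors' implementation); it takes one-hot indicator rows per time step:

```python
import numpy as np

def fhmm_neg_log_posterior(y, x, mu, logP, sigma2):
    """Objective (1): quadratic data-fit term plus negated transition log-probabilities.

    y     : (T,) observed total power
    x     : list over appliances i of (T, K_i) one-hot state-indicator arrays
    mu    : list over appliances i of (K_i,) per-state mean consumptions
    logP  : list over appliances i of (K_i, K_i) entrywise-log transition matrices
    sigma2: observation-noise variance
    """
    # data-fit term: sum_t (y_t - sum_i x_{t,i}^T mu_i)^2 / (2 sigma^2)
    pred = sum(xi @ mi for xi, mi in zip(x, mu))   # (T,) predicted total power
    fit = np.sum((y - pred) ** 2) / (2.0 * sigma2)
    # transition term: - sum_{t<T} sum_i x_{t,i}^T (log P_i) x_{t+1,i}
    trans = -sum(np.sum(xi[:-1] * (xi[1:] @ lP.T)) for xi, lP in zip(x, logP))
    return fit + trans
```

Because the indicators are one-hot, each inner product $x^{\top}_{t,i}(\log P_{i})x_{t+1,i}$ simply selects the log-probability of the observed transition.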
Formally, let $\Delta y_{t}=y_{t+1}-y_{t}$, $\Delta\mu^{(i)}_{m,k}=\mu_{i,k}-\mu_{i,m}$, and define the matrices $E_{t,i}\in\mathbb{R}^{K_{i}\times K_{i}}$ by $(E_{t,i})_{m,k}={(\Delta y_{t}-\Delta\mu^{(i)}_{m,k})^{2}}/{(2\sigma_{\text{diff}}^{2})}$, for some constant $\sigma_{\text{diff}}>0$. Intuitively, $(E_{t,i})_{m,k}$ is the negative log-likelihood (up to a constant) of observing a change $\Delta y_{t}$ in the power level when appliance $i$ transitions from state $m$ to state $k$ under some zero-mean Gaussian noise with variance $\sigma_{\text{diff}}^{2}$. Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola (2012) added the term $(-\sum_{t=1}^{T-1}\sum_{i=1}^{M}x^{\top}_{t,i}E_{t,i}x_{t+1,i})$ to the objective of (1), arriving at $$\begin{split}&\arg\min_{x_{t,i}}\quad f(x_{1},\ldots,x_{T}):=\sum_{t=1}^{T}\frac{(y_{t}-\sum_{i=1}^{M}x^{\top}_{t,i}\mu_{i})^{2}}{2\sigma^{2}}-\sum_{t=1}^{T-1}\sum_{i=1}^{M}x^{\top}_{t,i}(E_{t,i}+\log P_{i})x_{t+1,i}\\ &\text{subject to}\quad x_{t,i}\in\{0,1\}^{K_{i}},\;\mathbf{1}^{\top}x_{t,i}=1,\;i\in[M]\;\text{and}\;t\in[T]\,.\\ \end{split}$$ (2) In the rest of the paper we derive an efficient approximate solution to (2), and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola (2012) with respect to several measures quantifying the accuracy of load disaggregation solutions.
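The matrices $E_{t,i}$ are straightforward to construct; a minimal numpy sketch (illustrative only, with a hypothetical function name), exploiting broadcasting to form all pairwise level changes $\Delta\mu^{(i)}_{m,k}$ at once:

```python
import numpy as np

def signal_change_penalty(dy, mu_i, sigma_diff2):
    """Build E_{t,i}: (E_{t,i})_{m,k} = (dy - (mu_{i,k} - mu_{i,m}))^2 / (2 sigma_diff^2),
    the negative log-likelihood (up to a constant) of seeing total-power change dy
    when appliance i jumps from state m to state k.

    dy         : scalar power change Delta y_t
    mu_i       : (K_i,) per-state mean consumptions of appliance i
    sigma_diff2: variance of the power-change noise
    """
    dmu = mu_i[None, :] - mu_i[:, None]   # dmu[m, k] = mu_{i,k} - mu_{i,m}
    return (dy - dmu) ** 2 / (2.0 * sigma_diff2)
```

For a two-state appliance with levels $(0, 5)$ watts, a jump of $\Delta y_{t}=5$ is penalized least for the off-to-on transition $(m,k)=(1,2)$, as expected.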
3 SDP Relaxation and Randomized Rounding There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors $x_{t,i}$; and (ii) the objective function $f$, even when considering its extension to a convex domain, is in general non-convex (due to the second term). As a remedy, we will relax (2) to make it an integer quadratic programming problem, then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start with reviewing the latter methods. 3.1 Approximate Solutions for Integer Quadratic Programming In this section we consider approximate solutions to the integer quadratic programming problem $$\begin{split}&\text{minimize}\qquad f(x)=x^{\top}Dx+2d^{\top}x\\ &\text{subject to}\qquad x\in\{0,1\}^{n},\end{split}$$ (3) where $D\in\mathbb{S}_{+}^{n}$ is positive semidefinite, and $d\in\mathbb{R}^{n}$. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number $n$ of binary variables, making these methods unfit for large-scale problems. One way to avoid exponential running times is to replace (3) with a convex problem, with the hope that the solutions of the convex problem can serve as a good starting point to find high-quality solutions to (3). The standard approach to this is to linearize (3) by introducing a new variable $X\in\mathbb{S}_{+}^{n}$ tied to $x$ through $X=xx^{\top}$, so that $x^{\top}Dx=\textbf{trace}(DX)$, and then relax the nonconvex constraints $X=xx^{\top}$, $x\in\{0,1\}^{n}$ to $X\succeq xx^{\top}$, $\textbf{diag}(X)=x$, $x\in[0,1]^{n}$.
This leads to the relaxed SDP problem $$\begin{split}&\text{minimize}\qquad\textbf{trace}(D^{\top}X)+2d^{\top}x\\ &\text{subject to}\qquad\begin{bmatrix}1&x^{\top}\\ x&X\\ \end{bmatrix}\succeq 0,\quad\textbf{diag}(X)=x,\quad x\in[0,1]^{n}.\\ \end{split}$$ (4) By introducing $\hat{X}=\begin{bmatrix}1&x^{\top}\\ x&X\end{bmatrix}$ this can be written in the compact SDP form $$\begin{split}&\text{minimize}\qquad\textbf{trace}(\hat{D}^{\top}\hat{X})\\ &\text{subject to}\qquad\hat{X}\succeq 0,\quad\mathcal{A}\hat{X}=b\,,\end{split}$$ (5) where $\hat{D}=\begin{bmatrix}0&d^{\top}\\ d&D\end{bmatrix}\in\mathbb{S}^{n+1}$, $b\in\mathbb{R}^{m}$, and $\mathcal{A}:\mathbb{S}^{n+1}\to\mathbb{R}^{m}$ is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods (Malick et al., 2009; Wen et al., 2010). As discussed before, this approach becomes impractical in terms of both the running time and the required memory if either the number of variables or the number of optimization constraints is large (Wen et al., 2010). We will return to the issue of building scalable solvers for NILM in Section 5. Note that by introducing the new variable $X$, the problem is projected into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991; Burer and Vandenbussche, 2006).
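As a sanity check on the lifting, one can verify numerically that on a binary point $x$ the compact objective $\textbf{trace}(\hat{D}^{\top}\hat{X})$, with $\hat{X}$ built from $X=xx^{\top}$, reproduces $x^{\top}Dx+2d^{\top}x$, and that $\textbf{diag}(X)=x$ holds automatically since $x_{i}^{2}=x_{i}$ for binary entries. A minimal sketch with randomly generated $D$ and $d$ (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
D = A @ A.T                                     # a random PSD matrix
d = rng.standard_normal(n)
x = rng.integers(0, 2, size=n).astype(float)    # an arbitrary binary point

# lifted variables: X = x x^T, Xhat = [[1, x^T], [x, X]], Dhat = [[0, d^T], [d, D]]
X = np.outer(x, x)
Xhat = np.block([[np.ones((1, 1)), x[None, :]], [x[:, None], X]])
Dhat = np.block([[np.zeros((1, 1)), d[None, :]], [d[:, None], D]])

lifted = np.trace(Dhat.T @ Xhat)                # compact form (5)
direct = x @ D @ x + 2 * d @ x                  # original quadratic objective (3)
assert np.isclose(lifted, direct)
# on integral points the lift also satisfies diag(X) = x
assert np.allclose(np.diag(X), x)
```

The off-diagonal blocks of $\hat{D}$ each contribute $d^{\top}x$, which is why the linear term appears with the factor $2$ in (3).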
To obtain a feasible point of (3) from the solution of (5), we still need to change the solution $x$ to a binary vector. This can be done via randomized rounding (Park and Boyd, 2015; Goemans and Williamson, 1995): instead of letting $x\in[0,1]^{n}$, the integrality constraint $x\in\{0,1\}^{n}$ in (3) can be replaced by the inequalities $x_{i}(x_{i}-1)\geq 0$ for all $i\in[n]$. Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem $$\begin{split}&\text{minimize}\qquad\mathbb{E}_{w\sim\mathcal{N}(\mu,\Sigma)}[{w}^{\top}Dw+2d^{\top}w]\\ &\text{subject to}\qquad\mathbb{E}_{w\sim\mathcal{N}(\mu,\Sigma)}[w_{i}(w_{i}-1)]\geq 0,\qquad i\in[n],\quad\mu\in\mathbb{R}^{n},\quad\Sigma\succeq 0\end{split}$$ is equivalent to $$\begin{split}&\text{minimize}\qquad\textbf{trace}((\Sigma+\mu\mu^{\top})D)+2d^{\top}\mu\\ &\text{subject to}\qquad\Sigma_{i,i}+\mu_{i}^{2}-\mu_{i}\geq 0,\qquad i\in[n],\end{split}$$ (6) which is in the form of (4) with $X=\Sigma+\mu\mu^{\top}$ and $x=\mu$ (above, $\mathbb{E}_{x\sim P}[f(x)]$ stands for $\int f(x)dP(x)$). This leads to the following rounding procedure: starting from a solution $(x^{*},X^{*})$ of (4), we randomly draw several samples $w^{(j)}$ from $\mathcal{N}(x^{*},X^{*}-x^{*}{x^{*}}^{\top})$, round each $w^{(j)}_{i}$ to $0$ or $1$ to obtain $x^{(j)}$, and keep the $x^{(j)}$ with the smallest objective value. In a series of experiments, Park and Boyd (2015) found this procedure to be better than just naively rounding the coordinates of $x^{*}$. 4 An Efficient Algorithm for Inference in FHMMs To arrive at our method, we apply the results of the previous subsection to (2). To do so, as mentioned at the beginning of the section, we need to change the problem to a convex one, since the elements of the second term in the objective of (2), $-x^{\top}_{t,i}(E_{t,i}+\log P_{i})x_{t+1,i}$, are not convex.
To address this issue, we relax the problem by introducing new variables $Z_{t,i}=x_{t,i}x^{\top}_{t+1,i}$ and replace the constraint $Z_{t,i}=x_{t,i}x^{\top}_{t+1,i}$ with two new ones: $$Z_{t,i}\mathbf{1}=x_{t,i}\quad\text{and}\quad Z_{t,i}^{\top}\mathbf{1}=x_{t+1,i}.$$ To simplify the presentation, we will assume that $K_{i}=K$ for all $i\in[M]$. Then problem (2) becomes $$\begin{split}\arg\min_{x_{t,i}}&\qquad\sum_{t=1}^{T}\left\{\,\frac{1}{2\sigma^{2}}\left(y_{t}-x_{t}^{\top}\mu\right)^{2}-p_{t}^{\top}z_{t}\,\right\}\\ \text{subject to}&\qquad x_{t}\in\{0,1\}^{MK},\qquad t\in[T],\\ &\qquad z_{t}\in\{0,1\}^{MK^{2}},\qquad t\in[T-1],\\ &\qquad\mathbf{1}^{\top}x_{t,i}=1,\qquad t\in[T]\;\text{and}\;i\in[M],\\ &\qquad Z_{t,i}\mathbf{1}=x_{t,i},\qquad Z_{t,i}^{\top}\mathbf{1}=x_{t+1,i}\,,\qquad t\in[T-1]\;\text{and}\;i\in[M],\\ \end{split}$$ (7) where $x_{t}^{\top}=[x^{\top}_{t,1},\ldots,x^{\top}_{t,M}]$, $\mu^{\top}=[\mu^{\top}_{1},\ldots,\mu^{\top}_{M}]$, $z_{t}^{\top}=[\textbf{vec}(Z_{t,1})^{\top},\ldots,\textbf{vec}(Z_{t,M})^{\top}]$ and $p_{t}^{\top}=[\textbf{vec}(E_{t,1}+\log P_{1})^{\top},\ldots,\textbf{vec}(E_{t,M}+\log P_{M})^{\top}]$, with $\textbf{vec}(A)$ denoting the column vector obtained by concatenating the columns of $A$ for a matrix $A$. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:222The only modification is that we need to keep the equality constraints in (7) that are missing from (3).
$$\begin{split}\arg\min_{X_{t},z_{t}}&\qquad\sum_{t=1}^{T}\textbf{trace}(D_{t}^{\top}X_{t})+d_{t}^{\top}z_{t}\\ \text{subject to}&\qquad\mathcal{A}X_{t}=b,\qquad\mathcal{B}X_{t}+\mathcal{C}z_{t}+\mathcal{E}X_{t+1}=g,\\ &\qquad X_{t}\succeq 0,\qquad X_{t},z_{t}\geq 0\,.\end{split}$$ (8) Here $\mathcal{A}:\mathbb{S}_{+}^{MK+1}\to\mathbb{R}^{m}$, $\mathcal{B},\mathcal{E}:\mathbb{S}_{+}^{MK+1}\to\mathbb{R}^{m^{\prime}}$ and $\mathcal{C}\in\mathbb{R}^{MK^{2}\times m^{\prime}}$ are all appropriate linear operators, and the integers $m$ and $m^{\prime}$ are determined by the number of equality constraints, while $D_{t}=\frac{1}{2\sigma^{2}}\begin{bmatrix}0&-y_{t}\mu^{\top}\\ -y_{t}\mu&\mu\mu^{\top}\end{bmatrix}$ and $d_{t}=p_{t}$. Notice that (8) is a simple, though huge-dimensional, SDP problem in the form of (5) where $\hat{D}$ has a special block structure. Next we apply the randomized rounding method from Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution $(z^{*},X^{*})$ of (8), and utilizing that we have an SDP problem for each time step $t$, we obtain Algorithm 1, which performs the rounding sequentially for $t=1,2,\ldots,T$. However, we run the randomized method for three consecutive time steps, since $X_{t}$ appears at both time steps $t-1$ and $t+1$ in addition to time $t$ (cf. equation 9). Following Park and Boyd (2015), in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point $x^{k}$, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.
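The rounding-plus-greedy idea can be sketched in a simplified, generic form. The code below is not Algorithm 1 itself (which rounds sequentially over windows of three consecutive time steps); the one-shot structure, function name, and parameters are illustrative assumptions:

```python
import numpy as np

def round_and_search(f, x_star, X_star, n_samples=100, seed=0):
    """Randomized rounding in the style of Park and Boyd (2015), followed by
    a greedy 1-flip local search.

    f      : objective over binary vectors, to be minimized
    x_star : (n,) relaxed solution in [0,1]^n
    X_star : (n, n) relaxed second-moment matrix with X >= x x^T
    """
    rng = np.random.default_rng(seed)
    # sample w^(j) ~ N(x*, X* - x* x*^T) and round each coordinate to {0, 1}
    cov = X_star - np.outer(x_star, x_star)
    samples = rng.multivariate_normal(x_star, cov, size=n_samples)
    candidates = (samples >= 0.5).astype(float)
    best = min(candidates, key=f)                # keep the sample with smallest f
    # greedy search: flip one coordinate at a time while the objective improves
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            trial = best.copy()
            trial[i] = 1.0 - trial[i]
            if f(trial) < f(best):
                best, improved = trial, True
    return best
```

Since every accepted flip strictly decreases the objective over a finite set of binary points, the search terminates at a 1-flip local optimum.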
5 ADMM Solver for Large-Scale, Sparse Block-Structured SDP Problems Given the relaxation and randomized rounding presented in the previous section, all that remains is to find $X^{*}_{t},z^{*}_{t}$ to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an $\epsilon$-optimal solution is of the order of $n^{3.5}\log(1/\epsilon)$ (Nesterov, 2004, Section 4.3.3), which becomes prohibitive in our case since the number of variables scales linearly with the time horizon $T$. As an alternative, first-order methods can be used for large-scale problems (Wen et al., 2010). Since our problem (8) is an SDP problem where the objective function is separable, ADMM is a promising candidate to find a near-optimal solution. To apply ADMM, we use the Moreau-Yosida quadratic regularization (Malick et al., 2009), which is well suited for the primal formulation we consider. When implementing ADMM over the variables $(X_{t},z_{t})_{t}$, the sparse structure of our constraints allows us to consider the SDP problems for each time step $t$ sequentially: $$\begin{split}&\arg\min_{X_{t},z_{t}}\qquad\textbf{trace}(D_{t}^{\top}X_{t})+d_{t}^{\top}z_{t}\\ &\text{subject to}\qquad\mathcal{A}X_{t}=b,\\ &\qquad\qquad\qquad\mathcal{B}X_{t}+\mathcal{C}z_{t}+\mathcal{E}X_{t+1}=g,\\ &\qquad\qquad\qquad\mathcal{B}X_{t-1}+\mathcal{C}z_{t-1}+\mathcal{E}X_{t}=g,\\ &\qquad\qquad\qquad X_{t}\succeq 0,\qquad X_{t},z_{t}\geq 0\,.\end{split}$$ (9) The regularized Lagrangian function for (9) is333We drop the subscript $t$ and replace $t+1$ and $t-1$ with $+$ and $-$ signs, respectively.
$$\begin{split}\mathcal{L}_{\mu}=&\;\textbf{trace}(D^{\top}X)+d^{\top}z+\dfrac{1}{2\mu}\lVert X-S\rVert^{2}_{F}+\dfrac{1}{2\mu}\lVert z-r\rVert^{2}_{2}+\lambda^{\top}(b-\mathcal{A}X)\\ &+\nu^{\top}(g-\mathcal{B}X-\mathcal{C}z-\mathcal{E}X_{+})+\nu_{-}^{\top}(g-\mathcal{B}X_{-}-\mathcal{C}z_{-}-\mathcal{E}X)\\ &-\textbf{trace}(W^{\top}X)-\textbf{trace}(P^{\top}X)-h^{\top}z,\end{split}$$ (10) where $\lambda$, $\nu$, $W\geq 0$, $P\succeq 0$, and $h\geq 0$ are dual variables, and $\mu>0$ is a constant. By taking the derivatives of $\mathcal{L}_{\mu}$ and computing the optimal values of $X$ and $z$, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each $t$ sequentially, is given by Algorithm 2. Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2), and thus also to the inference problem of additive FHMMs. 6 Learning the Model The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. (2013) to learn the transition matrices, and the spectral learning method of Anandkumar et al. (2012) (following Mattfeld, 2014) to determine the emission parameters. However, when it comes to the specific application of NILM, the problem of an unknown, time-varying bias also needs to be addressed, which appears due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola (2012), is to use a "generic model" whose contribution to the objective function is downweighted.
Surprisingly, incorporating this idea in the FHMM inference creates some unexpected challenges.444For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola (2012). See Appendix B for a discussion of this. Therefore, in this work we propose a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change $\Delta y_{t}$ in the power usage (using some ad-hoc threshold). Then we discard all events that are similar to any possible level change $\Delta\mu^{(i)}_{m,k}$. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into $K-1$ clusters, and an HMM model is built where each cluster is regarded as power usage coming from a single state of the unregistered appliances. We also allow an "off state" with power usage $0$. 7 Experimental Results We evaluate the performance of our algorithm in two setups:555Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference. we use a synthetic dataset to test the inference method in a controlled environment, and the REDD dataset of Kolter and Johnson (2011) to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan (1997), the method of Kolter and Jaakkola (2012), and that of Zhong et al. (2014); we shall refer to the last two algorithms as KJ and ZGS, respectively. 7.1 Experimental Results: Synthetic Data The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error (NDE), as suggested by Kolter and Jaakkola (2012) and also adopted by Zhong et al. (2014). This measures the reconstruction error for each individual appliance.
Given the true output $y_{t,i}$ and the estimated output $\hat{y}_{t,i}$ (i.e., $\hat{y}_{t,i}=\mu_{i}^{\top}\hat{x}_{t,i}$), the error measure is defined as $$\text{NDE}=\sqrt{{\textstyle\sum_{t,i}(y_{t,i}-\hat{y}_{t,i})^{2}}\,/\,{\textstyle\sum_{t,i}\left(y_{t,i}\right)^{2}}}\,.$$ Figure 2 shows the performance of the algorithms as the number of HMMs ($M$) and the number of states ($K$) are varied. Each plot reports results for $T=1000$ steps, averaged over $100$ random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for $2500$ iterations, runs the local search every $250$ iterations, and chooses the result that has the maximum likelihood. ADMM denotes the variant that applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtained better results than its competitors, KJ coming second and ZGS third. 7.2 Experimental Results: Real Data In this section we compare the three best methods on the real dataset REDD (Kolter and Johnson, 2011). We use the first half of the data for training and the second half for testing. Each HMM (i.e., appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main panel data. In this set of experiments we monitor appliances consuming more than $100$ watts. ADMM-RR is run for $1000$ iterations, the local search is run every $250$ iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need some prior information about the usage of each appliance; the authors' suggestion is to use national energy surveys, but lacking this information (as well as information about the number of residents, type of houses, etc.)
we used the training data to extract this prior knowledge, which is expected to help this method. Detailed results about the precision and recall of estimating which appliances are ‘on’ at any given time are given in Table 1. In Appendix D we also report the error of the total power usage assigned to different appliances (Table 2), as well as the amount of assigned power to each appliance as a percentage of total power (Figure 3). In summary, we can see that our method consistently outperformed the others, achieving an average precision and recall of $60.97\%$ and $78.56\%$, with about $50\%$ better precision than KJ at essentially the same recall ($38.68/75.02\%$), while significantly improving upon ZGS ($17.97/36.22\%$). Considering the error in assigning the power consumption to different appliances, our method achieved about $30-35\%$ smaller error (ADMM-RR: $2.87\%$, KJ: $4.44\%$, ZGS: $3.94\%$) than its competitors. In our real-data experiments, there are about 1 million decision variables: $M=7$ or $6$ appliances (for phase A and B power, respectively) with $K=4$ states each, over about $T=30{,}000$ time steps for one day ($1$ sample every $6$ seconds). KJ and ZGS solve quadratic programs, which increases their memory usage ($14$ GB vs. $6$ GB in our case). On the other hand, our implementation of their method, using the commercial solver MOSEK inside the Matlab-based YALMIP toolbox (Löfberg, 2004), runs in $5$ minutes, while our algorithm, which is purely Matlab-based, takes $5$ hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up compared to our current implementation. 8 Conclusion FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard FHMM inference algorithms infeasible even for only a handful of appliances.
In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments. A crucial component of our solution is a scalable ADMM method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large-scale integer quadratic programming. Acknowledgements This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code. References Anandkumar et al. [2012] A. Anandkumar, D. Hsu, and S. M. Kakade. A Method of Moments for Mixture Models and Hidden Markov Models. In COLT, volume 23, pages 33.1–33.34, 2012. Boyd et al. [2011] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. FTML, 3(1):1–122, 2011. Burer and Vandenbussche [2006] S. Burer and D. Vandenbussche. Solving Lift-and-Project Relaxations of Binary Integer Programs. SIAM Journal on Optimization, 16(3):726–750, 2006. Chang et al. [2012] H.-H. Chang, K.-L. Chen, Y.-P. Tsai, and W.-J. Lee. A New Measurement Method for Power Signatures of Nonintrusive Demand Monitoring and Load Identification. IEEE T. on Industry Applications, 48:764–771, 2012. Dong et al. [2012] M. Dong, P. C. M. Meira, W. Xu, and W. Freitas. An Event Window Based Load Monitoring Technique for Smart Meters. IEEE Transactions on Smart Grid, 3(2):787–796, June 2012. Dong et al. [2013] M. Dong, Meira, W. Xu, and C. Y. Chung.
Non-Intrusive Signature Extraction for Major Residential Loads. IEEE Transactions on Smart Grid, 4(3):1421–1430, Sept. 2013. Egarter et al. [2015] D. Egarter, V. P. Bhuvana, and W. Elmenreich. PALDi: Online Load Disaggregation via Particle Filtering. IEEE Transactions on Instrumentation and Measurement, 64(2):467–477, 2015. Figueiredo et al. [2012] M. Figueiredo, A. de Almeida, and B. Ribeiro. Home Electrical Signal Disaggregation for Non-intrusive Load Monitoring (NILM) Systems. Neurocomputing, 96:66–73, Nov. 2012. Ghahramani and Jordan [1997] Z. Ghahramani and M. Jordan. Factorial Hidden Markov Models. Machine learning, 29(2):245–273, 1997. Goemans and Williamson [1995] M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. J. of the ACM, 42(6):1115–1145, 1995. Guo et al. [2015] Z. Guo, Z. J. Wang, and A. Kashani. Home Appliance Load Modeling From Aggregated Smart Meter Data. IEEE Transactions on Power Systems, 30(1):254–262, Jan. 2015. Kelly and Knottenbelt [2015] J. Kelly and W. Knottenbelt. Neural NILM: Deep Neural Networks Applied to Energy Disaggregation. In BuildSys, pages 55–64, 2015. Kim et al. [2011] H. Kim, M. Marwah, M. F. Arlitt, G. Lyon, and J. Han. Unsupervised Disaggregation of Low Frequency Power Measurements. In ICDM, volume 11, pages 747–758, 2011. Koller and Friedman [2009] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. Adaptive computation and machine learning. MIT Press, Cambridge, MA, 2009. Kolter and Jaakkola [2012] J. Z. Kolter and T. Jaakkola. Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation. In AISTATS, pages 1472–1482, 2012. Kolter and Johnson [2011] J. Z. Kolter and M. J. Johnson. REDD: A Public Data Set for Energy Disaggregation Research. In Workshop on Data Mining Applications in Sustainability (SIGKDD), pages 59–62, 2011. Kolter et al. [2010] J. Z. Kolter, S. 
Batra, and A. Y. Ng. Energy Disaggregation via Discriminative Sparse Coding. In Advances in Neural Information Processing Systems, pages 1153–1161, 2010. Kontorovich et al. [2013] A. Kontorovich, B. Nadler, and R. Weiss. On Learning Parametric-Output HMMs. In ICML, pages 702–710, 2013. Liang et al. [2010] J. Liang, S. K. K. Ng, G. Kendall, and J. W. M. Cheng. Load Signature Study -Part I: Basic Concept, Structure, and Methodology. IEEE Transactions on Power Delivery, 25(2):551–560, Apr. 2010. Löfberg [2004] J. Löfberg. YALMIP : A Toolbox for Modeling and Optimization in MATLAB. In CACSD, 2004. Lovász and Schrijver [1991] L. Lovász and A. Schrijver. Cones of Matrices and Set-functions and 0-1 Optimization. SIAM Journal on Optimization, 1(2):166–190, 1991. Malick et al. [2009] J. Malick, J. Povh, F. Rendl, and A. Wiegele. Regularization Methods for Semidefinite Programming. SIAM Journal on Optimization, 20(1):336–356, Jan. 2009. ISSN 1052-6234, 1095-7189. Mattfeld [2014] C. Mattfeld. Implementing spectral methods for hidden Markov models with real-valued emissions. arXiv preprint arXiv:1404.7472, 2014. Nesterov [2004] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2004. Park and Boyd [2015] J. Park and S. Boyd. A Semidefinite Programming Method for Integer Convex Quadratic Minimization. arXiv preprint arXiv:1504.07672, 2015. Prudenzi [2002] A. Prudenzi. A neuron nets based procedure for identifying domestic appliances pattern-of-use from energy recordings at meter panel. In PESW, volume 2, pages 941–946, 2002. Rennie et al. [2009] S. J. Rennie, J. R. Hershey, and P. Olsen. Single-channel speech separation and recognition using loopy belief propagation. In ICASSP, pages 3845–3848, 2009. Wainwright and Jordan [2007] M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. FTML, 1(1–2):1–305, 2007. Weiss et al. [2012] M. Weiss, A. Helfenstein, F. Mattern, and T. Staake. 
Leveraging smart meter data to recognize home appliances. In PerCom, pages 190–197, 2012. Wen et al. [2010] Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming. Mathematical Programming Computation, 2(3-4):203–230, Dec. 2010. Zhong et al. [2014] M. Zhong, N. Goddard, and C. Sutton. Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation. In NIPS, pages 3590–3598, 2014. Zia et al. [2011] T. Zia, D. Bruckner, and A. Zaidi. A hidden Markov model based procedure for identifying household electric loads. In IECON, pages 3218–3223, 2011. Appendix A ADMM updates In this section we derive the ADMM updates for the regularized Lagrangian $\mathcal{L}_{\mu}$ given by (10). Taking derivatives with respect to $X$ and $z$ and setting them to zero, we get $$\begin{split}\nabla_{X}\mathcal{L}_{\mu}&=D+\dfrac{1}{\mu}(X-S)-\mathcal{A}^{\top}\lambda-\mathcal{B}^{\top}\nu-\mathcal{E}^{\top}\nu_{-}-W-P=0,\\ X^{*}&=S+\mu(\mathcal{A}^{\top}\lambda+\mathcal{B}^{\top}\nu+\mathcal{E}^{\top}\nu_{-}+W+P-D)\end{split}$$ and $$\begin{split}\nabla_{z}\mathcal{L}_{\mu}&=d+\dfrac{1}{\mu}(z-r)-\mathcal{C}^{\top}\nu-h=0,\\ z^{*}&=r+\mu(\mathcal{C}^{\top}\nu+h-d)\,.\end{split}$$ Substituting $X=X^{*}$ and $z=z^{*}$ in (10) defines $\hat{\mathcal{L}}_{\mu}$.
Then the standard ADMM iteration yields $$\begin{split}P^{k+1}&=\arg\min_{P\succeq 0}\,\hat{\mathcal{L}}_{\mu}(S^{k},P,W^{k},\lambda^{k},\nu^{k},\nu_{-}^{k}),\\ W^{k+1}&=\arg\min_{W\geq 0}\,\hat{\mathcal{L}}_{\mu}(S^{k},P^{k+1},W,\lambda^{k},\nu^{k},\nu_{-}^{k}),\\ \lambda^{k+1}&=\arg\min_{\lambda}\,\hat{\mathcal{L}}_{\mu}(S^{k},P^{k+1},W^{k+1},\lambda,\nu^{k},\nu_{-}^{k}),\\ S^{k+1}&=S^{k}+\mu(\mathcal{A}^{\top}\lambda^{k+1}+\mathcal{B}^{\top}\nu^{k}+\mathcal{E}^{\top}\nu_{-}^{k+1}+W^{k+1}+P^{k+1}-D),\\ r^{k+1}&=r^{k}+\mu(\mathcal{C}^{\top}\nu^{k}+h^{k}-d),\\ h^{k+1}&=\arg\min_{h\geq 0}\,\hat{\mathcal{L}}_{\mu}(r^{k+1},\nu^{k}),\\ \nu^{k+1}&=\arg\min_{\nu}\,\hat{\mathcal{L}}_{\mu}(S^{k+1},P^{k+1},W^{k+1},\lambda^{k+1},\nu,\nu_{-}^{k+1},h^{k+1},r^{k+1}).\end{split}$$ By rearranging the terms in $\hat{\mathcal{L}}_{\mu}$, the following update equations can be found: $$\begin{split}W^{k+1}=&\max\{(D-\mathcal{A}^{\top}\lambda^{k}-\mathcal{B}^{\top}\nu^{k}-\mathcal{E}^{\top}\nu^{k}_{-}-P^{k}-D-S^{k}/\mu),\mathbf{0}\},\\ P^{k+1}=&(D-\mathcal{A}^{\top}\lambda^{k}-\mathcal{B}^{\top}\nu^{k}-\mathcal{E}^{\top}\nu^{k}_{-}-W^{k}-D-S^{k}/\mu)_{+},\\ \lambda^{k+1}=&\dfrac{1}{\mu}(\mathcal{A}\mathcal{A}^{\top})^{\dagger}\big(b-\mathcal{A}(\mathcal{B}^{\top}\nu^{k}+\mathcal{E}^{\top}\nu_{-}^{k}+W^{k+1}+P^{k+1}-D)\big),\\ h^{k+1}=&\max\{d-\mathcal{C}^{\top}\nu^{k}-r^{k}/\mu,\mathbf{0}\},\\ \nu^{k+1}=&\dfrac{1}{\mu}(\mathcal{B}\mathcal{B}^{\top}+\mathcal{C}\mathcal{C}^{\top}+\mathcal{E}\mathcal{E}^{\top})^{\dagger}\left(g-\mathcal{B}\big(S^{k+1}+\mu(\mathcal{A}^{\top}\lambda^{k+1}+\mathcal{E}^{\top}\nu_{-}^{k+1}+W^{k+1}+P^{k+1}-D)\big)\right.\\ &\left.-\mathcal{C}\big(r^{k+1}+\mu(h^{k+1}-d)\big)-\mathcal{E}\big(S_{+}^{k}+\mu(\mathcal{A}^{\top}\lambda_{+}^{k}+\mathcal{B}^{\top}\nu_{+}^{k}+W_{+}^{k}+P_{+}^{k}-D)\big)\right)\,.\end{split}$$ (11) Here $\max:\mathcal{X}\times\mathcal{X}\to\mathcal{X}$ works elementwise; for any square matrix $A$, $A^{\dagger}$ denotes the Moore-Penrose pseudo-inverse; and for any real symmetric matrix $A$, $A_{+}$ is the projection of $A$ onto the positive semidefinite cone (if the spectral decomposition of $A$ is given by $A=\sum_{i}\lambda_{i}v_{i}v_{i}^{\top}$, where $\lambda_{i}$ and $v_{i}$ are the $i^{\text{th}}$ eigenvalue and eigenvector of $A$, respectively, then $A_{+}=\sum_{\lambda_{i}>0}\lambda_{i}v_{i}v_{i}^{\top}$). Note that the projections are done on matrices of small size. Note also that the pseudo-inverses of the matrices involved need only be calculated once. Appendix B Discussion of the Derivation in Kolter and Jaakkola [2012] in the Presence of the “Generic Model” The “generic model” affects the derivation of the algorithm of Kolter and Jaakkola [2012] as follows. The authors claim to derive the final optimization problem given in equation (15) of their paper from their equations (9) and (10) as follows: equation (9) defines the problem $\min_{z\in Z,Q\in A}f_{1}(z,Q)$, while (10) defines the problem $\min_{z^{\prime}\in Z^{\prime},Q\in A}f_{2}(z^{\prime},Q)$, where $z^{\prime}=g(z)$. Here, $z,z^{\prime}$ are variables that describe the state of the “generic model” over time. The claim in the paper is that with some set $B$ (coming from their “one-at-a-time” constraint), $\min_{z\in Z,z^{\prime}\in Z^{\prime},Q\in A\cap B,z^{\prime}=g(z)}f_{1}(z,Q)+f_{2}(z^{\prime},Q)$ is equivalent to the minimization problem in equation (15).
However, carefully checking the derivation shows that (15) is equivalent to $\min_{z\in Z,Q\in A\cap B}f_{1}(z,Q)+\min_{z^{\prime}\in Z^{\prime}}f_{2}(z^{\prime},Q)$, which is smaller in general. Appendix C Generating the Synthetic Dataset The synthetic dataset used in the experiments was generated in the following way: The power levels corresponding to each on state ($\mu$) were generated uniformly at random from $[100,4500]$ with the additional constraint that the difference of any two non-zero levels must be greater than $100$ (to encourage identifiability). The levels for “off states” were set to $0$. The transition matrices for each appliance were generated in the following way: diagonal elements for “off states” were drawn uniformly at random from $[0,35]$ and for on-states from $[0,30]$, while non-diagonal elements were selected from $[0,1]$ to ensure sparse transitions. Finally, the rows of each matrix were normalized to ensure they form proper transition matrices. The output of each appliance was subject to additive Gaussian noise with variance $\sigma\in[0,6]$ selected proportionally to the energy consumption level of the given on state, and $1$ for off states. Appendix D Additional Results for the Real-Data Experiment In Table 1 we provided precision and recall values for our experiments on real data. As promised, here we provide some additional results about these experiments: Table 2 presents the total power usage assigned to different appliances, and Figure 3 shows the amount of assigned power to each appliance.
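The recipe of Appendix C can be condensed into a short generator. The sketch below is a minimal illustration, not the authors' code; the exact noise proportionality constant and the helper names are our assumptions:

```python
import numpy as np

def gen_appliance(K, rng):
    """One synthetic appliance HMM following Appendix C: state 0 is 'off' with
    level 0; on-state levels are uniform on [100, 4500] with pairwise gaps > 100."""
    levels = [0.0]
    while len(levels) < K:
        cand = rng.uniform(100.0, 4500.0)
        if all(abs(cand - l) > 100.0 for l in levels[1:]):
            levels.append(cand)
    P = rng.uniform(0.0, 1.0, size=(K, K))   # small off-diagonal mass -> sparse transitions
    P[0, 0] = rng.uniform(0.0, 35.0)         # off-state self-transition weight
    for k in range(1, K):
        P[k, k] = rng.uniform(0.0, 30.0)     # on-state self-transition weight
    P /= P.sum(axis=1, keepdims=True)        # row-normalize to a proper transition matrix
    return np.array(levels), P

def simulate_aggregate(M, K, T, rng):
    """Aggregate output of M independent appliances over T steps."""
    y = np.zeros(T)
    for _ in range(M):
        mu, P = gen_appliance(K, rng)
        s = 0
        for t in range(T):
            s = rng.choice(K, p=P[s])
            # noise scale grows with the on-state level (the constant is our choice), 1 for off
            sd = 1.0 if s == 0 else 1.0 + 5.0 * mu[s] / 4500.0
            y[t] += mu[s] + rng.normal(0.0, sd)
    return y
```

The aggregate signal `simulate_aggregate(M, K, T, rng)` then plays the role of the observed power usage $y_t$ in the experiments of Section 7.1.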
Sparse Methods for Automatic Relevance Determination Samuel H. Rudy${}^{1*}$, Themistoklis P. Sapsis${}^{1}$ ${}^{1}$Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 Abstract This work considers methods for imposing sparsity in Bayesian regression with applications in nonlinear system identification. We first review automatic relevance determination (ARD) and analytically demonstrate the need for additional regularization or thresholding to achieve sparse models. We then discuss two classes of methods, regularization-based and thresholding-based, which build on ARD to learn parsimonious solutions to linear problems. In the case of orthogonal covariates, we analytically demonstrate favorable performance with regard to learning a small set of active terms in a linear system with a sparse solution. Several example problems are presented to compare the proposed methods, in terms of advantages and limitations relative to ARD, in bases with hundreds of elements. The aim of this paper is to analyze and understand the assumptions that lead to several algorithms and to provide theoretical and empirical results so that the reader may gain insight and make more informed choices regarding sparse Bayesian regression. Keywords– Sparse regression, automatic relevance determination, system identification ${}^{*}$ Corresponding author (shrudy@mit.edu). Python code: https://github.com/snagcliffs/SparseARD Introduction In many modeling and engineering problems it is critical to build statistical models from data which include estimates of the model uncertainty. This is often achieved through non-parametric Bayesian regression in the form of Gaussian processes and similar methods [26]. While these methods offer tremendous flexibility and have seen success in a wide variety of applications, they have two significant shortcomings: they are not interpretable and they often fail in high-dimensional settings.
The simplest case of parametric Bayesian regression is Bayesian ridge regression, where one learns a distribution for model parameters by assuming independent and identically distributed (iid) Gaussian priors on model weights. However, Bayesian ridge regression requires the researcher to provide a single length scale for the prior that stays fixed across dimensions. It is therefore not invariant to changes in units. Furthermore, Bayesian ridge regression does not yield sparse models. In a high-dimensional setting this may hinder the interpretability of the learned model. Automatic Relevance Determination (ARD) [20, 32, 36] addresses both of these problems. ARD learns length scales associated with each free variable in a regression problem. In the context of linear regression, ARD is often referred to as Sparse Bayesian Learning (SBL) [33, 36, 38] due to its tendency to learn sparse solutions to linear problems. ARD has been applied to problems in compressed sensing [3], sparse regression [35, 42, 43, 40, 12], matrix factorization [30], classification of gene expression data [16], earthquake detection [22], and Bayesian neural networks [20], as well as other fields. More recently, some works have used ARD for interpretable nonlinear system identification. In this setting a linear regression problem is formulated to learn the equations of motion for a dynamical system from a large collection of candidate functions, called a library. Traditionally, frequentist methods have been applied to select a small set of active terms from the candidate functions. These include symbolic regression [6, 29, 24], sequential thresholding [8, 27, 28], information-theoretic methods [2, 15], relaxation methods [31, 45], and constrained sparse optimization [34]. Bayesian methods for nonlinear system identification [39, 21], including ARD, have been applied for improved robustness in the case of low data [40] and for uncertainty quantification [42, 43, 12].
The critical challenge for any library method for nonlinear system identification is learning the correct set of active terms. Motivated by problems such as nonlinear system identification, where accuracy in determining the sparsity pattern of a predictor is paramount, we focus on the ability of ARD to accurately learn a small subset of active terms in a linear system. This is in contrast to convergence in the sense of any norm. Indeed, it was previously shown [40] that ARD converges to the true predictor as the noise present in the training data shrinks to zero. However, we show analytically that ARD fails to obtain the true sparsity pattern in the case of an orthonormal design matrix, leading to extraneous terms for arbitrarily small magnitudes of noise. This result motivates further considerations for imposing sparsity on the learned model. This paper explores several intuitive methods for imposing sparsity in the ARD framework. We discuss the assumptions that lead to each technique, any approximations we use to make them tractable, and in some cases provide theoretical results regarding their accuracy with respect to selection of active terms. We stress that while sparse regression is a mature field with many approaches designed to approximate the $\ell^{0}$-penalized least squares problem [31, 41, 2, 44, 5], most of these techniques do not consider uncertainty. We therefore only compare results of the proposed techniques to ARD. The paper is organized as follows. In Section 2 we provide a brief discussion of the automatic relevance determination method for Bayesian linear regression. Section 3 introduces two regularization-based methods for imposing sparsity on predictors learned by the ARD algorithm. Section 4 introduces various thresholding-based approaches. In each case we provide analytical results for the expected false positive and false negative rates with respect to coefficients being set to zero. Section 5 includes a more detailed comparison between methods.
Section 6 includes results of each of the proposed methods applied to a variety of problems including a sparse linear system, function fitting, and nonlinear system identification. Discussion and comments towards future work are included in Section 7. Setup We start with the likelihood model, $$y=\boldsymbol{\theta}(\mathbf{x})\boldsymbol{\xi}+\nu,\qquad\nu\sim\mathcal{N}(0,\sigma^{2}),$$ (1) where $\boldsymbol{\theta}:\mathbb{R}^{n}\to\mathbb{R}^{d}$ forms a nonlinear basis, $y$ is scalar, $\mathbf{x}\in\mathbb{R}^{n}$, $\boldsymbol{\xi}\in\mathbb{R}^{d}$, and $\nu$ is normally distributed error with variance $\sigma^{2}$. We assume a prior distribution on the weights $\boldsymbol{\xi}$ with variances given by the hyper-parameter $\boldsymbol{\gamma}$: $$\xi_{i}\sim\mathcal{N}(0,\gamma_{i}).$$ (2) Automatic relevance determination seeks to learn the value of the parameter $\boldsymbol{\gamma}$ that maximizes the evidence. This approach is known as evidence maximization, empirical Bayes, or type-II maximum likelihood [4, 19]. Given a dataset $\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{m}$ we marginalize over $\boldsymbol{\xi}$ to obtain the posterior likelihood of $\boldsymbol{\gamma}$. This gives, $$p(\mathcal{D}|\boldsymbol{\gamma})=\int p(\mathcal{D}|\boldsymbol{\xi})p(\boldsymbol{\xi};\boldsymbol{\gamma})\,d\boldsymbol{\xi}\propto|\boldsymbol{\Sigma}_{y}|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}\right),$$ (3) where $\boldsymbol{\Sigma}_{y}=\sigma^{2}\mathbf{I}_{m}+\boldsymbol{\theta}(\mathbf{X})\boldsymbol{\Gamma}\boldsymbol{\theta}(\mathbf{X})^{T}$, $\boldsymbol{\Gamma}=\operatorname{diag}(\boldsymbol{\gamma})$, $\mathbf{y}$ is a column vector of all observed outputs, and $\boldsymbol{\theta}(\mathbf{X})\in\mathbb{R}^{m\times d}$ is a matrix whose rows are the nonlinear features of each observed $\mathbf{x}$.
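The quantities above are straightforward to compute numerically. The following is a minimal sketch (ours, not the paper's released code; function names are our own) of the negative log evidence and of the Gaussian posterior over $\boldsymbol{\xi}$, using the convention that $\boldsymbol{\Theta}=\boldsymbol{\theta}(\mathbf{X})$ has one row per observation:

```python
import numpy as np

def neg_log_evidence(gamma, Theta, y, sigma2):
    """Negative log evidence of gamma, up to an additive constant:
    log|Sigma_y| + y^T Sigma_y^{-1} y, with
    Sigma_y = sigma2 * I_m + Theta diag(gamma) Theta^T."""
    m = Theta.shape[0]
    Sigma_y = sigma2 * np.eye(m) + (Theta * gamma) @ Theta.T
    _, logdet = np.linalg.slogdet(Sigma_y)
    return logdet + y @ np.linalg.solve(Sigma_y, y)

def posterior_moments(gamma, Theta, y, sigma2):
    """Mean and covariance of the Gaussian posterior over the weights xi."""
    Sigma_xi = np.linalg.inv(Theta.T @ Theta / sigma2 + np.diag(1.0 / gamma))
    mu_xi = Sigma_xi @ Theta.T @ y / sigma2
    return mu_xi, Sigma_xi
```

A useful consistency check is the variational identity derived later in the text: the quadratic term $\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}$ equals the penalized residual $\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}$ minimized over $\boldsymbol{\xi}$, with the minimum attained at the posterior mean.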
We estimate $\boldsymbol{\gamma}$ by maximizing Eq. (3); the posterior distribution for $\boldsymbol{\xi}$ is then Gaussian, $\boldsymbol{\xi}\sim\mathcal{N}(\boldsymbol{\mu}_{\xi},\boldsymbol{\Sigma}_{\xi})$. Letting $\boldsymbol{\Theta}=\boldsymbol{\theta}(\mathbf{X})$, this is, $$\boldsymbol{\mu}_{\xi}=\sigma^{-2}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y},\qquad\boldsymbol{\Sigma}_{\xi}=\left(\sigma^{-2}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}+\boldsymbol{\Gamma}^{-1}\right)^{-1}.$$ (4) In practice $\boldsymbol{\gamma}$ is found by minimizing the negative log of Eq. (3), given by, $$L(\boldsymbol{\gamma})=-\log p(\mathcal{D};\boldsymbol{\gamma})\propto\log|\boldsymbol{\Sigma}_{y}|+\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}.$$ (5) Following [40] (see Appendix A), the second term in (5) is equivalent to, $$\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}=\min_{\boldsymbol{\xi}}\,\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi},$$ (6) which gives the following representation of the loss function (5), $$L(\boldsymbol{\gamma})=\min_{\boldsymbol{\xi}}\,\left(\log|\boldsymbol{\Sigma}_{y}|+\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}\right).$$ (7) To minimize (7) we solve a sequence of $\ell^{1}$-penalized least squares problems, as developed in [36]. This is shown in Alg. 1. Some works have used Gamma distribution priors on the scale parameters $\boldsymbol{\gamma}$ and the precision $\sigma^{-2}$ [32]. This leads to a problem that is solved using coordinate descent of a slightly altered loss function from that shown in Eq. (7). More recent works [36, 40, 42] have not used this formulation, so much of the following work does not use hierarchical priors.
We note, however, that the case of a Gamma distribution prior on $\boldsymbol{\gamma}$ with shape parameter $k=1$ is in fact a Laplace prior. This case has been studied as a Bayesian compressed sensing method [3] and is a special case of the formulation considered in Sec. 3. The minimization problem in step 4 of Alg. 1 may be re-written, after rescaling $\boldsymbol{\Theta}$ and $\boldsymbol{\xi}$, to obtain the commonly used Lagrangian form of the least absolute shrinkage and selection operator (Lasso) [31]. Letting $$\boldsymbol{\zeta}^{(k+1)}=\operatorname*{arg\,min}_{\boldsymbol{\zeta}}\left\|\mathbf{y}-\boldsymbol{\Theta}\,\operatorname{diag}\left(\boldsymbol{\eta}^{(k+1)}\right)^{-1}\boldsymbol{\zeta}\right\|_{2}^{2}+\|\boldsymbol{\zeta}\|_{1},$$ (8) we get $$\boldsymbol{\xi}^{(k+1)}=\operatorname{diag}\left(\boldsymbol{\eta}^{(k+1)}\right)^{-1}\boldsymbol{\zeta}^{(k+1)}.$$ (9) Typical solvers for Eq. (8) include coordinate descent [37], proximal gradient methods [23], the alternating direction method of multipliers [7], and least angle regression (LARS) [10]. Several example datasets considered in this manuscript resulted in an ill-conditioned $\boldsymbol{\Theta}$ and therefore slow convergence of algorithms for solving the Lasso subroutine. We found empirically that all methods performed equally well on orthogonal $\boldsymbol{\Theta}$, but for ill-conditioned cases LARS far outperformed the other optimization routines. As we have noted, it is often the case that solutions to Eq. (5) exhibit some degree of sparsity. However, such solutions are only sparse in comparison to those derived by methods such as Bayesian ridge, where all coefficients are nonzero. For problems where we seek to find a very small set of nonzero terms, Alg. 1 must be adjusted to push extraneous terms to zero. In the following two sections we will discuss five methods for doing so.
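Alg. 1 itself does not appear in this excerpt, but the loop it implements can be sketched directly from Eqs. (7)-(9). The sketch below is our reading of the reweighted-$\ell^{1}$ scheme of [36], not a verbatim transcription of Alg. 1: the penalty weights $\sigma^{2}\sqrt{c_{i}}$ and the update $\gamma_{i}=|\xi_{i}|/\sqrt{c_{i}}$ follow the fixed-point relations used later in the ARDvi derivation (with $\alpha=1$), and the inner weighted lasso is solved by plain coordinate descent:

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding operator."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def weighted_lasso_cd(Theta, y, weights, n_sweeps=200):
    """Coordinate descent for min_xi 0.5*||y - Theta xi||^2 + sum_i weights_i*|xi_i|."""
    d = Theta.shape[1]
    xi = np.zeros(d)
    col_sq = np.sum(Theta ** 2, axis=0)
    for _ in range(n_sweeps):
        for j in range(d):
            r = y - Theta @ xi + Theta[:, j] * xi[j]   # residual excluding coordinate j
            xi[j] = soft_threshold(Theta[:, j] @ r, weights[j]) / col_sq[j]
    return xi

def ard_reweighted_l1(Theta, y, sigma2, n_outer=10):
    """Sketch of the reweighted-l1 ARD iteration (our reading of [36] / Alg. 1)."""
    m, d = Theta.shape
    gamma = np.ones(d)
    for _ in range(n_outer):
        Sigma_y = sigma2 * np.eye(m) + (Theta * gamma) @ Theta.T
        # c_i = Theta_i^T Sigma_y^{-1} Theta_i (cf. step 3 of Alg. 2 with alpha = 1)
        c = np.einsum('ij,ij->j', Theta, np.linalg.solve(Sigma_y, Theta))
        xi = weighted_lasso_cd(Theta, y, sigma2 * np.sqrt(c))
        gamma = np.abs(xi) / np.sqrt(c)                # fixed-point scale update
    return xi, gamma
```

Passing an inflated value $\alpha\sigma^{2}$ in place of $\sigma^{2}$ would correspond to the ARDvi variant discussed in the next section.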
Regularization Based Methods We begin with a discussion of two methods for regularizing ARD to obtain sparser predictors: inflating the variance passed into Alg. 1 and including a prior for the distribution of $\boldsymbol{\gamma}$. In each case the sparse predictor is found as the fixed point of an iterative algorithm. In subsequent sections we will discuss thresholding-based methods that alternate between iterative optimization and thresholding operations. In certain cases we refer to the set-valued subgradient of a continuous piecewise differentiable function. In cases where the subgradient is a singleton we treat it as a real number. Variance Inflation The error variance $\sigma^{2}$ of the likelihood model (1) may be intuitively thought of as a level of mistrust of the data $\mathcal{D}$. Extremely large values of $\sigma^{2}$ will push estimates of $\boldsymbol{\xi}$ to be dominated by the priors. It is shown in [36] that the ARD prior given by (2) is equivalent to a concave regularization term. We therefore expect large $\sigma^{2}$ to encourage sparser models. The regularization may be strengthened by passing an artificially large value of $\sigma^{2}$ to the iterative algorithm for solving Eq. (7) or, if also learning $\sigma^{2}$, by applying an inflated value at each step of the algorithm. We will call this process ARD with variance inflation (ARDvi), shown in Algorithm 2. Note that this differs from Alg. 1 only slightly, by treating the variance used in the standard ARD algorithm as a tuning parameter, with a higher variance indicating less trust in the data and greater regularization. Sparsity properties of ARDvi for orthogonal features To better understand the effect of variance inflation we consider Alg. 2 in the case where the columns of $\boldsymbol{\Theta}$ are orthogonal. Note that this implies $m\geq n$.
Let $\sqrt{\boldsymbol{\rho}}$ be the vector of column norms of $\boldsymbol{\Theta}$, so that $\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}=\operatorname{diag}(\boldsymbol{\rho})$. Define $\overline{\boldsymbol{\Theta}}$ to be the extension of $\boldsymbol{\Theta}$ to an orthogonal basis of $\mathbb{R}^{m}$ such that $\overline{\boldsymbol{\Theta}}^{T}\overline{\boldsymbol{\Theta}}=\textbf{R}=\operatorname{diag}(\overline{\boldsymbol{\rho}})$, with the first $n$ entries of $\overline{\boldsymbol{\rho}}$ given by $\boldsymbol{\rho}$. Now let $\boldsymbol{\gamma}^{*}$ be a fixed point of Algorithm 2, $\overline{\boldsymbol{\Gamma}}^{*}=\operatorname{diag}(\boldsymbol{\gamma}^{*},\textbf{0}_{m-n})\in\mathbb{R}^{m\times m}$, and let $\mathbf{c}^{*}$, $\boldsymbol{\xi}^{*}$ be defined by steps 3 and 4. The expression in step 3 is given by, $$c_{i}^{*}=\boldsymbol{\Theta}_{i}^{T}\left(\alpha\sigma^{2}\mathbf{I}+\boldsymbol{\Theta}\boldsymbol{\Gamma}^{*}\boldsymbol{\Theta}^{T}\right)^{-1}\boldsymbol{\Theta}_{i}$$ (10) $$=\boldsymbol{\Theta}_{i}^{T}\left(\alpha\sigma^{2}\overline{\boldsymbol{\Theta}}\textbf{R}^{-1}\overline{\boldsymbol{\Theta}}^{T}+\overline{\boldsymbol{\Theta}}\,\overline{\boldsymbol{\Gamma}}^{*}\overline{\boldsymbol{\Theta}}^{T}\right)^{-1}\boldsymbol{\Theta}_{i}$$ $$=\boldsymbol{\Theta}_{i}^{T}\left(\overline{\boldsymbol{\Theta}}\left(\alpha\sigma^{2}\textbf{R}^{-1}+\overline{\boldsymbol{\Gamma}}^{*}\right)\overline{\boldsymbol{\Theta}}^{T}\right)^{-1}\boldsymbol{\Theta}_{i}$$ $$=\boldsymbol{\Theta}_{i}^{T}\overline{\boldsymbol{\Theta}}\textbf{R}^{-1}\left(\alpha\sigma^{2}\textbf{R}^{-1}+\overline{\boldsymbol{\Gamma}}^{*}\right)^{-1}\textbf{R}^{-1}\overline{\boldsymbol{\Theta}}^{T}\boldsymbol{\Theta}_{i}$$ $$=\mathbf{e}_{i}^{T}\left(\alpha\sigma^{2}\textbf{R}^{-1}+\overline{\boldsymbol{\Gamma}}^{*}\right)^{-1}\mathbf{e}_{i}$$
$$=\dfrac{1}{\alpha\sigma^{2}\rho_{i}^{-1}+\gamma_{i}^{*}}$$ $$=\dfrac{1}{\alpha\sigma^{2}\rho_{i}^{-1}+\frac{|\xi_{i}^{*}|}{\sqrt{c_{i}^{*}}}}$$ $$\sqrt{c_{i}^{*}}=\dfrac{-|\xi_{i}^{*}|+\sqrt{{\xi_{i}^{*}}^{2}+4\alpha\sigma^{2}\rho_{i}^{-1}}}{2\alpha\sigma^{2}\rho_{i}^{-1}}$$ The Karush-Kuhn-Tucker (KKT) stationarity condition for the $\boldsymbol{\xi}$ update in step 4 gives, $$\begin{split}0&\in\boldsymbol{\Theta}_{i}^{T}\left(\boldsymbol{\Theta}\boldsymbol{\xi}^{*}-\mathbf{y}\right)+\alpha\sigma^{2}\sqrt{c_{i}^{*}}\,\partial|\xi_{i}^{*}|\\ &\in\boldsymbol{\Theta}_{i}^{T}\left(\boldsymbol{\Theta}\boldsymbol{\xi}^{*}-(\boldsymbol{\Theta}\boldsymbol{\xi}+\boldsymbol{\nu})\right)+\alpha\sigma^{2}\sqrt{c_{i}^{*}}\,\partial|\xi_{i}^{*}|\\ &\in\rho_{i}\xi^{*}_{i}-\rho_{i}\xi_{i}+\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}+\alpha\sigma^{2}\sqrt{c_{i}^{*}}\,\partial|\xi_{i}^{*}|,\end{split}$$ (11) where $\boldsymbol{\xi}$ denotes the true value from Eq. (1). We can find the false positive probability for a term being included in the model by setting $\xi_{i}=0$ and finding conditions under which $\xi^{*}_{i}\neq 0$. Substituting the value of $\sqrt{c_{i}^{*}}$ from Eq. (10) and dividing by $\rho_{i}$ gives, $$\xi_{i}=0\Rightarrow-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\in\xi_{i}^{*}+\dfrac{1}{2}\left(\sqrt{{\xi_{i}^{*}}^{2}+4\alpha\sigma^{2}\rho_{i}^{-1}}-|\xi_{i}^{*}|\right)\partial|\xi_{i}^{*}|,$$ (12) where $\partial|\xi_{i}^{*}|$ is a set-valued function taking the value $[-1,1]$ if $\xi_{i}^{*}=0$ or $\{\text{sgn}(\xi_{i}^{*})\}$ otherwise.
If $\xi_{i}^{*}=0$ then, $$|\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}|\leq\sup_{z\in\partial|\xi_{i}^{*}|}\left|z\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}\right|=\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}},$$ (13) while for $\xi_{i}^{*}\neq 0$, $$\begin{split}|\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}|&=\left|\xi_{i}^{*}+\dfrac{1}{2}\left(\sqrt{{\xi_{i}^{*}}^{2}+4\alpha\sigma^{2}\rho_{i}^{-1}}-|\xi_{i}^{*}|\right)\text{sgn}(\xi_{i}^{*})\right|\\ &=\dfrac{1}{2}\left(\left|\xi_{i}^{*}\right|+\sqrt{{\xi_{i}^{*}}^{2}+4\alpha\sigma^{2}\rho_{i}^{-1}}\right)\\ &>\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}.\end{split}$$ (14) It follows that $p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=p(\rho_{i}^{-1}|\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}|>\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}})$. Since $\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\sim\mathcal{N}(0,\rho_{i}^{-1}\sigma^{2})$, we find that the false positive rate is, $$FP_{VI}(\alpha)=p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=1-\text{erf}\left(\sqrt{\frac{\alpha}{2}}\right),$$ (15) where $\text{erf}$ is the Gauss error function. Of particular note is that the number of false positives is independent of the variance of the linear model’s error term, $\sigma^{2}$. While the mean predictor learned from ARD does converge in any norm to the true solution as $\sigma^{2}\to 0$, the expected number of nonzero terms in the learned predictor stays constant. If one desires a sparse predictor, this motivates including a small threshold parameter below which coefficients are ignored, which we will discuss in a subsequent section. We define a false negative as Algorithm 2 finding some $\gamma^{*}_{i}=0$ (and respectively $\xi_{i}^{*}$) when the true solution $\xi_{i}\neq 0$, and we find the likelihood of such a case in a similar manner. Applying Eq.
(10) and the KKT conditions we find, $$\displaystyle 0$$ $$\displaystyle\in-\xi_{i}+\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}+\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}\,\text{sgn}(\xi_{i}^{*})$$ (16) $$\displaystyle\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}$$ $$\displaystyle\in\xi_{i}-\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}\,\text{sgn}(\xi_{i}^{*})$$ $$\displaystyle\in\left[\xi_{i}-\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}},\xi_{i}+\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}\right]$$ The false negative likelihood is therefore, $$FN_{VI}(\alpha)=p(\xi_{i}^{*}=0|\xi_{i}\neq 0)=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)\right)$$ (17) Note that this function vanishes for large $|\xi_{i}|$, indicating that important terms, as measured by $|\xi_{i}|$, are far less likely to be missed. Figure 1 demonstrates the validity of equations (15) and (17) on a simple test problem. We construct a matrix $\boldsymbol{\Theta}\in\mathbb{R}^{250\times 250}$ with orthogonal columns having random magnitude such that $\rho_{i}\sim\mathcal{U}([1,3])$ and a random $\boldsymbol{\xi}$ with $\|\boldsymbol{\xi}\|_{0}=25$ having non-zero terms distributed according to $\mathcal{N}(0,1)$. The mean numbers of added and missed nonzero terms across 50 trials are shown and agree very well with the predicted values. As anticipated, the number of missed terms decays to zero as $\sigma\to 0$, but the same is not true for the number of added terms, which only decays as the inflation parameter $\alpha$ is increased. The failure of ARDvi to converge to the true sparsity pattern as $\sigma\to 0$ for fixed $\alpha$ is certainly troubling, but for sufficiently large $\alpha$ only an arbitrarily small number of terms will be added.
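Under the orthogonal-columns assumption, inverting the stationarity condition above gives a closed form for the fixed point: with $\hat{\xi}_{i}=\xi_{i}+\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}$, one finds $\xi_{i}^{*}=\hat{\xi}_{i}-\alpha\sigma^{2}\rho_{i}^{-1}/\hat{\xi}_{i}$ when $|\hat{\xi}_{i}|>\sqrt{\alpha\sigma^{2}\rho_{i}^{-1}}$ and $\xi_{i}^{*}=0$ otherwise. A minimal numpy sketch using this form to Monte Carlo check Eqs. (15) and (17); the helper name is ours:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
alpha, sigma, rho, n = 4.0, 0.5, 2.0, 200_000

def ardvi_fixed_point(xi_hat, alpha, sigma, rho):
    # Closed form of the ARDvi fixed point for orthogonal columns:
    # garrote-style shrinkage of xi_hat = xi_i + rho^{-1} Theta_i^T nu
    b = alpha * sigma**2 / rho
    return np.where(np.abs(xi_hat) > np.sqrt(b), xi_hat - b / xi_hat, 0.0)

# False positives: true xi_i = 0, so xi_hat ~ N(0, sigma^2 / rho)
xi_hat = rng.normal(0.0, sigma / np.sqrt(rho), n)
fp_mc = np.mean(ardvi_fixed_point(xi_hat, alpha, sigma, rho) != 0)
fp_th = 1 - erf(sqrt(alpha / 2))                        # Eq. (15)

# False negatives: true xi_i = 1
xi = 1.0
xi_hat = xi + rng.normal(0.0, sigma / np.sqrt(rho), n)
fn_mc = np.mean(ardvi_fixed_point(xi_hat, alpha, sigma, rho) == 0)
r, s = sqrt(alpha * sigma**2 / rho), sigma * sqrt(2 / rho)
fn_th = 0.5 * (erf((xi + r) / s) - erf((xi - r) / s))   # Eq. (17)
```

Both Monte Carlo estimates agree with the analytic rates to within sampling error.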
Regularization via Sparsity Promoting Hierarchical Priors
Since the sparsity of $\boldsymbol{\xi}$ is controlled by that of $\boldsymbol{\gamma}$, we can also attempt to regularize $\boldsymbol{\gamma}$ through the use of a hierarchical prior. Previous approaches to ARD have suggested hierarchical priors on $\boldsymbol{\gamma}$ in the form of Gamma distributions [32]. However, except for certain cases, the general class of Gamma distributions does not impose sparsity. Instead, we consider the use of a sparsity promoting hierarchical prior on the scale parameters $\boldsymbol{\gamma}$. We consider distributions of the form, $$p(\gamma_{i})\propto\exp\left(\frac{-g(\gamma_{i})-f(\gamma_{i})}{2}\right),$$ (18) where $f$ and $g$ are convex and concave functions of $\gamma_{i}$, respectively. Given data $\mathcal{D}$ we can follow a procedure similar to the one used in Sec. 2 and find, $$\displaystyle p(\boldsymbol{\gamma}|\mathcal{D})$$ $$\displaystyle\propto p(\mathcal{D}|\boldsymbol{\gamma})p(\boldsymbol{\gamma})=\int p(\mathcal{D}|\boldsymbol{\xi})p(\boldsymbol{\xi}|\boldsymbol{\gamma})\,d\boldsymbol{\xi}\,p(\boldsymbol{\gamma})$$ (19) $$\displaystyle=(2\pi)^{-m/2}|\boldsymbol{\Sigma}_{y}|^{-\frac{1}{2}}\exp\left(-\frac{1}{2}\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}\right)\prod_{i=1}^{d}e^{\left(\frac{-g(\gamma_{i})-f(\gamma_{i})}{2}\right)}.$$ A fully Bayesian approach would estimate $\boldsymbol{\theta}$ through the joint posterior likelihood of pairs $\boldsymbol{\theta},\boldsymbol{\gamma}$, but this would be computationally expensive. Instead, we approximate $\boldsymbol{\gamma}$ by its maximum a posteriori estimate $\boldsymbol{\gamma}_{MAP}=\arg\max\,p(\boldsymbol{\gamma}|\mathcal{D})$, a process sometimes labelled type-II MAP [19].
The MAP estimate of $\boldsymbol{\gamma}$ is found by minimizing the negative log of the posterior distribution, $$\displaystyle L_{ARDr}(\boldsymbol{\gamma})$$ $$\displaystyle=-\log p(\mathcal{D};\boldsymbol{\gamma})\propto\log|\boldsymbol{\Sigma}_{y}|+\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}+\sum_{i=1}^{d}\left(f(\gamma_{i})+g(\gamma_{i})\right)$$ (20) $$\displaystyle=\underset{\boldsymbol{\xi}}{\min}\,\left(\log|\boldsymbol{\Sigma}_{y}|+\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}+\sum_{i=1}^{d}\left(f(\gamma_{i})+g(\gamma_{i})\right)\right).$$ As expected, Eq. (20) closely resembles Eq. (7) and may be solved with a similar method. Alg. 3 constructs a sequence $\gamma^{(i)}$ which monotonically increases the likelihood given by Eq. (20). Since $\mathcal{L}_{ARDr}$ is nonconvex, we can only guarantee convergence to a local minimum. We initialize Alg. 3 using the unregularized ARD value of $\gamma$. Algorithm 3 allows for significant freedom in choosing $f$ and $g$. The concave component of the prior, $g$, acts as a sparsity encouraging regularizer on $\boldsymbol{\gamma}$, as is common for concave priors [11]. Examples of concave $g$ include the identity, $\tanh$, and approximations of the $\ell^{0}$-norm. We consider functions of the following form, $$\displaystyle g_{\lambda,\eta}(\gamma_{i})$$ $$\displaystyle=\min\{\lambda\gamma_{i},\eta\}$$ (21) where $\lambda$ is a parameter controlling the strength of the regularizer and $\eta$ is a width parameter. The convex prior $f$ may be an indicator function restricting $\gamma$ to a specific domain, or it may be left as a constant. In either case implementing the above algorithm is trivial. If $f$ is not a linear or indicator function then step 5 in Alg. 3 will require an internal iterative algorithm.
Sparsity properties of ARDr for orthogonal features
The behavior of Alg.
3 is complicated by the generality of the functions $f$ and $g$. In the simplest case we let $f$ be constant and $g$ be the linear function $g(\gamma_{i})=\lambda\gamma_{i}$. This is the formulation used in [3] and a special case of using a Gamma distribution prior with shape parameter $k=1$ on $\gamma_{i}$. The update step in line 5 of Alg. 3 gives $\gamma_{i}^{(k+1)}=|\xi_{i}^{(k+1)}|{c_{i}^{(k+1)}}^{-1/2}$ as in the unregularized case. For a fixed point of Alg. 3 we have, $$\displaystyle c_{i}^{*}$$ $$\displaystyle=\boldsymbol{\Theta}_{i}^{T}\left(\sigma^{2}\mathbf{I}+\boldsymbol{\Theta}\boldsymbol{\Gamma}^{*}\boldsymbol{\Theta}^{T}\right)^{-1}\boldsymbol{\Theta}_{i}+\frac{\partial g}{\partial\gamma_{i}}$$ (22) $$\displaystyle=\dfrac{1}{\sigma^{2}\rho_{i}^{-1}+\frac{|\xi_{i}^{*}|}{\sqrt{c_{i}^{*}}}}+\lambda$$ $$\displaystyle 0$$ $$\displaystyle=\rho_{i}^{-1}\sigma^{2}{c_{i}^{*}}^{\frac{3}{2}}+|\xi_{i}^{*}|{c_{i}^{*}}-\left(\lambda\rho_{i}^{-1}\sigma^{2}+1\right){c_{i}^{*}}^{\frac{1}{2}}-\lambda|\xi_{i}^{*}|$$ The KKT conditions for the $\xi_{i}$ update are unchanged from the unregularized case and are given by, $$0\in\rho_{i}\xi_{i}^{*}-\rho_{i}\xi_{i}+\boldsymbol{\Theta}^{T}_{i}\boldsymbol{\nu}+\sigma^{2}\sqrt{c_{i}^{*}}\,\text{sgn}(\xi_{i}^{*}).$$ (23) If $\xi_{i}^{*}=0$ then Eq. (22) tells us $\sqrt{c_{i}^{*}}=\sqrt{\rho_{i}\sigma^{-2}+\lambda}$ and therefore, $$\xi_{i}^{*}=0\Rightarrow\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\in\left[-\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}},\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}\right]$$ (24) The converse of Eq. (24) is shown in Appendix B. From this equivalence it follows that the false positive and negative rates for Alg.
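The final line of Eq. (22) is a cubic in $s=\sqrt{c_{i}^{*}}$ whose coefficient signs change exactly once, so it has a single positive real root. A numpy sketch of extracting that root (the function name is ours):

```python
import numpy as np

def sqrt_c_star(xi_abs, rho, sigma, lam):
    # Positive real root of Eq. (22)'s cubic in s = sqrt(c_i*):
    # (sigma^2/rho) s^3 + |xi*| s^2 - (lam sigma^2/rho + 1) s - lam |xi*| = 0
    a = sigma**2 / rho
    roots = np.roots([a, xi_abs, -(lam * a + 1.0), -lam * xi_abs])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[real > 0].max()
```

For $\lambda=0$ this reduces to the unregularized value $(-|\xi_{i}^{*}|+\sqrt{{\xi_{i}^{*}}^{2}+4\sigma^{2}\rho_{i}^{-1}})/(2\sigma^{2}\rho_{i}^{-1})$, and for $\xi_{i}^{*}=0$ it returns $\sqrt{\rho_{i}\sigma^{-2}+\lambda}$, matching the text.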
3 are given by, $$FP_{r}(\lambda)=p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=1-\text{erf}\left(\sqrt{\frac{1+\lambda\rho_{i}^{-1}\sigma^{2}}{2}}\right)$$ (25) $$FN_{r}(\lambda)=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)\right).$$ (26) These rates are verified empirically by testing 50 trials using a $250\times 250$ matrix $\boldsymbol{\Theta}$ with orthogonal columns and random $\rho_{i}$ as in Sec. 3.1. Results are shown in Fig. 2. Note that for fixed $\lambda>0$ the false negative rate does indeed approach zero as $\sigma\to 0$; however, the false positive rate increases. This indicates that a linear model with smaller error requires stronger regularization to achieve a sparse solution. If $\lambda\sigma^{2}$ is held fixed as $\sigma$ varies, the false negative rate still approaches zero and the false positive rate is constant. This latter case is shown in Fig. 3. Figure 3 shows a convergence pattern similar to what we observed for ARDvi in Fig. 1. The number of added terms (false positives) remains constant as $\sigma\to 0$ for any fixed regularization parameter $\lambda$. However, we note again that for sufficiently large $\lambda$ the fixed false positive rate may be made arbitrarily small. In the following section we will construct thresholding methods, including one for which the false positive and negative rates both converge to zero as $\sigma\to 0$.
Thresholding Based Methods
As we have shown, automatic relevance determination will not recover the correct non-zero coefficients in a general sparse regression problem, but it will converge in any norm [40]. Therefore, applying an arbitrarily small threshold on $|\xi_{i}|$ will ensure selection of the correct nonzero coefficients in the limit of low noise.
In this section we discuss methods for thresholding the output of Alg. 1, based either on the mean magnitude of the coefficients $|\xi_{i}|$ or on the posterior distribution of $\xi_{i}$.
Magnitude Based Thresholding
Sequential thresholding based on the magnitude of coefficients has been used extensively in regression [8, 5] and also in conjunction with automatic relevance determination methods for identifying nonlinear dynamical systems with uncertainty quantification in [42]. Here we consider the method initially proposed in [42], called threshold sparse Bayesian regression (TSBR). To distinguish it from other thresholding methods we use the term magnitude sequential threshold sparse Bayesian learning (M-STSBL). Magnitude based thresholding assumes that coefficients learned in the ARD algorithm with sufficiently small magnitude, $|\xi_{j}|<\tau$, are irrelevant and may be treated as zero. The sequential hard-thresholding method for automatic relevance determination is implemented in Alg. 4. Non-zero terms are indexed by $\mathcal{G}$, whose complement $\mathcal{G}^{c}$ tracks terms removed from the model. At each iteration the algorithm either recursively calls itself with fewer features or terminates if all features are kept non-zero.
Sparsity properties of M-STSBL for orthogonal features
We consider the number of errors made by Alg. 4 in a setting similar to that used for the analysis of the variance inflation and regularized methods. First consider the likelihood of a false non-zero term. Recall from the previous section that the KKT conditions for a fixed point of Alg.
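A compact sketch of Alg. 4's recursion (written iteratively here): fit, drop features whose coefficients fall below $\tau$ in magnitude, and refit on the survivors until nothing changes. The inner solver below is the closed-form ARD fixed point, which is only valid for orthogonal columns, and both helper names are ours:

```python
import numpy as np

def ard_orthogonal(Theta, y, sigma):
    # Mean posterior ARD fixed point when Theta has orthogonal columns
    rho = np.sum(Theta**2, axis=0)
    xi_hat = Theta.T @ y / rho
    b = sigma**2 / rho
    return np.where(np.abs(xi_hat) > np.sqrt(b), xi_hat - b / xi_hat, 0.0)

def m_stsbl(Theta, y, sigma, tau):
    # Alg. 4 sketch: fit, prune |xi_i| < tau, refit on remaining features
    d = Theta.shape[1]
    keep = np.arange(d)
    while True:
        xi = ard_orthogonal(Theta[:, keep], y, sigma)
        big = np.abs(xi) >= tau
        out = np.zeros(d)
        out[keep] = xi
        if big.all():
            return out
        keep = keep[big]
```

Because the active set can only shrink, the loop terminates after at most $d$ passes; for orthogonal columns it terminates after the first pruning pass.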
1 imply, $$\displaystyle 0$$ $$\displaystyle\in\xi^{*}_{i}-\xi_{i}+\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}+\rho_{i}^{-1}\sigma^{2}\sqrt{c_{i}^{*}}\partial|\xi_{i}^{*}|$$ (27) $$\displaystyle=\xi^{*}_{i}-\xi_{i}+\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}+\frac{1}{2}\left(\sqrt{{\xi_{i}^{*}}^{2}+4\rho_{i}^{-1}\sigma^{2}}-|\xi_{i}^{*}|\right)\text{sgn}(\xi_{i}^{*}),$$ We can rewrite this as, $$\phi_{\sigma,\rho}(\xi_{i}^{*})=\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu},\quad\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\sim\mathcal{N}(0,\rho_{i}^{-1}\sigma^{2})$$ (28) where, $$\displaystyle\phi_{\sigma,\rho}(\xi_{i}^{*})$$ $$\displaystyle=\frac{\xi_{i}^{*}}{2}+\frac{1}{2}\sqrt{{\xi_{i}^{*}}^{2}+4\rho_{i}^{-1}\sigma^{2}}\,\text{sgn}(\xi_{i}^{*})$$ (29) $$\displaystyle=\frac{\xi_{i}^{*}}{2}\left(1+\sqrt{1+4\rho_{i}^{-1}\sigma^{2}{\xi_{i}^{*}}^{-2}}\right),\text{ for }\xi_{i}^{*}\neq 0,$$ is invertible on $\mathbb{R}\setminus\{0\}$ and strictly increasing. Therefore, $$\left|\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\right|>\phi_{\sigma,\rho}(\tau)\Leftrightarrow|\xi_{i}^{*}|>\tau.$$ (30) This gives the likelihood of a false non-zero coefficient as, $$FP_{M}(\tau)=p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=1-\text{erf}\left(\frac{\phi_{\sigma,\rho}(\tau)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right),$$ (31) and the likelihood of a false zero coefficient as, $$FN_{M}(\tau)=p(\xi_{i}^{*}=0|\xi_{i}\neq 0)=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\phi_{\sigma,\rho}(\tau)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\phi_{\sigma,\rho}(\tau)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)\right).$$ (32) Equations (31) and (32) are verified empirically by testing on 50 trials over a $250\times 250$ matrix $\boldsymbol{\Theta}$ using the same experimental design as in Sec. 3.1. Results are shown in Fig. 4. In contrast to the regularization based approaches, we now have the desirable property that the number of false positive terms goes to zero as $\sigma\to 0$.
However, the number of false negatives now only shrinks to a fixed positive number, a consequence of using a hard threshold. This motivates alternative criteria for thresholding. In the next section, we will discuss thresholding based not strictly on magnitude but on the marginal posterior likelihood that a coefficient is zero.
Likelihood Based Thresholding
While Alg. 4 was shown to be effective in [42], it is not independent of the units of measurement used for each feature and is not practical in the case where some true coefficients are small. An alternative means of thresholding is to do so based on the marginal likelihood of a coefficient being zero. The marginal posterior distribution of $\xi_{i}$ is given by, $$p(\xi_{i})\sim\mathcal{N}(\mu_{\xi,i},\Sigma_{\xi,ii}),$$ (33) where $\mu_{\xi,i},\Sigma_{\xi,ii}$ are given by Eq. (4), and the marginal likelihood that $\xi_{i}=0$ is, $$p(\xi_{i}=0)=\mathcal{N}(0\,|\,\mu_{\xi,i},\Sigma_{\xi,ii})=\frac{1}{\sqrt{2\pi\Sigma_{\xi,ii}}}e^{-\frac{1}{2}\mu_{\xi,i}^{2}\Sigma_{\xi,ii}^{-1}}.$$ (34) We can construct a sequential thresholding algorithm, shown in Alg. 5, by removing terms whose marginal likelihood evaluated at zero is sufficiently large. The remaining subset of features is then passed recursively to the same procedure until convergence, marked by no change in the number of features. This process is described by Alg. 5, where the parameter $\tau$ is the marginal likelihood at zero above which features are removed.
Sparsity properties of L-STSBL for orthogonal features
We again consider the case where $\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}=\text{diag}(\boldsymbol{\rho})$. Let, $$h_{L}(\xi_{i},\Sigma_{\xi,ii})=(2\pi\Sigma_{\xi,ii})^{-1/2}\exp\left(-\frac{1}{2}\xi_{i}^{2}\Sigma_{\xi,ii}^{-1}\right),$$ (35) so that the thresholding criterion is $h_{L}(\xi_{i},\Sigma_{\xi,ii})>\tau$. In Alg.
1, $\boldsymbol{\eta^{*}}=2\sigma^{2}\sqrt{\mathbf{c}^{*}}=2\sigma^{2}\boldsymbol{\xi}^{*}{\boldsymbol{\gamma}^{*}}^{-1}$, so $\boldsymbol{\xi}^{*}=\sigma^{-2}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}$ is the mean posterior estimate. The orthogonality of the columns of $\boldsymbol{\Theta}$ allows us to express the marginal posterior variance $\Sigma_{\xi,ii}$ as a function of $|\xi_{i}|$. Letting $\boldsymbol{\Sigma}_{\xi}^{*}$ be the covariance from a fixed point of Alg. 1 we have, $$\displaystyle{\Sigma_{\xi,ii}^{*}}$$ $$\displaystyle=\left(\sigma^{-2}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}+{\boldsymbol{\Gamma}^{*}}^{-1}\right)^{-1}_{ii}$$ (36) $$\displaystyle=\left(\sigma^{-2}\operatorname{diag}(\boldsymbol{\rho})+{\boldsymbol{\Gamma}^{*}}^{-1}\right)^{-1}_{ii}$$ $$\displaystyle=\left(\frac{\rho_{i}}{\sigma^{2}}+{\gamma_{i}^{*}}^{-1}\right)^{-1}$$ $$\displaystyle{\Sigma_{\xi}^{*}}^{-1}_{ii}$$ $$\displaystyle=\frac{\rho_{i}}{\sigma^{2}}+\frac{\sqrt{c_{i}^{*}}}{|\xi_{i}^{*}|}$$ $$\displaystyle=\frac{\rho_{i}}{\sigma^{2}}+\frac{\sqrt{1+4\rho_{i}^{-1}\sigma^{2}|\xi_{i}^{*}|^{-2}}-1}{2\rho_{i}^{-1}\sigma^{2}}$$ $$\displaystyle=\frac{\sqrt{1+4\rho_{i}^{-1}\sigma^{2}|\xi_{i}^{*}|^{-2}}+1}{2\rho_{i}^{-1}\sigma^{2}}.$$ This allows us to express, $$h_{L}(\xi_{i}^{*},\Sigma_{\xi,ii}^{*})=\tilde{h}_{L,\rho,\sigma}(|\xi_{i}^{*}|)=(2\pi)^{-1/2}\sqrt{{\Sigma_{\xi,ii}^{*}}^{-1}(|\xi_{i}^{*}|)}\,\exp\left(-\frac{1}{2}|\xi_{i}^{*}|^{2}{\Sigma_{\xi,ii}^{*}}^{-1}(|\xi_{i}^{*}|)\right),$$ (37) where ${\Sigma_{\xi,ii}^{*}}^{-1}(|\xi_{i}^{*}|)$ and $|\xi_{i}^{*}|^{2}{\Sigma_{\xi,ii}^{*}}^{-1}(|\xi_{i}^{*}|)$ are strictly decreasing and increasing functions of $|\xi_{i}^{*}|$, respectively. It follows that $\tilde{h}_{L,\rho,\sigma}$ is strictly decreasing and therefore invertible, with $\tilde{h}_{L,\rho,\sigma}^{-1}$ easily computed by bisection.
For $\tau>0$ there is some $\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)$ such that, $$|\xi_{i}^{*}|>\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\Leftrightarrow\tilde{h}_{L,\rho,\sigma}(\xi_{i}^{*})\leq\tau,$$ (38) and, recalling Eq. (30), $$|\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}|>\phi_{\sigma,\rho}\left(\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\right)\Leftrightarrow\tilde{h}_{L,\rho,\sigma}(\xi_{i}^{*})\leq\tau.$$ (39) This gives, $$FP_{L}(\tau)=p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=1-\text{erf}\left(\frac{\phi_{\sigma,\rho}\left(\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\right)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right),$$ (40) and, $$FN_{L}(\tau)=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\phi_{\sigma,\rho}\left(\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\right)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\phi_{\sigma,\rho}\left(\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\right)}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)\right).$$ (41) Equations (40) and (41) are verified empirically using the same experimental setup as in the previous sections. Results are shown in Fig. 5. Similar to M-STSBL, solutions of L-STSBL converge towards the correct sparsity pattern as $\sigma\to 0$. However, Fig. 5 indicates highly favorable results in the number of missing terms. Eq. (36) indicates that for $\sigma\ll 1$ the marginal variance $\Sigma_{\xi,ii}\sim\mathcal{O}(\sigma^{2})$. As a consequence, the exponential in Eq. (35) becomes very small and the algorithm is much more conservative about pruning terms.
Thresholding via Sparse Prior on $\boldsymbol{\xi}$
Algorithm 5 performs thresholding based on the marginal likelihood of a given coefficient being zero, without consideration for the likelihood of the coefficient prior to applying a threshold. We now propose a thresholding method which includes the latter. We consider a prior on $\boldsymbol{\xi}$ which differs from Eq. (2) only where $\|\boldsymbol{\xi}\|_{0}<d$ and use MAP estimates of $\boldsymbol{\xi}$ to prune terms.
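Since $\tilde{h}_{L,\rho,\sigma}$ is strictly decreasing, its inverse reduces to a one-dimensional bisection; a numpy sketch of Eqs. (36)-(37) and of the inversion (helper names are ours):

```python
import numpy as np

def h_L(xi_abs, rho, sigma):
    # Eqs. (36)-(37): marginal posterior density of xi_i evaluated at zero
    prec = (np.sqrt(1 + 4 * sigma**2 / (rho * xi_abs**2)) + 1) * rho / (2 * sigma**2)
    return np.sqrt(prec / (2 * np.pi)) * np.exp(-0.5 * xi_abs**2 * prec)

def h_L_inv(tau, rho, sigma, lo=1e-12, hi=1e6):
    # Bisection, valid because h_L is strictly decreasing in |xi_i|
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if h_L(mid, rho, sigma) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the inverse in hand, the FP and FN rates in Eqs. (40)-(41) follow by composing with $\phi_{\sigma,\rho}$ from Eq. (29).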
Consider the same model described in Sec. 2 but with the following prior on $\xi_{i}$, $$p(\xi_{i})=\mathcal{N}(\xi_{i}|0,\gamma_{i})\,e^{\tau\delta_{\xi_{i},0}}.$$ (42) Note that this is equivalent to Eq. (2) almost everywhere, so the integral in Eq. (3) is not affected. The posterior for $\boldsymbol{\xi}$ under assumption (42) is then, $$p(\boldsymbol{\xi}|\mathcal{D},\tau)\propto\dfrac{1}{(2\pi)^{d/2}|\boldsymbol{\Sigma}_{\xi}|^{1/2}}\,e^{-\frac{1}{2}(\boldsymbol{\xi}-\boldsymbol{\mu}_{\xi})^{T}\boldsymbol{\Sigma}_{\xi}^{-1}(\boldsymbol{\xi}-\boldsymbol{\mu}_{\xi})-\tau\|\boldsymbol{\xi}\|_{0}}$$ (43) with $\boldsymbol{\mu}_{\xi},\boldsymbol{\Sigma}_{\xi}$ defined as in Eq. (4). When $\tau=0$ this reduces to the standard ARD posterior, but for $\tau>0$ the likelihood shrinks exponentially in the number of nonzero terms. Since the two posteriors differ only on a set of measure zero, the inclusion of $\exp(\tau\delta_{\xi_{i},0})$ in Eq. (42) only affects the solution if we use the MAP estimate of $\boldsymbol{\xi}$ as a means to select active terms. Doing so induces a thresholding operation to find $\boldsymbol{\xi}_{MAP}$. For a group $S=\{s_{1},s_{2},\ldots,s_{q}\}\subseteq\{1,\ldots,d\}$ where $\boldsymbol{\mu}_{\xi,s_{i}}\neq 0$, let $\boldsymbol{\xi}_{-S}=\boldsymbol{\mu}_{\xi}-\sum_{q}\mu_{\xi,s_{i}}\mathbf{e}_{s_{i}}$, where $\mathbf{e}_{i}$ is the unit vector in the $\text{i}^{\text{th}}$ coordinate.
The likelihood of the thresholded vector $\boldsymbol{\xi}_{-S}$ is given by, $$\displaystyle p(\boldsymbol{\xi}_{-S}|\mathcal{D},\tau)$$ $$\displaystyle=C\,\exp\left(-\frac{1}{2}(\boldsymbol{\xi}_{-S}-\boldsymbol{\mu}_{\xi})^{T}\boldsymbol{\Sigma}_{\xi}^{-1}(\boldsymbol{\xi}_{-S}-\boldsymbol{\mu}_{\xi})-\tau\|\boldsymbol{\xi}_{-S}\|_{0}\right)$$ (44) $$\displaystyle=C\,\exp\left(-\frac{1}{2}\left(\sum_{q}\mu_{\xi,s_{i}}\mathbf{e}_{s_{i}}\right)^{T}\boldsymbol{\Sigma}_{\xi}^{-1}\left(\sum_{q}\mu_{\xi,s_{i}}\mathbf{e}_{s_{i}}\right)-\tau(\|\boldsymbol{\mu}_{\xi}\|_{0}-q)\right)$$ $$\displaystyle=p(\boldsymbol{\mu}_{\xi}|\mathcal{D},\tau)\exp\left(-\frac{1}{2}\boldsymbol{\mu}_{\xi,S}^{T}\boldsymbol{\Sigma}_{\xi,S}^{-1}\boldsymbol{\mu}_{\xi,S}+q\tau\right),$$ where $\boldsymbol{\Sigma}_{\xi,S}^{-1}$ is the square sub-matrix of $\boldsymbol{\Sigma}_{\xi}^{-1}$ formed by the rows and columns indexed by $S$. Then, $$p(\boldsymbol{\xi}_{-S}|\mathcal{D},\tau)>p(\boldsymbol{\mu}_{\xi}|\mathcal{D},\tau)\text{ if }\frac{1}{2}\boldsymbol{\mu}_{\xi,S}^{T}\boldsymbol{\Sigma}_{\xi,S}^{-1}\boldsymbol{\mu}_{\xi,S}<q\tau,$$ (45) and the MAP estimate of $\boldsymbol{\xi}$ is given by, $$\boldsymbol{\xi}_{MAP}=\boldsymbol{\xi}_{-S}\text{ where }S=\underset{S\in\mathcal{P}([d])}{\arg\max}\,\,p(\boldsymbol{\xi}_{-S}|\mathcal{D},\tau)$$ (46) Equation (46) is combinatorially hard, so we approximate it in a manner that makes its solution tractable. Most simply, we can treat the precision matrix $\boldsymbol{\Sigma}_{\xi}^{-1}$ as diagonal so that decisions regarding each variable are decoupled. Alternatively, we can use a greedy algorithm to construct the $S$ maximizing Eq. (46). In this case we iteratively add to $S$ the most likely additional term until no term increases the likelihood. The algorithm may be further refined as a forward-backward greedy algorithm. Here we restrict our attention to the diagonal approximation of the posterior covariance.
This gives the simple threshold, $$\xi_{i}=0\text{ if }\frac{1}{2}\mu_{\xi,i}^{2}\Sigma_{\xi,ii}^{-1}<\tau,$$ (47) which is implemented in Alg. 6. The same pruning technique has also been used for connections in Bayesian neural networks, using the variational approximation of the posterior [13]. We call this technique maximum a posteriori sequential threshold sparse Bayesian learning (MAP-STSBL).
Sparsity properties of MAP-STSBL for orthogonal features
In the case of orthogonal columns of $\boldsymbol{\Theta}$ we can use the same simplification as in Sec. 4.2.1 to reduce the thresholding criterion in Eq. (47) to, $$h_{MAP}(\xi_{i}^{*},{\Sigma_{\xi,ii}^{*}})=\frac{1}{2}|\xi_{i}^{*}|^{2}{\Sigma_{\xi,ii}^{*}}^{-1}=\frac{|\xi_{i}^{*}|^{2}\left(\sqrt{1+4\rho_{i}^{-1}\sigma^{2}|\xi_{i}^{*}|^{-2}}+1\right)}{4\rho_{i}^{-1}\sigma^{2}}=\tilde{h}_{MAP}(|\xi_{i}^{*}|)$$ (48) which has inverse given by, $$\tilde{h}_{MAP}^{-1}(\tau)=2\sqrt{\frac{\rho_{i}^{-1}\sigma^{2}\tau^{2}}{1+2\tau}}$$ (49) Then for $\tau>0$ there is $\tilde{h}_{MAP}^{-1}(\tau)$ such that, $$|\xi_{i}^{*}|>\tilde{h}_{MAP}^{-1}(\tau)\Leftrightarrow\tilde{h}_{MAP}(\xi_{i}^{*})\geq\tau,$$ (50) and, $$|\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}|>\phi_{\sigma,\rho}\left(\tilde{h}_{MAP}^{-1}(\tau)\right)=\sigma\sqrt{\rho_{i}^{-1}(2\tau+1)}\Leftrightarrow\tilde{h}_{MAP}(\xi_{i}^{*})\geq\tau.$$ (51) This gives, $$FP_{MAP}(\tau)=p(\xi_{i}^{*}\neq 0|\xi_{i}=0)=1-\text{erf}\left(\sqrt{\frac{2\tau+1}{2}}\right),$$ (52) and, $$FN_{MAP}(\tau)=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\sigma\sqrt{\rho_{i}^{-1}(2\tau+1)}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\sigma\sqrt{\rho_{i}^{-1}(2\tau+1)}}{\sigma\sqrt{2\rho_{i}^{-1}}}\right)\right).$$ (53) Equations (52) and (53) are verified empirically in Fig. 6. The results are very similar to those for ARDvi.
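The closed-form pair in Eqs. (48)-(49) is easy to sanity-check numerically; a short sketch (function names are ours):

```python
import numpy as np

def h_map(xi_abs, rho, sigma):
    # Eq. (48): MAP thresholding statistic for orthogonal features
    b = 4 * sigma**2 / rho
    return xi_abs**2 * (np.sqrt(1 + b / xi_abs**2) + 1) / b

def h_map_inv(tau, rho, sigma):
    # Eq. (49): closed-form inverse of h_map
    return 2 * np.sqrt(sigma**2 * tau**2 / (rho * (1 + 2 * tau)))
```

Composing the two functions returns the input $\tau$, confirming that Eq. (49) is indeed the inverse of Eq. (48).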
Indeed, equations (52) and (53) show that for orthogonal features there is a transformation $\alpha\to 2\tau+1$ under which ARDvi and MAP-STSBL realize the same sparsity pattern. We will show empirically in a subsequent section that this is not true in the case where the columns of $\boldsymbol{\Theta}$ are not orthogonal.
Comparison
The false positive and negative likelihoods for $\xi_{i}$ for each of the methods discussed in Sections 3 and 4 are summarized by, $$\displaystyle FP_{\bullet}(\xi_{i};\omega)$$ $$\displaystyle=1-\text{erf}\left(\frac{\psi_{\bullet}(\omega)}{\sigma\sqrt{2\rho^{-1}}}\right)$$ (54) $$\displaystyle FN_{\bullet}(\xi_{i};\omega)$$ $$\displaystyle=\frac{1}{2}\left(\text{erf}\left(\frac{\xi_{i}+\psi_{\bullet}(\omega)}{\sigma\sqrt{2\rho^{-1}}}\right)-\text{erf}\left(\frac{\xi_{i}-\psi_{\bullet}(\omega)}{\sigma\sqrt{2\rho^{-1}}}\right)\right)$$ where $\bullet$ refers to the method, $\omega$ to the input parameter ($\alpha$, $\lambda$, or $\tau$) and, $$\displaystyle\psi_{ARDvi}(\alpha)$$ $$\displaystyle=\sigma\sqrt{\alpha\rho_{i}^{-1}}$$ (55) $$\displaystyle\psi_{ARDr}(\lambda)$$ $$\displaystyle=\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}$$ $$\displaystyle\psi_{M-STSBL}(\tau)$$ $$\displaystyle=\phi_{\sigma,\rho}\left(\tau\right)$$ $$\displaystyle\psi_{L-STSBL}(\tau)$$ $$\displaystyle=\phi_{\sigma,\rho}\left(\tilde{h}_{L,\rho,\sigma}^{-1}(\tau)\right)$$ $$\displaystyle\psi_{MAP-STSBL}(\tau)$$ $$\displaystyle=\sigma\sqrt{(2\tau+1)\rho_{i}^{-1}}.$$ Note that if $\rho_{i}=\rho_{j}$ for all $i$, $j$ then the false positive and negative rates are all equivalent under transformations of the parameters used for each method. Curves $(FP_{\bullet}(\xi_{i};\omega),FN_{\bullet}(\xi_{i};\omega))$ parameterized by $\psi(\omega)$ are shown in Fig. 7 for several values of $\xi_{i}$.
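The claimed equivalence of ARDvi and MAP-STSBL under $\alpha=2\tau+1$ follows directly from the first and last entries of Eq. (55); a two-function numerical check:

```python
import numpy as np

def psi_vi(alpha, rho, sigma):
    # psi_ARDvi from Eq. (55)
    return sigma * np.sqrt(alpha / rho)

def psi_map(tau, rho, sigma):
    # psi_MAP-STSBL from Eq. (55)
    return sigma * np.sqrt((2 * tau + 1) / rho)
```

Since the FP and FN rates in Eq. (54) depend on the method only through $\psi_{\bullet}$, equal $\psi$ values for any $\rho_{i}$ imply identical expected sparsity patterns for the two methods on orthogonal features.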
If the $\rho_{i}$ are unequal then the specific parameter pair that yields similar results for one column given two different algorithms will not hold for another column. Hence, the methods differ in how they scale with $\rho_{i}$. The exception is the pair ARDvi and MAP-STSBL, which have the same false positive and negative rates for any $\rho_{i}$ under the transformation $\alpha=2\tau+1$. To visualize the dependence of each false positive and false negative rate on $\rho_{i}$ we find parameters $\omega_{\bullet}$ for each method such that $FP_{\bullet}(\omega_{\bullet})=FN_{\bullet}(\omega_{\bullet})$ when $\rho_{i}=1$ and plot the resulting rates over a range of $\rho_{i}$. This is shown in Fig. 8. The false negative rate for each method decreases monotonically in $\rho_{i}$. This is intuitive, since larger $\rho_{i}$ corresponds to that term having a larger effect on $\mathbf{y}$. The false positive rate as a function of $\rho_{i}$ is constant for ARDvi and MAP-STSBL, decreasing for L-STSBL and M-STSBL, and increasing for ARDr. These trends are explained by the asymptotic behavior of $\psi_{\bullet}$ for large $\rho$. We have, $$\displaystyle\psi_{vi/MAP}(\alpha)$$ $$\displaystyle\sim\mathcal{O}(\rho^{-1/2})$$ (56) $$\displaystyle\psi_{ARDr}(\lambda)$$ $$\displaystyle\sim\mathcal{O}(\rho^{-1})$$ $$\displaystyle\psi_{M-STSBL}(\tau)$$ $$\displaystyle\sim\mathcal{O}(1)$$ $$\displaystyle\psi_{L-STSBL}(\tau)$$ $$\displaystyle\sim\mathcal{O}(\rho^{q}),\,q\in(-1/2,0)$$ where the last statement is inferred from Fig. 8. Since $\psi_{\bullet}$ is multiplied by $\rho^{1/2}$ in the expression for the false positive rate, which is a decreasing function of $\psi_{\bullet}$, the trends in Fig. 8 follow from Eq. (56). If we allow ourselves to equate $\rho_{i}$ with sample size, then M-STSBL and L-STSBL have the desirable property that the false positive rate decreases with increasing sample size.
For orthogonal covariates, ARDvi and MAP-STSBL have equivalent behavior with regard to expected sparsity. However, they begin to yield different results in the case that the columns of $\boldsymbol{\Theta}$ are correlated. This may be the result of the MAP-threshold criterion no longer aligning with the increased sparsity due to inflated variance, but there is also a fundamental change in the thresholding algorithms which occurs when we move away from orthogonal covariates. We have shown that when $\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}$ is diagonal the sparsity of $\xi_{i}$ depends only on the inner product of the error $\boldsymbol{\nu}$ with $\boldsymbol{\Theta}_{i}$. Hence, the recursion defined in each thresholding algorithm terminates at a depth of one. This is not true when columns are correlated. For dense $\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}$ the recursion limit is the number of columns, though the algorithm tends to terminate far earlier. An analytical comparison between the algorithms considered in the previous sections for non-orthogonal data is beyond the scope of this work. However, there is one clear trade-off between computational complexity and the clarity of the algorithm's mechanism for inducing sparsity. Thresholding algorithms offer clear criteria for setting additional terms to zero, since we know the magnitude or likelihood at which a coefficient was pruned and at what step. Regularization methods do not provide the same clarity but avoid the cost of increased computational time due to recursion. In particular, for problems with many covariates, the depth limit in the thresholding algorithms is high. We consider M-STSBL to be slightly clearer than MAP-STSBL and L-STSBL, since its thresholding parameter is a magnitude. We initialize ARDr using ARD, so it is slightly more expensive than ARDvi. This is summarized in Fig. 10.
We will present several examples in the following section to compare the algorithms' performance on an empirical basis.
Numerical experiments
In this section we compare the performance of each of the methods considered in this work on several test problems. These include a $250$-dimensional linear problem, function fitting in a Fourier basis, and system identification for the Lorenz 63, Lorenz 96, and Kuramoto-Sivashinsky equations. We test each of the proposed methods using a range of input parameters. For the linear and function fitting examples we select optimal parameters for the regularization with the Akaike Information Criterion [1] with small sample size correction (AICc) [9], given by, $$AIC_{c}(\boldsymbol{\gamma})=2k-2\ln\left(p(\boldsymbol{\gamma})\right)=2k+\underset{\boldsymbol{\xi}}{\min}\,\left(\log|\boldsymbol{\Sigma}_{y}|+\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}\right),$$ (57) where $k=\|\boldsymbol{\gamma}\|_{0}+1$ is the number of terms fit by the model, including the error variance $\sigma^{2}$. For consistency across methods, we do not consider regularization terms when evaluating the likelihood. For the nonlinear system identification examples we found that $AIC_{c}$ selected models with extraneous variables even when the true model was available. This is perhaps due to the errors being non-Gaussian and correlated between observations, since numerical differentiation uses adjacent points. We therefore select optimal regularization parameters for the system identification examples based on minimal mismatch in sparsity with respect to the true solution. This is not practical in an application setting, but it highlights differences between the algorithms presented in this work without the need for more robust model selection. Algorithm 3 allows for substantial freedom in the choice of the specific regularization functions $f$ and $g$.
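A sketch of the model-scoring step, evaluating the evidence term directly from $\boldsymbol{\Sigma}_{y}=\sigma^{2}\mathbf{I}+\boldsymbol{\Theta}\boldsymbol{\Gamma}\boldsymbol{\Theta}^{T}$; we assume the standard Hurvich-Tsai small-sample correction $2k(k+1)/(m-k-1)$, which the paper cites but does not write out, and the function names are ours:

```python
import numpy as np

def neg_log_evidence(Theta, y, gamma, sigma):
    # log|Sigma_y| + y^T Sigma_y^{-1} y with Sigma_y = sigma^2 I + Theta Gamma Theta^T
    m = Theta.shape[0]
    Sy = sigma**2 * np.eye(m) + (Theta * gamma) @ Theta.T
    _, logdet = np.linalg.slogdet(Sy)
    return logdet + y @ np.linalg.solve(Sy, y)

def aic_c(Theta, y, gamma, sigma):
    # Eq. (57); k counts active scale parameters plus the error variance,
    # and the last term is the assumed small-sample correction
    k = np.count_nonzero(gamma) + 1
    m = Theta.shape[0]
    return 2 * k + neg_log_evidence(Theta, y, gamma, sigma) + 2 * k * (k + 1) / (m - k - 1)
```

Scoring candidate $\boldsymbol{\gamma}$ vectors this way and keeping the minimizer reproduces the selection rule described in the text.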
For the purposes of comparison with the other methods discussed in this work, we restrict our attention to the case where $f$ is a constant and $g(\gamma_{i})=\lambda\sigma^{-2}\,\min\{\gamma_{i},\eta\}$, which is constant for $\gamma_{i}>\eta$ and linear with positive slope $\lambda\sigma^{-2}$ for $\gamma_{i}\leq\eta$. We search over the parameter $\lambda$, keeping $\eta$ fixed at a value of $0.1$. It is reasonable to assume that Alg. 3 may obtain superior results if domain knowledge is available to inform the choice of regularization or if a parameter search is performed over both $\lambda$ and $\eta$.
Simple Linear Example
We first consider the methods presented in this work applied to a simple linear regression. We consider a problem with $\mathbf{X}\in\mathbb{R}^{250\times 250}$ and $\boldsymbol{\theta}$ being the identity map, and construct random linear maps by setting 25 of 250 coefficients to be Gaussian distributed with unit variance and setting the rest to zero. Since the orthonormal case is explored in Sections 3 and 4, we construct $\mathbf{X}$ (equivalently $\boldsymbol{\theta}(\mathbf{X})$) to have condition number $\kappa(\mathbf{X})=10^{2}$, with singular values spread evenly on a log scale between $10^{-2}$ and $10^{0}$. Observations are perturbed by Gaussian noise $\nu\sim\mathcal{N}(0,\sigma^{2})$ with $\sigma=0.1\,\text{std}(\boldsymbol{\Theta}\boldsymbol{\xi})$. That is, $\sigma$ is set to ten percent of the standard deviation of the unperturbed output. The magnitude of the noise is not known by the algorithm and is re-estimated after each iteration. We test each of the methods presented in this work for a total of 100 trials, each with random data, true $\boldsymbol{\xi}$, and noise. Model selection is performed with $AIC_{c}$ over a wide range of input parameters.
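The test-matrix construction above (orthogonal factors with log-spaced singular values) can be sketched as follows; the seed is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 250, 25

# Log-spaced singular values between 1e-2 and 1e0 give kappa(X) = 1e2
svals = np.logspace(-2, 0, n)
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
X = U @ np.diag(svals) @ V.T

# Sparse truth: k of n coefficients drawn from N(0, 1)
xi = np.zeros(n)
xi[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Noise at ten percent of the clean output's standard deviation
y_clean = X @ xi
sigma = 0.1 * y_clean.std()
y = y_clean + rng.normal(0.0, sigma, n)
```

Because $U$ and $V$ are orthogonal, the singular values of $X$ are exactly the prescribed `svals`, so the condition number is controlled directly.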
Four error metrics are tracked: the $\ell^{2}$ and $\ell^{1}$ difference between the mean posterior estimate and the true $\boldsymbol{\xi}$, as well as the number of non-zero terms that the learning algorithm adds and misses. These values are shown in Table 1 and in Fig. 11. Boxes indicate the inter-quartile range and median error across the 100 trials, with whiskers indicating maximum and minimum values. Each method gives far higher $\ell^{1}$ than $\ell^{2}$ error, indicating these quantities are dominated by the many small terms added by the regression. However, the thresholding based methods exhibit far lower metric error and number of added terms with only a small increase in the number of missed terms. Interpolation from Few Observations We consider fitting a function defined on $T^{2}=[0,\pi]^{2}$ with a sparse representation in a Fourier basis. We let $\mathbf{X}\in[0,\pi)^{250\times 2}$ have rows uniformly sampled on $T^{2}$ and $\boldsymbol{\theta}:T^{2}\to\mathbb{R}^{900}$ be the mapping to the basis constructed by the first 30 Fourier modes in each direction, so that $\boldsymbol{\Theta}\in\mathbb{R}^{250\times 900}$. Similar to the linear example, we set 50 of the 900 coefficients to be Gaussian distributed with unit variance and the rest are set to zero. Noise is again set to have standard deviation equal to 10 percent of the standard deviation of the unperturbed values of $\mathbf{y}$, and the magnitude of the noise is re-estimated after each iteration. Results across 10 trials for fitting a function with a sparse Fourier basis are shown in Fig. 12. Within each trial we test a wide range of input parameters for each technique and select a model using the $AIC_{c}$. Regularization and thresholding techniques all exhibit far lower $\ell^{1}$ and $\ell^{2}$ error and include far fewer extraneous terms. The thresholding methods all show some increase in the number of missed terms.
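The 900-dimensional Fourier feature map above can be sketched as products of cosine modes; the text does not pin down the exact mode convention, so the choice of $\cos(jx_{1})\cos(kx_{2})$ for $j,k=0,\ldots,29$ below is our assumption.

```python
import numpy as np

def fourier_features(X, n_modes=30):
    """Map points on [0, pi]^2 to products of cosine modes.

    One plausible realization of the 900-dimensional basis described in the
    text (30 modes per direction); the exact convention is an assumption.
    """
    j = np.arange(n_modes)
    c1 = np.cos(np.outer(X[:, 0], j))  # shape (m, 30)
    c2 = np.cos(np.outer(X[:, 1], j))  # shape (m, 30)
    # All pairwise products of modes in the two directions -> (m, 900).
    return (c1[:, :, None] * c2[:, None, :]).reshape(len(X), -1)
```

With 250 sample points this produces the underdetermined $250\times 900$ design matrix used in the interpolation experiment.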
We expect the increase in false negatives would be lessened if active terms had magnitudes bounded away from zero. As was the case for the linear example, all regularized methods exhibit lower metric error than unregularized ARD and add substantially fewer extraneous terms. However, unlike the linear example, there is a notable increase in the number of missed terms using the thresholding methods and, contrary to intuition, a decrease in the number of missed terms using the regularization based methods. Equations of Motion for the Lorenz 63 System Our first example of applying the techniques to a nonlinear system identification problem is the Lorenz 63 system given by, $$\begin{aligned}\dot{x}_{1}&=s(x_{2}-x_{1})\\ \dot{x}_{2}&=x_{1}(\rho-x_{3})-x_{2}\\ \dot{x}_{3}&=x_{1}x_{2}-\beta x_{3},\end{aligned}$$ (58) with the standard set of coefficients $s=10$, $\rho=28$ and $\beta=\frac{8}{3}$ [17]. We will follow work by [8] for nonlinear system identification and use trajectories from Eq. (58) as data $\mathbf{X}$ and the numerically computed velocity as $\mathbf{y}$. We construct datasets to test each algorithm by integrating Eq. (58) for 250 steps of length $dt=0.05$ from an initial condition drawn from $\mathcal{N}\left((0,0,15),5^{2}\mathbf{I}\right)$, resulting in a time series in $\mathbb{R}^{251\times 3}$. We add Gaussian noise with standard deviation equal to 1 percent of the standard deviation of the time series to get $\mathbf{X}$ and subsequently compute temporal derivatives $y^{(j)}\approx\dot{x}_{j}$ using a $6^{\text{th}}$ order finite difference scheme applied to the noisy time series.
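The data-generation step for the Lorenz 63 example might be sketched as below; the seed and names are ours, and `np.gradient` (second order) stands in for the paper's sixth-order finite-difference scheme purely for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, x, s=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 63 system, Eq. (58)."""
    return [s * (x[1] - x[0]),
            x[0] * (rho - x[2]) - x[1],
            x[0] * x[1] - beta * x[2]]

rng = np.random.default_rng(1)
dt, n_steps = 0.05, 250
t_eval = np.arange(n_steps + 1) * dt
x0 = rng.normal([0.0, 0.0, 15.0], 5.0)  # initial condition ~ N((0,0,15), 5^2 I)
sol = solve_ivp(lorenz63, (0.0, n_steps * dt), x0,
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
X_clean = sol.y.T  # time series in R^{251 x 3}

# 1% noise on the states, then numerical time derivatives of the noisy data.
X = X_clean + 0.01 * X_clean.std(axis=0) * rng.standard_normal(X_clean.shape)
Y = np.gradient(X, dt, axis=0)  # approximates x_dot, column j is y^(j)
```

Each trial in the experiment repeats this with a fresh initial condition and noise instance.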
We use the quintic feature map in three variables $\boldsymbol{\theta}:\mathbb{R}^{3}\to\mathbb{R}^{{5+3\choose 5}}$ given by, $$\boldsymbol{\theta}(x_{1},x_{2},x_{3})=\left(1,x_{1},x_{2},x_{3},x_{1}^{2},x_{2}^{2},x_{3}^{2},x_{1}x_{2},\ldots,x_{1}^{4}x_{3},x_{1}^{5},x_{2}^{5},x_{3}^{5}\right).$$ (59) This gives a matrix $\boldsymbol{\theta}(\mathbf{X})\in\mathbb{R}^{251\times 56}$. The system identification problem is then to find sparse solutions to, $$\dot{x}_{j}=y^{(j)}=\boldsymbol{\theta}(\mathbf{X})\boldsymbol{\xi}^{(j)}$$ (60) for each dimension $j=1,2,3$. Note that since noise is added to the data $\mathbf{X}$ directly rather than to the observations $\mathbf{y}$, columns of $\boldsymbol{\theta}(\mathbf{X})$ will be perturbed by nonlinear maps of Gaussian noise. The error in our polynomial regression will therefore be non-Gaussian, violating the likelihood model we start with in Eq. 1. This difference does not significantly affect the regression algorithms but does lead to problems with $AIC_{c}$ based model selection for system identification, since the likelihood computed by Eq. (5) makes assumptions regarding the error statistics that do not hold. We therefore use oracle model selection, choosing the input parameter that yields the minimal number of added and missed terms compared to the true solution. This of course assumes knowledge of the true solution, which would not be the case in an application setting, but allows us to focus on comparing sparse regression algorithms rather than on model selection. We test each of the methods for ten trials, each using the same length of time series but with different random initial conditions and noise instances. Figure 13 shows error metrics for the coefficients of the learned equations. Since we are solving three distinct problems, the errors shown in Fig. 13 are summed over each of the three dimensions.
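The quintic monomial library of Eq. (59) can be generated mechanically; the sketch below (our helper, ordered by total degree rather than necessarily in the authors' column order) yields the expected ${5+3\choose 5}=56$ columns.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=5):
    """Monomial feature map up to the given total degree.

    For 3 variables and degree 5 this yields C(5+3, 5) = 56 columns,
    matching the library Theta(X) used for the Lorenz 63 regression.
    """
    m, d = X.shape
    cols = [np.ones(m)]  # constant term
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)
```

Applied to a 251-point trajectory in three variables, this produces the $251\times 56$ regression matrix of Eq. (60).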
Thresholding based methods and variance inflation learn much sparser models than ARD and ARDr, with ARDvi having no increase in the number of missed terms. In this case, M-STSBL outperforms both L-STSBL and MAP-STSBL, possibly due to the fact that none of the true coefficients are small. Equations of Motion for the Lorenz 96 System We next consider the higher dimensional Lorenz 96 system given by, $$\dot{x}_{j}=(x_{j+1}-x_{j-2})x_{j-1}-x_{j}+F$$ (61) with $n=40$ and $F=16$ [18]. We construct a dataset to test each algorithm by integrating Eq. (61) with $dt=0.05$ from an initial condition $x_{j}=\exp(-\frac{1}{16}(j-20)^{2})$, resulting in a time series in $\mathbb{R}^{201\times 40}$. We add Gaussian noise with standard deviation equal to 1 percent of the standard deviation of the time series to get $\mathbf{X}$ and subsequently compute temporal derivatives $y^{(j)}\approx\dot{x}_{j}$ using a $6^{\text{th}}$ order finite difference scheme applied to the noisy time series. We use the quadratic feature map in 40 variables $\boldsymbol{\theta}:\mathbb{R}^{40}\to\mathbb{R}^{{2+40\choose 2}}$ given by, $$\boldsymbol{\theta}(x_{1},x_{2},\ldots,x_{40})=\left(1,x_{1},x_{2},\ldots,x_{40},x_{1}^{2},x_{2}^{2},\ldots,x_{40}^{2},x_{1}x_{2},x_{1}x_{3},\ldots,x_{39}x_{40}\right).$$ (62) This gives a matrix $\boldsymbol{\theta}(\mathbf{X})\in\mathbb{R}^{201\times 861}$. We solve for the equations of motion just as we did in the Lorenz 63 case. Model selection is again performed assuming full knowledge of the true sparsity pattern. We test each of the methods on a single trial across each of the 40 dimensions. Figure 14 shows error metrics for the coefficients of the learned equations across the 40 dimensions. Each of the proposed techniques learns a sparser set of coefficients than ARD, with the modal number of added terms for each of the proposed methods being zero.
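The cyclic structure of Eq. (61) is easy to get wrong with explicit index arithmetic; a vectorized sketch of the right-hand side using `np.roll` (our phrasing, not the authors' code) is:

```python
import numpy as np

def lorenz96(x, F=16.0):
    """Right-hand side of the Lorenz 96 system, Eq. (61), with cyclic indices.

    np.roll(x, -1)[j] = x[j+1], np.roll(x, 2)[j] = x[j-2], np.roll(x, 1)[j] = x[j-1].
    """
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
```

Integrating this from the Gaussian-bump initial condition $x_{j}=\exp(-\frac{1}{16}(j-20)^{2})$ produces the trajectory used in the experiment.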
However, metric error is not improved significantly and in the case of MAP-STSBL contains outlier values with substantially increased error where the algorithm failed to include the forcing term $F=16$. This indicates that extraneous terms in the ARD estimate were generally small. The modal number of missed terms for each method is zero, but all methods except ARDvi added a single term in some fraction of the dimensions, with MAP-STSBL occasionally missing two. Equations of Motion for the Kuramoto-Sivashinsky Equation We also test each of the sparse regression methods considered in this work on the Kuramoto-Sivashinsky (KS) equation. The KS equation, given by, $$u_{t}+uu_{x}+u_{xx}+u_{xxxx}=0,$$ (63) is often used as a model for deterministic spatiotemporal chaos and has proved a challenging case for other sparse regression methods [25, 27]. We use the ETDRK4 method developed in [14] to solve the Kuramoto-Sivashinsky equation on the domain $(x,t)\in[0,32\pi]\times[0,150]$ with periodic boundary conditions, initial condition $x$, timesteps $dt=0.14$ and spatial discretization $dx=32\pi/512$. We add artificial noise to the numerical solution with standard deviation equal to 0.1 percent of the standard deviation of the data. This small magnitude is consistent with previously published works on system identification for the KS equation. Numerical differentiation with respect to time and for the first four spatial derivatives is done by applying sixth order finite difference schemes directly to the noisy data. We take $y$ to be $u_{t}$ reshaped into a vector and $\boldsymbol{\theta}$ to be the set of powers of $u$ up to $4$ multiplied by spatial derivatives up to fourth order, so that $$\boldsymbol{\theta}(\mathbf{X})=\boldsymbol{\theta}(u(x,t))=\left(1,u,\ldots,u^{4},u_{x},uu_{x},\ldots,u^{4}u_{xxxx}\right).$$ (64) With the given discretization of Eq.
(63), the feature map (64) gives $\mathbf{y}\in\mathbb{R}^{(1024\cdot 512)\times 1}$ and $\boldsymbol{\theta}(\mathbf{X})\in\mathbb{R}^{(1024\cdot 512)\times 25}$. Each iteration of Alg. 1 requires storing and inverting $\boldsymbol{\Sigma}_{y}\in\mathbb{R}^{m\times m}$, where $m=1024\cdot 512$ is the number of observations. Allocating memory for and working with $\boldsymbol{\Sigma}_{y}$ in this case would be problematic on many standard computers. We instead observe a small fraction of the data through random projections, exploiting the simple fact that, $$\mathbf{y}=\boldsymbol{\theta}(\mathbf{X})\boldsymbol{\xi}\;\rightarrow\;\mathbf{C}\mathbf{y}=\mathbf{C}\boldsymbol{\theta}(\mathbf{X})\boldsymbol{\xi},$$ (65) for any matrix $\mathbf{C}$. We take each row of $\mathbf{C}$ to be a unit direction vector sampled uniformly and without replacement from $\mathbb{R}^{1024\cdot 512}$, so that we are simply sampling rows from the full linear system. To test the effectiveness of each algorithm we take 10 different samples of size 2500 and solve the linear system for each one. Figure 15 summarizes the error of each of the proposed regressions applied to the 10 random subsets given by Eq. 65. While none of the proposed methods perform well for the task of identifying the Kuramoto-Sivashinsky equation from data, the proposed sparse methods do learn more parsimonious models. This comes at the cost of higher $\ell^{2}$ error and a significantly increased number of missed terms. The modal number of missed terms for ARDr is in fact all three of the non-zero terms. While these results are disappointing, they are also unsurprising. The Kuramoto-Sivashinsky equation has proved challenging for past system identification methods [27]. This example showcases some of the limitations of the methodology proposed in this work and the continuing difficulty of sparse regression based methods, both classical and Bayesian, for system identification.
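Because the selection matrix $\mathbf{C}$ in Eq. (65) consists of unit direction vectors, applying it reduces to indexing; a minimal sketch of the subsampling step (helper name and seed are ours) is:

```python
import numpy as np

def subsample_rows(y, Theta, n_rows, rng):
    """Sample rows of the full system y = Theta @ xi, i.e. apply the
    selection matrix C of Eq. (65) without ever forming C explicitly."""
    idx = rng.choice(len(y), size=n_rows, replace=False)
    return y[idx], Theta[idx]
```

Repeating this with ten independent samples of 2500 rows reproduces the experimental protocol, and keeps each iteration's $\boldsymbol{\Sigma}_{y}$ at a tractable $2500\times 2500$ instead of $m\times m$ with $m=1024\cdot 512$.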
Discussion We have presented several techniques for learning sparse Bayesian models that build on Automatic Relevance Determination to achieve greater levels of parsimony in the resulting linear model. These methods may be classified into two families: regularization based methods, including variance inflation and regularization of $\boldsymbol{\gamma}$, which find the variance coefficients $\boldsymbol{\gamma}$ as the fixed point of a single application of an iterative algorithm, and thresholding based methods, which alternate between solving a smooth optimization problem and simplifying the model via thresholding extraneous terms. For the latter class we tested magnitude based thresholding based on the mean posterior estimate of $\boldsymbol{\xi}$, as well as a likelihood based threshold using the posterior distribution, and adjusting the prior on $\boldsymbol{\xi}$ to find an alternative probabilistic threshold. For each of these algorithms, we have derived probabilistic estimates for the number of false positive and false negative active terms in the orthogonal case. While most practical problems involve non-orthogonal matrices, these estimates can be taken as guides for the behavior of the algorithms as regularization or thresholding parameters change. A significant barrier to the use of the proposed class of sparse regression methods on many problems is the computational complexity. Each iteration of Alg. 3 requires computing the inverse of an $m\times m$ matrix, where $m$ is the number of samples available. Future work could explore low-rank approximations of this step, but in the current work this was a computational bottleneck and forced us to only consider small problems. Subsampling approaches such as those in [43] might also be useful for large datasets. We stress that this work does not attempt to demonstrate the superiority of any of the proposed methods for the subset selection problem in sparse Bayesian regression.
Ultimately, if one desires a level of sparsity beyond that provided by standard ARD, a choice of additional assumptions should be made with respect to the context of the problem being considered. We have outlined the assumptions that lead to each of the proposed algorithms and demonstrated their accuracy both analytically on orthogonal linear systems as a canonical test case and empirically on several more complicated problems. In application settings, model selection could be performed both over parameter values for each algorithm as well as between algorithms to determine a final result. Acknowledgments This material is based upon work supported by the National Science Foundation under Award No. 1902972, the Army Research Office (Grant No. W911NF-17-1-0306), and a MathWorks Faculty Research Innovation Fellowship. References [1] Hirotugu Akaike. A new look at the statistical model identification. IEEE transactions on automatic control, 19(6):716–723, 1974. [2] Abd AlRahman R AlMomani, Jie Sun, and Erik Bollt. How entropic regression beats the outliers problem in nonlinear system identification. Chaos: An Interdisciplinary Journal of Nonlinear Science, 30(1):013107, 2020. [3] S Derin Babacan, Rafael Molina, and Aggelos K Katsaggelos. Bayesian compressive sensing using laplace priors. IEEE Transactions on image processing, 19(1):53–63, 2009. [4] Christopher M Bishop. Pattern recognition and machine learning. springer, 2006. [5] Thomas Blumensath and Mike E Davies. Iterative hard thresholding for compressed sensing. Applied and computational harmonic analysis, 27(3):265–274, 2009. [6] Josh Bongard and Hod Lipson. Automated reverse engineering of nonlinear dynamical systems. PNAS, 104(24):9943–9948, 2007. [7] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, et al. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine learning, 3(1):1–122, 2011. [8] S. L. 
Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. PNAS, 113(15):3932–3937, 2016. [9] Joseph E Cavanaugh. Unifying the derivations for the akaike and corrected akaike information criteria. Statistics & Probability Letters, 33(2):201–208, 1997. [10] Bradley Efron, Trevor Hastie, Iain Johnstone, Robert Tibshirani, et al. Least angle regression. The Annals of statistics, 32(2):407–499, 2004. [11] Jianqing Fan and Runze Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American statistical Association, 96(456):1348–1360, 2001. [12] R Fuentes, N Dervilis, K Worden, and EJ Cross. Efficient parameter identification and model selection in nonlinear dynamical systems via sparse bayesian learning. In Journal of Physics: Conference Series, volume 1264, page 012050. IOP Publishing, 2019. [13] Alex Graves. Practical variational inference for neural networks. In Advances in neural information processing systems, pages 2348–2356, 2011. [14] Aly-Khan Kassam and Lloyd N Trefethen. Fourth-order time-stepping for stiff pdes. SIAM Journal on Scientific Computing, 26(4):1214–1233, 2005. [15] Pileun Kim, Jonathan Rogers, Jie Sun, and Erik Bollt. Causation entropy identifies sparsity structure for parameter estimation of dynamic systems. Journal of Computational and Nonlinear Dynamics, 12(1), 2017. [16] Yi Li, Colin Campbell, and Michael Tipping. Bayesian automatic relevance determination algorithms for classifying gene expression data. Bioinformatics, 18(10):1332–1339, 2002. [17] Edward N Lorenz. Deterministic nonperiodic flow. J. Atmos. Sciences, 20(2):130–141, 1963. [18] Edward N Lorenz. Predictability: A problem partly solved. In Proc. Seminar on predictability, volume 1, 1996. [19] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012. [20] Radford M Neal. Bayesian learning for neural networks, volume 118.
Springer Science & Business Media, 2012. [21] Robert K Niven, Ali Mohammad-Djafari, Laurent Cordier, Markus Abel, and Markus Quade. Bayesian identification of dynamical systems. Multidisciplinary Digital Publishing Institute Proceedings, 33(1):33, 2020. [22] Chang Kook Oh, James L Beck, and Masumi Yamada. Bayesian learning using automatic relevance determination prior with an application to earthquake early warning. Journal of Engineering Mechanics, 134(12):1013–1020, 2008. [23] Neal Parikh, Stephen Boyd, et al. Proximal algorithms. Foundations and Trends® in Optimization, 1(3):127–239, 2014. [24] Markus Quade, Markus Abel, Kamran Shafi, Robert K Niven, and Bernd R Noack. Prediction of dynamical systems by symbolic regression. Physical Review E, 94(1):012214, 2016. [25] Maziar Raissi. Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1):932–955, 2018. [26] Carl Edward Rasmussen. Gaussian processes in machine learning. In Summer School on Machine Learning, pages 63–71. Springer, 2003. [27] Samuel Rudy, Alessandro Alla, Steven L Brunton, and J Nathan Kutz. Data-driven identification of parametric partial differential equations. SIAM Journal on Applied Dynamical Systems, 18(2):643–660, 2019. [28] Hayden Schaeffer. Learning partial differential equations via data discovery and sparse optimization. In Proc. R. Soc. A, volume 473, page 20160446. The Royal Society, 2017. [29] Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81–85, 2009. [30] Vincent YF Tan and Cédric Févotte. Automatic relevance determination in nonnegative matrix factorization with the $\beta$-divergence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1592–1605, 2012. [31] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1):267–288, 1996.
[32] Michael E Tipping. Sparse bayesian learning and the relevance vector machine. Journal of machine learning research, 1(Jun):211–244, 2001. [33] Michael E Tipping, Anita C Faul, et al. Fast marginal likelihood maximisation for sparse bayesian models. In AISTATS, 2003. [34] Giang Tran and Rachel Ward. Exact recovery of chaotic systems from highly corrupted data. Multiscale Modeling & Simulation, 15(3):1108–1129, 2017. [35] D. P. Wipf and B. D. Rao. Sparse bayesian learning for basis selection. IEEE Transactions on Signal Processing, 52(8):2153–2164, 2004. [36] David P Wipf and Srikantan S Nagarajan. A new view of automatic relevance determination. In Advances in neural information processing systems, pages 1625–1632, 2008. [37] Tong Tong Wu, Kenneth Lange, et al. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2(1):224–244, 2008. [38] Yi Wu and David P Wipf. Dual-space analysis of the sparse linear model. In Advances in Neural Information Processing Systems, pages 1745–1753, 2012. [39] Yibo Yang, Mohamed Aziz Bhouri, and Paris Perdikaris. Bayesian differential programming for robust systems identification under uncertainty. arXiv preprint arXiv:2004.06843, 2020. [40] Ye Yuan, Junlin Li, Liang Li, Frank Jiang, Xiuchuan Tang, Fumin Zhang, Sheng Liu, Jorge Goncalves, Henning U Voss, Xiuting Li, et al. Machine discovery of partial differential equations from spatiotemporal data. arXiv preprint arXiv:1909.06730, 2019. [41] Linan Zhang and Hayden Schaeffer. On the convergence of the sindy algorithm. Multiscale Modeling & Simulation, 17(3):948–972, 2019. [42] Sheng Zhang and Guang Lin. Robust data-driven discovery of governing physical laws with error bars. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 474(2217):20180305, 2018. [43] Sheng Zhang and Guang Lin. 
Robust data-driven discovery of governing physical laws using a new subsampling-based sparse bayesian method to tackle four challenges (large noise, outliers, data integration, and extrapolation). arXiv preprint arXiv:1907.07788, 2019. [44] Tong Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1921–1928. Curran Associates, Inc., 2009. [45] Peng Zheng, Travis Askham, Steven L Brunton, J Nathan Kutz, and Aleksandr Y Aravkin. A unified framework for sparse relaxed regularized regression: Sr3. IEEE Access, 7:1404–1423, 2018. Appendix A: Proof of Equation (6) We show that $\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}=\underset{\boldsymbol{\xi}}{\min}\,\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}$. Applying the Woodbury identity to $\boldsymbol{\Sigma}_{y}^{-1}$ gives, $$\begin{aligned}\mathbf{y}^{T}\boldsymbol{\Sigma}_{y}^{-1}\mathbf{y}&=\mathbf{y}^{T}(\sigma^{2}\mathbf{I}+\boldsymbol{\Theta}\boldsymbol{\Gamma}\boldsymbol{\Theta}^{T})^{-1}\mathbf{y}\\ &=\mathbf{y}^{T}(\sigma^{-2}\mathbf{I}-\sigma^{-4}\boldsymbol{\Theta}(\boldsymbol{\Gamma}^{-1}+\sigma^{-2}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta})^{-1}\boldsymbol{\Theta}^{T})\mathbf{y}\\ &=\mathbf{y}^{T}(\sigma^{-2}\mathbf{I}-\sigma^{-4}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T})\mathbf{y}.\end{aligned}$$ On the other hand, $$\underset{\boldsymbol{\xi}}{\min}\,\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\xi}\|_{2}^{2}+\boldsymbol{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\xi}=\frac{1}{\sigma^{2}}\|\mathbf{y}-\boldsymbol{\Theta}\boldsymbol{\mu}_{\xi}\|_{2}^{2}+\boldsymbol{\mu}_{\xi}^{T}\boldsymbol{\Gamma}^{-1}\boldsymbol{\mu}_{\xi}$$
$$\begin{aligned}&=\frac{1}{\sigma^{2}}\|\mathbf{y}-\sigma^{-2}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}\|_{2}^{2}+\sigma^{-4}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Gamma}^{-1}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}\\ &=\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}-\frac{2}{\sigma^{4}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}+\frac{1}{\sigma^{6}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}+\frac{1}{\sigma^{4}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Gamma}^{-1}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}\\ &=\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}+\frac{1}{\sigma^{4}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\left(-2\mathbf{I}+\left(\frac{1}{\sigma^{2}}\boldsymbol{\Theta}^{T}\boldsymbol{\Theta}+\boldsymbol{\Gamma}^{-1}\right)\boldsymbol{\Sigma}_{\xi}\right)\boldsymbol{\Theta}^{T}\mathbf{y}\\ &=\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}+\frac{1}{\sigma^{4}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\left(-2\mathbf{I}+\boldsymbol{\Sigma}_{\xi}^{-1}\boldsymbol{\Sigma}_{\xi}\right)\boldsymbol{\Theta}^{T}\mathbf{y}\\ &=\frac{1}{\sigma^{2}}\mathbf{y}^{T}\mathbf{y}-\frac{1}{\sigma^{4}}\mathbf{y}^{T}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T}\mathbf{y}\\ &=\mathbf{y}^{T}(\sigma^{-2}\mathbf{I}-\sigma^{-4}\boldsymbol{\Theta}\boldsymbol{\Sigma}_{\xi}\boldsymbol{\Theta}^{T})\mathbf{y}.\end{aligned}$$ Appendix B: Converse of Equation (24) In this section we show that, $$\left|\xi_{i}-\rho_{i}^{-1}\boldsymbol{\Theta}_{i}^{T}\boldsymbol{\nu}\right|\leq\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}\Rightarrow
\xi_{i}^{*}=0.$$ (66) From the above inequality and the KKT stationarity condition for $\boldsymbol{\xi}$ we have, $$\begin{aligned}\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}&\geq\left|\xi_{i}^{*}+\rho_{i}^{-1}\sigma^{2}\sqrt{c_{i}^{*}}\,\operatorname{sgn}(\xi_{i}^{*})\right|\\ &=\left|\xi_{i}^{*}\right|+\rho_{i}^{-1}\sigma^{2}\sqrt{c_{i}^{*}}\\ \therefore\quad\sqrt{c_{i}^{*}}&\leq\sqrt{\lambda+\rho_{i}\sigma^{-2}}-\rho_{i}\sigma^{-2}|\xi_{i}^{*}|\\ \text{and}\quad|\xi_{i}^{*}|&\leq\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}-\rho_{i}^{-1}\sigma^{2}\sqrt{c_{i}^{*}}.\end{aligned}$$ (67) From Eq. (22), $\sqrt{c_{i}^{*}}$ is given by the positive valued zero of the following cubic, $$\psi(\omega)=\rho_{i}^{-1}\sigma^{2}\omega^{3}+|\xi_{i}^{*}|\omega^{2}-\left(\lambda\rho_{i}^{-1}\sigma^{2}+1\right)\omega-\lambda|\xi_{i}^{*}|,$$ (68) where $\omega=\sqrt{c_{i}^{*}}$ to simplify notation. Note that $\psi(0)\leq 0$ with equality only if $\lambda$ or $\xi_{i}^{*}=0$, $\psi^{\prime}(0)<0$, and the coefficient on the cubic term is positive. This suffices to show there is a unique positive zero of $\psi$. We also know that $\sqrt{c_{i}^{*}}$ is greater than the larger of the two zeros of $\psi^{\prime}(\omega)$, given by, $$\sqrt{c_{i}^{*}}>\omega^{+}=\frac{\rho_{i}}{3\sigma^{2}}\left(-|\xi_{i}^{*}|+\sqrt{{\xi_{i}^{*}}^{2}+3\sigma^{2}\rho_{i}^{-1}\left(1+\lambda\sigma^{2}\rho_{i}^{-1}\right)}\right).$$ (69) Substituting the lower bound for $\sqrt{c_{i}^{*}}$ given by Eq. (69) into Eq.
(67) gives, $$\begin{aligned}|\xi_{i}^{*}|&<\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}-\frac{1}{3}\left(-|\xi_{i}^{*}|+\sqrt{{\xi_{i}^{*}}^{2}+3\sigma^{2}\rho_{i}^{-1}\left(1+\lambda\sigma^{2}\rho_{i}^{-1}\right)}\right)\\ \frac{2|\xi_{i}^{*}|}{3}&<\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}-\frac{1}{3}\sqrt{{\xi_{i}^{*}}^{2}+3\sigma^{2}\rho_{i}^{-1}\left(1+\lambda\sigma^{2}\rho_{i}^{-1}\right)}\\ |\xi_{i}^{*}|&<\frac{3}{2}\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}-\frac{1}{2}\sqrt{{\xi_{i}^{*}}^{2}+3\sigma^{2}\rho_{i}^{-1}\left(1+\lambda\sigma^{2}\rho_{i}^{-1}\right)}\\ &\leq\frac{3}{2}\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}-\frac{1}{2}\sqrt{3\sigma^{2}\rho_{i}^{-1}\left(1+\lambda\sigma^{2}\rho_{i}^{-1}\right)}\\ &=\frac{3-\sqrt{3}}{2}\sqrt{\rho_{i}^{-1}\sigma^{2}+\lambda\rho_{i}^{-2}\sigma^{4}}.\end{aligned}$$ (70) From Eq. (67) we know $\sqrt{c_{i}^{*}}\leq\sqrt{\lambda+\rho_{i}\sigma^{-2}}-\rho_{i}\sigma^{-2}|\xi_{i}^{*}|$. Since $\sqrt{c_{i}^{*}}$ is the greatest zero of $\psi$ and the cubic coefficient is positive, $$\begin{aligned}0&\leq\psi\left(\sqrt{\lambda+\rho_{i}\sigma^{-2}}-\rho_{i}\sigma^{-2}|\xi_{i}^{*}|\right)\\ &=\frac{\rho_{i}|\xi_{i}^{*}|}{\sigma^{2}}\left(|\xi_{i}^{*}|\sqrt{\lambda+\rho_{i}\sigma^{-2}}-2\lambda\rho_{i}^{-1}\sigma^{2}-1\right)\\ &\leq\frac{\rho_{i}|\xi_{i}^{*}|}{\sigma^{2}}\left(\frac{3-\sqrt{3}}{2}\left(1+\lambda\rho_{i}^{-1}\sigma^{2}\right)-2\lambda\rho_{i}^{-1}\sigma^{2}-1\right)\\ &=\frac{\rho_{i}|\xi_{i}^{*}|}{\sigma^{2}}\left(\frac{-1-\sqrt{3}}{2}\lambda\rho_{i}^{-1}\sigma^{2}+\frac{1-\sqrt{3}}{2}\right).\end{aligned}$$ (71) Note that the quantity inside the parentheses is strictly less than zero. Therefore, for the inequality to hold, $|\xi_{i}^{*}|=0$.
Appendix C: Comparison Between L-STSBL and MAP-STSBL The thresholding operations introduced for algorithms 5 and 6 bear some similarities but differ in an important manner with regard to how they treat the posterior marginal variance of $\xi_{i}$. The thresholding criterion for $\xi_{i}\to 0$ in Alg. 5, given threshold $\tau_{0}$, is, $$h_{L}(\mu_{\xi,i},\Sigma_{\xi,ii})=\dfrac{1}{\sqrt{2\pi\Sigma_{\xi,ii}}}\exp\left(\frac{-\mu_{\xi,i}^{2}}{2\Sigma_{\xi,ii}}\right)>\tau_{0},$$ (72) while for Alg. 6 it is, $$h_{MAP}(\mu_{\xi,i},\Sigma_{\xi,ii})=\frac{\mu_{\xi,i}^{2}}{2\Sigma_{\xi,ii}}<\tau_{1},$$ (73) or equivalently, $$\exp\left(-h_{MAP}(\mu_{\xi,i},\Sigma_{\xi,ii})\right)=\exp\left(\frac{-\mu_{\xi,i}^{2}}{2\Sigma_{\xi,ii}}\right)>e^{-\tau_{1}}=\tau_{2}.$$ (74) The two criteria are related by, $$h_{L}(\mu_{\xi,i},\Sigma_{\xi,ii})=\dfrac{\exp\left(-h_{MAP}(\mu_{\xi,i},\Sigma_{\xi,ii})\right)}{\sqrt{2\pi\Sigma_{\xi,ii}}}.$$ (75) This highlights the difference in assumptions between the two methods. In both algorithms, high uncertainty relative to coefficient magnitude indicates a greater chance of pruning. However, this effect is slightly lessened in Algorithm 5. Coefficients with low uncertainty relative to their magnitude are unlikely to be pruned using either method, but the chance of pruning is higher using Alg. 5.
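The relation between the two criteria is easy to verify numerically; the following sketch (function names ours) evaluates both thresholding statistics and checks identity (75):

```python
import numpy as np

def h_L(mu, var):
    """Posterior density of xi_i at zero: the Alg. 5 criterion of Eq. (72)."""
    return np.exp(-mu**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def h_MAP(mu, var):
    """Scaled squared z-score: the Alg. 6 criterion of Eq. (73)."""
    return mu**2 / (2.0 * var)

# Identity (75): h_L = exp(-h_MAP) / sqrt(2 pi Sigma_ii).
mu, var = 0.3, 0.04
lhs = h_L(mu, var)
rhs = np.exp(-h_MAP(mu, var)) / np.sqrt(2.0 * np.pi * var)
```

The $1/\sqrt{2\pi\Sigma_{\xi,ii}}$ prefactor is exactly the extra variance dependence that distinguishes the two pruning rules.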
Twisted $\mathcal{D}$-module extensions of local systems on a certain subvariety isomorphic to $\operatorname{\mathbb{G}_{m}}^{2}$ of the affine flag variety of $\operatorname{SL}_{2}$ Claude Eicher Skolkovo Institute of Science and Technology, Moscow, Russia C.Eicher@skoltech.ru (Date: January 17, 2021) Abstract. We introduce a family of rank-one local systems in the category of twisted $\mathcal{D}$-modules on a certain subvariety isomorphic to $\operatorname{\mathbb{G}_{m}}^{2}$ of the affine flag variety of $\operatorname{SL}_{2}$. We then give a criterion for these local systems, in terms of their parameters, to extend cleanly in the sense of $\mathcal{D}$-modules. Contents 1 Introduction 1.1 Overview 1.2 Context 2 Notation 3 Setup 4 Local coordinates and trivializations 4.1 Local coordinates on $\overline{O}$ 4.2 Local trivializations of $\widetilde{\pi}|\overline{O}$ 4.3 Local trivializations of $\mathcal{C}|\overline{O}$ 4.4 Local coordinates on $\widetilde{\mathcal{C}}^{\times}|\widetilde{\overline{O}}$ 5 Local system $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ 6 Cleanness of the $\mathcal{D}$-module extension 1. Introduction 1.1. Overview In the present work we prove a basic result concerning a $\mathcal{D}$-module construction based on a subvariety $O$ of the affine flag variety of $\operatorname{SL}_{2}$ over $\operatorname{\mathbb{C}}$, isomorphic to $\operatorname{\mathbb{G}_{m}}^{2}$ and contained in a two dimensional Schubert cell. Namely, we construct a family of rank-one local systems in the category of arbitrarily complex twisted right $\mathcal{D}$-modules on $O$. The twist is w.r.t. the usual torsor over the affine flag variety. The family is naturally parametrized by two complex numbers describing the monodromies of the local system, henceforth called monodromy parameters, and its construction depends on a choice of coordinates in $O$.
Our result, Theorem 6.1, states a criterion for the $\mathcal{D}$-module extension of such a local system to the affine flag variety to be clean: certain linear combinations of the twist and monodromy parameters should be non-integral. Its proof consists of a straightforward computation in coordinates on an open cover, by four affine planes, of the Zariski closure of $O$. The result only depends on the embedding of $O$ into its Zariski closure, or more precisely on the embedding of the corresponding total spaces of the above mentioned torsor, and not on the embedding of $O$ into the flag variety itself. A reason we nevertheless use the affine flag variety to formulate our result is that we would like to indicate an implication for the representation of the affine Kac-Moody algebra $\widehat{\mathfrak{sl}_{2}}$ in the space of global sections of the twisted $\mathcal{D}$-module extensions under consideration. And in fact we expect that the structure of $\widehat{\mathfrak{sl}_{2}}$-representations of the kind of these global sections, which is at present poorly understood, can be elucidated further using $\mathcal{D}$-module techniques. 1.2. Context The subvariety $O$ can be compared with the subvariety $X_{s_{i}}\cap s_{i}X_{s_{i}}$, isomorphic to $\operatorname{\mathbb{G}_{m}}$, of the affine flag variety of any almost simple and simply connected linear algebraic group over $\operatorname{\mathbb{C}}$. Here $X_{s_{i}}$ denotes the (one dimensional) Schubert cell associated with the simple reflection $s_{i}$ of the affine Weyl group. In [Eic16b], [Eic20] we consider $\mathcal{D}$-module constructions based on a Kummer local system on $X_{s_{i}}\cap s_{i}X_{s_{i}}$, in the case of integral and arbitrary complex twist, respectively. The twisted $\mathcal{D}$-module extensions of [Eic20] analogous to the ones considered presently are called $\mathcal{R}_{?s_{i},i,\lambda,\mu}$ there. 
We will not recall the definition of these twisted $\mathcal{D}$-modules on the affine flag variety, but content ourselves with stating that they are constructed in a natural way from the choice $?\in\{!,*\}$ of $!$- or $*$-extension, a simple affine root $i$, an arbitrary complex affine weight $\lambda$ describing the twist, and the complex number $\mu$ describing the monodromy of the Kummer local system. The analogue of Theorem 6.1 states that the canonical morphism in the sense of [Ber] $\mathcal{R}_{!s_{i},i,\lambda,\mu}\rightarrow\mathcal{R}_{*s_{i},i,\lambda,\mu}$ is an isomorphism if and only if $\mu\notin\operatorname{\mathbb{Z}}$ and $\lambda(h_{i})-\mu\notin\operatorname{\mathbb{Z}}$. Here $h_{i}$ denotes the coroot. At the level of orbit stratifications, the situation is as follows. The group scheme whose $R$-valued points are $$\displaystyle\left\{\left.g=\begin{pmatrix}A_{0}+t^{2}a&t^{2}b\\ t^{2}c&D_{0}+t^{2}d\end{pmatrix}\;\right|\;A_{0},D_{0}\in R,\;a,b,c,d\in R[[t]],\;\det g=1\right\}\rtimes R^{\times}$$ (1.1) acts in a natural way on the affine flag variety of $\operatorname{SL}_{2}$ and plays the role the group scheme $I\cap{{}^{s_{i}}I}$ does in [Eic16b], [Eic20]. Here $R^{\times}$ acts by loop rotations, i.e. by the automorphism induced by $t\mapsto rt$, $I$ denotes an Iwahori group of the loop group, and ${}^{s_{i}}I$ its $s_{i}$-conjugate. The subvariety $O$ is an orbit for this action, and any other orbit of dimension less than or equal to two arises as an orbit for the action of a group scheme strictly containing the one defined by (1.1). In this sense, $O$ can be considered as the basic orbit for this group action. In the same way, the subvariety $X_{s_{i}}\cap s_{i}X_{s_{i}}$ can be considered as the basic orbit for the action of the group $I\cap{{}^{s_{i}}I}$. Acknowledgements We would like to thank B.L. Feigin and G. Felder for discussions related to this article. 2.
Notation When it is clear from the indication of its domain and codomain, we sometimes, for simplicity, omit from notation that we understand a restriction of the morphism under consideration. Any inclusion of a subvariety is denoted by $\operatorname{inc}$. The sheaf-theoretic direct image w.r.t. a morphism $f$ is denoted by $f_{\cdot}$. When $R$ is a commutative $\operatorname{\mathbb{C}}$-algebra, we denote by $R^{\times}$ its group of invertible elements. The restriction of a line bundle or torsor $\mathcal{F}$ to a subvariety $S$ is denoted by $\mathcal{F}|S$. 3. Setup Let $\operatorname{SL}_{2}((t))$ be the algebraic loop group of $\operatorname{SL}_{2}$ over $\operatorname{\mathbb{C}}$. Let $X=\operatorname{SL}_{2}((t))/I$ be the affine flag variety of $\operatorname{SL}_{2}$, an ind-projective ind-variety over $\operatorname{\mathbb{C}}$. Let $\widetilde{\pi}:\widetilde{X}=\operatorname{SL}_{2}((t))/I^{u}\rightarrow% \operatorname{SL}_{2}((t))/I$ be the canonical projection, a $T^{\circ}$-torsor. Here $I$ is the standard Iwahori group scheme of $\operatorname{SL}_{2}((t))$ with $R$-valued points $$\displaystyle I(R)=\left\{\left.g=\begin{pmatrix}a&b\\ tc&d\end{pmatrix}\;\right|\;a,b,c,d\in R[[t]],\;\det g=1\right\}\;,$$ (3.1) $I^{u}$ is the pro-unipotent radical of $I$, and $T^{\circ}\subseteq\operatorname{SL}_{2}$ is the torus of diagonal matrices. Let $\mathcal{C}$ be the level line bundle on $\operatorname{SL}_{2}((t))/I$, see e.g. [Zhu10]. It is normalized such that $\mathcal{C}|(P_{i}/I)\cong\mathcal{O}_{\operatorname{\mathbb{P}}^{1}}(1)$ for each $i\in\{0,1\}$, where $\operatorname{SL}_{2}((t))\supseteq P_{i}\supseteq I$ is the parabolic subgroup associated to $i$. We denote $\widetilde{\mathcal{C}}=\widetilde{\pi}^{*}\mathcal{C}$, a line bundle on $\widetilde{X}$. For any subvariety $S\subseteq X$ we denote $\widetilde{S}=\widetilde{\pi}^{-1}(S)\subseteq\widetilde{X}$. 
Let $(\cdot)^{\times}$ denote the $\operatorname{\mathbb{G}_{m}}$-torsor of invertible sections of a line bundle. We may, and will, in order to distinguish it from other $\operatorname{\mathbb{G}_{m}}$ appearing in the text, denote the structure group $\operatorname{\mathbb{G}_{m}}$ of $\mathcal{C}^{\times}$ and $\widetilde{\mathcal{C}}^{\times}$ by $\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$. Here ${}^{\operatorname{cent}}$ stands for “central” because of the possible interpretation of $\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$ in terms of the central extension of $\operatorname{SL}_{2}((t))$, which, however, will not play an important role in this article. Let $X_{w}$ be the $I$-orbit in $X$, i.e. the finite dimensional Schubert cell, associated to the element $w$ of the affine Weyl group $W$ of $\operatorname{SL}_{2}$ and $\overline{X_{w}}$ its Zariski closure in $X$, the corresponding Schubert variety. Let $s_{i}$, $i\in\{0,1\}$, denote the two simple reflections in $W$. We recall that the Demazure resolution $(P_{1}\times_{I}P_{0})/I\rightarrow\overline{X_{s_{1}s_{0}}}$ is an isomorphism, in particular $\overline{X_{s_{1}s_{0}}}$ is a $\operatorname{\mathbb{P}}^{1}$-bundle over $\operatorname{\mathbb{P}}^{1}$. 4. Local coordinates and trivializations 4.1. Local coordinates on $\overline{O}$ We set $O(R)=\begin{pmatrix}t&R^{\times}t^{-1}+R^{\times}\\ ~0&t^{-1}\end{pmatrix}I(R)$, then $O(R)$ are the $R$-valued points of a locally closed subvariety $O$ of $X$. Lemma 4.1. 
There are open subsets $\{U_{i}\}_{i=1,2,3,4}$ of $\overline{X_{s_{1}s_{0}}}$ such that the isomorphisms $O\xrightarrow{\cong}\operatorname{\mathbb{G}_{m}}^{2}$ given by $$\displaystyle\begin{pmatrix}t&a_{-1}t^{-1}+a_{0}\\ 0&t^{-1}\end{pmatrix}I\mapsto(a_{-1},a_{0})\;,\quad\left(\frac{1}{a_{-1}},\frac{a_{0}}{a_{-1}^{2}}\right)\;,\quad\left(a_{-1},\frac{1}{a_{0}}\right)\;,\quad\left(\frac{1}{a_{-1}},\frac{a_{-1}^{2}}{a_{0}}\right)$$ extend to isomorphisms $U_{i}\xrightarrow{\cong}\operatorname{\mathbb{A}}^{2}$ for $i=1,2,3,4$, respectively. We denote the inverses of these isomorphisms by $\Phi_{i}:\operatorname{\mathbb{A}}^{2}\xrightarrow{\cong}U_{i}$. Proof. That the isomorphisms extend follows from the fact that $$\displaystyle\begin{pmatrix}t&a_{-1}t^{-1}+a_{0}\\ 0&t^{-1}\end{pmatrix}I\to\begin{pmatrix}1&-\frac{a_{-1}^{2}}{a_{0}}t^{-1}\\ 0&1\end{pmatrix}I\qquad a_{-1}\to\infty,\;\frac{a_{0}}{a_{-1}^{2}}\;\text{fixed}$$ $$\displaystyle\begin{pmatrix}t&a_{-1}t^{-1}+a_{0}\\ 0&t^{-1}\end{pmatrix}I\to\begin{pmatrix}-a_{-1}&1\\ -1&0\end{pmatrix}I\qquad a_{-1}\;\text{fixed},\;a_{0}\to\infty$$ and that $$\displaystyle\begin{pmatrix}1&a_{-1}t^{-1}\\ 0&1\end{pmatrix}I\to\begin{pmatrix}0&t^{-1}\\ -t&0\end{pmatrix}I\qquad a_{-1}\to\infty$$ $$\displaystyle\begin{pmatrix}t&a_{-1}t^{-1}\\ 0&t^{-1}\end{pmatrix}I\to\begin{pmatrix}0&t^{-1}\\ -t&0\end{pmatrix}I\qquad a_{-1}\to\infty$$ $$\displaystyle\begin{pmatrix}t&a_{0}\\ 0&t^{-1}\end{pmatrix}I\to\begin{pmatrix}0&1\\ -1&0\end{pmatrix}I\qquad a_{0}\to\infty$$ $$\displaystyle\begin{pmatrix}a_{0}&1\\ -1&0\end{pmatrix}I\to I\qquad a_{0}\to\infty\;,$$ which is easily shown by multiplying from the right by a suitable element of $I$. ∎ For any $i$, we denote by $(x,y)$ the coordinates of the source of $\Phi_{i}$.
We have $U_{1}=X_{s_{1}s_{0}}$, hence $\overline{O}=\overline{X_{s_{1}s_{0}}}$, and $\overline{O}=\bigcup_{i=1}^{4}U_{i}$, i.e. $\{U_{i}\}_{i=1,2,3,4}$ is an open cover of $\overline{O}$. 4.2. Local trivializations of $\widetilde{\pi}|\overline{O}$ In this section we write down a trivialization of the $T^{\circ}$-torsor $\widetilde{\pi}|U_{i}$ for $i=1,2,3,4$. Lemma 4.2. Set $g=\begin{pmatrix}t&a_{-1}t^{-1}+a_{0}\\ 0&t^{-1}\end{pmatrix}$. The section $O\rightarrow\widetilde{O}\;,\;gI\mapsto gf_{i}I^{u}$, of $\widetilde{\pi}$ extends to a section $\sigma_{i}:U_{i}\rightarrow\widetilde{U_{i}}$, $i=1,2,3,4$, of $\widetilde{\pi}$, where $$\displaystyle f_{1}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\;,\quad f_{2}=\begin{pmatrix}a_{-1}&0\\ 0&\frac{1}{a_{-1}}\end{pmatrix}\;,\quad f_{3}=\begin{pmatrix}a_{0}&0\\ 0&\frac{1}{a_{0}}\end{pmatrix}\;,\quad f_{4}=\begin{pmatrix}\frac{a_{0}}{a_{-1}}&0\\ 0&\frac{a_{-1}}{a_{0}}\end{pmatrix}\;.$$ Proof. For $i=1$ the statement is clear. In the remaining cases the proof is similar to the proof of Lemma 4.1, but now we instead multiply by suitable elements of $I^{u}$ from the right. ∎ The section $\sigma_{i}$ defines a trivialization $\tau_{i,T^{\circ}}:U_{i}\times T^{\circ}\xrightarrow{\cong}\widetilde{U_{i}}$ of the $T^{\circ}$-torsor $\widetilde{\pi}|U_{i}$. 4.3. Local trivializations of $\mathcal{C}|\overline{O}$ We first compute the transition functions w.r.t. local trivializations of $\mathcal{C}^{-1}|\overline{O}$, where $\mathcal{C}^{-1}$ denotes the line bundle inverse (dual) to $\mathcal{C}$. Let $\tau_{i,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}^{(-1)}:U_{i}\times\operatorname{\mathbb{A}}^{1}\xrightarrow{\cong}\mathcal{C}^{-1}|U_{i}$ be a trivialization of $\mathcal{C}^{-1}|U_{i}$.
The corresponding transition functions $t_{ij}^{(-1)}:U_{i}\cap U_{j}\rightarrow\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$ are expressed through the $\tau_{i,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}^{(-1)}$ by $$\displaystyle\left(\tau_{i,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}^{(-1)}\right)^{-1}\circ\tau_{j,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}^{(-1)}:(U_{i}\cap U_{j})\times\operatorname{\mathbb{A}}^{1}\xrightarrow{\cong}(U_{i}\cap U_{j})\times\operatorname{\mathbb{A}}^{1}\;,\;(x,v)\mapsto\left(x,t_{ij}^{(-1)}(x)v\right)\;.$$ Of course, the $t_{ij}^{(-1)}$ can be changed by a multiplicative constant. Lemma 4.3. We have $$\displaystyle t_{12}^{(-1)}=\frac{1}{x^{3}}\circ\Phi_{1}^{-1}|U_{1}\cap U_{2}\;,\qquad t_{13}^{(-1)}=\frac{1}{y}\circ\Phi_{1}^{-1}|U_{1}\cap U_{3}\;,\qquad t_{14}^{(-1)}=\frac{1}{xy}\circ\Phi_{1}^{-1}|U_{1}\cap U_{4}\;.$$ Proof. We have an isomorphism of line bundles $\mathcal{C}^{-1}|\overline{X_{s_{1}s_{0}}}\cong\Omega_{\overline{X_{s_{1}s_{0}}}}(\overline{X_{s_{1}}}+\overline{X_{s_{0}}})$, see e.g. [Zhu10], where the right-hand side is the sheaf of top-degree forms on $\overline{X_{s_{1}s_{0}}}$ with possible simple poles along the Schubert divisors. This description is the reason why we consider $\mathcal{C}^{-1}$ at all. Let $\omega_{i}\in\Gamma(U_{i},\Omega_{\overline{X_{s_{1}s_{0}}}}(\overline{X_{s_{1}}}+\overline{X_{s_{0}}}))$ be a nowhere vanishing section (unique up to a nonzero constant). We have $t_{ij}^{(-1)}=\frac{\omega_{j}}{\omega_{i}}\in\Gamma(U_{i}\cap U_{j},\mathcal{O})^{\times}$.
We have a canonical isomorphism $$\displaystyle\Gamma(U_{i},\Omega_{\overline{X_{s_{1}s_{0}}}}(\overline{X_{s_{1% }}}+\overline{X_{s_{0}}}))=\begin{cases}\Gamma(U_{1},\Omega_{U_{1}})&i=1\\ \Gamma(U_{2},\Omega_{U_{2}}(\Phi_{2}(\{0\}\times\operatorname{\mathbb{A}}^{1})% ))&i=2\\ \Gamma(U_{3},\Omega_{U_{3}}(\Phi_{3}(\operatorname{\mathbb{A}}^{1}\times\{0\})% ))&i=3\\ \Gamma(U_{4},\Omega_{U_{4}}(\Phi_{4}(\operatorname{\mathbb{A}}^{1}\times\{0\}% \cup\{0\}\times\operatorname{\mathbb{A}}^{1})))&i=4\;.\end{cases}$$ It is now easy to write down explicit expressions for the $\omega_{i}$, from which we then find the expressions for the $t_{ij}^{(-1)}$ given in the lemma. ∎ The trivialization $\tau_{i,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}^{(-1)}$ induces a trivialization $\tau_{i,\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}:U_{i}\times% \operatorname{\mathbb{A}}^{1}\xrightarrow{\cong}\mathcal{C}~|U_{i}~$ of $\mathcal{C}|U_{i}$ and the corresponding transition functions are $t_{ij}=\frac{1}{t_{ij}^{(-1)}}$. 4.4. Local coordinates on $\widetilde{\mathcal{C}}^{\times}|\widetilde{\overline{O}}$ Combining the local coordinates and trivializations of the previous sections we now introduce local coordinates on the total space of $\widetilde{\mathcal{C}}^{\times}|\widetilde{\overline{O}}$. We define $$\displaystyle\Phi_{i,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}=\widetilde{\pi}^{*}\tau_{i,\operatorname{\mathbb{G}_{m}}^{% \operatorname{cent}}}~\circ(\tau_{i,T^{\circ}}\times\operatorname{id})\circ(% \Phi_{i}\times\operatorname{id}):\operatorname{\mathbb{A}}^{2}\times T^{\circ}% ~\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}\xrightarrow{\cong}~% \widetilde{C}^{\times}|\widetilde{U_{i}}$$ for $i=1,2,3,4$. 
This composition is $T^{\circ}\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$-equivariant for the obvious $T^{\circ}~\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$-action on $\operatorname{\mathbb{A}}^{2}\times T^{\circ}~\times\operatorname{\mathbb{G}_{% m}}^{\operatorname{cent}}$ as all three composition factors are so. Using this we define $$\displaystyle\Psi_{ij,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}=\Phi_{i,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}% ^{-1}~\circ\Phi_{j,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{cent% }}}:$$ $$\displaystyle\Phi_{j}^{-1}(U_{i}\cap U_{j})\times T^{\circ}\times\operatorname% {\mathbb{G}_{m}}^{\operatorname{cent}}\xrightarrow{\cong}~\Phi_{i}^{-1}(U_{i}% \cap U_{j})\times T^{\circ}~\times\operatorname{\mathbb{G}_{m}}^{\operatorname% {cent}}\;.$$ Corollary 4.1. The isomorphism $$\displaystyle\Psi_{12,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}:$$ $$\displaystyle\operatorname{\mathbb{G}_{m}}\times\operatorname{\mathbb{A}}^{1}% \times T^{\circ}\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}% \xrightarrow{\cong}\operatorname{\mathbb{G}_{m}}\times\operatorname{\mathbb{A}% }^{1}\times T^{\circ}~\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$$ $$\displaystyle\Psi_{13,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}:$$ $$\displaystyle\operatorname{\mathbb{A}}^{1}\times\operatorname{\mathbb{G}_{m}}% \times T^{\circ}\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}% \xrightarrow{\cong}\operatorname{\mathbb{A}}^{1}\times\operatorname{\mathbb{G}% _{m}}\times T^{\circ}\times\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}$$ $$\displaystyle\Psi_{14,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}:$$ $$\displaystyle\operatorname{\mathbb{G}_{m}}^{2}\times T^{\circ}\times% \operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}\xrightarrow{\cong}~% \operatorname{\mathbb{G}_{m}}^{2}\times 
T^{\circ}\times\operatorname{\mathbb{G% }_{m}}^{\operatorname{cent}}$$ is given by $$\displaystyle\left(x,y,\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix},v\right)$$ $$\displaystyle\mapsto\left(\frac{1}{x},\frac{y}{x^{2}},\begin{pmatrix}\frac{a}{% x}&0\\ 0&\frac{x}{a}\end{pmatrix},\frac{1}{x^{3}}v\right)$$ $$\displaystyle\mapsto\left(x,\frac{1}{y},\begin{pmatrix}~\frac{a}{y}&0\\ 0&\frac{y}{a}\end{pmatrix},\frac{1}{y}v\right)$$ $$\displaystyle\mapsto\left(\frac{1}{x},\frac{1}{x^{2}y},\begin{pmatrix}~\frac{a% }{xy}&0\\ 0&\frac{xy}{a}\end{pmatrix},\frac{1}{x^{3}y}v\right)$$ respectively. Proof. This is a direct computation using only Lemma 4.1, Lemma 4.2, and Lemma 4.3. ∎ 5. Local system $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ The isomorphism $\Phi_{1,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{cent}}}$ restricts to an isomorphism $\operatorname{\mathbb{G}_{m}}^{2}\times T^{\circ}\times\operatorname{\mathbb{G% }_{m}}^{\operatorname{cent}}\xrightarrow{\cong}~\widetilde{\mathcal{C}}^{% \times}~|\widetilde{O}$ denoted by the same symbol. 
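As an aside, the transition functions of Lemma 4.3, which enter the computation behind Corollary 4.1, can be verified symbolically from the differential-form description in the proof of Lemma 4.3. The sketch below (sympy; chart labels as in Lemma 4.1, pole divisors as in the list of cases above) checks the ratios $\omega_{i}/\omega_{1}$ against $t_{1i}^{(-1)}$; agreement is only up to the multiplicative constants allowed in the text.

```python
import sympy as sp

x, y = sp.symbols('x y', nonzero=True)

def jac_det(u, v):
    """Determinant of the Jacobian of (u(x,y), v(x,y)):  du ^ dv = J dx ^ dy."""
    return sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                      [sp.diff(v, x), sp.diff(v, y)]]).det()

# Chart-i coordinates expressed in the chart-1 coordinates (x, y) = (a_{-1}, a_0)
# (Lemma 4.1), together with the product of local equations of the pole divisor
# of omega_i in chart i (the cases in the proof of Lemma 4.3).
charts = {
    2: ((1/x, y/x**2), 1/x),             # simple pole along {x_2 = 0}
    3: ((x, 1/y), 1/y),                  # simple pole along {y_3 = 0}
    4: ((1/x, x**2/y), (1/x)*(x**2/y)),  # simple poles along both coordinate axes
}
expected = {2: 1/x**3, 3: 1/y, 4: 1/(x*y)}  # t_{1i}^(-1) from Lemma 4.3

for i, ((u, v), divisor) in charts.items():
    # omega_i / omega_1 in chart-1 coordinates: Jacobian factor over the divisor.
    ratio = sp.simplify(jac_det(u, v) / divisor)
    assert sp.simplify(ratio / expected[i]).is_constant()
```

The computed ratios differ from $t_{12}^{(-1)}$ and $t_{13}^{(-1)}$ by the sign $-1$, which is exactly the freedom of a multiplicative constant noted after Lemma 4.3.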
For $\Lambda,\kappa,\mu_{-1},\mu_{0}\in\operatorname{\mathbb{C}}$ we define the local system $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ of rank one in the category of right $\mathcal{D}$-modules on $\widetilde{\mathcal{C}}^{\times}|\widetilde{O}$ by $$\displaystyle\Phi_{1,T^{\circ},\operatorname{\mathbb{G}_{m}}^{\operatorname{% cent}}}^{*}\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}=\Omega_{\Lambda,% \kappa,\mu_{-1},\mu_{0}}=\Omega^{(\mu_{-1})}_{\operatorname{\mathbb{G}_{m}}}% \boxtimes\Omega^{(\mu_{0})}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(% \Lambda)}_{T^{\circ}}\boxtimes\Omega^{(\kappa)}_{\operatorname{\mathbb{G}_{m}}% ^{\operatorname{cent}}}\;.$$ Here we employed the rank-one local system $\Omega_{A}^{(\lambda)}$ in the category of right $\mathcal{D}$-modules on $A$ defined by $\Omega_{A}^{(\lambda)}=\mathcal{D}_{A}/(\xi_{v}+\lambda(v),v\in\mathfrak{a})% \mathcal{D}_{A}$, where $A$ is any algebraic torus, $\mathfrak{a}$ is its Lie algebra, $\xi_{v}$ is the translation vector field on $A$ given by $v\in\mathfrak{a}$, and $\lambda:\mathfrak{a}\rightarrow\operatorname{\mathbb{C}}$ is a linear map. $\mathcal{D}_{A}$ is the sheaf of differential operators on $A$ and $(\xi_{v}+\lambda(v),v\in\mathfrak{a})\mathcal{D}_{A}$ is the right ideal of it generated by the indicated relations. In the case $A=\operatorname{\mathbb{G}_{m}}$ we identify $\lambda$ canonically with a complex number. In the case of $\Omega^{(\Lambda)}_{T^{\circ}}$ we moreover use the isomorphism $\operatorname{\mathbb{G}_{m}}\xrightarrow{\cong}T^{\circ}$ given by $a\mapsto\begin{pmatrix}a&0\\ 0&a^{-1}\end{pmatrix}$ in order to be able to consider $\Lambda$ as a complex number. Our choice of the parameter name $\mu_{-1}$ and $\mu_{0}$ is due to the fact that the corresponding coordinates in $O$, given by $\Phi_{1}$, are the coefficients of $t^{-1}$ and $t^{0}$. 
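To make the role of integrality of the parameters concrete (it is exactly what the criteria of section 6 turn on), consider the one-variable case: in the function (left-module) picture, a flat section of the connection corresponding to $\Omega^{(\mu)}_{\operatorname{\mathbb{G}_{m}}}$ is $x^{-\mu}$, with monodromy $e^{-2\pi i\mu}$ around $x=0$, trivial exactly for $\mu\in\operatorname{\mathbb{Z}}$. A small sympy sketch (the sign convention for $\mu$ is our assumption, made only for illustration):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
mu = sp.symbols('mu')

# Flat section of the connection attached to Omega^(mu) on G_m:
# it is annihilated by x*d/dx + mu.
f = x**(-mu)
assert sp.simplify(x*sp.diff(f, x) + mu*f) == 0

# Monodromy around x = 0: f(e^{2 pi i} x) = e^{-2 pi i mu} f(x),
# which is trivial precisely when mu is an integer.
monodromy = sp.exp(-2*sp.pi*sp.I*mu)
assert sp.simplify(monodromy.subs(mu, 3)) == 1
```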
Of course, the local system $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ can equivalently be viewed as a sheaf of modules on $O$ for the $(\Lambda,\kappa)$-twisted differential operators on $O$, but in this article we only use this for terminological convenience in the title and some other places and not in computations. It is thus natural to refer to $\Lambda$ and $\kappa$ as the twist parameters. 6. Cleanness of the $\mathcal{D}$-module extension We recall the notion of cleanness of a $\mathcal{D}$-module extension [Ber]. Let $\mathcal{M}$ be a holonomic right $\mathcal{D}$-module on a smooth variety $X$ and $\iota:X\hookrightarrow Y$ a locally closed affine embedding into a smooth variety $Y$. There is a canonical morphism $\operatorname{can}_{\iota}:\iota_{!}\mathcal{M}\rightarrow\iota_{*}\mathcal{M}$ of holonomic right $\mathcal{D}$-modules on $Y$ that is the identity when restricted to $X$. The $\mathcal{D}$-module extension of $\mathcal{M}$ w.r.t. $\iota$ is called clean if $\operatorname{can}_{\iota}$ is an isomorphism. The following remark can be deduced from the construction of $\operatorname{can}_{\iota}$. Remark 6.1. In case $\iota$ is an open affine embedding and $\{U_{i}\}_{i}$ is an open cover of $Y$ we have $\operatorname{can}_{\iota}|U_{i}=\operatorname{can}_{\iota_{i}}$ for each $i$, where $\iota_{i}:\iota^{-1}(U_{i})\hookrightarrow U_{i}$ is the restriction of $\iota$. Thus $\operatorname{can}_{\iota}$ is an isomorphism if and only if $\operatorname{can}_{\iota_{i}}$ is an isomorphism for each $i$. This means that the $\mathcal{D}$-module extension of $\mathcal{M}$ w.r.t. $\iota$ is clean if and only if the $\mathcal{D}$-module extension of the restriction $\mathcal{M}|\iota_{i}^{-1}(U_{i})$ w.r.t. $\iota_{i}$ is clean for each $i$. Lemma 6.1. Let $\mu_{1},\mu_{2}\in\operatorname{\mathbb{C}}$. 
The $\mathcal{D}$-module extension of $\Omega^{(\mu_{1})}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(\mu_{2})}_% {\operatorname{\mathbb{G}_{m}}}$ w.r.t. $\operatorname{inc}:\operatorname{\mathbb{G}_{m}}^{2}\hookrightarrow% \operatorname{\mathbb{A}}^{2}$ is clean if and only if $\mu_{1}\notin\operatorname{\mathbb{Z}}$ and $\mu_{2}\notin\operatorname{\mathbb{Z}}$. Proof. We first argue that the extension of $\Omega^{(\epsilon\mu_{1})}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(% \epsilon\mu_{2})}_{\operatorname{\mathbb{G}_{m}}}$ w.r.t. $\operatorname{inc}$ is clean for $\epsilon\in\{\pm 1\}$ if and only if $\operatorname{inc}_{\cdot}(\Omega^{(\epsilon\mu_{1})}_{\operatorname{\mathbb{G% }_{m}}}\boxtimes\Omega^{(\epsilon\mu_{2})}_{\operatorname{\mathbb{G}_{m}}})$ is a simple $\mathcal{D}$-module on $\operatorname{\mathbb{A}}^{2}$ for $\epsilon\in\{\pm 1\}$. We know [Ber] that the image of $\operatorname{can}_{\operatorname{inc}}:\operatorname{inc}_{!}(\Omega^{(\mu_{1% })}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(\mu_{2})}_{\operatorname{% \mathbb{G}_{m}}})\rightarrow\operatorname{inc}_{\cdot}(\Omega^{(\mu_{1})}_{% \operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(\mu_{2})}_{\operatorname{% \mathbb{G}_{m}}})$ is a simple $\mathcal{D}$-module on $\operatorname{\mathbb{A}}^{2}$ and thus obtain one implication. 
For the other implication we use that if $\operatorname{inc}_{\cdot}(\Omega^{(\mu_{1})}_{\operatorname{\mathbb{G}_{m}}}% \boxtimes\Omega^{(\mu_{2})}_{\operatorname{\mathbb{G}_{m}}})$ is simple, then $\operatorname{can}_{\operatorname{inc}}$ surjects and hence $\operatorname{\mathbb{D}}\operatorname{can}_{\operatorname{inc}}$, which can be identified with $\operatorname{can}_{\operatorname{inc}}:\operatorname{inc}_{!}(\Omega^{(-\mu_{% 1})}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(-\mu_{2})}_{% \operatorname{\mathbb{G}_{m}}})\rightarrow\operatorname{inc}_{\cdot}(\Omega^{(% -\mu_{1})}_{\operatorname{\mathbb{G}_{m}}}\boxtimes\Omega^{(-\mu_{2})}_{% \operatorname{\mathbb{G}_{m}}})$, injects. Here $\operatorname{\mathbb{D}}$ denotes the holonomic duality functor on $\operatorname{\mathbb{A}}^{2}$. Finally, it is easy to see that $\operatorname{inc}_{\cdot}(\Omega^{(\mu_{1})}_{\operatorname{\mathbb{G}_{m}}}% \boxtimes\Omega^{(\mu_{2})}_{\operatorname{\mathbb{G}_{m}}})$ is a simple $\mathcal{D}$-module on $\operatorname{\mathbb{A}}^{2}$ if and only if $\mu_{1}\notin\operatorname{\mathbb{Z}}$ and $\mu_{2}\notin\operatorname{\mathbb{Z}}$. Indeed, this follows from elementary arguments using the explicit action of $x,y,\partial_{x},\partial_{y}$ on the elements of the natural $\operatorname{\mathbb{C}}$-basis of $\Gamma(\operatorname{\mathbb{G}_{m}},\Omega_{\operatorname{\mathbb{G}_{m}}}^{(% \mu_{1})})\otimes_{\operatorname{\mathbb{C}}}\Gamma(\operatorname{\mathbb{G}_{% m}},\Omega^{(\mu_{2})}_{\operatorname{\mathbb{G}_{m}}})$. ∎ The following theorem is the main result of this work. Theorem 6.1. Let $\Lambda,\kappa,\mu_{-1},\mu_{0}\in\operatorname{\mathbb{C}}$. The $\mathcal{D}$-module extension of $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ w.r.t. 
the inclusion $\operatorname{inc}:\widetilde{\mathcal{C}}^{\times}|\widetilde{O}% \hookrightarrow\widetilde{\mathcal{C}}^{\times}$ is clean if and only if $\mu_{-1}\notin\operatorname{\mathbb{Z}}$ and $\mu_{0}\notin\operatorname{\mathbb{Z}}$ and $\mu_{-1}+2\mu_{0}+\Lambda+3\kappa\notin\operatorname{\mathbb{Z}}$ and $\mu_{0}+\Lambda+\kappa\notin\operatorname{\mathbb{Z}}$. Proof. By Remark 6.1, as $\{\widetilde{U}_{i}\}_{i=1,2,3,4}$ is an open cover of $\widetilde{\overline{O}}$, the extension of $\mathcal{L}_{\Lambda,\kappa,\mu_{-1},\mu_{0}}$ w.r.t. the locally closed affine inclusion $\widetilde{\mathcal{C}}^{\times}~|\widetilde{O}\hookrightarrow\widetilde{% \mathcal{C}}^{\times}$ is clean if and only if it is clean w.r.t. the open affine inclusion $\widetilde{\mathcal{C}}^{\times}|\widetilde{O}\hookrightarrow\widetilde{% \mathcal{C}}^{\times}|\widetilde{U_{i}}$ for each $i=1,2,3,4$. We have a commutative diagram
Density phase separation and order-disorder transition in a collection of polar self-propelled particles Sudipta Pattanayak pattanayak.sudipta@gmail.com S N Bose National Centre for Basic Sciences, J D Block, Sector III, Salt Lake City, Kolkata 700106 Shradha Mishra smishra.phy@itbhu.ac.in Department of Physics, Indian Institute of Technology (BHU), Varanasi, India 221005 (December 6, 2020) Abstract We study the order-disorder transition in a collection of polar self-propelled particles interacting through a distance-dependent short-range alignment interaction. A distance-dependent interaction parameter $a_{0}$ is introduced such that on decreasing $a_{0}$ the interaction decays faster with distance $d$, and for $a_{0}=1.0$ the model reduces to the Vicsek type. For all $a_{0}>0.0$ the system shows a transition from a disordered to a long-range ordered state. We find another phase transition, from a phase-separated to a non-phase-separated state, with decreasing $a_{0}$; at the same time the order-disorder transition changes from discontinuous to continuous. Hence density phase separation plays an important role in determining the nature of the order-disorder transition. We also calculate the two-point density structure factor using coarse-grained hydrodynamic equations of motion, with a density-dependent alignment term added to the equations introduced by Toner and Tu tonertu . The density structure factor shows a divergence at a critical wave-vector $q_{c}$, which decreases with a decreasing density-dependent alignment term. The alignment term in the coarse-grained equations plays the same role as the distance-dependent parameter $a_{0}$ in the microscopic simulation. Our results can be tested in many biological systems where particles tend to interact most strongly with their closest neighbours.
I Introduction Flocking bacterialcolonies ; insectswarms ; birdflocks ; fishschools , the collective, coherent motion of a large number of organisms, is one of the most familiar and ubiquitous biological phenomena. Over the last decade there has been increasing interest in the rich behaviour of these systems, which are far from equilibrium sriramrev3 ; sriramrev2 ; sriramrev1 . One of the key features of these flocks is that the system shows a transition from a disordered state to a long-range ordered state with the variation of system parameters such as density and noise strength benjacob ; chate2007 ; chate2008 . The nature of this transition has remained a matter of debate ever since the introduction of a minimal model by T. Vicsek et al. in 1995 vicsek1995 , also called the Vicsek model (VM). In this model a collection of point particles move along their heading directions and align with neighbours lying within a small metric distance. Many studies employ another model, called the topological-distance model, where particles interact through topological distance chatetopo ; chatetopo1 . Vicsek's initial study of the metric-distance model found the transition to be continuous vicsek1995 , but later studies chate2007 ; chate2008 found it to be discontinuous. Similarly, for the topological-distance model, the study of biplab claims a discontinuous transition, whereas chatetopo finds it to be continuous. Hence the nature of the transition in a polar flock remains an open question. In the present study we ask: what causes the nature of the transition to change from discontinuous to continuous? In previous studies of metric- as well as topological-distance models, particles interact with the same interaction strength within the interaction metric or topological distance. But in many biological systems particles tend to interact more strongly with their closest neighbours.
In a recent study andrea using the maximum entropy principle, the functional dependence of the interaction on distance was found to decay exponentially over a range of a few individuals. In our model we introduce a distance-dependent interaction parameter $a_{0}$, such that the interaction decays with distance within a small metric distance. For $a_{0}=1.0$ the interaction is of the Vicsek type, and as we decrease $a_{0}$ the strength of the interaction decays faster with distance. For all non-zero interaction parameters $a_{0}>0.0$ the system shows a disordered state at small density and high noise strength, and a long-range ordered state at high density and low noise strength. We also find another phase transition, from a phase-separated to a non-phase-separated state, as we decrease $a_{0}$. The order-disorder transition is first order for the phase-separated state and gradually becomes continuous as we approach the non-phase-separated state. In the rest of the article, in section II we introduce the microscopic rule-based model with a distance-dependent interaction in the Vicsek model and then write the phenomenological hydrodynamic equations of motion for a collection of polar self-propelled particles. Numerical details of the microscopic simulation are given in section III. Section IV gives the results of the numerical simulation and the linearised calculation. Finally, in section V we discuss our main results and future prospects of our study. The detailed calculation of the linearised structure factor is given at the end, in appendix A. II Model We study a collection of polar self-propelled particles on a two-dimensional substrate. These particles interact through a short-range alignment interaction which decays with distance inside a small interaction radius. We first describe a rule-based distance-dependent model for such a system, which is similar to the model introduced by Vicsek but with an additional distance-dependent interaction.
We then write the coupled hydrodynamic equations of motion for density and velocity derived from the microscopic model. Microscopic Model: Each particle in the collection is defined by its position ${\bf r}_{i}(t)$ and orientation $\theta_{i}(t)$, or unit direction vector ${\bf n}_{i}(t)=[\cos\theta_{i}(t),\sin\theta_{i}(t)]$, on a two-dimensional substrate. The dynamics of a particle is given by two updates: one for the position, which accounts for its self-propulsion, and one for the orientation, which accounts for the interaction between particles. Self-propulsion is introduced as motion along the orientation direction with a fixed step size. Hence the position update of a particle is $${\bf r}_{i}(t+1)={\bf r}_{i}(t)+v_{0}{\bf n}_{i}(t)$$ (1) and the orientation update, with a distance-dependent short-range alignment interaction, is $${\bf n}_{i}(t+1)=\frac{\sum_{j\in R_{0}}{\bf n}_{j}(t)a_{0}^{d_{ij}}+N_{i}(t)\eta{\bf\zeta}_{i}}{W_{i}(t)}$$ (2) where the sum is over all particles $j$ inside the interaction radius, i.e. with $d_{ij}=|{\bf r}_{j}(t)-{\bf r}_{i}(t)|<1$, $N_{i}(t)$ is the number of particles within the unit interaction radius, $W_{i}(t)$ is the normalisation factor, which makes ${\bf n}_{i}(t+1)$ again a unit vector, $\eta$ is the noise strength, which we vary between zero and $1$, and ${\bf\zeta}_{i}(t)$ is a random unit vector. Phenomenological hydrodynamic equations of motion: We also write the phenomenological hydrodynamic equations of motion, which are either derived from the above rule-based model or written down from the symmetry of the system. The two hydrodynamic variables in our system are the density, because the total number of particles is conserved, and the velocity, which is a broken-symmetry variable in the ordered state.
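As a concrete illustration, one sweep of the two updates Eqs. 1 and 2 can be sketched in NumPy as below. This is a minimal sketch under our own illustrative choices (box size $L$, parameter values, synchronous rather than sequential updating, and self-inclusion in the neighbour count); the paper's actual simulation protocol is described in sections III and IV.

```python
import numpy as np

def vicsek_step(pos, theta, v0=0.5, a0=0.5, eta=0.3, L=32.0, rng=None):
    """One synchronous sweep of Eqs. (1)-(2): alignment weighted by a0**d_ij
    inside the unit interaction radius, vectorial noise, then ballistic motion.
    Periodic box of linear size L (an illustrative choice)."""
    rng = rng if rng is not None else np.random.default_rng()
    N = len(theta)
    n = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # heading vectors n_i(t)
    # Pairwise separations with periodic (minimum-image) boundary conditions.
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.linalg.norm(d, axis=2)
    mask = dist < 1.0                                      # unit interaction radius
    w = np.where(mask, a0 ** dist, 0.0)                    # interaction strength a0**d_ij
    aligned = w @ n                                        # sum_j n_j(t) a0**d_ij
    Ni = mask.sum(axis=1)                                  # N_i(t), neighbours incl. self
    zeta = rng.uniform(0.0, 2.0 * np.pi, N)                # random unit vectors zeta_i
    noise = Ni[:, None] * eta * np.stack([np.cos(zeta), np.sin(zeta)], axis=1)
    total = aligned + noise
    n_new = total / np.linalg.norm(total, axis=1, keepdims=True)  # W_i(t) normalisation
    theta_new = np.arctan2(n_new[:, 1], n_new[:, 0])
    pos_new = (pos + v0 * n) % L                           # Eq. (1), using n_i(t)
    return pos_new, theta_new
```

For $a_{0}=1.0$ this reduces to a Vicsek-type update, and in the noise-free limit a fully aligned flock stays aligned under it.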
They are defined by $$\rho({\bf r},t)=\sum_{i=1}^{N}\delta({\bf r}-{\bf r}_{i})$$ (3) and $${\bf V}({\bf r},t)=\dfrac{\sum_{i=1}^{N}{\bf n}_{i}(t)\delta({\bf r}-{\bf r}_{i})}{\rho({\bf r},t)}\;.$$ (4) The coupled hydrodynamic equation of motion for the density is the continuity equation $$\partial_{t}\rho+v_{0}{\nabla}\cdot(\rho{\bf V})=0$$ (5) and for the velocity $$\displaystyle\partial_{t}{{\bf V}}=\alpha(\rho){\bf V}-\beta|{\bf V}|^{2}{\bf V}-\frac{v_{1}}{2\rho_{0}}{\nabla}\rho+D_{V}\nabla^{2}{\bf V}-\lambda_{1}({\bf V}\cdot{\nabla}){\bf V}-\lambda_{2}({\bf\nabla}\cdot{\bf V}){\bf V}-\lambda_{3}\nabla(|{\bf V}|^{2})+{\bf f_{V}}$$ (6) These equations are similar to the equations introduced by Toner and Tu for polar self-propelled flocks tonertu . In the density equation Eq. 5, $v_{0}$ is the self-propulsion speed of the particles. The first two terms in the velocity equation Eq. 6 constitute a mean-field order-disorder term. In general, when derived from a metric-distance model such as the Vicsek model, both $\alpha(\rho)$ and $\beta$ are functions of density, such that $\alpha(\rho)$ changes sign at some critical density $\rho_{c}$. Hence the homogeneous equations have a disordered state $V_{0}=0$ for $\rho_{0}<\rho_{c}$ and an ordered state $V_{0}=\sqrt{\frac{\alpha(\rho_{0})}{\beta}}$ for $\rho_{0}>\rho_{c}$, where $\rho_{0}$ is the mean density of the system. The distance-dependent alignment interaction, which is in general non-linear, introduces a non-linear density dependence of $\alpha(\rho)$; hence we keep the density dependence of $\alpha(\rho)$ general. $v_{1}$, $D_{V}$ and the $\lambda$'s are constants, the ${\nabla}\rho$ term is a pressure term and $D_{V}$ is a viscosity. The $\lambda$'s are convective non-linearities, typically present in fluid flow and present here because our velocity field can also flow. The presence of all three non-linearities reflects the absence of Galilean invariance in the polar flock system.
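The homogeneous mean-field states quoted above follow from Eq. 6 with all gradients and the noise dropped, leaving $\alpha(\rho_{0})V-\beta V^{3}=0$. A quick symbolic check (sympy; symbol names are ours):

```python
import sympy as sp

V, alpha, beta = sp.symbols('V alpha beta', positive=True)

# Homogeneous, noise-free Eq. (6): alpha(rho_0) V - beta V^3 = 0.
# With the assumption V > 0, sympy discards the disordered root V = 0
# and the unphysical negative root.
sols = sp.solve(alpha*V - beta*V**3, V)

assert len(sols) == 1
assert sp.simplify(sols[0]**2 - alpha/beta) == 0   # V_0 = sqrt(alpha/beta)
```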
The ${\bf f}_{V}$ term is a random Gaussian white noise, with zero mean and variance $$<f_{V_{i}}({\bf r},t)f_{V_{j}}({\bf r}^{\prime},t^{\prime})>=2\Delta_{0}\delta_{ij}\delta^{d}({\bf r}-{\bf r}^{\prime})\delta(t-t^{\prime})$$ (7) where $\Delta_{0}$ is a constant and $i,j=1,2$ denote Cartesian components. III Numerical details We numerically study the microscopic model introduced in Eqs. (1) and (2) for different values of the distance-dependent interaction parameter $a_{0}$. The form of the interaction potential is shown in Fig. 1 for different $a_{0}$ as a function of distance. For $a_{0}=1.0$ all the particles within the interaction radius interact with the same strength, but as we decrease $a_{0}$ the effective range of interaction decreases. We vary $a_{0}$ from $1.0$ down to the small value $0.01$; $a_{0}=0.0$ corresponds to no alignment interaction. We fix the particle speed at $v_{0}=0.5$. We start with an initially homogeneous density and random orientations of the particles on a two-dimensional lattice of size $L\times L$ with mean density $\rho_{0}$ and periodic boundary conditions. The system shows a phase transition from a disordered to a long-range-ordered state as the noise strength $\eta$ is varied. The ordered state is characterised by the global velocity, defined by $$V=|\frac{1}{N}\sum_{i=1}^{N}{\bf{n}}_{i}(t)|.$$ (8) A typical plot of $V$ vs. $\eta$ is shown in the cartoon in the inset of Fig. 6. We study our model in three different regions of the phase diagram: $I$ (disordered), $II$ (close to the transition, on the ordered side) and $III$ (deep in the ordered state). First we study the model in region $II$ (close to the transition). Since the effective range of interaction, as shown in Fig. 1, decreases with $a_{0}$, the critical noise strength at fixed density also changes with $a_{0}$.
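The order parameter of Eq. (8) is computed directly from the stored orientations; the helper below is an illustrative sketch with our own naming.

```python
import numpy as np

def global_velocity(theta):
    """Global order parameter V = |(1/N) sum_i n_i| of Eq. (8):
    close to 1 for an aligned flock, close to 0 for random headings."""
    n = np.column_stack([np.cos(theta), np.sin(theta)])
    return np.linalg.norm(n.mean(axis=0))
```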
For each $a_{0}=1.0,0.8,0.7,0.6,0.5,0.4,0.3,0.2,0.15,0.1,0.01$ we first estimate the critical $\eta_{c}(a_{0})$ and then choose a value $\eta(a_{0})=\eta_{c}-\delta\eta(a_{0})$ such that the system has approximately the same global velocity in the steady state. The list of values of $\eta(a_{0})$ used in region $II$ of the phase diagram is given in Table 1. In regions $I$ and $III$ we choose noise strengths $\eta=0.7$ and $0.1$ respectively, such that the system is in the disordered/ordered state for all $a_{0}$. IV Results We first study our model in region $II$ of the phase diagram. Particles are chosen one by one and their position and orientation are updated sequentially using Eqs. (1) and (2). The positions and orientations of the particles are stored in the steady state, which we verify through the consistency of the instantaneous global order parameter. Typical snapshots of the particle positions for four different values of $a_{0}=1.0,0.7,0.5,0.3$ in the steady state, when there is a clear band, are shown in Fig. 2 (upper panel). One of the main characteristics of a polar flock is the formation of bands in the ordered state, also obtained in the previous studies of Chaté et al. chate2007 ; chate2008 . Similar bands are found in other microscopic models biplab as well as in coarse-grained studies shradhapre . Our model reduces to Vicsek's type for $a_{0}=1.0$, where we also find clear bands, as shown in Fig. 2. As we vary $a_{0}$, the size of the band increases, as indicated by the increasing size of the horizontal bar. The mean alignment of particles inside a band is perpendicular to the long axis of the band, and bands typically move in one direction. The direction of motion of the band is shown by the large arrow in Fig. 2. In Fig. 2 (lower panel) we plot the one-dimensional density distribution along the band direction, averaged over the other direction. For $a_{0}=1.0$ the density distribution shows a sharp peak with a small width. As we decrease $a_{0}$, the peak height decreases and the width increases.
In Fig. 3 we plot the width of the band $W_{b}$, calculated from the width of the one-dimensional density distribution and averaged over many snapshots, together with the mean density inside the band, which we define as $\rho_{b}=\frac{N_{b}}{W_{b}\times L}$, where $N_{b}$ is the number of particles participating in band formation. As shown in Fig. 3, the mean density of particles inside the band increases and the width of the band decreases as we increase $a_{0}$. In Table 1 we show the variation of the mean band width $W_{b}$, the mean density inside the band $\rho_{b}$ and the fraction of particles participating in band formation, $n_{c}=\frac{N_{b}}{N}$, for different values of the distance-dependent parameter $a_{0}$. We find that although both $W_{b}$ and $\rho_{b}$ vary as we change $a_{0}$, $n_{c}$ does not show any systematic change as we decrease $a_{0}$: it varies from $0.66$ (66% of particles) to $0.47$ (47% of particles). For $a_{0}<0.3$ there is no clear band formation, so it is not possible to calculate the corresponding quantities ($W_{b}$, $\rho_{b}$, etc.). Hence, as we decrease $a_{0}$, the system changes from high-density narrow bands to low-density wide bands and finally, for very small $a_{0}$, to no band at all. The formation of high- and low-density bands should also be visible in the two-point density structure factor: the finite size of bands or clusters implies the presence of a critical wavevector in the system. We calculate the two-point density structure factor $S({\bf q})$ using a linearised calculation in the ordered state. Details of the calculation are given in Appendix A. $S({\bf q})$ is calculated in the direction of ordering, i.e. along the direction of band motion.
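The band quantities of Table 1 can be extracted from the one-dimensional density profile. A minimal sketch follows; the above-mean threshold used to mark band cells is our own assumption, since the text does not spell out its band criterion.

```python
import numpy as np

def band_stats(counts, L):
    """Band width W_b and mean band density rho_b from a 2D histogram of
    particle counts; the band direction is along axis 0 (sketch only)."""
    profile = counts.sum(axis=1)      # project onto the band direction
    in_band = profile > profile.mean()  # assumed criterion: above-mean cells
    Wb = in_band.sum()                # width in cells
    Nb = profile[in_band].sum()       # particles participating in the band
    return Wb, Nb / (Wb * L)          # rho_b = N_b / (W_b * L)
```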
From Eq. (26) we find $$S({\bf q})=\frac{v_{0}^{2}\rho_{0}^{2}\triangle_{0}}{c_{2}}[\frac{1}{q^{2}+q_{1}^{2}}+\frac{1}{q^{2}-q_{2}^{2}}]$$ In this expression $c_{2}$ is a constant, defined in Eq. (19), and the expressions for $q_{1}$ and $q_{2}$ are given in Eqs. (27) and (28) respectively. $S({\bf q})$ diverges at the critical wavevector $q_{c}=q_{2}=\sqrt{\frac{CB_{+}\alpha_{1}^{\prime}}{D_{V}v_{0}^{2}v_{1}^{2}}}$, where $C=v_{0}V_{0}$ and $B_{+}$ is defined in Eq. (29). The critical length scale $L_{c}=q_{c}^{-1}\simeq\sqrt{\frac{D_{V}v_{0}^{2}v_{1}^{2}}{CB_{+}\alpha_{1}^{\prime}}}$ decreases with increasing $\alpha_{1}^{\prime}=\frac{d\alpha}{d\rho}|_{\rho_{0}}$, which depends on the density dependence of $\alpha(\rho)$. For $\alpha_{1}^{\prime}=0$, i.e. $\alpha$ independent of density, Eq. (6) reduces to the equations of Toner and Tu tonertu . The variation of the band width $W_{b}$ with the distance-dependent parameter $a_{0}$ in the microscopic simulation, and the dependence of the critical wavevector $q_{c}$ on $\alpha_{1}^{\prime}$, show a one-to-one mapping between them. Band formation in a polar flock, i.e. clustering of particles, also implies density phase separation. To quantify density phase separation we first calculate the fluctuation in density in cells of small size. The whole system is divided into $N_{c}$ small cells of size $1\times 1$; hence for a system of size $L\times L$ there are $L^{2}$ small cells ($N_{c}=L^{2}$). We count the number of particles in each cell in the steady state. The standard deviation $\Delta\phi$ of the particle number across cells is then $$\Delta\phi=\sqrt{\frac{1}{N_{c}}\sum_{j=1}^{N_{c}}(\phi_{j})^{2}-(\frac{1}{N_{c}}\sum_{j=1}^{N_{c}}\phi_{j})^{2}}$$ (9) where $\phi_{j}$ is the number of particles in the $j$th cell.
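The cell-count statistic of Eq. (9) is straightforward to evaluate from stored positions; a sketch, with the unit-cell binning taken from the text and the histogram helper our own choice:

```python
import numpy as np

def density_sd(r, L):
    """Standard deviation of per-cell particle counts, Eq. (9),
    using N_c = L*L unit cells of size 1x1."""
    counts, _, _ = np.histogram2d(r[:, 0], r[:, 1], bins=L,
                                  range=[[0, L], [0, L]])
    phi = counts.ravel()
    return np.sqrt(np.mean(phi**2) - np.mean(phi)**2)
```

A perfectly uniform configuration gives $\Delta\phi=0$, and a strongly clustered one gives a large value, so this single number tracks the degree of phase separation.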
We also calculate the Fourier transform of the density, defined by $$Q({\bf k})=\mid\frac{1}{L}\sum_{i,j=1}^{L}e^{i{\bf k}\cdot{\bf r}}\rho(i,j)\mid$$ (10) where ${\bf k}=\frac{2\pi(m,n)}{L}$, with $m,n=0,1,2,\dots,L-1$, is a two-dimensional wave vector. We choose $(1,0)$ and $(0,1)$ as the two directions of the square box, so that $(1,1)$ lies along its diagonal. During the evolution of the flock, the direction of the band changes with time; hence, to get the maximum information about the clustering, we calculate the first non-zero value of $Q({\bf k})$ in the diagonal direction, $Q(1,1)$, with $m=n=1$. Both $\Delta\phi(t)$ and $Q(1,1)(t)$ are calculated at different times in the steady state and then averaged over a long time to obtain $<\Delta\phi>$ and $<Q(1,1)>$. Plots of $<\Delta\phi>$ and $<Q(1,1)>$ vs. $a_{0}$ are shown on the right and left, respectively, of Fig. 6; both quantities are calculated in all three regions of the phase diagram (inset of Fig. 6). In region I, where the system is in the disordered state, both $<\Delta\phi>$ and $<Q(1,1)>$ remain small, hence there is no phase separation. In region II, close to the order-disorder transition, as we increase $a_{0}$ both $<\Delta\phi>$ and $<Q(1,1)>$ first increase with $a_{0}$, then show a plateau-type behaviour for $0.3<a_{0}<0.7$, and increase again for $a_{0}>0.7$. Hence the system shows no phase separation for small $a_{0}$, gradually goes to moderate phase separation, and finally, for large $a_{0}$, shows strong phase separation. We find similar results in region III of the phase diagram, where the system is deep in the ordered state. Hence the density shows another phase transition, from a phase-separated to a non-phase-separated state, as a function of the distance-dependent parameter $a_{0}$. We also calculate the density fluctuation $\Delta N=\sqrt{<N^{2}>-<N>^{2}}$ as we vary the distance-dependent parameter $a_{0}$.
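The diagonal mode $Q(1,1)$ of Eq. (10) can be computed from the same cell counts; a sketch, evaluating the discrete Fourier sum at $m=n=1$ (the coarse-grained density on cell indices is our assumption for ${\bf r}$):

```python
import numpy as np

def q11(counts):
    """First diagonal Fourier mode Q(1,1) of the coarse-grained density,
    Eq. (10): near zero for a uniform system, order one for a single
    band along the diagonal."""
    L = counts.shape[0]
    i, j = np.indices((L, L))
    k = 2.0 * np.pi / L                    # |k| component for m = n = 1
    return abs(np.sum(np.exp(1j * k * (i + j)) * counts)) / L
```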
We find that for all $a_{0}$ the density fluctuation obeys $\Delta N/N^{1/2}\simeq N^{\beta}$ with $\beta>0$; hence the density fluctuation is larger than in a thermal equilibrium system, where $\beta=0$. Such large density fluctuations are one of the characteristic features of active self-propelled systems chate2004 ; shradhaprl ; das2012 ; aditi . However, we find that the exponent $\beta$ varies as we tune $a_{0}$. In the inset of Fig. 4 we plot the variation of the exponent $\beta$ with $a_{0}$: the $\beta$ vs. $a_{0}$ curve is almost flat, with $\beta\simeq 0.4$ for $a_{0}>0.3$, and approaches zero for very small $a_{0}$. Phase transition: We now characterise the order-disorder transition for three different values of the distance-dependent parameter, $a_{0}=1.0$, $0.5$ and $0.05$. As shown in Fig. 6, these three values of $a_{0}$ correspond to three different points on the density phase-separation curve: for $a_{0}=1.0$ the system is strongly phase separated, for $a_{0}=0.5$ moderately phase separated (plateau region), and for $a_{0}=0.05$ not phase separated. In Fig. 5 we first plot the time series of the global velocity for the three regions I, II and III shown in the inset of Fig. 6. Over a large span of time ($t=2\times 10^{5}$) in the steady state (after $t=8\times 10^{5}$), the global velocity $V$ approaches a value close to $1$ for all three $a_{0}$ in region III, and approaches $0$ in region I. In region II, however, the system shows switching-type behaviour for $a_{0}=1$, continuously switching between the ordered ($V\simeq 0.6$) and disordered ($V\simeq 0.1$) states. Such switching behaviour of the global velocity was also observed in previous studies chate2008 ; biplab ; shradhapre . As we tune $a_{0}$ to $0.5$, the time series of the global velocity shows weaker switching behaviour, and for very small $a_{0}=0.05$ it shows large fluctuations with no switching (Fig. 5).
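From the stored time series of $V$ one can compute the transition diagnostics used in the following analysis, namely the variance and the fourth-order Binder cumulant of the global velocity; a sketch, with our own naming:

```python
import numpy as np

def variance_and_binder(V):
    """Variance sigma = <V^2> - <V>^2 and fourth-order Binder cumulant
    U = 1 - <V^4>/(3<V^2>^2) of a global-velocity time series. U is
    about 2/3 deep in the ordered state and about 1/3 in the disordered
    state, dipping sharply near a first-order transition (sketch)."""
    V = np.asarray(V, dtype=float)
    sigma = np.mean(V**2) - np.mean(V)**2
    U = 1.0 - np.mean(V**4) / (3.0 * np.mean(V**2)**2)
    return sigma, U
```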
The probability distribution of the global velocity, $P({V})$, shows bistable behaviour for $a_{0}=1.0$ and gradually crosses over to unimodal behaviour for small $a_{0}$, as shown in Fig. 7(a). The phase-transition curve of $V$ for the three values of $a_{0}$ is shown in Fig. 7(b): for larger $a_{0}$ the change of $V$ with noise strength $\eta$ is very sharp, and as we decrease $a_{0}$ the change of $V$ with $\eta$ becomes more and more continuous. We also calculate the variance of the order parameter, $\sigma=\langle V^{2}\rangle-\langle V\rangle^{2}$, shown in Fig. 7(c), and the fourth-order Binder cumulant, defined by $U=1-\frac{<V^{4}>}{3<V^{2}>^{2}}$, shown in Fig. 7(d). For $a_{0}=1.0$ the cumulant shows a strong discontinuity, jumping from $1/3$ in the disordered state to $2/3$ in the ordered state as we approach the critical $\eta$; the discontinuity decreases with $a_{0}$, and for $a_{0}=0.05$ the cumulant goes smoothly from the disordered value $1/3$ to the ordered-state value $2/3$. The system thus shows a transition from a disordered to an ordered state for all $a_{0}$, but the nature of the transition changes from first-order-like to continuous as we tune $a_{0}$; correspondingly, the density changes from a phase-separated to a non-phase-separated state. V Discussion We studied a collection of polar self-propelled particles interacting through a distance-dependent short-range alignment interaction. Such a distance-dependent model is biologically motivated: particles interact more strongly with their closest neighbours. The distance dependence is introduced through an interaction parameter $a_{0}$, which varies from $1.0$ to $0.0$. For $a_{0}=1.0$ the model reduces to Vicsek's type, and $a_{0}=0.0$ implies no interaction. For all $a_{0}$ the system shows a phase transition from a disordered state (global velocity $V=0$) to an ordered state (finite global velocity) as we vary the noise intensity $\eta$. For large $a_{0}$ the density shows band formation, and the character of the bands changes with $a_{0}$.
For $a_{0}$ close to $1.0$ the bands are strong, with small width and high density, and as we decrease $a_{0}$ the bands become weak. Our numerical results are consistent with the analytical calculation of the density structure factor given in Eq. (26), where we find a critical wavevector at which the structure factor diverges. The critical wavevector decreases, i.e. the wavelength increases, with decreasing $\alpha_{1}^{\prime}$. Our findings show that density phase separation plays an important role in determining the nature of the phase transition: the first-order phase transition in Vicsek's model arises from strong density phase separation. Density phase separation in microscopic simulations can be tuned in many ways. Here we use a distance-dependent interaction, which controls the number of interacting neighbours; changing the particle speed in Vicsek's model should give the same result: for large speed the system should show a phase-separated state and a first-order transition, while for small speed we expect a non-phase-separated state and a continuous transition. The microscopic model introduced here is not the unique distance-dependent model; other models in which the interaction has a different functional dependence on distance should show similar results. Acknowledgements. S. Pattanayak would like to thank Dr. Manoranjan Kumar for his kind cooperation and useful suggestions throughout this work. S. Pattanayak would like to thank the Department of Physics, IIT (BHU), Varanasi for its kind hospitality. S. Pattanayak would also like to thank Mr. Rakesh Das for his useful suggestions. S. Mishra would like to thank DST for partial financial support of this work. Appendix A Linearised study of the broken-symmetry state The hydrodynamic equations, Eqs. (5) and (6), admit two homogeneous solutions: an isotropic state with ${\bf V}=0$ for $\rho<\rho_{c}$ and a homogeneous polarized state with ${\bf V}=V_{0}{\bf x}$ for $\rho>\rho_{c}$, where ${\bf x}$ is the direction of ordering.
We are mainly interested in the symmetry-broken phase, specifically along the horizontal direction, in which the large clusters or bands move. For $\alpha(\rho)>0$ we can write the polarization field as ${\bf{V}}=(V_{0}+\delta V_{x}){\bf x}+\delta{\bf V}_{y}$, where ${\bf x}$ is the direction of band formation (the horizontal direction), ${\bf y}$ is the perpendicular direction, and $V_{0}{\bf x}=<{\bf{V}}>$ is the spontaneous average value of $\bf{V}$ in the ordered phase. We choose $V_{0}=\sqrt{\frac{\alpha(\rho_{0})}{\beta}}$ and $\rho=\rho_{0}+\delta\rho$, where $\rho_{0}$ is the coarse-grained mean density. Combining the fluctuations we can write, in vector form, $$\delta X_{\alpha}({\bf r},t)=\left[\begin{array}[]{c}\delta\rho\\ \delta V_{x}\\ \delta V_{y}\end{array}\right]$$ (11) Introducing the fluctuations in the hydrodynamic equation for the density, Eq. (5) reduces to $$\partial_{t}\delta\rho+v_{0}V_{0}\partial_{x}\delta\rho+v_{0}V_{0}\partial_{y}\delta\rho+v_{0}\rho_{0}\partial_{x}\delta V_{x}+v_{0}\rho_{0}\partial_{y}\delta V_{y}=0$$ (12) Similarly we introduce the fluctuations in the velocity equation, Eq. (6), writing the velocity-fluctuation equations separately for the horizontal direction ($x$, the direction of band formation and of ordering) and for the perpendicular direction ($y$). We Taylor expand $\alpha(\rho)$ in Eq. (6) about $\rho=\rho_{0}$ and keep terms up to the first derivative of $\alpha(\rho)$.
$$\partial_{t}{\delta{V_{x}}}=(\alpha(\rho_{0})+\alpha_{1}^{\prime}(\rho_{0})\delta\rho)(V_{0}+\delta V_{x})-\beta(V_{0}^{2}+2V_{0}\delta V_{x})(V_{0}+\delta V_{x})-\frac{v_{1}}{2\rho_{0}}\partial_{x}\delta\rho+D_{V}\partial_{x}^{2}\delta V_{x}+D_{V}\partial_{y}^{2}\delta V_{x}-\lambda V_{0}\partial_{x}\delta V_{x}+{f_{Vx}}$$ (13) $$\partial_{t}{\delta{V_{y}}}=(\alpha(\rho_{0})+\alpha_{1}^{\prime}(\rho_{0})\delta\rho)\delta V_{y}-\beta(V_{0}^{2}+2V_{0}\delta V_{x})\delta V_{y}-\frac{v_{1}}{2\rho_{0}}\partial_{y}\delta\rho+D_{V}\partial_{x}^{2}\delta V_{y}+D_{V}\partial_{y}^{2}\delta V_{y}-\lambda V_{0}\partial_{x}\delta V_{y}+{f_{Vy}}$$ (14) where $\alpha_{1}^{\prime}=\frac{\partial\alpha}{\partial\rho}\mid_{\rho_{0}}$ and $\lambda$ is a combination of the three $\lambda$ terms ($\lambda=\lambda_{1}+\lambda_{2}+2\lambda_{3}$). We now introduce the Fourier components $\delta\hat{\bf X}_{\alpha}({\bf k},t)=\int e^{i({\bf k}\cdot{\bf r}-\omega t)}\delta X_{\alpha}({\bf r},t)$ in the three fluctuation equations (12), (13) and (14), and write the coupled equations in matrix form: $$\displaystyle\left[\begin{array}[]{ccc}-i\omega+iv_{0}V_{0}q_{x}+iv_{0}V_{0}q_{y}&iv_{0}\rho_{0}q_{x}&iv_{0}\rho_{0}q_{y}\\ i\frac{v_{1}}{2\rho_{0}}q_{x}-\alpha_{1}^{\prime}(\rho_{0})V_{0}&-i\omega+2\alpha(\rho_{0})+D_{V}|q^{2}|+i\lambda V_{0}q_{x}&0\\ i\frac{v_{1}}{2\rho_{0}}q_{y}&0&-i\omega+2\alpha(\rho_{0})+D_{V}|q^{2}|+i\lambda V_{0}q_{x}\end{array}\right]\times\left[\begin{array}[]{c}\delta\rho\\ \delta V_{x}\\ \delta V_{y}\end{array}\right]=\left[\begin{array}[]{c}0\\ f_{Vx}\\ f_{Vy}\end{array}\right]$$ (15) An earlier study shradhapre found that fluctuations along the direction of band formation are the important ones when the system is close to the transition. In our numerical study we work in region II, near the transition point, shown in the inset of Fig. 6.
Since we consider only fluctuations in the ordering direction, the above $3\times 3$ matrix (15) reduces to $$\left[\begin{array}[]{cc}-i\omega+iv_{0}V_{0}q&iv_{0}\rho_{0}q\\ i\frac{v_{1}}{2\rho_{0}}q-\alpha_{1}^{\prime}(\rho_{0})V_{0}&-i\omega+2\alpha+D_{V}q^{2}+i\lambda V_{0}q\end{array}\right]\times\left[\begin{array}[]{c}\delta\rho\\ \delta V_{x}\end{array}\right]=\left[\begin{array}[]{c}0\\ f_{Vx}\end{array}\right]$$ (16) We first determine the eigenfrequencies $\omega({\bf q})$ of these coupled equations and find $$\omega_{\pm}=c_{\pm}q-i\varepsilon_{\pm}$$ (17) where the sound speeds are $$c_{\pm}=\frac{1}{2}(\lambda+v_{0})V_{0}\pm c_{2}$$ (18) with $$c_{2}=\sqrt{(\frac{1}{4}(\lambda-v_{0})^{2}V_{0}^{2}+v_{0}v_{1})}$$ (19) and the dampings $\varepsilon_{\pm}$ in Eq. (17) are $O({\bf q^{2}})$ and given by $$\varepsilon_{\pm}=\pm\frac{c_{\pm}}{2c_{2}}[2\alpha+D_{V}q^{2}]\mp\frac{1}{2c_{2}}[2\alpha v_{0}V_{0}+v_{0}V_{0}\alpha_{1}^{\prime}+v_{0}V_{0}D_{V}q^{2}]$$ (20) The important point here is that, unlike the isotropic problem in $d>2$, there is no transverse mode: we always have just two longitudinal Goldstone modes, associated with $\delta\rho$ and $\delta V_{x}$. The two-point density autocorrelation along the direction of ordering is $$C_{\rho\rho}({\bf q},\omega)=\frac{2v_{0}^{2}\rho_{0}^{2}\triangle_{0}q^{2}}{(-\omega^{2}+(v_{0}+\lambda)V_{0}q\omega-v_{0}\lambda V_{0}^{2}q^{2}+\frac{v_{1}v_{0}q^{2}}{2})^{2}+[\omega(2\alpha+D_{V}q^{2})-q(2\alpha v_{0}V_{0}+v_{0}V_{0}\alpha_{1}^{\prime}+v_{0}V_{0}D_{V}q^{2})]^{2}}$$ (21) $$C_{\rho\rho}({\bf q},\omega)=\frac{2v_{0}^{2}\rho_{0}^{2}\triangle_{0}q^{2}}{(\omega-c_{+}q)^{2}(\omega-c_{-}q)^{2}+[\omega(2\alpha+D_{V}q^{2})-q(2\alpha v_{0}V_{0}+v_{0}V_{0}\alpha_{1}^{\prime}+v_{0}V_{0}D_{V}q^{2})]^{2}}$$ (22) A plot of the density-density correlation function as a function of $\omega$ shows two peaks, at $\omega=c_{\pm}q$.
From the above density autocorrelation it is straightforward to calculate the structure factor: $$S(q,t)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\left\langle|\delta\rho(q,\omega)|^{2}\right\rangle d\omega=\frac{1}{2\pi}\int_{-\infty}^{+\infty}C_{\rho\rho}({\bf q},\omega)d\omega$$ (23) $$S(q,t)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\frac{2v_{0}^{2}\rho_{0}^{2}\triangle_{0}q^{2}}{(\omega-c_{+}q)^{2}(\omega-c_{-}q)^{2}+[\omega(2\alpha+D_{V}q^{2})-q(2\alpha v_{0}V_{0}+v_{0}V_{0}\alpha_{1}^{\prime}+v_{0}V_{0}D_{V}q^{2})]^{2}}d\omega$$ (24) $$S(q,t)=\frac{2v_{0}^{2}\rho_{0}^{2}\triangle_{0}q^{2}}{2c_{2}q}[\frac{1}{c_{+}q(2\alpha+D_{V}q^{2})-q(v_{0}V_{0}(2\alpha+D_{V}q^{2})+v_{0}V_{0}\rho_{0}\alpha_{1}^{\prime})}]$$ (25) $$S(q,t)=\frac{v_{0}^{2}\rho_{0}^{2}\triangle_{0}}{c_{2}}[\frac{1}{q^{2}+q_{1}^{2}}+\frac{1}{q^{2}-q_{2}^{2}}]$$ (26) where $$q_{1}^{2}=\frac{v_{0}V_{0}\alpha_{1}^{\prime}B_{-}}{D_{V}v_{0}^{2}v_{1}^{2}}$$ (27) $$q_{2}^{2}=\frac{v_{0}V_{0}\alpha_{1}^{\prime}B_{+}}{D_{V}v_{0}^{2}v_{1}^{2}}$$ (28) and $$B_{\pm}=\sqrt{\frac{1}{4}(\lambda-v_{0})^{2}V_{0}^{2}+v_{0}v_{1}}\mp\frac{1}{2}(\lambda-v_{0})V_{0}$$ (29) All the constants in the wavevector expressions have been defined earlier, and from them it is clear that $q_{1}^{2}$ and $q_{2}^{2}$ are positive. From the structure-factor expression we obtain the critical wavevector at which the structure factor diverges: $$D_{V}q_{2}^{2}=\frac{v_{0}V_{0}\rho_{0}\alpha_{1}^{\prime}B_{+}}{v_{0}^{2}v_{1}^{2}}-2\alpha B_{+}$$ (30) When the system is near the critical point, $\alpha(\rho_{0})$ is close to zero, and the critical wavevector can be written as $$q_{c}=q_{2}=\sqrt{\frac{CB_{+}\alpha_{1}^{\prime}}{D_{V}v_{0}^{2}v_{1}^{2}}}$$ (31) where $C=v_{0}V_{0}$. Thus for $\alpha_{1}^{\prime}\neq 0$ we obtain a critical wavevector $q_{c}$ at which the structure factor diverges, which also defines a critical length scale $L_{c}$ of the system.
Here $\alpha_{1}^{\prime}$ is the density-dependent alignment term, which plays a role analogous to the distance-dependent parameter $a_{0}$ in our numerical study; for $\alpha_{1}^{\prime}=0$ our study reduces to that of Toner and Tu tonertu . References (1) E. Ben-Jacob, I. Cohen, O. Shochet, A. Czirók and T. Vicsek, Phys. Rev. Lett. 75, 2899 (1995) (2) E. Rauch, M. Millonas and D. Chialvo, Phys. Lett. A 207, 185 (1995) (3) Physics Today 60, 28 (2007); C. Feare, The Starlings (Oxford: Oxford University Press) (1984) (4) S. Hubbard, P. Babak, S. Sigurdsson and K. Magnusson, Ecol. Model. 174, 359 (2004) (5) J. Toner, Y. Tu, and S. Ramaswamy, Ann. Phys. (Amsterdam) 318, 170 (2005) (6) S. Ramaswamy, Annu. Rev. Condens. Matter Phys. 1, 323 (2010) (7) M. Cristina Marchetti et al., Rev. Mod. Phys. 85, 1143 (2013) (8) E. Ben-Jacob, I. Cohen, O. Shochet, A. Tenenbaum, A. Czirók, and T. Vicsek, Phys. Rev. Lett. 75, 2899 (1995) (9) Hugues Chaté, Francesco Ginelli, and Guillaume Grégoire, Phys. Rev. Lett. 99, 229601 (2007) (10) Hugues Chaté, Francesco Ginelli, Guillaume Grégoire, and Franck Raynaud, Phys. Rev. E 77, 046113 (2008) (11) T. Vicsek et al., Phys. Rev. Lett. 75, 1226 (1995) (12) Francesco Ginelli and Hugues Chaté, Phys. Rev. Lett. 105, 168103 (2010) (13) Anton Peshkov, Sandrine Ngo, Eric Bertin, Hugues Chaté, and Francesco Ginelli, Phys. Rev. Lett. 109, 098101 (2012) (14) Biplab Bhattacherjee, Shradha Mishra, and S. S. Manna, Phys. Rev. E 92, 062134 (2015) (15) Andrea Cavagna, Lorenzo Del Castello, Supravat Dey, Irene Giardina, Stefania Melillo, Leonardo Parisi, and Massimiliano Viale, Phys. Rev. E 92, 012705 (2015) (16) J. Toner and Y. Tu, Phys. Rev. Lett. 75, 4326 (1995); Phys. Rev. E 58, 4828 (1998) (17) Shradha Mishra, Aparna Baskaran, and M. Cristina Marchetti, Phys. Rev. E 81, 061916 (2010) (18) Guillaume Grégoire and Hugues Chaté, Phys. Rev. Lett. 92, 025702 (2004) (19) Shradha Mishra and Sriram Ramaswamy, Phys. Rev. Lett. 97, 090602 (2006)
(20) Dipjyoti Das, Dibyendu Das, and Ashok Prasad, Journal of Theoretical Biology 308, 96-104 (2012) (21) S. Ramaswamy, R. Aditi Simha and J. Toner, EPL (Europhysics Letters) 62, 196 (2003)
GENERALIZED SECOND LAW OF THERMODYNAMICS ON THE EVENT HORIZON FOR INTERACTING DARK ENERGY  Nairwita Mazumder (nairwita15@gmail.com), Subenoy Chakraborty (schakraborty@math.jdvu.ac.in) ${}^{1}$Department of Mathematics, Jadavpur University, Kolkata-32, India. (November 26, 2020) Abstract We derive conditions for the validity of the generalized second law of thermodynamics (GSLT), assuming the first law of thermodynamics on the event horizon, when the FRW universe is filled with an interacting two-fluid system: one component in the form of cold dark matter and the other either holographic dark energy or new agegraphic dark energy. Using recent observational data we find that the GSLT holds in both the quintessence and phantom eras for the new agegraphic model, while for holographic dark energy the GSLT is valid only in the phantom era. pacs: 98.80.Cq, 98.80.-k I Introduction The equivalence between a black body and a black hole (BH), both emitting thermal radiation (in a semi-classical description), opened a new era for black hole physics. A black hole behaves as a thermodynamical system with a temperature (the Hawking temperature) proportional to the surface gravity at the horizon and an entropy proportional to the area of the horizon [1,2]. Further, the temperature, entropy and mass of a black hole are related by the first law of thermodynamics [3]. On the other hand, these thermodynamic parameters, the temperature and entropy, are characterized by the space-time geometry, so it is natural to speculate about a relationship between black hole thermodynamics and the Einstein equations.
In fact, Jacobson [4] showed that the Einstein equations can be derived from the first law of thermodynamics, $\delta Q=TdS$, for all local Rindler causal horizons, with $\delta Q$ and $T$ the energy flux and Unruh temperature measured by an accelerated observer just inside the horizon, while conversely Padmanabhan [5] derived the first law of thermodynamics on the horizon starting from the Einstein equations for a general static spherically symmetric space-time. This equivalence between the thermodynamical laws and Einstein gravity subsequently led to a generalization of the idea to cosmology, treating the universe as a thermodynamical system. More precisely, if we assume that the universe is bounded by the apparent horizon $R_{A}$ with temperature $T_{A}=\frac{1}{2\pi R_{A}}$ and entropy $S_{A}=\frac{\pi{R_{A}}^{2}}{G}$, then the Friedmann equations and the first law of thermodynamics (on the apparent horizon) are equivalent [6]. Usually, the universe bounded by the apparent horizon is termed a Bekenstein system, because Bekenstein's entropy-mass bound $(S\leq 2\pi ER_{A})$ and entropy-area bound ($S\leq\frac{A}{4}$) are obeyed in this region. On the other hand, the cosmological event horizon does not exist in the usual standard big bang model, while it does exist in an accelerating universe dominated by dark energy with $\omega_{D}\neq-1$. However, both the first and second laws of thermodynamics break down on the event horizon [7]. Using the usual definition of temperature (as on the apparent horizon), Wang et al. [7] have argued that the applicability of the first law of thermodynamics is restricted to nearby states of local thermodynamic equilibrium, while the event horizon characterizes global features of the space-time. The present observational evidence obtained from the Wilkinson Microwave Anisotropy Probe (WMAP) strongly suggests that the current expansion of the universe is accelerating [8,9].
There are two possible ways [10-15] of explaining this accelerated expansion of the universe. In the framework of general relativity it can be explained by introducing dark energy with negative pressure. The other possibility is to consider a modified gravity theory such as $f(R)$ gravity [15,16,17], where the action is an arbitrary function $f(R)$ of the scalar curvature $R$. As a result, the Friedmann equations [18,19] become complicated, including powers of the Ricci scalar $R$ and its time derivatives. In the present work, we examine the validity of the generalized second law of thermodynamics for the universe bounded by the event horizon (which exists due to the present accelerating phase of the universe). The matter in the universe is taken in the form of an interacting two-fluid system: one component is dust and the other is dark energy. A model of dark energy obeying the holographic principle is termed holographic dark energy. From effective quantum field theory, the energy density of the holographic dark energy is given by [20] $$\rho_{D}=3c^{2}M_{p}^{2}L^{-2},$$ where $L$ is an IR cut-off in units $M_{p}^{2}=1$ and $c$ is a free dimensionless parameter, determined from observational data [21]. Li [22] has argued (also in the present context) that the cut-off length $L$ can be chosen as the radius of the event horizon, to obtain the correct equation of state and the desired accelerating universe. Very recently, Wei and Cai [23] proposed another model of dark energy, known as new agegraphic dark energy (NADE), which may have interesting cosmological consequences (originally Cai [24] proposed an agegraphic dark energy (ADE) model, which is unable to describe the matter-dominated era). These new dark energy models are based on the uncertainty relation of quantum mechanics together with the gravitational effect of Einstein gravity.
The NADE models are also constrained by various astronomical observations [25]. Although the evolution behaviour of NADE is similar [26] to that of holographic dark energy, the causality problem of the holographic dark energy model can be overcome in ADE by choosing the age of the universe (instead of the horizon distance) as the measure of length, while in NADE the conformal time $\eta$ is chosen as the time scale. Thus the energy density of NADE can be written as $$\rho_{ND}=\frac{3n^{2}m_{p}^{2}}{\eta^{2}}$$ (1) where the conformal time $\eta$ has the expression $$\eta=\int\frac{dt}{a}=\int_{0}^{a}\frac{da}{Ha^{2}}$$ and the numerical factor $3n^{2}$ accounts for uncertainties in the quantum theory and the effect of curved space-time. The paper is organized as follows. In Section II the holographic dark energy model is used, while the recently formulated new agegraphic dark energy model is taken up in Section III. Conclusions are presented in Section IV. II Interacting holographic dark energy model and the generalized second law of thermodynamics: In this section, we take the FRW universe bounded by the event horizon, with the matter content given by holographic dark energy (HDE) interacting with dust. The energy density of the HDE model then has the expression $$\rho_{D}=3c^{2}M_{p}^{2}R_{E}^{-2}$$ (2) The individual continuity equations for the HDE and the dust are of the form $$\dot{\rho_{D}}+3H(1+\omega_{D})\rho_{D}=-Q$$ (3) and $$\dot{\rho_{m}}+3H\rho_{m}=Q$$ (4) where $Q=\Gamma\rho_{D}$ [27] is the interaction term and the decay rate $\Gamma$ corresponds to the conversion of dark energy into dust.
The above conservation equations can be written in non-interacting form as [27] $$\dot{\rho_{D}}+3H(1+\omega_{D}^{eff})\rho_{D}=0$$ (5) and $$\dot{\rho_{m}}+3H(1+\omega_{m}^{eff})\rho_{m}=0$$ (6) These equations show that the interacting matter system is equivalent to a non-interacting two-fluid system with variable equations of state $$\omega_{D}^{eff}=\omega_{D}+\frac{\Gamma}{3H}~{}~{}~{}and~{}~{}~{}~{}\omega_{m}^{eff}=-\frac{\Gamma}{3Hu}~{}$$ (7) where $u=\frac{\rho_{m}}{\rho_{D}}$ is the ratio of the two energy densities. Combining (5) and (6) we get $$\dot{\rho_{t}}+3H(\rho_{t}+p_{t})=0$$ (8) where $$\rho_{t}=\rho_{D}+\rho_{m}~{},~{}p_{D}=\rho_{D}\omega_{D}^{eff}~{},~{}p_{m}=\rho_{m}\omega_{m}^{eff}~{}and~{}p_{t}=p_{D}+p_{m}$$ (9) For the FRW universe with line element $$ds^{2}=-dt^{2}+\frac{a^{2}(t)}{1-kr^{2}}dr^{2}+a^{2}(t)r^{2}d\Omega_{2}^{2}$$ the Friedmann equations are $$H^{2}+\frac{k}{a^{2}}=\frac{8\pi G}{3}\rho_{t}$$ and $$\dot{H}-\frac{k}{a^{2}}=~{}-4\pi G(\rho_{t}+p_{D})$$ (10) with $\rho_{t}=\rho_{m}+\rho_{D}$. As usual, the density parameters are $$\Omega_{m}=\frac{\rho_{m}}{\frac{3H^{2}}{8\pi G}}~{},~{}\Omega_{D}=\frac{\rho_{D}}{\frac{3H^{2}}{8\pi G}}~{},\Omega_{k}=\frac{\frac{k}{a^{2}}}{\frac{3H^{2}}{8\pi G}}~{}$$ and, due to the first Friedmann equation, they are related by $$\Omega_{D}+\Omega_{m}=1+\Omega_{k}$$ (11) Now, using the Friedmann equations, the conservation relations and the expression for the energy density of the holographic dark energy (Eq. (2)), the equation-of-state parameter $\omega_{D}$ of the HDE can be obtained as [27] $$\omega_{D}=-\frac{1}{3}-\frac{2\sqrt{\Omega_{D}-\Omega_{k}}}{3c}-\frac{b^{2}(1+\Omega_{k})}{\Omega_{D}}$$ (12) where the decay rate is chosen as [27] $$\Gamma=3b^{2}(1+u)H$$ (13) with $b^{2}$ the coupling constant.
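As a quick numerical check of Eq. (12), one can plug in representative present-day density parameters. The values below ($\Omega_{D}=0.73$, $\Omega_{k}=0.02$, $c=1$, $b^{2}=0.02$) are sample inputs of our own choosing, not the fitted values used in the paper.

```python
import numpy as np

def omega_D(Omega_D, Omega_k, c, b2):
    """Equation-of-state parameter of the interacting HDE, Eq. (12)."""
    return (-1.0 / 3.0
            - 2.0 * np.sqrt(Omega_D - Omega_k) / (3.0 * c)
            - b2 * (1.0 + Omega_k) / Omega_D)

# sample (assumed) parameter values, not the paper's fit
w = omega_D(Omega_D=0.73, Omega_k=0.02, c=1.0, b2=0.02)
print(round(w, 3))  # prints -0.923
```

For these inputs $\omega_{D}\simeq-0.92>-1$, i.e. the quintessence regime; decreasing $c$ or increasing $b^{2}$ pushes $\omega_{D}$ across the phantom divide $\omega_{D}=-1$.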
The deceleration parameter $q=-(1+\frac{\dot{H}}{H^{2}})$ can be expressed in terms of the density parameters (using the Friedmann equations and the conservation relations) as $$q=-\frac{\Omega_{D}}{2}-\frac{\Omega_{D}\sqrt{\Omega_{D}-\Omega_{k}}}{c}+\frac{1}{2}(1-3b^{2})(1+\Omega_{k})$$ (14) Now, using the expression for the energy density of the holographic dark energy (Eq. (2)) and the modified conservation equation (5), the change in the radius of the event horizon is given by [28] $$dR_{E}=\frac{3}{2}R_{E}H(1+\omega_{D}^{eff})dt$$ (15) Assuming the validity of the first law of thermodynamics on the event horizon and using the expression for the amount of energy crossing the event horizon in time $dt$ [6,29,30], i.e. $$-dE=4\pi{R_{E}}^{3}H(\rho_{t}+p_{t})dt$$ (16) we obtain $$dS_{E}=\frac{4\pi{R_{E}}^{3}H(\rho_{t}+p_{t})dt}{T_{E}}$$ (17) where $S_{E}$ is the entropy of the event horizon and $T_{E}$ is the temperature on the event horizon. From the Gibbs equation [31] $$T_{E}dS_{I}=dE_{I}+p_{t}dV,$$ (18) the variation of the entropy $S_{I}$ of the fluid inside the event horizon is given by $$\frac{dS_{I}}{dt}=\frac{4\pi{R_{E}}^{3}}{T_{E}}H(\rho_{t}+p_{D})\left(\frac{3}{2}(\omega_{D}^{eff}+1)-1\right)$$ (19) In deriving Eq. (19) we have used Eq. (9) and the expressions $$V=\frac{4}{3}\pi R_{E}^{3},\quad E_{I}=V\rho_{t}.$$ Hence, combining equations (17) and (19), the resulting change of total entropy is given by $$\frac{d}{dt}(S_{I}+S_{E})=\frac{6\pi{R_{E}}^{3}H}{T_{E}}(\rho_{t}+p_{D})(\omega_{D}^{eff}+1)$$ or, more explicitly, $$=\frac{6\pi{R_{E}}^{3}H}{T_{E}}\left[\rho_{D}(\omega_{D}^{eff}+1)^{2}+\rho_{m}(\omega_{D}^{eff}+1)(\omega_{m}^{eff}+1)\right]$$ (20) III Interacting New agegraphic dark energy model and generalized second law of thermodynamics: Similar to the previous section, the matter in the universe bounded by the event horizon is taken as an interacting two-fluid system: one component is in the form of the recently formulated new
agegraphic dark energy and the other is dark matter in the form of dust. So, as before, the time variation of the entropy of the horizon can be obtained from the first law of thermodynamics, with expression $$dS_{E}=\frac{4\pi{R_{E}}^{3}H(\rho_{t}+p_{t})dt}{T_{E}}$$ (21) where $\rho_{t}=\rho_{ND}+\rho_{m}$, $p_{ND}=\rho_{ND}\omega_{ND}^{eff}$, $p_{m}=\rho_{m}\omega_{m}^{eff}$ and $p_{t}=p_{ND}+p_{m}$. Here the energy density of the dust component $\rho_{m}$ satisfies the conservation equation (4) (or (6)), while the NADE has energy density $\rho_{ND}$ and pressure $p_{ND}$ with equation of state $p_{ND}=\rho_{ND}\omega_{ND}$. This matter component satisfies the continuity equation (3) or (5). Also, the effective state parameters have the same form as in equation (7). Further, an explicit form of $\rho_{ND}$ is given in equation (1) in terms of the conformal time. In contrast to the HDE, the NADE energy density (given in equation (1)) is not related to the radius of the event horizon.
So, from the definition of the event horizon, the time variation of $R_{E}$ is given by [32] $$\frac{dR_{E}}{dt}=\left(R_{E}-\frac{1}{H}\right)H$$ (22) Then, using the Gibbs equation, the time variation of the entropy of the matter inside the event horizon is given by $$\frac{dS_{I}}{dt}=-\frac{4\pi{R_{E}}^{2}}{T_{E}}(\rho_{t}+p_{D})$$ (23) Thus, combining equations (21) and (23), the change of total entropy is given by $$\frac{d}{dt}(S_{I}+S_{E})=4\pi(\rho_{t}+p_{t})\frac{{R_{E}}^{2}H}{T_{E}}\left(R_{E}-\frac{1}{H}\right)$$ (24) Further, for the new agegraphic dark energy the equation of state parameter $\omega_{ND}$ and the deceleration parameter $q$ in terms of the density parameters are the following: $$\omega_{ND}=-1+\frac{2\sqrt{\Omega_{ND}}}{3na}-\frac{b^{2}(1+\Omega_{k})}{\Omega_{ND}}$$ (25) and $$q=-\frac{3\Omega_{ND}}{2}-\frac{\Omega_{ND}^{\frac{3}{2}}}{na}+\frac{1}{2}(1-3b^{2})(1+\Omega_{k})$$ (26) with $\Omega_{ND}=\frac{\rho_{ND}}{\frac{3H^{2}}{8\pi G}}$ as the density parameter. IV Discussion and concluding remarks: In this paper, the validity of the generalized second law of thermodynamics on the event horizon has been analysed for an interacting two-fluid system. Here dust is one component of the matter, while the other component is the holographic dark energy in section II and the new agegraphic dark energy in section III. Since there is no explicit expression for the radius of the event horizon in the new agegraphic dark energy model, the validity of the GSLT restricts both the matter and the geometry.
On the other hand, for the holographic dark energy model there exists an explicit expression for the radius of the event horizon, and consequently the validity of the GSLT demands restrictions on the matter only. The restrictions can be written compactly as follows: a) Holographic DE: $$\omega_{D}>\max\left[-(1+u),\,u-(1+b^{2})(1+u)\right]$$ $$or$$ $$\omega_{D}<\min\left[-(1+u),\,u-(1+b^{2})(1+u)\right]$$ or, equivalently, the parameters $b^{2}$ and $c$ are constrained as $$b^{2}\leq\frac{u+\frac{2}{3}\left(1-\frac{\sqrt{\Omega_{D}-\Omega_{k}}}{c}\right)}{1+u}\quad\text{and}\quad c<\sqrt{\Omega_{D}-\Omega_{k}}$$ $$or$$ $$b^{2}\geq\frac{u+\frac{2}{3}\left(1-\frac{\sqrt{\Omega_{D}-\Omega_{k}}}{c}\right)}{1+u}\quad\text{and}\quad c>\sqrt{\Omega_{D}-\Omega_{k}}$$ b) New agegraphic DE: For the open or flat model $R_{E}\geq R_{H}=\frac{1}{H}$, so we have $$\omega_{ND}>-(1+u)\quad\text{i.e.}\quad b^{2}\leq\frac{u+\frac{2\sqrt{\Omega_{ND}}}{3na}}{1+u}.$$ However, for the closed model, if $R_{E}\geq R_{H}$ the above inequalities hold, but if $R_{E}\leq R_{H}$ the inequalities are reversed, i.e. $$\omega_{ND}<-(1+u)\quad\text{i.e.}\quad b^{2}\geq\frac{u+\frac{2\sqrt{\Omega_{ND}}}{3na}}{1+u}.$$ One may note that the second alternative for the holographic DE, and the closed model in NAGDE (with $R_{E}\leq R_{H}$), correspond to a possible phantom region. We shall now discuss the validity of the GSLT from the present observational point of view.
From the recent observations we have [25,33,34] $$\Omega_{D}=0.72,\quad\Omega_{k}=0.02,\quad a=1~(\text{present time}),\quad n=2.7$$ a) HDE: The explicit forms of $q$ and $\omega_{D}$ are $$q=-0.57-1.53b^{2}$$ $$\omega_{D}=-1.00005-1.4167b^{2}$$ Now the validity of the GSLT demands $\rho_{t}+p_{t}\geq 0$ and $1+\omega_{D}^{eff}\geq 0$, which gives $$b^{2}\leq 0.294\quad\text{and}\quad c\geq 0.8366$$ (the restriction on $c$ agrees with that in ref. [34]) and consequently $$q\geq-1.01982\quad\text{and}\quad\omega_{D}\geq-1.4166.$$ Thus the holographic dark energy model may be of phantom nature while the GSLT remains valid, as shown in Fig. I. b) NAGDE: From the observed data, $q$ and $\omega_{ND}$ are the following: $$q=-0.34-1.53b^{2}$$ $$\omega_{ND}=-0.79-1.42b^{2}$$ Now $\rho_{t}+p_{t}\geq 0$ gives an upper bound $b^{2}\leq 0.442$, which yields the explicit restrictions $$q\geq-1.01626\quad\text{and}\quad\omega_{ND}\geq-1.41764.$$ Hence in this case also [see Fig. II] the GSLT may be valid in the quintessence era as well as in the phantom era. Therefore the phantom divide line has no influence on the validity of the GSLT in either dark energy model, i.e. there may be a smooth transition between the quintessence and phantom eras in the new agegraphic dark energy model. Finally, we note that to examine the validity of the GSLT we have not used any explicit form of the entropy or temperature at the event horizon; we have only imposed the condition that the first law of thermodynamics is valid there, which may be regarded as a conservation equation (since we have assumed that the universe is in thermal equilibrium).
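Several of the quoted numbers can be reproduced directly from equations (12) and (11) and the bounds on $b^{2}$. The sketch below is not from the paper; it assumes, as the HDE coefficients above imply, that $c$ sits at its boundary value $c=\sqrt{\Omega_{D}-\Omega_{k}}$, and that $\Omega_{ND}$ takes the same observed value $0.72$:

```python
import math

Omega_D, Omega_k, n, a = 0.72, 0.02, 2.7, 1.0
u = (1.0 + Omega_k - Omega_D) / Omega_D   # rho_m / rho_D from Eq. (11)

# --- HDE, with c assumed at its boundary value sqrt(Omega_D - Omega_k) ---
c = math.sqrt(Omega_D - Omega_k)
# Eq. (12) then reduces to omega_D = -1 - [(1+Omega_k)/Omega_D] b^2
slope_D = (1.0 + Omega_k) / Omega_D
b2_max_hde = (u + (2.0/3.0) * (1.0 - math.sqrt(Omega_D - Omega_k) / c)) / (1.0 + u)

# --- NADE bound: b^2 <= [u + 2 sqrt(Omega_ND)/(3 n a)] / (1+u), Omega_ND = 0.72 ---
b2_max_nade = (u + 2.0 * math.sqrt(Omega_D) / (3.0 * n * a)) / (1.0 + u)

print(round(slope_D, 4))     # 1.4167  (coefficient of b^2 in omega_D)
print(round(b2_max_hde, 3))  # 0.294
print(round(b2_max_nade, 3)) # 0.442
```

The three printed values match the coefficient $1.4167$ and the bounds $b^{2}\leq 0.294$ and $b^{2}\leq 0.442$ quoted in the text.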
However, in Ref. [7] it has been shown that the usual definitions of the temperature (Hawking temperature) and entropy (Bekenstein entropy) do not hold on the event horizon, and consequently the first law of thermodynamics is not satisfied there. Therefore, as future work, we shall examine the validity of the first law of thermodynamics with an appropriate choice of entropy and temperature on the event horizon. References: $[1]$ S. W. Hawking, Commun. Math. Phys. 43, 199 (1975). $[2]$ J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973). $[3]$ J. M. Bardeen, B. Carter and S. W. Hawking, Commun. Math. Phys. 31, 161 (1973). $[4]$ T. Jacobson, Phys. Rev. Lett. 75, 1260 (1995). $[5]$ T. Padmanabhan, Class. Quantum Grav. 19, 5387 (2002); Phys. Rept. 406, 49 (2005). $[6]$ R. G. Cai and L. M. Cao, Phys. Rev. D 75, 064008 (2007). $[7]$ B. Wang, Y. Gong and E. Abdalla, Phys. Rev. D 74, 083520 (2006). $[8]$ S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517, 565 (1999); A. G. Riess et al. [Supernova Search Team Collaboration], Astron. J. 116, 1009 (1998); P. Astier et al. [The SNLS Collaboration], Astron. Astrophys. 447, 31 (2006); A. G. Riess et al., Astrophys. J. 659, 98 (2007). $[9]$ D. N. Spergel et al. [WMAP Collaboration], Astrophys. J. Suppl. 148, 175 (2003); H. V. Peiris et al. [WMAP Collaboration], Astrophys. J. Suppl. 148, 213 (2003); D. N. Spergel et al. [WMAP Collaboration], Astrophys. J. Suppl. 170, 377 (2007); E. Komatsu et al. [WMAP Collaboration], arXiv:0803.0547 [astro-ph]. $[10]$ V. Sahni, AIP Conf. Proc. 782, 166 (2005); [J. Phys. Conf. Ser. 31, 115 (2006)]. $[11]$ T. Padmanabhan, Phys. Rept. 380, 235 (2002). $[12]$ E. J. Copeland, M. Sami and S. Tsujikawa, IJMPD 15, 1753 (2006). $[13]$ R. Durrer and R. Maartens, Gen. Rel. Grav. 40, 301 (2008). $[14]$ S. Nojiri and S. D. Odintsov, Int. J. Geom. Meth. Mod. Phys. 4, 115 (2007). $[15]$ S. Nojiri and S. Odintsov, arXiv:0801.4843 [astro-ph]; S. Nojiri and S. Odintsov, arXiv:0807.0685 [hep-th]; S. Capozziello, IJMPD 11, 483 (2002).
$[16]$ S. M. Carroll, V. Duvvuri, M. Trodden and M. S. Turner, Phys. Rev. D 68, 043528 (2004). $[17]$ S. Nojiri and S. D. Odintsov, Phys. Rev. D 68, 123512 (2003); S. Nojiri and S. D. Odintsov, Phys. Rev. D 74, 086005 (2006). $[18]$ Kazuharu Bamba and Chao-Qiang Geng, arXiv:0901.1509 [hep-th]. $[19]$ H. M. Sadjadi, Phys. Rev. D 76, 104024 (2007). $[20]$ A. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Rev. Lett. 82, 4971 (1999). $[21]$ Q. G. Huang and M. Li, JCAP 0408, 013 (2004). $[22]$ M. Li, Phys. Lett. B 603, 1 (2004). $[23]$ H. Wei and R. G. Cai, Phys. Lett. B 660, 113 (2008). $[24]$ R. G. Cai, Phys. Lett. B 657, 228 (2007). $[25]$ H. Wei and R. G. Cai, Phys. Lett. B 663, 1 (2008). $[26]$ A. G. Cohen, D. B. Kaplan and A. E. Nelson, Phys. Rev. Lett. 82, 4971 (1999); P. Horava and D. Minic, Phys. Rev. Lett. 85, 1610 (2000); S. D. Thomas, Phys. Rev. Lett. 89, 081301 (2002); M. Li, Phys. Lett. B 603, 1 (2004). $[27]$ M. R. Setare, JCAP 01, 023 (2007). $[28]$ N. Mazumder and S. Chakraborty, Gen. Rel. Grav. 42, 813 (2010). $[29]$ R. G. Cai and S. P. Kim, JHEP 02, 050 (2005). $[30]$ R. S. Bousso, Phys. Rev. D 71, 064024 (2005). $[31]$ G. Izquierdo and D. Pavon, Phys. Lett. B 633, 420 (2006). $[32]$ N. Mazumder and S. Chakraborty, Class. Quant. Grav. 26, 195016 (2009). $[33]$ C. L. Bennett et al., Astrophys. J. Suppl. 148, 1 (2003); D. N. Spergel, Astrophys. J. Suppl. 148, 175 (2003); M. Tegmark et al., Phys. Rev. D 69, 103501 (2004); U. Seljak, A. Slosar and P. McDonald, J. Cosmol. Astropart. Phys. 10, 014 (2006); D. N. Spergel et al., Astrophys. J. Suppl. 170, 377 (2007). $[34]$ X. Zhang and F. Q. Wu, Phys. Rev. D 72, 043524 (2005) [astro-ph/0506310].
Dynamical Study of DBI-essence in Loop Quantum Cosmology and Braneworld Jhumpa Bhadra${}^{1}$111bhadra.jhumpa@gmail.com and Ujjal Debnath${}^{1}$222ujjaldebnath@yahoo.com , ujjal@iucaa.ernet.in ${}^{1}$ Department of Mathematics, Bengal Engineering and Science University, Shibpur, Howrah-711 103, India. (December 4, 2020) Abstract We have studied a homogeneous and isotropic FRW model with dynamical dark energy in the form of a DBI-essence scalar field. The existence of cosmological scaling solutions restricts the Lagrangian of the scalar field $\phi$. Choosing $p=Xg(Xe^{\lambda\phi})$, where $X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi/2$ and $g$ is any function of $Xe^{\lambda\phi}$, and defining some suitable transformations, we have constructed the dynamical system in different gravity theories: (i) Loop Quantum Cosmology (LQC), (ii) the DGP braneworld and (iii) the RS-II braneworld. We have investigated the stability of this dynamical system around its critical points for the three gravity models and looked for the scalar-field-dominated attractor solutions that support an accelerated universe. The roles of the physical parameters during the accelerating phase of the universe have also been shown graphically. pacs: 98.80.Cq, 98.80.Vc, 98.80.-k, 04.20.Fy Contents I Introduction II Basic Equations in DBI-essence III Evaluation of dynamical system in LQC III.0.1 Critical points: III.0.2 Stability of the model: IV Brane World IV.1 Basic equations in DGP Brane model IV.1.1 Dynamical system IV.1.2 Critical Points: IV.1.3 Stability of the model: IV.2 Basic Equations in RS II Brane World IV.2.1 Dynamical system IV.2.2 Critical points: IV.2.3 Stability of the model: V Discussions I Introduction Cosmic acceleration is one of the most challenging observations in cosmology Riess ; Perlmutter2 .
The cause of this accelerating universe is termed dark energy, which dominates the universe (about 70% of its content), has a large negative pressure and violates the strong energy condition Sahni1 ; Peebles ; Padmanabhan . The most popular candidate for dark energy is the cosmological constant $\Lambda$ Sami , whose EoS parameter is $w=-1$. This kind of dark energy implies that the universe will accelerate forever. Another kind of dark energy, dubbed quintessence ($w<-1/3$), predicts that the acceleration will be replaced by deceleration in the far future. And for phantom energy ($w<-1$), the acceleration will turn into super-acceleration, which will eventually destroy every stable gravitational structure Zhang . There are also other candidates for dark energy, such as the Chaplygin gas Kamenshchik , the modified Chaplygin gas (MCG) Banerjee , the tachyonic field Sen , DBI-essence Martin , K-essence Armendariz-Picon and so on. Numerous works on dark energy have been done within Einstein's classical general relativity (GR). But most physicists believe that gravity should be quantized. Loop quantum gravity (LQG) is an outstanding effort to describe the quantum effects in our universe. In this theory the classical space-time continuum is replaced by a discrete quantum geometry. Nowadays several cosmological models (interacting dark energy models) are studied in the framework of LQC. Wu and Zhang Wu studied the cosmological evolution in LQC for the quintessence model. Chen et al Chen provided the parameter space for the existence of the accelerated scaling attractor in LQC with a more general interaction term. The modified Chaplygin gas coupled to dark matter was described in the framework of LQC by Debnath et al jamil , who resolved the famous cosmic coincidence problem of modern cosmology. There is another modification of gravity (brane-gravity) which also exhibits the acceleration of the present-day universe.
It is proposed that our universe is a 3-brane embedded in a five dimensional space. An important ingredient of the braneworld scenario is that the standard matter particles and forces are confined to the 3-brane, and the only communication between the brane and the bulk is through gravitational interaction (i.e., gravity can freely propagate in all dimensions) or some other dilatonic matter. The reviews Rubakov ; Maartens ; Brax ; Csa discuss different applications of brane-gravity, with special attention to cosmology. In this work we consider the two most popular brane models, namely the DGP and RS II branes. Regarding cosmological acceleration, dark energy with the energy density of a scalar field acts as subdominant during the radiation and dark matter eras and becomes dominant at late times. Dynamical systems theory has been applied with great success in cosmology and astrophysics within the context of general relativity. This theory is used to describe the behaviour of complex dynamical systems, usually by constructing differential equations, and deals with the long-term qualitative behaviour of the resulting first-order differential equations. It does not concentrate on finding precise solutions of the system but provides answers to questions such as whether the system is stable for long times and whether the stability depends on the initial conditions. Besides other scientific fields, this theory has now become widely useful in cosmological research. Cosmological scaling solutions play a significant role in the construction of different dark energy models Copeland ; Liddle ; Macorra ; Tsujikawa . Tsujikawa et al Tsujikawa ; Piazza proved that a scaling solution exists for coupled dark energy whenever the field Lagrangian is restricted to the form $p(X,\phi)=Xg(Xe^{\lambda\phi})$, where $X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi/2$ and $g$ is any function of $Xe^{\lambda\phi}$.
In reference Naskar , they also considered the interacting model with this Lagrangian form and studied the stability of the fixed points for several different dark energy models: the ordinary (phantom) field, the dilatonic ghost condensate and the (phantom) tachyon. The main aim of this work is to examine the behaviour of the different physical parameters of the universe around the stable critical points in LQC and in two braneworld models (DGP and RS II), in the presence of DBI-essence type dark energy along with dark matter and a suitable interaction term. Following the evolution of the universe, we find the effective state parameter $w_{eff}$ and the density parameters for dark energy ($\Omega_{\phi}$) and dark matter ($\Omega_{m}$), and examine the future dominance of the kinetic and potential energies. In this work, we have considered the field Lagrangian $p(X,\phi)=Xg(Xe^{\lambda\phi})$ and studied the dark energy model in different gravity theories: (i) LQC, (ii) the DGP braneworld, (iii) the RS II braneworld. We also derive the critical points of the dynamical system in each gravity theory and analyse their stability, and we perform numerical simulations for the LQC, DGP and RS II models. Some fruitful conclusions are drawn in section V. II Basic Equations in DBI-essence The action of the Dirac-Born-Infeld (DBI) scalar field $\phi$ can be written as (choosing $8\pi G=c=1$) Yamaguchi $$\displaystyle S_{DBI}=-\int d^{4}x\sqrt{-g}\left[T(\phi)\sqrt{1-\frac{\dot{\phi}^{2}}{T(\phi)}}-T(\phi)+V(\phi)\right]$$ (1) where $V(\phi)$ is the self-interacting potential and $T(\phi)$ is the warped brane tension. The kinetic term of the above action is non-canonical. Physically, this originates from the fact that the action of the system is proportional to the volume traced out by the brane during its motion. This volume is given by the square root of the induced metric, which automatically leads to a DBI kinetic term.
From the above action it is easy to determine the energy density and pressure of the DBI-essence scalar field, which are respectively given by $$\displaystyle\rho_{\phi}=(\gamma-1)T(\phi)+V(\phi)$$ (2) and $$\displaystyle p_{\phi}=\frac{(\gamma-1)}{\gamma}T(\phi)-V(\phi)$$ (3) where $\gamma$ is given by $$\displaystyle\gamma=\frac{1}{\sqrt{1-\frac{\dot{\phi}^{2}}{T(\phi)}}}$$ (4) From the above expression we observe that $T(\phi)>\dot{\phi}^{2}$ and thus $\gamma>1$. We consider a spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe containing a perfect fluid and a scalar field $\phi$. Assuming that there is an interaction between the scalar field (dark energy) and the perfect fluid (dark matter), the two components are not separately conserved. The energy balance equations for the interacting dark energy and dark matter can be expressed as Piazza $$\displaystyle\dot{\rho}_{\phi}+3H(1+w_{\phi})\rho_{\phi}=-Q\rho_{m}\dot{\phi}$$ (5) and $$\displaystyle\dot{\rho}_{m}+3H(1+w_{m})\rho_{m}=Q\rho_{m}\dot{\phi}$$ (6) where $\rho_{m}$ is the energy density of the dark matter, $w_{m}$ is the EoS parameter of the dark matter, $H=\frac{\dot{a}}{a}$ is the Hubble parameter, $a$ is the scale factor and $Q>0$ is the coupling between the dark energy (DBI-essence) and the dark matter. We define the fractional densities of dark energy and dark matter as $\Omega_{\phi}=\frac{\rho_{\phi}}{3H^{2}}$ and $\Omega_{m}=\frac{\rho_{m}}{3H^{2}}$. To get a stable attractor solution we must have $\gamma=$ constant; $p_{\phi}=Xg(Y)$ is the scalar field pressure, where $Y=Xe^{\lambda\phi}$, $X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi/2$ and $g$ is any function of $Y$ Tsujikawa ; Piazza .
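A quick numerical sanity check of equations (2)-(4) can be sketched as follows (the sample values of $\dot{\phi}$, $T$ and $V$ are arbitrary illustrative assumptions, not from the paper):

```python
import math

def dbi_state(phi_dot, T, V):
    """Energy density, pressure and EoS of the DBI field, Eqs. (2)-(4)."""
    gamma = 1.0 / math.sqrt(1.0 - phi_dot**2 / T)   # requires T > phi_dot^2
    rho = (gamma - 1.0) * T + V
    p = (gamma - 1.0) / gamma * T - V
    return rho, p, p / rho

# gamma > 1 whenever T > phi_dot^2, and w_phi -> -1 as phi_dot -> 0
rho, p, w = dbi_state(phi_dot=0.5, T=1.0, V=1.0)
print(w)                                  # -0.75 for these sample values
rho0, p0, w0 = dbi_state(phi_dot=1e-6, T=1.0, V=1.0)
print(round(w0, 6))                       # -1.0  (potential-dominated limit)
```

The second call illustrates that as the kinetic term switches off the field behaves like a cosmological constant, consistent with the $\gamma\to 1$ limit of (2) and (3).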
For DBI-essence we choose $$\displaystyle T(\phi)=\frac{\gamma^{2}}{\gamma^{2}-1}\dot{\phi}^{2},\qquad V(\phi)=V_{0}e^{\lambda\phi}$$ (7) Then the pressure $p_{\phi}$ and the energy density $\rho_{\phi}$ can be written in the form $$\displaystyle p_{\phi}=Xg(Y)$$ $$\displaystyle\rho_{\phi}=2X\frac{\partial p_{\phi}}{\partial X}-p_{\phi}=X[g(Y)+2Yg^{\prime}(Y)]$$ (8) whenever we choose $$\displaystyle g(Y)=\frac{2\gamma^{2}}{\gamma^{2}-1}-\frac{V_{0}}{Y}$$ (9) where the prime denotes the derivative with respect to $Y$. The total cosmic energy density $\rho=\rho_{\phi}+\rho_{m}$ satisfies the conservation equation $\dot{\rho}+3H(\rho+p)=0$, where $p=p_{\phi}+p_{m}$. III Evaluation of dynamical system in LQC The modified Friedmann equation in LQC is given by Wu ; Chen ; Fu $$\displaystyle H^{2}=\frac{\rho}{3}\left(1-\frac{\rho}{\rho_{c}}\right)$$ (10) Here $\rho_{c}\equiv\sqrt{3}\pi^{2}\eta^{3}G^{2}\hbar$ is the critical loop quantum density and $\eta$ is the dimensionless Barbero-Immirzi parameter. It should be noted that for our LQC model $\rho<\rho_{c}$. Consequently we obtain the modified Raychaudhuri equation (using the conservation law) $$\displaystyle\dot{H}=-\frac{1}{2}\left(p+\rho\right)\left(1-\frac{2\rho}{\rho_{c}}\right)$$ (11) We introduce the following dimensionless quantities $$\displaystyle x=\frac{\dot{\phi}}{\sqrt{6}H},\qquad y=\frac{e^{-\lambda\phi/2}}{\sqrt{3}H},\qquad z=\frac{\rho}{\rho_{c}}$$ (12) We see that $y$ and $z(<1)$ must be non-negative, but $x$ may or may not be positive, depending on the sign of $\dot{\phi}$.
Substituting the expressions for $x,~y$ and $z$ in equations (5), (6) and (11), we obtain the first-order differential equations in the form of an autonomous system as follows: $$\displaystyle\frac{dx}{dN}=-3x+\frac{3x}{2}\left[A(1-w_{m})x^{2}-\frac{(1+w_{m})\left\{1+V_{0}y^{2}(z-1)\right\}}{z-1}\right](1-2z)+$$ $$\displaystyle\frac{\sqrt{6}Qx^{2}\left\{1+Ax^{2}(z-1)+V_{0}y^{2}(z-1)\right\}}{2Ax^{2}(z-1)}+\frac{\sqrt{6}\lambda V_{0}y^{2}(3x^{2}-4y^{3})(1-z)}{2Ax^{2}(z-1)}$$ (13) $$\displaystyle\frac{dy}{dN}=\frac{3y}{2}\left[A(1-w_{m})x^{2}-\frac{(1+w_{m})\left\{1+V_{0}y^{2}(z-1)\right\}}{z-1}\right](1-2z)-\frac{\sqrt{6}}{2}\lambda xy$$ (14) $$\displaystyle\frac{dz}{dN}=-3\left[A(1-w_{m})x^{2}-\frac{(1+w_{m})\left\{1+V_{0}y^{2}(z-1)\right\}}{z-1}\right](1-z)z$$ (15) $$\displaystyle\frac{1}{H}\frac{dH}{dN}=-\frac{3}{2}\left[A(1-w_{m})x^{2}-\frac{(1+w_{m})\left\{1+V_{0}y^{2}(z-1)\right\}}{z-1}\right](1-2z)$$ (16) where $A=\frac{2\gamma^{2}}{\gamma^{2}-1}$, $N=\ln a$ is the number of $e$-folds and $a$ is the scale factor. In terms of the new variables $x,~y,~z$ we obtain the following physical parameters: $$\displaystyle\Omega_{\phi}=Ax^{2}+V_{0}y^{2},\qquad w_{\phi}=\frac{Ax^{2}-V_{0}y^{2}}{Ax^{2}+V_{0}y^{2}}$$ (17) $$\displaystyle w_{eff}=\frac{p_{\phi}+p_{m}}{\rho_{\phi}+\rho_{m}}=\frac{w_{m}+(Ax^{2}-V_{0}y^{2})(1-z)}{1+Ax^{2}(1-z)+V_{0}y^{2}(1-z)}$$ (18) The density parameters $\Omega_{\phi}=\frac{\rho_{\phi}}{3H^{2}}$ and $\Omega_{m}=\frac{\rho_{m}}{3H^{2}}$ satisfy $$\displaystyle\Omega_{\phi}+\Omega_{m}+\Omega_{LQC}=1$$ where $\Omega_{LQC}=\frac{\rho}{\rho-\rho_{c}}$ is the density parameter due to the effect of LQC. Since $\rho<\rho_{c}$, we have $\Omega_{LQC}<0$ in our case. The new variables ($x,y,z$) have been drawn in figure 1 with respect to $N=\ln a$; all remain positive, in accordance with the expansion of the universe.
Also $\Omega_{\phi}$, $\Omega_{m}$ and $w_{eff}$ have been drawn in figure 2. During the evolution $\Omega_{m}$ stays at a lower value ($<1$) while $\Omega_{\phi}$ takes values $>1$, so $\Omega_{\phi}$ attains a higher value than $\Omega_{m}$; at late times the dark energy (DBI-essence) dominates over the dark matter. Also $w_{eff}$ takes negative values below $-0.5$, which indicates the dark energy dominated phase of the universe. III.0.1 Critical points: The critical points can be obtained by setting $\frac{dx}{dN}=0$, $\frac{dy}{dN}=0$ and $\frac{dz}{dN}=0$, and are presented in the following table. Table 1: The critical points ($x_{c},~y_{c},~z_{c}$) and the corresponding values of the density parameter $\Omega_{\phi}$.
No.    $x_{c}$                                   $y_{c}$    $z_{c}$                                            $\Omega_{\phi}$
(i)    $\frac{1}{\sqrt{A}}$                      0          0                                                  1
(ii)   $-\frac{1}{\sqrt{A}}$                     0          0                                                  1
(iii)  $\frac{\sqrt{\frac{2}{3}}Q}{A(w_{m}-1)}$  0          0                                                  $\frac{2Q^{2}}{3A(1-w_{m})^{2}}$
(iv)   $\frac{\sqrt{6}(1+w_{m})}{2Q}$            0          $\frac{3A(1-w_{m}^{2}+2Q^{2})}{3A(1-w_{m}^{2})}$   $\frac{3A(1+w_{m})^{2}}{2Q^{2}}$
From Table 1 we see that the component $y_{c}$ is equal to zero for all four critical points. The value of $\Omega_{\phi}$ is 1 for the critical points (i) and (ii); these correspond to an accelerated phase of the universe. A similar behaviour occurs for the other two critical points, (iii) and (iv), but there it depends on the interaction term $Q$, on $A$ and on $w_{m}$. III.0.2 Stability of the model: The stability around the critical points can be determined by the signs of the corresponding eigenvalues. If the eigenvalues corresponding to a critical point are all negative, the critical point is a stable node; otherwise it is unstable.
The eigenvalues for the above critical points are the following. Table 2: The eigenvalues corresponding to the critical points ($x_{c},~y_{c},~z_{c}$).
No.    Value 1                                                    Value 2                                    Value 3
(i)    $-6$                                                       $3+\frac{\sqrt{6}Q}{\sqrt{A}}-3w_{m}$      $3-\sqrt{\frac{3\lambda}{2A}}$
(ii)   $-6$                                                       $3-\frac{\sqrt{6}Q}{\sqrt{A}}-3w_{m}$      $3+\sqrt{\frac{3\lambda}{2A}}$
(iii)  $-\frac{3}{2}+\frac{3w_{m}}{2}+\frac{Q^{2}}{A(1-w_{m})}$   $-\frac{2Q^{2}}{A(1-w_{m})}-3(1+w_{m})$    $-\frac{3A(1-w_{m}^{2})+2Q(Q+\lambda)}{2A(-1+w_{m})}$
(iv)   $-\frac{3}{2}\left(1+R\right)$                             $\frac{3}{2}\left(-1+R\right)$             $\frac{3(1+w_{m})\lambda}{2Q}$
where $$R=\sqrt{\frac{6A(-1+w_{m})(1+w_{m})^{2}-Q^{2}(3+4w_{m})}{Q^{2}}}$$ From Table 2 we observe the following: (a) One eigenvalue of the critical point (i) is positive, since $3(1-w_{m})+\frac{\sqrt{6}Q}{\sqrt{A}}>0$, so the system is not stable around this critical point. (b) One eigenvalue of the critical point (ii) is positive, since $3+\sqrt{\frac{3\lambda}{2A}}>0$, so the system is not stable around this critical point. (c) If $\gamma^{2}<1$ and $Q(Q+\lambda)>\frac{3A(1-w_{m}^{2})}{2}$, the eigenvalues of the critical point (iii) are all negative and hence the system is stable (a node). (d) All eigenvalues of the critical point (iv) are negative if $\lambda<0$ and $R<1$; then the system may be stable, otherwise it will be unstable.
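The stability criterion used above (all eigenvalues of the linearized system negative at a critical point) can be illustrated on a toy two-dimensional system. The sketch below is not the paper's system; it builds the Jacobian by central finite differences and reads the eigenvalues off the trace and determinant:

```python
import cmath

# Toy autonomous system dx/dN = f(x, y), dy/dN = g(x, y)
# with a critical point at (0, 0); an illustrative stand-in, not Eqs. (13)-(15).
def f(x, y): return -2.0 * x + x * y
def g(x, y): return -0.5 * y + x * x

def jacobian(F, G, x0, y0, h=1e-6):
    """2x2 Jacobian of (F, G) at (x0, y0) by central differences."""
    return [[(F(x0 + h, y0) - F(x0 - h, y0)) / (2*h),
             (F(x0, y0 + h) - F(x0, y0 - h)) / (2*h)],
            [(G(x0 + h, y0) - G(x0 - h, y0)) / (2*h),
             (G(x0, y0 + h) - G(x0, y0 - h)) / (2*h)]]

def eigenvalues_2x2(J):
    """Eigenvalues from the characteristic polynomial l^2 - tr l + det = 0."""
    tr = J[0][0] + J[1][1]
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    disc = cmath.sqrt(tr*tr - 4.0*det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

J = jacobian(f, g, 0.0, 0.0)
l1, l2 = eigenvalues_2x2(J)
stable = l1.real < 0 and l2.real < 0
print(l1.real, l2.real, stable)   # -0.5 -2.0 True -> stable node
```

For the paper's three-dimensional systems the same recipe applies, with a 3x3 Jacobian evaluated at each critical point of Table 1.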
IV Brane World IV.1 Basic equations in DGP Brane model An effective model of brane-gravity is the Dvali-Gabadadze-Porrati (DGP) braneworld model Dvali ; Deffayet , which represents our 4-dimensional universe as an FRW brane embedded in a 5-dimensional Minkowski bulk. It explains the origin of DE as gravity on the brane leaking into the bulk at large scales. On the 4-dimensional brane the action of gravity is proportional to $M_{p}^{2}$, whereas in the bulk it is proportional to the corresponding 5-dimensional quantity. The modified Friedmann equation in the DGP brane model, for a flat, homogeneous and isotropic brane, is given by $$\displaystyle H^{2}=\left(\sqrt{\frac{\rho}{3}+\frac{1}{4r_{c}^{2}}}+\epsilon\frac{1}{2r_{c}}\right)^{2}$$ (19) where $\rho$ is the cosmic fluid energy density, $H=\frac{\dot{a}}{a}$ is the Hubble parameter, $r_{c}=\frac{M_{p}^{2}}{2M_{5}^{2}}$ is the crossover scale which governs the transition from 4D to 5D behaviour, and $\epsilon=\pm 1$. For $\epsilon=+1$ we have the standard DGP$(+)$ model, which is self-accelerating without any form of DE and whose effective $w$ is always non-phantom. However, for $\epsilon=-1$ we have the DGP$(-)$ model, which does not self-accelerate and requires DE on the brane. Using (19), the modified Raychaudhuri equation becomes (choosing $8\pi G=c=1$) $$\displaystyle\left(2H-\epsilon\frac{1}{r_{c}}\right)\dot{H}=-H(\rho+p)$$ (20) IV.1.1 Dynamical system For the dynamical analysis of our DGP braneworld model of the universe, we define the following dimensionless quantities: $$\displaystyle x=\frac{\dot{\phi}}{\sqrt{6}H},\qquad y=\frac{e^{-\lambda\phi/2}}{\sqrt{3}H},\qquad z=2H-\frac{\epsilon}{r_{c}}$$ (21) We see that $y$ must be non-negative, but $x$ and $z$ may or may not be positive. When $\phi$ is increasing $x$ is positive, and when $\phi$ decreases $x$ is negative. Also $\epsilon=-1$ implies $z>0$, but for $\epsilon=+1$ the value of $z$ may or may not be positive.
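The two branches of the DGP Friedmann equation (19) can be probed numerically. This sketch (with illustrative values of $r_{c}$ and $\rho$, not from the paper) confirms that the $\epsilon=+1$ branch self-accelerates toward $H=1/r_{c}$ as $\rho\to 0$, while at high density both branches recover the standard $H^{2}\simeq\rho/3$:

```python
import math

def H_dgp(rho, r_c, eps):
    """Hubble rate from the DGP Friedmann equation (19), 8 pi G = c = 1."""
    return math.sqrt(rho / 3.0 + 1.0 / (4.0 * r_c**2)) + eps / (2.0 * r_c)

r_c = 10.0
# epsilon = +1: as rho -> 0, H -> 1/r_c (self-accelerating branch)
print(H_dgp(0.0, r_c, +1))          # 0.1 = 1/r_c
# epsilon = -1: as rho -> 0, H -> 0 (no self-acceleration; DE needed)
print(H_dgp(0.0, r_c, -1))          # 0.0
# high-density limit: the branch correction becomes negligible
rho = 1e6
print(H_dgp(rho, r_c, +1) / math.sqrt(rho / 3.0))   # close to 1.0
```

This makes concrete the statement above that DGP$(+)$ accelerates without any dark energy while DGP$(-)$ requires it.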
Now we introduce the density parameters $\Omega_{\phi}=\frac{\rho_{\phi}}{3H^{2}}$ and $\Omega_{m}=\frac{\rho_{m}}{3H^{2}}$ satisfying $$\displaystyle\Omega_{\phi}+\Omega_{m}+\Omega_{DGP}=1$$ (22) where $\Omega_{DGP}=\frac{\epsilon}{r_{c}H}$ is the density parameter due to the effect of the DGP braneworld, with the physical parameters $$\displaystyle\Omega_{\phi}=Ax^{2}+V_{0}y^{2},\qquad w_{\phi}=\frac{Ax^{2}-V_{0}y^{2}}{Ax^{2}+V_{0}y^{2}}$$ (23) $$\displaystyle w_{eff}=\frac{p_{\phi}+p_{m}}{\rho_{\phi}+\rho_{m}}=\frac{4r_{c}^{2}(Ax^{2}-V_{0}y^{2})+w_{m}\left[-1-4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})-4r_{c}\epsilon+\epsilon^{2}\right]}{(2r_{c}+\epsilon)^{2}-1}$$ (24) Using all this, and defining $N=\ln a$ (the number of $e$-folds), we get the system of equations as follows: $$\displaystyle\frac{dx}{dN}=-3x+\frac{3x}{2z}\left(z+\frac{\epsilon}{r_{c}}\right)\left(2Ax^{2}-\frac{(1+w_{m})\{1+4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})+4r_{c}\epsilon-\epsilon^{2}\}}{4r_{c}^{2}}\right)$$ $$\displaystyle+\frac{\sqrt{6}[Q\{1+4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})+4r_{c}\epsilon-\epsilon^{2}\}+4V_{0}\lambda r_{c}^{2}y^{2}]}{8Ar_{c}^{2}}$$ (25) $$\displaystyle\frac{dy}{dN}=\frac{y}{2}\left[\frac{3}{z}\left(z+\frac{\epsilon}{r_{c}}\right)\left(2Ax^{2}-\frac{(1+w_{m})\{1+4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})+4r_{c}\epsilon-\epsilon^{2}\}}{4r_{c}^{2}}\right)+\sqrt{6}\lambda x\right]$$ (26) $$\displaystyle\frac{dz}{dN}=-\frac{3}{2z}\left(z+\frac{\epsilon}{r_{c}}\right)^{2}\left(2Ax^{2}-\frac{(1+w_{m})\{1+4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})+4r_{c}\epsilon-\epsilon^{2}\}}{4r_{c}^{2}}\right)$$ (27) $$\displaystyle\frac{1}{H}\frac{dH}{dN}=-\frac{3}{2z}\left(z+\frac{\epsilon}{r_{c}}\right)\left(2Ax^{2}-\frac{(1+w_{m})\{1+4r_{c}^{2}(-1+Ax^{2}+V_{0}y^{2})+4r_{c}\epsilon-\epsilon^{2}\}}{4r_{c}^{2}}\right)$$ (28) The new variables ($x,y,z$) have been drawn in figures 3 and 5 with respect to $N=\ln a$ for the DGP($+$) and DGP($-$) models respectively.
In all the cases $x$ is negative, i.e., the DBI scalar field $\phi$ decreases during the expansion, while $y$ and $z$ keep a positive sign. The effective EoS parameter $w_{eff}$ and the density parameters $\Omega_{\phi},~\Omega_{m}$ are shown in figures 4 and 6 for the DGP($+$) and DGP($-$) models respectively. During the expansion $\Omega_{\phi}$ increases and $\Omega_{m}$ decreases, which shows that the dark energy dominates at late times. Also $w_{eff}$ decreases from about $-0.4$ to $-1$ for both models, which again shows the dark energy dominated phase of the universe. IV.1.2 Critical Points: The critical points can be obtained by setting $\frac{dx}{dN}=0$, $\frac{dy}{dN}=0$ and $\frac{dz}{dN}=0$. The possible critical points ($x_{c},y_{c},z_{c}$) and the corresponding values of $\Omega_{\phi}$ of our DGP model are given by (i) $\left(0,\sqrt{\frac{Q\{-1+(\epsilon-2r_{c})^{2}\}}{4V_{0}(Q+\lambda)r_{c}^{2}}},-\frac{\epsilon}{r_{c}}\right)$, $\Omega_{\phi}=\frac{Q\{-1+(\epsilon-2r_{c})^{2}\}}{4(Q+\lambda)r_{c}^{2}}$ (ii) $\left(\frac{\sqrt{6}Ar_{c}^{2}+\sqrt{Ar_{c}^{2}[6Ar_{c}^{2}+Q^{2}\{-1+(\epsilon-2r_{c})^{2}\}]}}{2AQr_{c}^{2}},0,-\frac{\epsilon}{r_{c}}\right)$, $\Omega_{\phi}=\frac{\left(\sqrt{6}Ar_{c}^{2}+\sqrt{Ar_{c}^{2}[6Ar_{c}^{2}+Q^{2}\{-1+(\epsilon-2r_{c})^{2}\}]}\right)^{2}}{4AQ^{2}r_{c}^{4}}$ (iii) $\left(\frac{\sqrt{6}Ar_{c}^{2}-\sqrt{Ar_{c}^{2}[6Ar_{c}^{2}+Q^{2}\{-1+(\epsilon-2r_{c})^{2}\}]}}{2AQr_{c}^{2}},0,-\frac{\epsilon}{r_{c}}\right)$, $\Omega_{\phi}=\frac{\left(-\sqrt{6}Ar_{c}^{2}+\sqrt{Ar_{c}^{2}[6Ar_{c}^{2}+Q^{2}\{-1+(\epsilon-2r_{c})^{2}\}]}\right)^{2}}{4AQ^{2}r_{c}^{4}}$ Whether $\Omega_{\phi}$ is greater or less than 1 for the critical points (i)-(iii) depends on the values of $r_{c}$, $A$ and the interaction term $Q$. IV.1.3 Stability of the model: The stability around the critical points can be determined by the signs of the corresponding eigenvalues.
If the eigenvalues corresponding to a critical point are all negative, the critical point is a stable node; otherwise it is unstable. The eigenvalues for the above critical points are obtained as follows: Table 3: The eigenvalues corresponding to the critical points ($x_{c},~{}y_{c},~{}z_{c}$).   NO:        Value1                                     Value2                                   Value3   (i)          $$0$$               $$-\frac{3Ar_{c}^{2}+\sqrt{9A^{2}r_{c}^{4}-3AQr_{c}^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\lambda}}{2Ar_{c}^{2}}$$             $$\frac{-3Ar_{c}^{2}+\sqrt{9A^{2}r_{c}^{4}-3AQr_{c}^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\lambda}}{2Ar_{c}^{2}}$$   (ii)         $$0$$               $$\frac{\sqrt{3\left[Ar_{c}^{2}\{6Ar_{c}^{2}+Q^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\}\right]}}{\sqrt{2}Ar_{c}^{2}}$$             $$-\frac{\left(6Ar_{c}^{2}+\sqrt{6\left[Ar_{c}^{2}\{6Ar_{c}^{2}+Q^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\}\right]}\right)\lambda}{4Ar_{c}^{2}}$$   (iii)         $$0$$               $$-\frac{\sqrt{3\left[Ar_{c}^{2}\{6Ar_{c}^{2}+Q^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\}\right]}}{\sqrt{2}Ar_{c}^{2}}$$             $$\frac{\left(-6Ar_{c}^{2}+\sqrt{6\left[Ar_{c}^{2}\{6Ar_{c}^{2}+Q^{2}\{-1+(-2r_{c}+\epsilon)^{2}\}\}\right]}\right)\lambda}{4Ar_{c}^{2}}$$ From Table 3, we see that one eigenvalue is zero for each of the three critical points. Hence the dynamical system is unstable around all the critical points. IV.2 Basic Equations in RS II Brane World Randall and Sundrum Randall1 ; Randall2 elucidated the higher-dimensional scenario by introducing a bulk-brane model dubbed the RS II brane model. They proposed that we live in a four-dimensional world (called a 3-brane, a domain wall) which is embedded in a 5D spacetime (the bulk). All matter fields are confined to the brane, while only gravity can propagate in the bulk. 
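The stability recipe applied throughout this section (set the right-hand sides $dX/dN$ to zero, linearize, and read off the signs of the Jacobian eigenvalues) can be sketched numerically. The snippet below is a minimal illustration, not part of the paper's analysis: it applies the recipe to the simpler canonical scalar-field system of Copeland, Liddle and Wands (ref. (20) in this paper's list), with illustrative parameter values $\lambda=1$, $w_{m}=0$; the DBI brane-world systems above and below are treated the same way, only with more involved right-hand sides.

```python
import numpy as np
from scipy.optimize import fsolve

lam, w = 1.0, 0.0  # illustrative values (assumed, not from the paper)

def rhs(v):
    # Canonical quintessence system of Copeland-Liddle-Wands:
    #   x ~ kinetic part, y ~ potential part of the scalar-field density.
    x, y = v
    s = 1.5 * ((1 - w) * x**2 + (1 + w) * (1 - y**2))
    return [-3 * x + np.sqrt(1.5) * lam * y**2 + x * s,
            -np.sqrt(1.5) * lam * x * y + y * s]

def jacobian(v, eps=1e-7):
    # Numerical linearization around a point (forward differences).
    f0 = np.array(rhs(v))
    J = np.empty((2, 2))
    for i in range(2):
        dv = np.array(v, dtype=float)
        dv[i] += eps
        J[:, i] = (np.array(rhs(dv)) - f0) / eps
    return J

# Scalar-field-dominated critical point: x_c = lam/sqrt(6), y_c = sqrt(1 - lam^2/6)
crit = fsolve(rhs, [0.4, 0.9])
eigs = np.linalg.eigvals(jacobian(crit))
print(crit, eigs)  # both eigenvalues negative -> stable node (attractor)
```

For $\lambda^{2}<3(1+w_{m})$ this fixed point is the known late-time attractor with $\Omega_{\phi}=1$; the same two steps (root finding plus eigenvalues of the Jacobian) apply verbatim to the DGP and RS II right-hand sides.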
In the RS II brane world, the modified Einstein equations for a flat universe are $$\displaystyle 3H^{2}=\Lambda_{4}+\kappa_{4}^{2}\rho+\frac{\kappa_{4}^{2}}{2\lambda_{1}}\rho^{2}+\frac{6}{\lambda_{1}\kappa_{4}^{2}}{\cal U}$$ (29) $$\displaystyle 2\dot{H}+3H^{2}=\Lambda_{4}-\kappa_{4}^{2}p-\frac{\kappa_{4}^{2}}{2\lambda_{1}}\rho p-\frac{\kappa_{4}^{2}}{2\lambda_{1}}\rho^{2}-\frac{2}{\lambda_{1}\kappa_{4}^{2}}{\cal U}$$ (30) Here $\kappa_{4}$ and $\Lambda_{4}$ are the 4D gravitational constant and the effective 4D cosmological constant, respectively. The dark radiation ${\cal U}$ satisfies the relation $$\displaystyle\dot{{\cal U}}+4H{\cal U}=0$$ (31) IV.2.1 Dynamical system Here we introduce the new variables $$\displaystyle x=\frac{\dot{\phi}}{\sqrt{6}H},~{}~{}~{}~{}y=\frac{e^{-\lambda\phi/2}}{\sqrt{3}H},~{}~{}~{}~{}~{}z=\frac{\rho}{2\lambda_{1}}$$ (32) We also introduce the fractional density parameters $\Omega_{\phi}=\frac{\rho_{\phi}}{3H^{2}}$ and $\Omega_{m}=\frac{\rho_{m}}{3H^{2}}$, satisfying $$\displaystyle\Omega_{\phi}+\Omega_{m}+\Omega_{RSII}=1$$ (33) where $\Omega_{RSII}=1-\frac{1}{\kappa_{4}^{2}(1+z)}$ is the density parameter due to the effect of the RS II brane world, with $$\displaystyle\Omega_{\phi}=Ax^{2}+V_{0}y^{2},~{}~{}~{}~{}~{}w_{\phi}=\frac{Ax^{2}-V_{0}y^{2}}{Ax^{2}+V_{0}y^{2}}$$ (34) $$\displaystyle w_{eff}=\kappa_{4}^{2}(1+z)\left[A(1-w_{m})x^{2}-V_{0}(1+w_{m})y^{2}+\frac{w_{m}}{\kappa_{4}^{2}(1+z)}\right]$$ (35) In the absence of the cosmological constant and dark radiation ($\Lambda_{4}={\cal U}=0$), the above equations reduce to the dynamical system of equations as follows $$\displaystyle\frac{dx}{dN}=-3x+\frac{3}{2}\kappa_{4}^{2}x\left[\left(Ax^{2}+V_{0}y^{2}\right)z+2Ax^{2}(1+z)+\{1+w_{m}+(2+w_{m})z\}\left(-Ax^{2}-V_{0}y^{2}+\frac{1}{\kappa_{4}^{2}(1+z)}\right)\right]$$ $$\displaystyle-\frac{\sqrt{3}\left[-Q\left(Ax^{2}+V_{0}y^{2}\right)+\frac{Q}{\kappa_{4}^{2}(1+z)}-V_{0}\lambda y^{2}\right]}{\sqrt{2}A}$$ (36) 
$$\displaystyle\frac{dy}{dN}=\frac{y}{2}\left[6-\frac{3}{1+z}+3A\kappa_{4}^{2}(1-w_{m})x^{2}(1+z)-3\kappa_{4}^{2}V_{0}y^{2}(1+z)-3w_{m}\{-1+\kappa_{4}^{2}V_{0}y^{2}(1+z)\}-\sqrt{6}\lambda x\right]$$ (37) $$\displaystyle\frac{dz}{dN}=-3\kappa_{4}^{2}z(1+z)\left[Ax^{2}(1-w_{m})+(1+w_{m})\left(\frac{1}{\kappa_{4}^{2}(z+1)}-V_{0}y^{2}\right)\right]$$ (38) $$\displaystyle\frac{1}{H}\frac{dH}{dN}=-\frac{3}{2}\kappa_{4}^{2}\left[\left(Ax^{2}+V_{0}y^{2}\right)z+2Ax^{2}(1+z)+\{1+w_{m}+(2+w_{m})z\}\left(-Ax^{2}-V_{0}y^{2}+\frac{1}{\kappa_{4}^{2}(1+z)}\right)\right]$$ (39) The evolution of the new variables ($x,y,z$) is plotted against $N=\ln a$ in figure 7 for the RS II model. We see that $x,y,z$ remain positive throughout the evolution. The effective EoS parameter $w_{eff}$ and the density parameters $\Omega_{\phi},~{}\Omega_{m}$ are shown in figure 8. During the expansion, $\Omega_{\phi}$ increases and $\Omega_{m}$ decreases, which shows that dark energy dominates at late times. Also, $w_{eff}$ decreases from about $0.1$ to values below $-1$, which indicates a dark-energy-dominated universe entering a phantom phase. IV.2.2 Critical points: The critical points can be obtained by setting $\frac{dx}{dN}=0$, $\frac{dy}{dN}=0$ and $\frac{dz}{dN}=0$. The feasible critical points ($x_{c},y_{c},z_{c}$) and the corresponding values of $\Omega_{\phi}$ of our RS II model are given in the following: Table 4: The critical points ($x_{c},~{}y_{c},~{}z_{c}$) and the corresponding values of the density parameter $\Omega_{\phi}$.   No.  
$$x_{c}$$                           $$y_{c}$$                          $$z_{c}$$                                               $$\Omega_{\phi}$$   (i)  $$\frac{\sqrt{2}Q}{\sqrt{3}A\kappa_{4}^{2}(w_{m}-1)}$$                    0                          0                                       $$\frac{2Q^{2}}{3A\kappa_{4}^{4}(w_{m}-1)^{2}}$$   (ii) $$\frac{\sqrt{2}Q+\sqrt{2Q^{2}-3A\kappa_{4}^{2}(w_{m}^{2}-1)}}{\sqrt{3}A\kappa_{4}^{2}(w_{m}-1)}$$      0      $$-\frac{6A\kappa_{4}^{2}(-1+w_{m}^{2})+2Q(2Q+\sqrt{4Q^{2}-6A\kappa_{4}^{2}(w_{m}^{2}-1)})}{3A\kappa_{4}^{2}(w_{m}^{2}-1)}$$     $$\frac{\left(\sqrt{2}Q+\sqrt{2Q^{2}-3A\kappa_{4}^{2}(w_{m}^{2}-1)}\right)^{2}}{3A\kappa_{4}^{4}(w_{m}-1)^{2}}$$   (iii) $$\frac{\sqrt{6}Q-\sqrt{6Q^{2}-9A\kappa_{4}^{2}(w_{m}^{2}-1)}}{3A\kappa_{4}^{2}(w_{m}-1)}$$      0      $$\frac{-6A\kappa_{4}^{2}(-1+w_{m}^{2})+2Q(2Q+\sqrt{4Q^{2}-6A\kappa_{4}^{2}(w_{m}^{2}-1)})}{3A\kappa_{4}^{2}(w_{m}^{2}-1)}$$         $$\frac{\left(-\sqrt{2}Q+\sqrt{2Q^{2}-3A\kappa_{4}^{2}(w_{m}^{2}-1)}\right)^{2}}{3A\kappa_{4}^{4}(w_{m}-1)^{2}}$$ From Table 4, we see that $y_{c}=0$ for all three critical points. Whether $\Omega_{\phi}>1$ or $<1$ at the critical points (i)-(iii) depends on the values of $w_{m}$, $A$ and the interaction term $Q$. IV.2.3 Stability of the model: Now the stability around the critical points can be determined by the signs of the corresponding eigenvalues. If the eigenvalues corresponding to a critical point are all negative, the critical point is a stable node; otherwise it is unstable. The eigenvalues for the first critical point (i) are given below. The eigenvalues for critical points (ii) and (iii) are very difficult to obtain, so we do not consider those critical points here. Table 5: The eigenvalues corresponding to the first critical point ($x_{c},~{}y_{c},~{}z_{c}$).   
NO:        Value1                                     Value2                                   Value3   (i)          $$\frac{2Q^{2}}{A\kappa_{4}^{2}(w_{m}-1)}-3(1+w_{m})$$               $$-\frac{Q^{2}}{A\kappa_{4}^{2}(w_{m}-1)}-\frac{3}{2}(1-w_{m})$$             $$\frac{3A\kappa_{4}^{2}(-1+w_{m}^{2})-2Q(Q+\lambda)}{2A\kappa_{4}^{2}(-1+w_{m})}$$ At the critical point (i), the three eigenvalues cannot all be negative simultaneously, since $Q$ and $w_{m}$ are small quantities, so the dynamical system is unstable in this case. At the critical points (ii) and (iii) we cannot carry out the stability analysis. V Discussions In this work, we have studied a homogeneous and isotropic FRW model with dynamical DBI-essence scalar-field dark energy in the presence of a perfect fluid with a barotropic equation of state (i.e., $p_{m}=w_{m}\rho_{m}$). The existence of cosmological scaling solutions restricts the Lagrangian of the scalar field $\phi$. The stable attractor solution can be found only for $\gamma=$ constant. We have chosen the potential function for DBI-essence as $V(\phi)=V_{0}e^{\lambda\phi}$. Choosing $p=Xg(Xe^{\lambda\phi})$, where $X=-g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi/2$ and $g$ is an arbitrary function of $Xe^{\lambda\phi}$, and defining suitable transformations, we have constructed the dynamical system in different gravity theories: (i) Loop Quantum Cosmology (LQC), (ii) the DGP brane world and (iii) the RS II brane world. For all gravity models, $\Omega_{m}$ gradually decreases to a small positive value and $\Omega_{\phi}$ gradually increases to a value close to 1. This means that the DBI dark energy dominates over dark matter at late times. Also, from the figures of $w_{eff}$, we see that $w_{eff}$ remains negative at late times. For the LQC model, $w_{eff}$ lies between $-0.5$ and $-1$, which corresponds to the dark-energy-dominated phase. For the DGP model, $w_{eff}$ lies between $-0.4$ and $-1$. 
So for the LQC and DGP models, the DBI dark energy is valid only in the quintessence era; it cannot generate a phantom era. In the RS II model, $w_{eff}<-0.1$ and decreases to $-1$ up to a certain stage, after which $w_{eff}$ becomes less than $-1$. So in the RS II model, the DBI dark energy covers the quintessence era and, at late times, the phantom era. We have found the critical points, investigated the stability of the dynamical system around them for the three gravity models, and examined the scalar-field-dominated attractor solution that supports an accelerating universe. Gumjudpai et al Naskar considered the dynamical-system analysis for phantom-field, tachyonic-field and dilaton models of dark energy. They analyzed different scalar-field dark energy models coupled with a barotropic perfect fluid and showed that the scaling solution is stable if the state parameter satisfies $w_{\phi}>-1$, while the scalar-field-dominated solution becomes unstable. The fixed points are always classically stable for a phantom field, implying that the universe is eventually dominated by the energy density of a scalar field if a phantom field is responsible for dark energy. Therefore, in this case the final attractor is either a scaling solution with constant $\Omega_{\phi}$ satisfying $0<\Omega_{\phi}<1$ or a scalar-field-dominated solution with $\Omega_{\phi}=1$. For our LQC model, four critical points have been found, of which only two may be stable nodes while the other two are unstable. For the DGP model, three critical points have been found, but they are all unstable. For the RS II model, the calculated critical point is unstable as well. A scaling attractor was established by Martin and Yamaguchi Martin for a dark energy model with a DBI field. In recent times, interacting dark energy models have been explored in the framework of loop quantum cosmology (LQC). 
In that framework, an interacting MCG with dark matter has been studied by constructing a dynamical system, exhibiting a scaling attractor solution that resolves the cosmic coincidence problem of modern cosmology jamil . A dynamical system has also been explored separately in the DGP and RS II brane worlds with a suitable interacting dark energy coupled to a dark matter model Rudra , where it was found that in both scenarios the universe follows a power-law form of expansion around the critical point. In conclusion, DBI-essence plays an important role as dark energy for the FRW model of the universe in loop quantum cosmology, driving the acceleration of the universe. Acknowledgement: The authors are thankful to IUCAA, Pune, India for warm hospitality, where part of the work was carried out. One of the authors (JB) is thankful to CSIR, Govt. of India for providing a Junior Research Fellowship. References (1) A. G. Riess et al, Astron. J. 116, 1009 (1998). (2) S. Perlmutter et al, Astrophys. J. 517, 565 (1999). (3) V. Sahni and A. A. Starobinsky, Int. J. Mod. Phys. D 9, 373 (2000). (4) P. J. E. Peebles and B. Ratra, Rev. Mod. Phys. 75, 559 (2003). (5) T. Padmanabhan, Phys. Rept. 380, 235 (2003). (6) E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006). (7) X. Zhang, Int. J. Mod. Phys. D 14, 1597 (2005). (8) A. Kamenshchik et al, Phys. Lett. B 511, 265 (2001). (9) U. Debnath, A. Banerjee and S. Chakraborty, Class. Quant. Grav. 21, 5609 (2004). (10) A. Sen, JHEP 0207, 065 (2002). (11) J. Martin and M. Yamaguchi, Phys. Rev. D 77, 103508 (2008). (12) C. Armendariz-Picon et al, Phys. Rev. D 63, 103510 (2001). (13) P. Wu and S. N. Zhang, J. Cosmol. Astropart. Phys. 06, 007 (2008). (14) S. Chen, B. Wang and J. Jing, Phys. Rev. D 78, 123503 (2008). (15) M. Jamil and U. Debnath, Astrophys. Space Sci. 333 (2011). (16) V. A. Rubakov, Phys. Usp. 44, 871 (2001). (17) R. Maartens, Living Rev. Relativity 7, 7 (2004). (18) P. Brax et al, Rep. Prog. Phys. 67, 2183 (2004). (19) C. Csáki, Phys. Rev. D 70, 044039 (2004). (20) E. J. Copeland, A. R. Liddle and D. Wands, Phys. Rev. D 57, 4686 (1998). (21) A. R. Liddle and R. J. Scherrer, Phys. Rev. D 59, 023509 (1999). (22) S. Mizuno, S. J. Lee and E. J. Copeland, Phys. Rev. D 70, 043525 (2004); E. J. Copeland, S. J. Lee, J. E. Lidsey and S. Mizuno, Phys. Rev. D 71, 023526 (2005); M. Sami, N. Savchenko and A. Toporensky, Phys. Rev. D 70, 123526 (2004). (23) S. Tsujikawa and M. Sami, Phys. Lett. B 603, 113 (2004). (24) F. Piazza and S. Tsujikawa, JCAP 0407, 004 (2004). (25) B. Gumjudpai, T. Naskar, M. Sami and S. Tsujikawa, JCAP 0506, 007 (2005). (26) J. Martin and M. Yamaguchi, Phys. Rev. D 77, 123508 (2008). (27) X. Fu, H. Yu and P. Wu, Phys. Rev. D 78, 063001 (2008). (28) G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B 485, 208 (2000). (29) C. Deffayet, Phys. Lett. B 502, 199 (2001); C. Deffayet, G. R. Dvali and G. Gabadadze, Phys. Rev. D 65, 044023 (2002). (30) A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. Lett. 96, 141301 (2006); A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. D 74, 084003 (2006). (31) L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999). (32) L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 4690 (1999). (33) P. Rudra, R. Biswas and U. Debnath, Astrophys. Space Sci. 339, 54 (2012).
Modifications of Tutte-Grothendieck invariants and Tutte polynomials Martin Kochol MÚ SAV, Štefánikova 49, 814 73 Bratislava 1, Slovakia martin.kochol@mat.savba.sk Abstract. We transform Tutte-Grothendieck invariants, and thus also Tutte polynomials, on matroids so that the contraction-deletion rule for loops (isthmuses) coincides with the general case. 2010 Mathematics Subject Classification: 05C31, 05B35 1. Introduction A Tutte-Grothendieck invariant (a T-G invariant for short) $\Phi$ is a mapping from the class of finite matroids to a commutative ring $(R,+,\cdot,0,1)$ such that $\Phi(M)=\Phi(M^{\prime})$ if $M$ is isomorphic to $M^{\prime}$ and there are constants $\alpha_{1},\beta_{1},\alpha_{2},\beta_{2}\in R$ such that (1) $$\displaystyle\begin{array}[]{lll}&\Phi(M)=1&\text{ if the ground set of $M$ is empty,}\\ &\Phi(M)=\alpha_{1}\cdot\Phi(M-e)&\text{ if $e$ is an isthmus of $M$,}\\ &\Phi(M)=\beta_{1}\cdot\Phi(M-e)&\text{ if $e$ is a loop of $M$,}\\ &\Phi(M)=\alpha_{2}\cdot\Phi(M/e)+\beta_{2}\cdot\Phi(M-e)&\text{ otherwise,}\end{array}$$ for every matroid $M$ and every element $e$ of $M$. We also say that $\Phi$ is determined by the 4-tuple $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$. In a certain sense (see [7, 2]), all T-G invariants can be derived from the Tutte polynomial of $M$ (2) $$\displaystyle T(M;x,y)=\sum_{A\subseteq E}(x-1)^{r(M)-r(A)}(y-1)^{|A|-r(A)},$$ where $E$ and $r$ denote the ground set and rank function of $M$, respectively. This is a very important invariant that encodes many properties of graphs and has applications in combinatorics, knot theory, statistical physics and coding theory (cf. [1, 2, 9]). Note that $M-e=M/e$ if $e$ is a loop or an isthmus of $M$. Thus the second (third) row of (1) is contained in the fourth row if $\alpha_{1}=\alpha_{2}+\beta_{2}$ ($\beta_{1}=\alpha_{2}+\beta_{2}$). In this case $\Phi$ is called an isthmus-smooth (loop-smooth) T-G invariant. 
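Definition (2) can be evaluated directly for small matroids. The following sketch is an illustration, not from the paper: it computes the subset expansion (2) for the cycle matroid of the triangle $K_{3}$, whose Tutte polynomial is $x^{2}+x+y$; the rank of an edge subset is $r(A)=n-c(A)$, where $c(A)$ is the number of components of $(V,A)$.

```python
from itertools import combinations

def rank(n, subset):
    """Rank of an edge subset A in the cycle matroid of a graph on
    vertices 0..n-1: r(A) = n - (number of components of (V, A))."""
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    comps = n
    for (u, v) in subset:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            comps -= 1
    return n - comps

def tutte(edges, n, x, y):
    """Subset expansion (2): sum over A of (x-1)^(r(E)-r(A)) (y-1)^(|A|-r(A))."""
    E = list(edges)
    rE = rank(n, E)
    total = 0
    for k in range(len(E) + 1):
        for A in combinations(E, k):
            rA = rank(n, A)
            total += (x - 1) ** (rE - rA) * (y - 1) ** (len(A) - rA)
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
# T(K3; x, y) = x^2 + x + y: T(1,1) counts spanning trees, T(2,1) counts forests.
print(tutte(triangle, 3, 1, 1), tutte(triangle, 3, 2, 1))  # -> 3 7
```

Evaluating at $(2,2)$ gives $2^{|E|}=8$, since every term of the sum becomes $1$, which is a quick consistency check on the implementation.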
We show that any T-G invariant can be transformed into isthmus- and loop-smooth T-G invariants. The transformations are studied in the framework of matroid duality. Furthermore, we discuss modifications of the covariance and convolution formulas known for the Tutte polynomial. Notice that transformations into isthmus-smooth invariants are used by the decomposition algorithms for T-G invariants in [5]. 2. General modifications Lemma 1. Let $\alpha_{1},\beta_{1},\alpha_{2},\beta_{2}$ be arbitrary elements of a commutative ring $(R,+,\cdot,0,1)$. Then $\tilde{T}(M;\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})=\alpha_{2}^{r(M)}\beta_{2}^{r^{*}(M)}T(M;\alpha_{1}/\alpha_{2},\beta_{1}/\beta_{2})$ is the unique T-G invariant determined by $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$. Proof. For any matroid $M$, denote $\Phi(M)=\tilde{T}(M;\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$ (interpreting the formula as the substitution $x_{2}=\alpha_{2}$, $y_{2}=\beta_{2}$ in the polynomial $\tilde{T}(M;x_{1},y_{1},x_{2},y_{2})$). We use the fact that $T(M;x,y)$ is determined by $(x,y,1,1)$, and induction on $|E|$. The statement of the lemma holds true if $|E|=0$; otherwise choose $e\in E$. If $e$ is an isthmus of $M$, then $$\displaystyle\begin{array}[]{c}\Phi(M)=\alpha_{2}^{r(M)}\beta_{2}^{r^{*}(M)}T(M;\alpha_{1}/\alpha_{2},\beta_{1}/\beta_{2})=\\ \alpha_{2}^{r(M-e)+1}\beta_{2}^{r^{*}(M-e)}\alpha_{1}/\alpha_{2}T(M-e;\alpha_{1}/\alpha_{2},\beta_{1}/\beta_{2})=\alpha_{1}\Phi(M-e)\end{array}$$ by the induction hypothesis. 
If $e$ is a loop of $M$, then $$\displaystyle\begin{array}[]{c}\Phi(M)=\alpha_{2}^{r(M-e)}\beta_{2}^{r^{*}(M-e% )+1}\beta_{1}/\beta_{2}T(M-e;\alpha_{1}/\alpha_{2},\beta_{1}/\beta_{2})=\beta_% {1}\Phi(M-e).\end{array}$$ If $e$ is neither a loop nor an isthmus of $M$, then $$\displaystyle\begin{array}[]{c}\Phi(M)=\alpha_{2}^{r(M/e)+1}\beta_{2}^{r^{*}(M% /e)}T(M/e;\alpha_{1}/\alpha_{2},\beta_{1}/\beta_{2})+\\ \alpha_{2}^{r(M-e)}\beta_{2}^{r^{*}(M-e)+1}T(M-e;\alpha_{1}/\alpha_{2},\beta_{% 1}/\beta_{2})=\alpha_{2}\Phi(M/e)+\beta_{2}\Phi(M-e).\end{array}$$ This proves the statement. ∎ Lemma 1 also follows from results of Oxley and Welsh [7] (see [2, Corollary 6.2.6]). Theorem 1. Let $\Phi$ be a T-G invariant determined by $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$, $\beta_{2}\neq 0$, and $\xi\in R$ be a multiple of $\beta_{2}$. Then $\Phi^{\rm is}_{\xi}(M)=\xi^{|E|}\left(\frac{\alpha_{1}-\alpha_{2}}{\beta_{2}}% \right)^{r^{*}(M)}\Phi(M)$ is an isthmus-smooth T-G invariant such that for every matroid $M$, $$\displaystyle\begin{array}[]{ll}\Phi^{\rm is}_{\xi}(M)=1&\text{ if }E=% \emptyset,\\ \Phi^{\rm is}_{\xi}(M)=\xi\beta_{1}(\alpha_{1}-\alpha_{2})/\beta_{2}\Phi^{\rm is% }_{\xi}(M-e)&\text{ if $e$ is a loop of $M$,}\\ \Phi^{\rm is}_{\xi}(M)=\xi\alpha_{2}\Phi^{\rm is}_{\xi}(M/e)+\xi(\alpha_{1}{-}% \alpha_{2})\Phi^{\rm is}_{\xi}(M{-}e)&\text{ otherwise}.\end{array}$$ Proof. By Lemma 1, $\Phi(M)=\alpha_{2}^{r(M)}\beta_{2}^{r^{*}(M)}T(M;\alpha_{1}/\alpha_{2},\beta_{% 1}/\beta_{2})$ for each matroid $M$. 
Setting $\zeta=\xi\left(\frac{\alpha_{1}-\alpha_{2}}{\beta_{2}}\right)$ and using equality $|E|=r(M)+r^{*}(M)$, we get $$\displaystyle\begin{array}[]{c}\Phi^{\rm is}_{\xi}(M)=\xi^{r(M)}\zeta^{r^{*}(M% )}\Phi(M)=(\xi\alpha_{2})^{r(M)}(\zeta\beta_{2})^{r^{*}(M)}T(M;\frac{\xi\alpha% _{1}}{\xi\alpha_{2}},\frac{\zeta\beta_{1}}{\zeta\beta_{2}}),\end{array}$$ whence by Lemma 1, $\Phi^{\rm is}_{\xi}$ is a T-G invariant determined by $(\xi\alpha_{1},\zeta\beta_{1},\xi\alpha_{2},\zeta\beta_{2})$. Furthermore, $\xi\alpha_{1}=\xi\alpha_{2}+\zeta\beta_{2}$, i.e., $\Phi^{\rm is}_{\xi}$ is an isthmus-smooth T-G invariant. ∎ $\Phi^{\rm is}_{\xi}$ is called the $\xi$-isthmus-smooth modification of $\Phi$. Notice that if $\Phi$ is an isthmus-smooth invariant (i.e., if $\alpha_{1}=\alpha_{2}+\beta_{2}$), then $\Phi^{\rm is}_{\xi}(M)=\xi^{|E|}\Phi(M)$ for every matroid $M$. If $R$ has zero divisors, then $\xi/\beta_{2}$ does not need to be unique. In this case we should formally replace fraction $\xi/\beta_{2}$ by $\xi^{\prime}$ where $\xi=\xi^{\prime}\beta_{2}$. On the other hand if $\alpha_{1}-\alpha_{2}=\xi^{\prime\prime}\beta_{2}$, it suffices to replace fraction $(\alpha_{1}-\alpha_{2})/\beta_{2}$ by $\xi^{\prime\prime}$ and allow $\xi$ to be any element of $R$. If $R$ contains no zero divisors, we can extend $R$ into its quotient field and allow $\xi$ to be any element of $R$, or any element of the quotient field. If $\Phi$ is a T-G invariant determined by $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$, then define $\Phi^{*}$ as the T-G invariant determined by $(\beta_{1},\alpha_{1},\beta_{2},\alpha_{2})$. Clearly, $\Phi=(\Phi^{*})^{*}$. By Lemma 1, $\Phi(M)=\alpha_{2}^{r(M)}\beta_{2}^{r^{*}(M)}T(M;\alpha_{1}/\alpha_{2},\beta_{% 1}/\beta_{2})$ and $\Phi^{*}(M^{*})=\beta_{2}^{r^{*}(M)}\alpha_{2}^{r(M)}T(M^{*};\beta_{1}/\beta_{% 2},\alpha_{1}/\alpha_{2})$ for each matroid $M$. 
The covariance formula (see [2]) states that $T(M;x,y)=T(M^{*};y,x)$, whence (3) $$\displaystyle\Phi(M)=\Phi^{*}(M^{*}).$$ Theorem 2. Let $\Phi$ be a T-G invariant determined by $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$, $\alpha_{2}\neq 0$, and $\xi\in R$ be a multiple of $\alpha_{2}$. Then $\Phi^{\rm ls}_{\xi}(M)=\xi^{|E|}\left(\frac{\beta_{1}-\beta_{2}}{\alpha_{2}}\right)^{r(M)}\Phi(M)$ is a loop-smooth T-G invariant such that for every matroid $M$, $$\displaystyle\begin{array}[]{ll}\Phi^{\rm ls}_{\xi}(M)=1&\text{ if }E=\emptyset,\\ \Phi^{\rm ls}_{\xi}(M)=\xi\alpha_{1}(\beta_{1}-\beta_{2})/\alpha_{2}\Phi^{\rm ls}_{\xi}(M-e)&\text{ if $e$ is an isthmus of $M$,}\\ \Phi^{\rm ls}_{\xi}(M)=\xi(\beta_{1}{-}\beta_{2})\Phi^{\rm ls}_{\xi}(M/e)+\xi\beta_{2}\Phi^{\rm ls}_{\xi}(M{-}e)&\text{ otherwise}.\end{array}$$ Proof. Set $\Phi^{\rm ls}_{\xi}=((\Phi^{*})^{\rm is}_{\xi})^{*}$. By (3) and Theorem 1, $\Phi^{\rm ls}_{\xi}(M)=((\Phi^{*})^{\rm is}_{\xi})^{*}(M)=(\Phi^{*})^{\rm is}_{\xi}(M^{*})$. Applying Theorem 1 for $\Phi^{*}$ and $M^{*}$, we get $(\Phi^{*})^{\rm is}_{\xi}(M^{*})=\xi^{|E|}\left(\frac{\beta_{1}-\beta_{2}}{\alpha_{2}}\right)^{r(M)}\Phi^{*}(M^{*})=\xi^{|E|}\left(\frac{\beta_{1}-\beta_{2}}{\alpha_{2}}\right)^{r(M)}\Phi(M)$. Furthermore, by the definition of $\Phi^{*}$ and Theorem 1, $((\Phi^{*})^{\rm is}_{\xi})^{*}$ is determined by $\left(\xi\alpha_{1}(\beta_{1}-\beta_{2})/\alpha_{2},\xi\beta_{1},\xi(\beta_{1}-\beta_{2}),\xi\beta_{2}\right)$. ∎ Notice that $\Phi^{\rm ls}_{\xi}=((\Phi^{*})^{\rm is}_{\xi})^{*}$, whence $\Phi^{\rm is}_{\xi}=((((\Phi^{*})^{*})^{\rm is}_{\xi})^{*})^{*}=((\Phi^{*})^{\rm ls}_{\xi})^{*}$. Thus (4) $$\displaystyle\Phi^{\rm ls}_{\xi}=((\Phi^{*})^{\rm is}_{\xi})^{*}\text{ and }\Phi^{\rm is}_{\xi}=((\Phi^{*})^{\rm ls}_{\xi})^{*}.$$ $\Phi^{\rm ls}_{\xi}$ is called the $\xi$-loop-smooth modification of $\Phi$. If $\Phi$ is a loop-smooth invariant, then $\Phi^{\rm ls}_{\xi}(M)=\xi^{|E|}\Phi(M)$ for every matroid $M$. 
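The covariance formula (3), in its Tutte-polynomial form $T(M;x,y)=T(M^{*};y,x)$, is easy to check numerically. The sketch below is illustrative only: it evaluates the subset expansion (2) through a rank oracle, builds the dual rank $r^{*}(A)=|A|+r(E\setminus A)-r(E)$, and verifies the formula for two small uniform matroids at a few sample points.

```python
from itertools import combinations

def tutte(rank, E, x, y):
    # Subset expansion (2) of the Tutte polynomial, ranks given by an oracle.
    rE = rank(E)
    return sum((x - 1) ** (rE - rank(set(A))) * (y - 1) ** (len(A) - rank(set(A)))
               for k in range(len(E) + 1) for A in combinations(sorted(E), k))

def dual_rank(rank, E):
    # Dual matroid rank: r*(A) = |A| + r(E \ A) - r(E).
    return lambda A: len(A) + rank(E - set(A)) - rank(E)

E = set(range(4))
u24 = lambda A: min(2, len(A))  # uniform matroid U_{2,4} (self-dual)
u14 = lambda A: min(1, len(A))  # uniform matroid U_{1,4}; its dual is U_{3,4}

for M in (u24, u14):
    Mstar = dual_rank(M, E)
    for (x, y) in [(2, 3), (0, 5), (-1, 4)]:
        assert tutte(M, E, x, y) == tutte(Mstar, E, y, x)
print("covariance T(M;x,y) = T(M*;y,x) verified on U_{2,4} and U_{1,4}")
```

The same rank-oracle setup can be reused to spot-check the modified polynomials of Section 3, since (5) and (7) only multiply $T(M;x,y)$ by $\xi^{|E|}(x-1)^{r^{*}(M)}$ and $\xi^{|E|}(y-1)^{r(M)}$, respectively.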
In Theorems 1 and 2 we have assumed that $\beta_{2}\neq 0$ and $\alpha_{2}\neq 0$, respectively. Let $l_{M}$ ($i_{M}$) denote the number of loops (isthmuses) in a matroid $M$. If $\alpha_{2}=0$, then by (1), $\Phi(M)=\alpha_{1}^{r(M)}\beta_{1}^{l_{M}}\beta_{2}^{r^{*}(M)-l_{M}}$, whence by (3), $\Phi(M)=\beta_{1}^{r^{*}(M)}\alpha_{1}^{i_{M}}\alpha_{2}^{r(M)-i_{M}}$ if $\beta_{2}=0$. Thus $\Phi(M)$ is easy to evaluate if $\alpha_{2}=0$ or $\beta_{2}=0$ (in contrast with the fact that the Tutte polynomial is in general difficult to evaluate, see [3, 4, 8]). 3. Modifications of the Tutte polynomial Let $\xi\in\mathbb{Z}[x,y]$. Then $\xi$ is a multiple of $1$, whence by Theorem 1, the $\xi$-isthmus-smooth modification of the Tutte polynomial of $M$ is (5) $$\displaystyle T^{\rm is}_{\xi}(M;x,y)=\xi^{|E|}(x-1)^{r^{*}(M)}T(M;x,y)$$ and satisfies (6) $$\displaystyle\begin{array}[]{lll}&T^{\rm is}_{\xi}(M;x,y)=1&\text{if }E=\emptyset,\\ &T^{\rm is}_{\xi}(M;x,y)=\xi y(x-1)T^{\rm is}_{\xi}(M-e;x,y)&\text{if $e$ is a loop of $M$,}\\ &T^{\rm is}_{\xi}(M;x,y)=\xi T^{\rm is}_{\xi}(M/e;x,y)+\xi(x{-}1)T^{\rm is}_{\xi}(M{-}e;x,y)&\text{otherwise.}\end{array}$$ By Theorem 2, the $\xi$-loop-smooth modification of the Tutte polynomial of $M$ is (7) $$\displaystyle T^{\rm ls}_{\xi}(M;x,y)=\xi^{|E|}(y-1)^{r(M)}T(M;x,y)$$ and satisfies (8) $$\displaystyle\begin{array}[]{lll}&T^{\rm ls}_{\xi}(M;x,y)=1&\text{if }E=\emptyset,\\ &T^{\rm ls}_{\xi}(M;x,y)=\xi x(y-1)T^{\rm ls}_{\xi}(M-e;x,y)&\text{if $e$ is an isthmus,}\\ &T^{\rm ls}_{\xi}(M;x,y)=\xi(y{-}1)T^{\rm ls}_{\xi}(M{-}e;x,y)+\xi T^{\rm ls}_{\xi}(M{/}e;x,y)&\text{otherwise.}\end{array}$$ By (3), $T^{\rm ls}_{\xi}(M;x,y)=(T^{\rm ls}_{\xi})^{*}(M^{*};x,y)$, and by (4), $(T^{\rm ls}_{\xi})^{*}(M^{*};x,y)=(T^{*})^{\rm is}_{\xi}(M^{*};x,y)$. 
By (2) and (1), we have $T^{*}(M^{*};x,y)=T(M^{*};y,x)$, whence $(T^{*})^{\rm is}_{\xi}(M^{*};x,y)=T^{\rm is}_{\xi}(M^{*};y,x)$, i.e., we have a variant of the covariance formula (9) $$\displaystyle T^{\rm ls}_{\xi}(M;x,y)=T^{\rm is}_{\xi}(M^{*};y,x).$$ Kook, Reiner, and Stanton [6] introduced the convolution formula $$\displaystyle T(M;x,y)=\sum_{A\subseteq E}T(M/A;x,0)\cdot T(M|A;0,y),$$ (where $M|A$ and $M/A$ denote the restriction of $M$ to $A$ and the contraction of $A$ from $M$, respectively). Hence by (5) and (7), $$\displaystyle T(M;x,y)=\sum_{A\subseteq E}\xi^{-|E|}(-1)^{-r(M/A)}T^{\rm ls}_{\xi}(M/A;x,0)\cdot(-1)^{-r^{*}(M|A)}T^{\rm is}_{\xi}(M|A;0,y).$$ Since $r^{*}(M|A)=|A|-r(A)$, $r(M/A)=r(M)-r(A)$, and $2r(A)-r(M)-|A|$ has the same parity as $r(M)+|A|$, we get a variant of the convolution formula (10) $$\displaystyle T(M;x,y)=\xi^{-|E|}(-1)^{r(M)}\sum_{A\subseteq E}(-1)^{|A|}T^{\rm ls}_{\xi}(M/A;x,0)\cdot T^{\rm is}_{\xi}(M|A;0,y).$$ The ring $\mathbb{Z}[x,y]$ has no zero divisors, and therefore it has a quotient field $\mathbb{F}[x,y]$, consisting of all rational functions with integral coefficients. Thus, as pointed out in the remark after Theorem 1, for any $\xi\in\mathbb{F}[x,y]$, we can consider $\xi$-isthmus- and $\xi$-loop-smooth modifications of the Tutte polynomial, and thus also formulas (9) and (10). If $\Phi$ is a T-G invariant determined by $(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})$ and $\xi,\zeta\in R$, then by Lemma 1, there exists a T-G invariant determined by $(\xi\alpha_{1},\zeta\beta_{1},\xi\alpha_{2},\zeta\beta_{2})$, denoted by $\Phi_{\xi,\zeta}$. Clearly, $\Phi_{\xi,\zeta}(M)=\xi^{r(M)}\zeta^{r^{*}(M)}\Phi(M)$ for each matroid $M$. Suppose that $\Phi_{\xi,\zeta}$ is isthmus- and loop-smooth at the same time. 
Then $\xi\alpha_{1}=\zeta\beta_{1}=\xi\alpha_{2}+\zeta\beta_{2}$, whence $\xi/\zeta=\beta_{1}/\alpha_{1}=\beta_{2}/(\alpha_{1}-\alpha_{2})=(\beta_{1}-\beta_{2})/\alpha_{2}$, and thus (11) $$\displaystyle\beta_{1}=\alpha_{1}\beta_{2}/(\alpha_{1}-\alpha_{2}).$$ On the other hand, (11) implies $\beta_{1}/\alpha_{1}=\beta_{2}/(\alpha_{1}-\alpha_{2})$ and $(\beta_{1}-\beta_{2})/\alpha_{2}=\beta_{2}/(\alpha_{1}-\alpha_{2})$. Thus (11) is a necessary and sufficient condition for the existence of $\xi$ and $\zeta$ such that $\Phi_{\xi,\zeta}$ is an isthmus- and loop-smooth invariant. Therefore this kind of transformation cannot be applied to every $\Phi$. In particular, (11) is not valid for the Tutte polynomial because $y\neq x/(x-1)$. References [1] L. Beaudin, J. Ellis-Monaghan, G. Pangborn, and R. Shrock, A little statistical mechanics for the graph theorist, Discrete Math. 310 (2010) 2037–2053. [2] T. Brylawski and J. Oxley, The Tutte polynomial and its applications, in: Matroid Applications, (N. White, Editor), Cambridge University Press, Cambridge (1992), pp. 123–225. [3] L.A. Goldberg and M. Jerrum, Inapproximability of the Tutte polynomial, Inform. and Comput. 206 (2008) 908–929. [4] F. Jaeger, D.L. Vertigan, and D.J.A. Welsh, On the computational complexity of the Jones and Tutte polynomials, Math. Proc. Cambridge Philos. Soc. 108 (1990) 35–53. [5] M. Kochol, Splitting formulas for Tutte-Grothendieck invariants, manuscript (2014). [6] W. Kook, V. Reiner, and D. Stanton, A convolution formula for the Tutte polynomial, J. Combin. Theory Ser. B 76 (1999) 297–300. [7] J.G. Oxley and D.J.A. Welsh, The Tutte polynomial and percolation, in: Graph Theory and Related Topics, (J.A. Bondy and U.S.R. Murty, Editors), Academic Press, New York (1979), pp. 329–339. [8] D. Vertigan, The computational complexity of Tutte invariants for planar graphs, SIAM J. Comput. 35 (2005) 690–712. [9] D. J. A. Welsh, Complexity: Knots, Colourings and Counting, London Math. Soc. 
Lecture Notes Series 186, Cambridge University Press, Cambridge (1993).
Experimental Sharing of Nonlocality among Multiple Observers with One Entangled Pair via Optimal Weak Measurements Meng-Jun Hu These authors contributed equally to this work    Zhi-Yuan Zhou These authors contributed equally to this work    Xiao-Min Hu    Chuan-Feng Li    Guang-Can Guo    Yong-Sheng Zhang yshzhang@ustc.edu.cn Laboratory of Quantum Information, University of Science and Technology of China, Hefei 230026, China Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei 230026, China (November 19, 2020) Abstract Bell nonlocality plays a fundamental role in quantum theory. Numerous tests of the Bell inequality have been reported since the ground-breaking discovery of the Bell theorem. Up to now, however, most discussions of the Bell scenario have focused on a single pair of entangled particles distributed to only two separated observers. Recently, it has been shown, surprisingly, that multiple observers can share the nonlocality present in a single particle from an entangled pair using the method of weak measurements [Phys. Rev. Lett. 114, 250401 (2015)]. Here we report the first observation of double CHSH-Bell inequality violations for a single pair of entangled photons, using optimal weak measurements with continuously tunable strength in a photonic system. Our results not only shed new light on the interplay between nonlocality and quantum measurements but may also be significant for important applications such as unbounded randomness certification and quantum steering. Introduction. Nonlocality, which was first pointed out by Einstein, Podolsky and Rosen (EPR) EPR , plays a fundamental role in quantum theory. It has been intensively investigated since the ground-breaking discovery of the Bell theorem by John Bell in 1964 Bell . 
The Bell theorem states that no local-realistic theory can reproduce all the predictions of quantum theory, and it gives an experimentally testable inequality speakable bell that was later improved by Clauser, Horne, Shimony and Holt (CHSH) CHSH . Numerous tests of the CHSH-Bell inequality have been realized in various quantum systems Clauser ; Aspect ; Zeilinger ; Rowe ; Monroe ; Ansmann ; Hofman ; Giustina ; Christensen and strong loophole-free Bell tests have been reported recently Hensen ; MGiustina ; Shalm . To date, however, most discussions of the Bell scenario focus on one pair of entangled particles distributed to only two separated observers, Alice and Bob bunner . It is thus a novel and fundamental question whether or not multiple observers can share the nonlocality present in a single particle from an entangled pair. Using the concept of weak measurements, Silva et al. give a surprising positive answer to the above question and reveal the remarkable physical fact that the measurement disturbance and information gain for a single system are closely related to the distribution of nonlocality among multiple observers sharing one entangled pair silva . In this Letter, we report an experimental realization of nonlocality sharing among multiple observers, using optimal weak measurements with continuously tunable strength in a photonic system. The realization of nonlocality sharing is certified by the observed double violation of the CHSH-Bell inequality with one pair of entangled photons. The crux of our method is that the first measurement in Bob’s wing is performed in a non-destructive way through a weak measurement, so that the remaining nonlocality of the entangled pair can still be revealed by the subsequent observer. Our results not only shed new light on the interplay between nonlocality and quantum measurements but may also find significant applications, for example in unbounded randomness certification pironio ; curchod and quantum steering wiseman ; vitus . Review of weak measurements. 
As one of the foundations of quantum theory, the measurement postulate states that upon measurement, a quantum system collapses into one of its eigenstates, with the probability determined by the Born rule. While this type of strong measurement, which is projective and irreversible, obtains the maximum information about a system, it also completely destroys the system after the measurement. Weak measurements, by contrast, extract less information about a system with smaller disturbance and, over the past decades, have been shown to be a powerful method in signal amplification Hosten ; Dixon ; Xu , state tomography lundeen ; lundeen and in solving quantum paradoxes Botero . In contrast to strong projective measurements, weak measurements are non-destructive and can retain some original properties of the measured system, e.g., coherence and entanglement. Because the entanglement is not completely destroyed by weak measurements, a particle that has been measured with intermediate strength can still be entangled with other particles, and therefore, sharing nonlocality among multiple observers is possible. Consider a von Neumann-type measurement von on a spin-$1/2$ particle in the superposition state $|\psi\rangle=\alpha|\uparrow\rangle+\beta|\downarrow\rangle$ with $|\alpha|^{2}+|\beta|^{2}=1$, where $|\uparrow\rangle$ ($|\downarrow\rangle$) denotes the spin up (down) state. After the measurement, the spin state is entangled with the pointer's state, i.e., $|\psi\rangle\otimes|\phi\rangle\rightarrow\alpha|\uparrow\rangle\otimes|\phi_{\uparrow}\rangle+\beta|\downarrow\rangle\otimes|\phi_{\downarrow}\rangle$, where $|\phi\rangle$ is the initial state of the pointer and $|\phi_{\uparrow}\rangle$ ($|\phi_{\downarrow}\rangle$) indicates the measurement result of spin up (down).
By tracing out the state of the pointer, the spin state becomes $$\rho=F\rho_{0}+(1-F)(\pi_{\uparrow}\rho_{0}\pi_{\uparrow}+\pi_{\downarrow}\rho_{0}\pi_{\downarrow}),$$ (1) where $\rho_{0}=|\psi\rangle\langle\psi|$, $\pi_{\uparrow}=|\uparrow\rangle\langle\uparrow|$, $\pi_{\downarrow}=|\downarrow\rangle\langle\downarrow|$ and $F=\langle\phi_{\downarrow}|\phi_{\uparrow}\rangle$. The quantity $F$ is called the measurement quality factor because it quantifies the disturbance of the measurement silva . If $F=0$, the spin state is reduced to a completely decoherent state in the measurement eigenbasis, corresponding to a strong measurement; if $F=1$, there is no measurement at all. The remaining case, $F\in(0,1)$, represents measurements of intermediate strength, i.e., weak measurements. Another important quantity associated with weak measurements is the information gain $G$, which is determined by the precision of the measurement silva . In the case of strong measurements, the probability of obtaining the outcome $+1$ ($-1$) corresponding to the spin eigenstate $|\uparrow\rangle$ ($|\downarrow\rangle$) is given by the Born rule, $P(+1)=\mathrm{Tr}(\pi_{\uparrow}\rho_{0})$ ($P(-1)=\mathrm{Tr}(\pi_{\downarrow}\rho_{0})$). However, the non-orthogonality of the pointer states in weak measurements, $\langle\phi_{\uparrow}|\phi_{\downarrow}\rangle\neq 0$, results in ambiguous outcomes. An observer who performs a weak measurement must choose a complete orthogonal set of pointer states $\{|\phi_{+1}\rangle,|\phi_{-1}\rangle\}$ as reading states to define the outcomes $\{+1,-1\}$ corresponding to the spin eigenstates $\{|\uparrow\rangle,|\downarrow\rangle\}$. The probabilities of the outcomes $\pm 1$ in weak measurements then become $P(\pm 1)=\mathrm{Tr}(\pi_{\uparrow}\rho_{0})|\langle\phi_{\pm 1}|\phi_{\uparrow}\rangle|^{2}+\mathrm{Tr}(\pi_{\downarrow}\rho_{0})|\langle\phi_{\pm 1}|\phi_{\downarrow}\rangle|^{2}$.
Here, $|\langle\phi_{+1}|\phi_{\uparrow}\rangle|^{2}$ and $|\langle\phi_{-1}|\phi_{\downarrow}\rangle|^{2}$ correspond to the probabilities of obtaining the correct outcomes, while $|\langle\phi_{-1}|\phi_{\uparrow}\rangle|^{2}$ and $|\langle\phi_{+1}|\phi_{\downarrow}\rangle|^{2}$ correspond to the probabilities of the wrong outcomes. For simplicity, we consider the case of symmetric ambiguity, in which $|\langle\phi_{+1}|\phi_{\uparrow}\rangle|^{2}=|\langle\phi_{-1}|\phi_{\downarrow}\rangle|^{2}$ and $|\langle\phi_{-1}|\phi_{\uparrow}\rangle|^{2}=|\langle\phi_{+1}|\phi_{\downarrow}\rangle|^{2}$; the probabilities of the outcomes can then be reformulated as $$P(\pm 1)=G\cdot\dfrac{1}{2}[1\pm\mathrm{Tr}(\sigma\rho_{0})]+(1-G)\cdot\dfrac{1}{2},$$ (2) where $\sigma=\pi_{\uparrow}-\pi_{\downarrow}$ defines the spin observable and $G=1-|\langle\phi_{-1}|\phi_{\uparrow}\rangle|^{2}-|\langle\phi_{+1}|\phi_{\downarrow}\rangle|^{2}$ represents the precision of the measurement (see Supplemental Material, Part A, for details). The first term in Eq. (2) represents the contribution of a strong measurement, while the second term corresponds to a random outcome. The quality factor $F$ and the precision $G$ of a weak measurement are determined solely by the pointer states and satisfy the trade-off relation $F^{2}+G^{2}\leq 1$ silva . A weak measurement is optimal if $F^{2}+G^{2}=1$ is satisfied. Modified Bell test with weak measurements. In a typical Bell test scenario, one pair of entangled spin-$1/2$ particles is distributed between two separated observers, Alice and Bob (Fig. 1a), who each receive a binary input $x,y\in\{0,1\}$ and subsequently give a binary output $a,b\in\{1,-1\}$. For each input $x$ ($y$), Alice (Bob) performs a strong projective measurement of her (his) spin along a specific direction and obtains the outcome $a$ ($b$).
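Before proceeding, the weak-measurement bookkeeping of Eqs. (1) and (2) can be verified numerically. The short sketch below is our own illustration (not part of the experiment); it uses the optimal pointer parameterization $|\phi_{\uparrow}\rangle=\cos\theta|0\rangle+\sin\theta|1\rangle$, $|\phi_{\downarrow}\rangle=\sin\theta|0\rangle+\cos\theta|1\rangle$ that appears later in the text, with reading states $|0\rangle$ and $|1\rangle$:

```python
import numpy as np

theta = np.deg2rad(18.4)            # intermediate measurement strength
alpha, beta = 0.6, 0.8              # |psi> = alpha|up> + beta|down>

# Pointer states and reading states in the {|0>, |1>} basis
phi_up   = np.array([np.cos(theta), np.sin(theta)])
phi_down = np.array([np.sin(theta), np.cos(theta)])
read_p1, read_m1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Quality factor F and precision G; for this pointer choice F = sin(2*theta),
# G = cos(2*theta), so the trade-off relation is saturated (optimal measurement)
F = phi_down @ phi_up
G = 1 - (read_m1 @ phi_up)**2 - (read_p1 @ phi_down)**2
assert np.isclose(F**2 + G**2, 1.0)

# Eq. (1): reduced spin state after tracing out the pointer
joint = alpha * np.kron([1, 0], phi_up) + beta * np.kron([0, 1], phi_down)
rho = np.outer(joint, joint).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
rho0 = np.outer([alpha, beta], [alpha, beta])
pi_up, pi_dn = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
assert np.allclose(rho, F*rho0 + (1-F)*(pi_up@rho0@pi_up + pi_dn@rho0@pi_dn))

# Eq. (2): outcome probability = G * (Born term) + (1 - G) * (random term)
P_direct = alpha**2 * (read_p1 @ phi_up)**2 + beta**2 * (read_p1 @ phi_down)**2
P_eq2 = G * 0.5 * (1 + (alpha**2 - beta**2)) + (1 - G) * 0.5
assert np.isclose(P_direct, P_eq2)
```

The off-diagonal element of the reduced state is suppressed exactly by the factor $F$, which is the content of Eq. (1).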
The scenario is characterized by a joint probability distribution $P(ab|xy)$ of obtaining outcomes $a$ and $b$, conditioned on measurement inputs $x$ for Alice and $y$ for Bob. The fixed measurement inputs $x$ and $y$ define the correlations $C_{(x,y)}=\sum_{a,b}abP(ab|xy)$. The CHSH-Bell test is focused on the so-called $S$ value defined by the combination of correlations $$S=|C_{(0,0)}+C_{(0,1)}+C_{(1,0)}-C_{(1,1)}|.$$ (3) While $S$ is bounded by $2$ in any local hidden variable theory CHSH , quantum theory gives a larger bound of $2\sqrt{2}$ observable . Here, we consider a new Bell scenario in which there are two observers, Bob1 and Bob2, with access to the same half of the entangled pair of spin-$1/2$ particles (Fig. 1b). Alice, Bob1 and Bob2 each receive a binary input $x,y_{1},y_{2}\in\{0,1\}$ and subsequently provide a binary output $a,b_{1},b_{2}\in\{1,-1\}$. For each input $y_{1}$, Bob1 performs a weak measurement of his spin along a specific direction, while Alice and Bob2 perform strong projective measurements for their inputs $x$ and $y_{2}$. After obtaining the outcome $b_{1}$, Bob1 sends the measured spin particle to Bob2, who is totally ignorant of the existence of Bob1. The scenario is now characterized by joint conditional probabilities $P(ab_{1}b_{2}|xy_{1}y_{2})$, and a natural question is whether Bob1 and Bob2 can both share nonlocality with Alice in this scenario. The answer is, surprisingly, positive: Silva et al. demonstrated theoretically that the statistics of both Alice-Bob1 and Alice-Bob2 can violate the CHSH-Bell inequality silva . The weak measurement quantities $G$ and $F$ determine, respectively, the $S$ values of Alice-Bob1 and Alice-Bob2 in the new Bell scenario.
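As a numerical sanity check (our own illustration, not taken from the paper), the $S$ value of Eq. (3) for the singlet state can be evaluated directly with the measurement directions quoted later for the experiment; the optimal labeling of inputs saturates Tsirelson's bound:

```python
import numpy as np

# Pauli operators and the singlet state (|HV> - |VH>)/sqrt(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def corr(A, B):
    """Quantum correlation C = <psi| A (x) B |psi>."""
    return psi @ np.kron(A, B) @ psi

# Settings quoted for the experiment: Alice measures Z or X, the Bobs
# measure along (-Z+X)/sqrt(2) or -(Z+X)/sqrt(2)
A0, A1 = Z, X
B0, B1 = (-Z + X) / np.sqrt(2), -(Z + X) / np.sqrt(2)

C = np.array([[corr(A, B) for B in (B0, B1)] for A in (A0, A1)])
# Flip the sign of each correlation in turn; the optimal input labeling
# reaches Tsirelson's bound 2*sqrt(2)
S = max(abs(C.sum() - 2 * C[i, j]) for i in range(2) for j in range(2))
assert np.isclose(S, 2 * np.sqrt(2))
```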
In the case that Tsirelson's bound $2\sqrt{2}$ of the CHSH-Bell inequality can be attained, the calculation gives (see Supplemental Material, Part B, for details) $$S_{A-B1}=2\sqrt{2}G,\qquad S_{A-B2}=\sqrt{2}(1+F).$$ (4) Experimental observation of double Bell inequality violations. To observe significant double violations of the CHSH-Bell inequality, realizing an optimal weak measurement is a key requirement. Here, we designed a feasible and optimal weak measurement scheme that differs from the original scheme proposed in Ref. [18], which is difficult to realize in practice. In the experiment, we choose the path degree of freedom of the photon as the pointer. The strength of the weak measurement can be continuously tuned by adjusting the interference between two beam displacers (BDs). For instance, suppose that the polarization state of a photon is $\alpha|H\rangle+\beta|V\rangle$ with $|\alpha|^{2}+|\beta|^{2}=1$, where $|H\rangle$ ($|V\rangle$) denotes horizontal (vertical) polarization. A strong measurement of the observable $\sigma=|H\rangle\langle H|-|V\rangle\langle V|$ implies a transformation of the initial state $(\alpha|H\rangle+\beta|V\rangle)\otimes|0\rangle$ to the entangled state $\alpha|H\rangle\otimes|0\rangle+\beta|V\rangle\otimes|1\rangle$, with $\{|0\rangle,|1\rangle\}$ orthogonal path pointer states, while a weak measurement can be realized by transforming the initial state to $\alpha|H\rangle\otimes|\phi_{H}\rangle+\beta|V\rangle\otimes|\phi_{V}\rangle$ with $|\phi_{H}\rangle=\cos\theta|0\rangle+\sin\theta|1\rangle$, $|\phi_{V}\rangle=\sin\theta|0\rangle+\cos\theta|1\rangle$ ($0\leq\theta\leq\pi/2$) as the corresponding pointer states.
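With these pointer states one finds $F=\sin 2\theta$ and $G=\cos 2\theta$, so Eq. (4) can be scanned over $\theta$ to locate the double-violation window. The sketch below (an illustration, using the angles reported for the experiment) shows that both $S$ values exceed the local bound of 2 only for intermediate strengths:

```python
import numpy as np

angles_deg = [4.0, 16.4, 18.4, 20.5, 28.0]      # settings used in the experiment
for th in angles_deg:
    t = np.deg2rad(th)
    F, G = np.sin(2 * t), np.cos(2 * t)         # quality factor and precision
    S1 = 2 * np.sqrt(2) * G                     # Alice-Bob1, Eq. (4)
    S2 = np.sqrt(2) * (1 + F)                   # Alice-Bob2, Eq. (4)
    tag = "double violation" if min(S1, S2) > 2 else ""
    print(f"theta = {th:4.1f} deg:  S_A-B1 = {S1:.3f}, S_A-B2 = {S2:.3f}  {tag}")

# At theta = 18.4 deg (F ~ 0.6) both S values reach ~2.26, as stated in the text
t = np.deg2rad(18.4)
assert abs(2 * np.sqrt(2) * np.cos(2 * t) - 2.26) < 0.01
assert abs(np.sqrt(2) * (1 + np.sin(2 * t)) - 2.26) < 0.01
```

Only the three intermediate angles produce a simultaneous violation, consistent with the experimental findings reported below.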
If the reading states are chosen as $|\phi_{+1}\rangle=|0\rangle$ and $|\phi_{-1}\rangle=|1\rangle$, one finds $F=\langle\phi_{H}|\phi_{V}\rangle=\sin 2\theta$ and $G=1-|\langle\phi_{-1}|\phi_{H}\rangle|^{2}-|\langle\phi_{+1}|\phi_{V}\rangle|^{2}=\cos 2\theta$, satisfying the optimality condition $F^{2}+G^{2}=1$. In our experiment, the method of state exchange exchange is used, i.e., we first exchange the polarization state and the path state with a BD, so that the polarization state serves as a pointer state that can be continuously tuned by rotating half wave plates (HWPs); we then exchange the polarization state and the path state back through another BD. The entire process is equivalent to the description in the last paragraph. When tracing out the pointer degree of freedom, the function of the weak measurement setup is to implement a two-outcome positive operator valued measurement (POVM) on the measured system. To illustrate how our weak measurement setup works, we consider a photon in a general superposed polarization state $|\Phi\rangle=\alpha|H\rangle+\beta|V\rangle$ that is transmitted into the setup (dashed rectangle in Fig. 2). The polarization state of the photon, after passing through HWP1 rotated by $\varphi/2$, can be written as $|\Phi\rangle=\alpha|\varphi\rangle+\beta|\varphi^{\perp}\rangle$, where $|\varphi\rangle=\cos\varphi|H\rangle+\sin\varphi|V\rangle$ and $|\varphi^{\perp}\rangle=\sin\varphi|H\rangle-\cos\varphi|V\rangle$ are two orthogonal states; this also defines the polarization observable measured by Bob1 as $\sigma_{\varphi}=|\varphi\rangle\langle\varphi|-|\varphi^{\perp}\rangle\langle\varphi^{\perp}|$.
The input state $|\Phi\rangle$ in the basis $\{|\varphi\rangle,|\varphi^{\perp}\rangle\}$ becomes $|\Phi\rangle=a|\varphi\rangle+b|\varphi^{\perp}\rangle$ with $a=\alpha\cos\varphi+\beta\sin\varphi$ and $b=\alpha\sin\varphi-\beta\cos\varphi$. The photon then enters the BD version of a Mach-Zehnder interferometer, with the vertically (horizontally) polarized photon transmitted along the upper (lower) path, where HWP2 and HWP3 are rotated by $\theta/2$ and $\pi/4-\theta/2$, respectively. The strength of the weak measurement is controlled by the angle $\theta$, which determines the strength of the interference. HWP4 is fixed at $\pi/4$ to exchange the polarization states $|H\rangle$ and $|V\rangle$ back with each other, and HWP5 is rotated by the same angle $\varphi/2$ as HWP1 to transform the basis $\{|H\rangle,|V\rangle\}$ to the measurement basis $\{|\varphi\rangle,|\varphi^{\perp}\rangle\}$. The photon's state after passing through HWP5, called the post-measurement state of Bob1, becomes $|\Psi_{+1}\rangle=a\cos\theta|\varphi\rangle+b\sin\theta|\varphi^{\perp}\rangle$, corresponding to the $+1$ outcome. Here, we obtain only one outcome in a single-shot measurement. The post-measurement state $|\Psi_{-1}\rangle=a\sin\theta|\varphi\rangle+b\cos\theta|\varphi^{\perp}\rangle$ of Bob1, corresponding to the $-1$ outcome, can be obtained similarly by rotating HWP1 and HWP5 to $\varphi/2+\pi/4$. In principle, we can combine the two output paths of BD2 in Fig. 2 to output the $-1$ result, i.e., a complete measurement can be realized (see Supplemental Material, Part C, for further discussion).
The function of our weak measurement setup is to achieve a dichotomic POVM with Kraus operators $\{M_{+1}=\cos\theta|\varphi\rangle\langle\varphi|+\sin\theta|\varphi^{\perp}\rangle\langle\varphi^{\perp}|,M_{-1}=\sin\theta|\varphi\rangle\langle\varphi|+\cos\theta|\varphi^{\perp}\rangle\langle\varphi^{\perp}|\}$ corresponding to the outcomes $\{+1,-1\}$. In our Bell test experiment (Fig. 2), polarization-entangled photon pairs in the state $(|H\rangle|V\rangle-|V\rangle|H\rangle)/\sqrt{2}$ are generated by pumping a type-II apodized periodically poled potassium titanyl phosphate (PPKTP) crystal to produce photon pairs at a wavelength of 798 nm. A $4.5~\mathrm{mW}$ pump laser centred at a wavelength of 399 nm is produced by a Moglabs ECD004 laser, and the PPKTP crystal is embedded in the middle of a Sagnac interferometer to ensure the production of high-quality, high-brightness entangled pairs new ; zhou . The maximum coincidence count rate in the horizontal/vertical basis is approximately 3,200 per second. The visibility of coincidence detection for the maximally entangled state is measured to be $0.997\pm 0.006$ in the horizontal/vertical polarization basis $\{|H\rangle,|V\rangle\}$ and $0.993\pm 0.008$ in the diagonal/antidiagonal polarization basis $\{(|H\rangle\pm|V\rangle)/\sqrt{2}\}$, achieved by rotating the polarization analyzers at Alice and Bob. Alice, Bob1 and Bob2 each have two measurement choices, and for each choice, two trials are needed, corresponding to the two different outcomes. For each fixed $\theta$, which determines the strength of the weak measurement through $F=\sin 2\theta$, we implemented $64$ trials for calculating $S_{A-B1}$ and $S_{A-B2}$. To ensure that Tsirelson's bound $2\sqrt{2}$ can be approached, Alice chooses a measurement along the direction $Z$ or $X$, while the Bobs choose measurements along the $(-Z+X)/\sqrt{2}$ or $-(Z+X)/\sqrt{2}$ direction.
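The Kraus-operator description can be checked directly. A minimal sketch (our own illustration, with arbitrary example values for $\theta$ and $\varphi$) verifies that the two effects form a complete POVM and that the outcome probabilities are normalized:

```python
import numpy as np

theta, varphi = np.deg2rad(18.4), np.deg2rad(22.5)   # example settings

# Measurement basis |phi>, |phi_perp> in the {|H>, |V>} basis
v  = np.array([np.cos(varphi),  np.sin(varphi)])
vp = np.array([np.sin(varphi), -np.cos(varphi)])

# Kraus operators of the dichotomic POVM
M_p = np.cos(theta) * np.outer(v, v) + np.sin(theta) * np.outer(vp, vp)
M_m = np.sin(theta) * np.outer(v, v) + np.cos(theta) * np.outer(vp, vp)

# Completeness: M_+1^dag M_+1 + M_-1^dag M_-1 = identity
assert np.allclose(M_p.T @ M_p + M_m.T @ M_m, np.eye(2))

# Outcome probabilities for an arbitrary input polarization state sum to one
psi = np.array([0.6, 0.8])
p_plus, p_minus = np.linalg.norm(M_p @ psi)**2, np.linalg.norm(M_m @ psi)**2
assert np.isclose(p_plus + p_minus, 1.0)
```

In the limit $\theta\to 0$ the operators reduce to the projectors onto $|\varphi\rangle$ and $|\varphi^{\perp}\rangle$, recovering a strong measurement.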
In this experiment, HWP7 is set at $(0^{\circ},45^{\circ})$ or $(22.5^{\circ},67.5^{\circ})$, corresponding to Alice's measurement along the $Z$ or $X$ direction, while HWP1 and HWP6, representing the measurements of Bob1 and Bob2, are set at $(-11.25^{\circ},33.75^{\circ})$ or $(11.25^{\circ},56.25^{\circ})$, corresponding to the $(-Z+X)/\sqrt{2}$ or $-(Z+X)/\sqrt{2}$ direction, respectively. Five different angles $\theta=\{4^{\circ},16.4^{\circ},18.4^{\circ},20.5^{\circ},28^{\circ}\}$ are chosen, of which $\theta=\{16.4^{\circ},18.4^{\circ},20.5^{\circ}\}$ lie in the region where double violations are predicted to occur. In particular, theoretical analysis shows that optimal double violations $S_{A-B1}=S_{A-B2}=2.26$ occur under optimal weak measurements when $F=0.6$, corresponding to $\theta=18.4^{\circ}$. Our final results are shown in Fig. 3, where double violations are clearly displayed at $\theta=\{16.4^{\circ},18.4^{\circ},20.5^{\circ}\}$ by approximately 10 standard deviations. Accounting for statistical errors, systematic errors and the imperfections of our apparatus, the experimental results agree well with the theoretical predictions. Discussion and conclusion. In conclusion, we have observed double violations of the Bell inequality for the entangled state of a photon pair by using a strength-tunable optimal weak measurement setup. Our experimental results verify the distribution of nonlocality among multiple observers and shed new light on our understanding of the fascinating properties of nonlocality and quantum measurements. The weak measurement technique used herein can find significant applications in unbounded randomness certification pironio ; curchod , as randomness is a valuable resource in fields ranging from quantum cryptography hugo ; scarani and quantum gambling goldenberg ; zhang to quantum simulation simulation .
Here, the $S$ value of the correlation between Alice and Bob2 is determined by the quality factor of Bob1's weak measurement, implying that Bob1 can control the nonlocal correlation between Alice and Bob2 by manipulating the strength of his measurement. This result provides strong motivation for further research on quantum steering wiseman ; vitus . Furthermore, the strength-tunable optimal weak measurement method shown herein can also be generalized to other systems, such as trapped ions and cavity QED. The authors thank Yun-Feng Huang, Bi-Heng Liu and Bao-Sen Shi for helpful discussions and technical support. Meng-Jun Hu also acknowledges Sheng Liu, Zhi-Bo Hou, Jian Wang, Chao Zhang, Ya Xiao, Dong-Sheng Ding, Yong-Nan Sun and Geng-Chen for many useful suggestions and stimulating discussions. This work was supported by the National Natural Science Foundation of China (Nos. 61275122 and 61590932) and the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (No. XDB01030200). References (1) A. Einstein, B. Podolsky, and N. Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777 (1935). (2) J. S. Bell, On the Einstein-Podolsky-Rosen paradox, Physics 1, 195-200 (1964). (3) J. S. Bell, Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy (Cambridge Univ. Press, 2004). (4) J. F. Clauser, M. A. Horne, A. Shimony, and R. A. Holt, Proposed experiment to test local hidden-variable theories, Phys. Rev. Lett. 23, 880 (1969). (5) S. J. Freedman and J. F. Clauser, Experimental test of local hidden-variable theories, Phys. Rev. Lett. 28, 938 (1972). (6) A. Aspect, J. Dalibard, and G. Roger, Experimental test of Bell’s inequalities using time-varying analyzers, Phys. Rev. Lett. 49, 1804 (1982). (7) G. Weihs, T. Jennewein, C. Simon, H. Weinfurter, and A. Zeilinger, Violation of Bell’s inequality under strict Einstein locality conditions, Phys. Rev. Lett.
81, 5039 (1998). (8) M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe, and D. J. Wineland, Experimental violation of a Bell’s inequality with efficient detection, Nature 409, 791 (2001). (9) D. N. Matsukevich, P. Maunz, D. L. Moehring, S. Olmschenk, and C. Monroe, Bell inequality violation with two remote atomic qubits, Phys. Rev. Lett. 100, 150404 (2008). (10) M. Ansmann et al., Violation of Bell’s inequality in Josephson phase qubits, Nature 461, 504 (2009). (11) J. Hofmann et al., Heralded entanglement between widely separated atoms, Science 337, 72 (2012). (12) M. Giustina et al., Bell violation using entangled photons without the fair-sampling assumption, Nature 497, 227 (2013). (13) B. G. Christensen et al., Detection-loophole-free test of quantum nonlocality, and applications, Phys. Rev. Lett. 111, 130406 (2013). (14) B. Hensen et al., Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres, Nature 526, 682 (2015). (15) M. Giustina et al., Significant-loophole-free test of Bell’s theorem with entangled photons, Phys. Rev. Lett. 115, 250401 (2015). (16) L. K. Shalm et al., Strong loophole-free test of local realism, Phys. Rev. Lett. 115, 250402 (2015). (17) N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Bell nonlocality, Rev. Mod. Phys. 86, 419 (2014). (18) R. Silva, N. Gisin, Y. Guryanova, and S. Popescu, Multiple observers can share the nonlocality of half of an entangled pair by using optimal weak measurement, Phys. Rev. Lett. 114, 250401 (2015). (19) N. J. Cerf, C. Adami, and P. G. Kwiat, Optical simulation of quantum logic, Phys. Rev. A 57, R1477(R) (1998). (20) S. Pironio et al., Random numbers certified by Bell’s theorem, Nature 464, 1021 (2010). (21) F. J. Curchod, M. Johansson, R. Augusiak, M. J. Hoban, P. Wittek, and A. Acín, Unbounded randomness certification using sequences of measurements, arXiv:1510.03394v1 [quant-ph]. (22) H. M. Wiseman, S. J. Jones, and A. C.
Doherty, Steering, entanglement, nonlocality, and the Einstein-Podolsky-Rosen paradox, Phys. Rev. Lett. 98, 140402 (2007). (23) V. Händchen, T. Eberle, S. Steinlechner, A. Samblowski, T. Franz, R. F. Werner, and R. Schnabel, Observation of one-way Einstein-Podolsky-Rosen steering, Nat. Photon. 6, 596 (2012). (24) O. Hosten and P. Kwiat, Observation of the spin Hall effect of light via weak measurements, Science 319, 787 (2008). (25) P. B. Dixon, D. J. Starling, A. N. Jordan, and J. C. Howell, Ultrasensitive beam deflection measurement via interferometric weak value amplification, Phys. Rev. Lett. 102, 173601 (2009). (26) X. Y. Xu, Y. Kedem, K. Sun, L. Vaidman, C. F. Li, and G. C. Guo, Phase estimation with weak measurement using a white light source, Phys. Rev. Lett. 111, 033604 (2013). (27) J. S. Lundeen, B. Sutherland, A. Patel, C. Stewart, and C. Bamber, Direct measurement of the quantum wavefunction, Nature 474, 188 (2011). (28) J. S. Lundeen and C. Bamber, Procedure for direct measurement of general quantum states using weak measurement, Phys. Rev. Lett. 108, 070402 (2012). (29) Y. Aharonov, A. Botero, S. Popescu, B. Reznik, and J. Tollaksen, Revisiting Hardy’s paradox: counterfactual statements, real measurements, entanglement and weak values, Phys. Lett. A 301, 130 (2002). (30) J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, 1955). (31) B. S. Cirel’son, Quantum generalizations of Bell’s inequality, Lett. Math. Phys. 4, 93 (1980). (32) A. Fedrizzi, T. Herbst, A. Poppe, T. Jennewein, and A. Zeilinger, A wavelength-tunable fiber-coupled source of narrowband entangled photons, Opt. Express 15, 15377 (2007). (33) Y. Li, Z. Y. Zhou, D. S. Ding, and B. S. Shi, CW-pumped telecom band polarization entangled photon pair generation in a Sagnac interferometer, Opt. Express 23, 28792 (2015). (34) N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, Quantum cryptography, Rev. Mod. Phys. 74, 145 (2002). (35) V. Scarani, H. B.
Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus, and M. Peev, The security of practical quantum key distribution, Rev. Mod. Phys. 81, 1301 (2009). (36) L. Goldenberg, L. Vaidman, and S. Wiesner, Quantum gambling, Phys. Rev. Lett. 82, 3356 (1999). (37) P. Zhang, Y. S. Zhang, Y. F. Huang, L. Peng, C. F. Li, and G. C. Guo, Optical realization of quantum gambling machine, EPL 82, 30002 (2008). (38) I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Rev. Mod. Phys. 86, 153 (2014).
New Constraints on Cosmic Reionization from the 2012 Hubble Ultra Deep Field Campaign Brant E. Robertson (Department of Astronomy and Steward Observatory, University of Arizona, Tucson AZ 85721), Steven R. Furlanetto (Department of Physics & Astronomy, University of California, Los Angeles CA 90095), Evan Schneider (Department of Astronomy and Steward Observatory, University of Arizona, Tucson AZ 85721), Stephane Charlot (UPMC-CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014, Paris, France), Richard S. Ellis (Department of Astrophysics, California Institute of Technology, MC 249-17, Pasadena, CA 91125), Daniel P. Stark (Department of Astronomy and Steward Observatory, University of Arizona, Tucson AZ 85721), Ross J. McLure (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK), James S. Dunlop (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK), Anton Koekemoer (Space Telescope Science Institute, Baltimore, MD 21218), Matthew A. Schenker (Department of Astrophysics, California Institute of Technology, MC 249-17, Pasadena, CA 91125), Masami Ouchi (Institute for Cosmic Ray Research, University of Tokyo, Kashiwa City, Chiba 277-8582, Japan), Yoshiaki Ono (Institute for Cosmic Ray Research, University of Tokyo, Kashiwa City, Chiba 277-8582, Japan), Emma Curtis-Lake (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK), Alexander B. Rogers (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK), Rebecca A. A.
Bowler (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK), Michele Cirasuolo (Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ, UK) Abstract Understanding cosmic reionization requires the identification and characterization of early sources of hydrogen-ionizing photons. The 2012 Hubble Ultra Deep Field (UDF12) campaign has acquired the deepest infrared images taken with the Wide Field Camera 3 aboard the Hubble Space Telescope and, for the first time, systematically explored the galaxy population deep into the era when cosmic microwave background (CMB) data indicate reionization was underway. The UDF12 campaign thus provides the best constraints to date on the abundance, luminosity distribution, and spectral properties of early star-forming galaxies. We synthesize the new UDF12 results with the most recent constraints from CMB observations to infer redshift-dependent ultraviolet (UV) luminosity densities, reionization histories, and electron scattering optical depth evolution consistent with the available data. Under reasonable assumptions about the escape fraction of hydrogen-ionizing photons and the intergalactic medium clumping factor, we find that to fully reionize the universe by redshift $z\sim 6$, the population of star-forming galaxies at redshifts $z\sim 7-9$ likely must extend in luminosity below the UDF12 limits to absolute UV magnitudes of $M_{\mathrm{UV}}\sim-13$ or fainter. Moreover, low levels of star formation extending to redshifts $z\sim 15-25$, as suggested by the normal UV colors of $z\simeq 7-8$ galaxies and the smooth decline in abundance with redshift observed by UDF12 to $z\simeq 10$, are additionally likely required to reproduce the optical depth to electron scattering inferred from CMB observations. Subject headings: cosmology: reionization – galaxies: evolution – galaxies: formation 1.
Introduction The process of cosmic reionization remains one of the most important outstanding problems in galaxy formation and cosmology. After recombination at $z\approx 1090$ (Hinshaw et al., 2012), gas in the universe was mostly neutral. However, observations of the Gunn & Peterson (1965) trough in quasar spectra (e.g., Fan et al., 2001, 2002, 2003, 2006b; Djorgovski et al., 2001) indicate that intergalactic gas had become almost fully reionized by redshift $z\sim 5$. The electron scattering optical depth inferred from CMB observations suggests that if the universe was instantaneously reionized, reionization would have occurred as early as redshift $z\approx 10$ (Spergel et al., 2003; Hinshaw et al., 2012). Given the dramatic decline in the abundance of quasars beyond redshift $z\sim 6$, quasars very likely cannot be a significant contributor to cosmic reionization (e.g., Willott et al., 2010; Fontanot et al., 2012), even though quasars have been discovered as early as $z\sim 7$ (Mortlock et al., 2011). Star-forming galaxies at redshifts $z\gtrsim 6$ have therefore long been postulated as the likely agents of cosmic reionization, and their time-dependent abundance and spectral properties are thus crucial ingredients for understanding how intergalactic hydrogen became reionized (for reviews, see Fan et al., 2006a; Robertson et al., 2010; Loeb & Furlanetto, 2012). We present an analysis of the implications of the 2012 Hubble Ultra Deep Field (UDF12; http://udf12.arizona.edu) campaign results on the abundance and spectral characteristics of galaxies at $z\sim 7-12$ for the reionization process. The UDF12 campaign is a 128-orbit Hubble Space Telescope (HST) program (GO 12498, PI: Ellis) with the infrared (IR) channel of the Wide Field Camera 3 (WFC3/IR) that acquired the deepest-ever IR images with HST in the Hubble Ultra Deep Field (HUDF) in Fall 2012 (the UDF12 project and data overview are described in Ellis et al. 2013 and Koekemoer et al. 2013).
Combined with previous HUDF observations (GO 11563, PI: G. Illingworth; GO 12060, 12061, 12062, PIs: S. Faber and H. Ferguson; GO 12099, PI: A. Riess), the UDF12 imaging reaches depths of $Y_{\mathrm{105}}=30$, $J_{\mathrm{125}}=29.5$, $J_{\mathrm{140}}=29.5$, and $H_{\mathrm{160}}=29.5$ ($5\sigma$ AB magnitudes). The UDF12 observations have provided the first determinations of the galaxy abundance at redshifts $8.5\leq z\leq 12$ (Ellis et al., 2013), precise determinations of the galaxy luminosity function at redshifts $z\sim 7-8$ (Schenker et al., 2012a; McLure et al., 2012), robust ultraviolet (UV) spectral slope measurements at $z\sim 7-8$ (Dunlop et al., 2012b), and size-luminosity relation measurements at redshifts $z\sim 6-8$ (Ono et al., 2012a). Our earlier UDF12 publications already provide some new constraints on the role that galaxies play in cosmic reionization and the duration of the process. In Ellis et al. (2013), we argued that continuity in the declining abundance of star-forming galaxies over $6<z<10$ (and possibly to $z\simeq 12$) implied the likelihood of further star formation beyond the redshift limits currently probed. Likewise, in Dunlop et al. (2012b), the constancy of the UV continuum slope measured in $z\simeq 7-9$ galaxies over a wide range in luminosity supports the contention that the bulk of the stars at this epoch were already enriched by earlier generations. Collectively, these two results support an extended reionization process. We synthesize these UDF12 findings with the recent 9-year Wilkinson Microwave Anisotropy Probe (WMAP) results (Hinshaw et al., 2012) and stellar mass density measurements (Stark et al., 2012) to provide new constraints on the role of high-redshift star-forming galaxies in the reionization process.
Enabled by the new observational findings, we perform Bayesian inference using a simple parameterized model for the evolving UV luminosity density to find reionization histories, stellar mass density evolution, and electron scattering optical depth histories consistent with the available data. We limit the purview of this paper to empirical modeling of the reionization process; comparisons with more detailed galaxy formation models will be presented in a companion paper (Dayal et al., in preparation). Throughout this paper, we assume the 9-year WMAP cosmological parameters (as additionally constrained by external CMB datasets; $h=0.705$, $\Omega_{\mathrm{m}}=0.272$, $\Omega_{\Lambda}=0.728$, $\Omega_{\mathrm{b}}=0.04885$). Magnitudes are reported using the AB system (Oke & Gunn, 1983). All Bayesian inference and maximum likelihood fitting is performed using the MultiNest code (Feroz & Hobson, 2008; Feroz et al., 2009). 2. The Process of Cosmic Reionization Theoretical models of the reionization process have a long history. Early analytic and numerical models (e.g., Madau et al., 1999; Miralda-Escudé et al., 2000; Gnedin, 2000; Barkana & Loeb, 2001; Razoumov et al., 2002; Wyithe & Loeb, 2003; Ciardi et al., 2003) highlighted the essential physics that give rise to the ionized intergalactic medium (IGM) at late times. In the following description of the cosmic reionization process, we follow most closely the modeling of Madau et al. (1999), Bolton & Haehnelt (2007b), Robertson et al. (2010), and Kuhlen & Faucher-Giguère (2012), but there has been closely related recent work by Ciardi et al. (2012) and Jensen et al. (2013). The reionization process is a balance between the recombination of free electrons with protons to form neutral hydrogen and the ionization of hydrogen atoms by cosmic Lyman continuum photons with energies $E>13.6~\mathrm{eV}$.
The dimensionless volume filling fraction of ionized hydrogen $Q_{\mathrm{HII}}$ can be expressed as a time-dependent differential equation capturing these competing effects as $$\dot{Q}_{\mathrm{HII}}=\frac{\dot{n}_{\mathrm{ion}}}{\langle n_{\mathrm{H}}\rangle}-\frac{Q_{\mathrm{HII}}}{t_{\mathrm{rec}}}$$ (1) where dotted quantities are time derivatives. The comoving number density of hydrogen atoms $$\langle n_{\mathrm{H}}\rangle=\frac{X_{\mathrm{p}}\Omega_{\mathrm{b}}\rho_{\mathrm{c}}}{m_{\mathrm{H}}}$$ (2) depends on the primordial mass-fraction of hydrogen $X_{\mathrm{p}}=0.75$ (e.g., Hou et al., 2011), the mass of the hydrogen atom $m_{\mathrm{H}}$, the critical density $\rho_{\mathrm{c}}=1.8787\times 10^{-29}h^{2}~{}\mathrm{g}~{}\mathrm{cm}^{-3}$, and the fractional baryon density $\Omega_{\mathrm{b}}$. As a function of redshift, the average recombination time in the IGM is $$t_{\mathrm{rec}}=\left[C_{\mathrm{HII}}\alpha_{\mathrm{B}}(T)(1+Y_{\mathrm{p}}/4X_{\mathrm{p}})\langle n_{\mathrm{H}}\rangle(1+z)^{3}\right]^{-1},$$ (3) where $\alpha_{\mathrm{B}}(T)$ is the case B recombination coefficient for hydrogen (we assume an IGM temperature of $T=20,000$ K), $Y_{\mathrm{p}}=1-X_{\mathrm{p}}$ is the primordial helium abundance (and accounts for the number of free electrons per proton in the fully ionized IGM, e.g., Kuhlen & Faucher-Giguère 2012), and $C_{\mathrm{HII}}\equiv\langle n_{\mathrm{H}}^{2}\rangle/\langle n_{\mathrm{H}}\rangle^{2}$ is the “clumping factor” that accounts for the effects of IGM inhomogeneity through the quadratic dependence of the recombination rate on density. Simulations suggest that the clumping factor of IGM gas is $C_{\mathrm{HII}}\approx 1-6$ at the redshifts of interest (e.g., Sokasian et al., 2003; Iliev et al., 2006; Pawlik et al., 2009; Shull et al., 2012; Finlator et al., 2012). 
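Equations 2 and 3 are straightforward to evaluate numerically. A minimal Python sketch follows, using the paper's cosmological parameters; the hydrogen atom mass and the power-law fit for the case B coefficient $\alpha_{\mathrm{B}}(T)$ are our own assumptions (the text specifies only $T=20,000$ K):

```python
import math

# WMAP9-based parameters from the paper
h = 0.705
Omega_b = 0.04885
X_p = 0.75                      # primordial hydrogen mass fraction
Y_p = 1.0 - X_p                 # primordial helium mass fraction
rho_c = 1.8787e-29 * h**2       # critical density [g cm^-3]
m_H = 1.6726e-24                # hydrogen atom mass [g] (assumed value)

# Comoving hydrogen number density (Equation 2) [cm^-3]
n_H = X_p * Omega_b * rho_c / m_H

def alpha_B(T=2.0e4):
    """Case B recombination coefficient [cm^3 s^-1].
    Assumed power-law fit; the text specifies only T = 20,000 K."""
    return 2.59e-13 * (T / 1.0e4)**-0.76

def t_rec(z, C_HII=3.0, T=2.0e4):
    """Average IGM recombination time (Equation 3) [s]."""
    return 1.0 / (C_HII * alpha_B(T) * (1.0 + Y_p / (4.0 * X_p))
                  * n_H * (1.0 + z)**3)
```

For these assumed inputs, `t_rec(7.0)` with $C_{\mathrm{HII}}=3$ comes out at roughly $2\times 10^{16}$ s (about 0.6 Gyr), comparable to the age of the universe at that epoch, which is why the recombination term in Equation 1 cannot be neglected.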
While early hydrodynamical simulation studies suggested that the clumping factor could be as high as $C_{\mathrm{HII}}\sim 10-40$ at redshifts $z<8$ (e.g., Gnedin & Ostriker, 1997), recent studies that separately identify IGM and interstellar medium gas in the simulations and employ a more detailed modeling of the evolving UV background have found lower values of $C_{\mathrm{HII}}$. An interesting study of the redshift evolution of the clumping factor was provided by Pawlik et al. (2009, see their Figures 5 and 7). At early times in their simulations ($z\geq 10$), the clumping factor was low ($C_{\mathrm{HII}}<3$) but increased with decreasing redshift at a rate similar to predictions from the Miralda-Escudé et al. (2000) model of the evolving IGM density probability distribution function. In the absence of photoheating, at lower redshifts ($z\lesssim 9$) the clumping factor would begin to increase more rapidly than the Miralda-Escudé et al. (2000) prediction to reach $C_{\mathrm{HII}}\sim 10-20$ by $z\sim 6$. In the presence of photoheating, the evolution of the clumping factor depends on the epoch when the uniform UV background becomes established. If the IGM was reheated early ($z\sim 10-20$) the predicted rise in clumping factor breaks after the UV background is established and increases only slowly to $C_{\mathrm{HII}}\sim 3-6$ at late times ($z\sim 6$). If instead, and perhaps more likely, the IGM is reheated later (e.g., $z\sim 6-8$) the clumping factor may actually decrease at late times from $C_{\mathrm{HII}}\sim 6-10$ at $z\sim 8$ to $C_{\mathrm{HII}}\sim 3-6$ at $z\sim 6$. 
The results of these simulations (e.g., Pawlik et al., 2009; Finlator et al., 2012) in part motivate our choice to treat the clumping factor as a constant $C_{\mathrm{HII}}\sim 3$ since over a wide range of possible redshifts for the establishment of the UV background the clumping factor is expected to be $C_{\mathrm{HII}}\sim 2-4$ at $z\lesssim 12$ (see Figure 5 of Pawlik et al., 2009) and lower at earlier times. In comparison with our previous work (Robertson et al., 2010), where we considered $C_{\mathrm{HII}}=2-6$ and frequently used $C_{\mathrm{HII}}=2$ in Equation 3, we will see that our models complete reionization somewhat later when a somewhat larger value of $C_{\mathrm{HII}}$ is more appropriate. However, we note that the end of the reionization process may be more complicated than what we have described above (see, e.g., Section 9.2.1 of Loeb & Furlanetto 2012). As reionization progresses, the ionized phase penetrates more and more deeply into dense clumps within the IGM – the material that will later form the Lyman-$\alpha$ forest (and higher column density systems). These high-density clumps recombine much faster than average, so $C_{\mathrm{HII}}$ may increase throughout reionization (Furlanetto & Oh, 2005). Combined with the failure of Equation 1 to model the detailed distribution of gas densities in the IGM, we expect our admittedly crude approach to fail at the tail end of reionization. Fortunately, we are primarily concerned with the middle phases of reionization here, so any unphysical behavior when $Q_{\mathrm{HII}}$ is large is not important for us. 
The comoving production rate $\dot{n}_{\mathrm{ion}}$ of hydrogen-ionizing photons available to reionize the IGM depends on the intrinsic productivity of Lyman continuum radiation by stellar populations within galaxies parameterized in terms of the rate of hydrogen-ionizing photons per unit UV (1500$\mathring{A}$) luminosity $\xi_{\mathrm{ion}}$ (with units of $\mathrm{ergs}^{-1}~{}\mathrm{Hz}$), the fraction $f_{\mathrm{esc}}$ of such photons that escape to affect the IGM, and the total UV luminosity density $\rho_{\mathrm{UV}}$ (with units of $\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1}~{}\mathrm{Mpc}^{-3}$) supplied by star-forming galaxies to some limiting absolute UV magnitude $M_{\mathrm{UV}}$. The product $$\dot{n}_{\mathrm{ion}}=f_{\mathrm{esc}}\xi_{\mathrm{ion}}\rho_{\mathrm{UV}}$$ (4) then determines the newly available number density of Lyman continuum photons per second capable of reionizing intergalactic hydrogen. We note that the expression of $\dot{n}_{\mathrm{ion}}$ in terms of UV luminosity density rather than star formation rate (cf. Robertson et al., 2010) is largely a matter of choice; stellar population synthesis models with assumed star formation histories are required to estimate $\xi_{\mathrm{ion}}$, and using the star formation rate density $\rho_{\mathrm{SFR}}$ in Equation 4 therefore requires no additional assumptions. Throughout this paper, we choose $f_{\mathrm{esc}}=0.2$. As shown by Ouchi et al. (2009), escape fractions comparable to or larger than $f_{\mathrm{esc}}=0.2$ during the reionization epoch are required for galaxies with typical stellar populations to contribute significantly. We also consider an evolving $f_{\mathrm{esc}}$ with redshift, with the results discussed in Section 6.2 below. 
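Equations 1 and 4 together define a single ordinary differential equation for $Q_{\mathrm{HII}}$. The forward-Euler sketch below integrates it for a toy constant comoving emissivity; all the numerical constants are round approximations for illustration, not the paper's fitted values:

```python
import math

n_H = 2.0e-7          # comoving hydrogen number density [cm^-3] (approx.)

def t_rec(z, C_HII=3.0):
    # Equation 3 with alpha_B(2e4 K) ~ 1.5e-13 cm^3/s and (1 + Y_p/4X_p) ~ 1.08
    return 1.0 / (C_HII * 1.5e-13 * 1.08 * n_H * (1.0 + z)**3)

def dt_dz_abs(z, h=0.705, Om=0.272):
    """|dt/dz| [s] for flat LCDM."""
    Hz = h * 3.241e-18 * math.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))
    return 1.0 / ((1.0 + z) * Hz)

def ionized_fraction(ndot_ion, z_start=20.0, z_end=6.0, dz=0.01):
    """Forward-Euler integration of Equation 1 for a constant comoving
    emissivity ndot_ion [photons s^-1 cm^-3] (illustrative only)."""
    Q, z = 0.0, z_start
    while z > z_end:
        dQ_dt = ndot_ion / n_H - Q / t_rec(z)      # Equation 1
        Q = min(1.0, max(0.0, Q + dQ_dt * dt_dz_abs(z) * dz))
        z -= dz
    return Q
```

In this sketch, an emissivity a few times above the recombination-balance value at $z\sim 7$ completes reionization by $z\sim 6$, while an emissivity well below it leaves the IGM mostly neutral, illustrating how sensitively $Q_{\mathrm{HII}}(z)$ depends on $f_{\mathrm{esc}}\xi_{\mathrm{ion}}\rho_{\mathrm{UV}}$.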
The advances presented in this paper come primarily from the new UDF12 constraints on the abundance of star-forming galaxies over $6.5<z<12$, the luminosity functions down to $M_{\mathrm{UV}}\simeq-17$, and robust determinations of their UV continuum colors. For the latter, in Section 3, we use the UV spectral slope of high-redshift galaxies measured by Dunlop et al. (2012b) and the stellar population synthesis models of Bruzual & Charlot (2003) to inform a choice for the number $\xi_{\mathrm{ion}}$ of ionizing photons produced per unit luminosity. For the former, the abundance and luminosity distribution of high-redshift galaxies determined by Ellis et al. (2013), Schenker et al. (2012a), and McLure et al. (2012) provide estimates of the evolving UV luminosity density $\rho_{\mathrm{UV}}$. The evolving UV luminosity density supplied by star-forming galaxies brighter than some limiting magnitude $M_{\mathrm{UV}}$ is simply related to an integral of the luminosity function as $$\rho_{\mathrm{UV}}(z)=\int_{-\infty}^{M_{\mathrm{UV}}}\Phi(M)L(M)dM,$$ (5) where $L$ is the luminosity and the functional form of the galaxy luminosity function is often assumed to be a Schechter (1976) function $$\Phi(M)=0.4\ln 10~{}\phi_{\star}\left[10^{0.4(M_{\star}-M)}\right]^{1+\alpha}\exp\left[-10^{0.4(M_{\star}-M)}\right]$$ (6) parameterized in terms of the normalization $\phi_{\star}$ (in units of $\mathrm{Mpc}^{-3}~{}\mathrm{mag}^{-1}$), the characteristic galaxy magnitude $M_{\star}$, and the faint-end slope $\alpha$. Each of these parameters may evolve with redshift $z$, which can affect the relative importance of faint galaxies for reionization (e.g., Oesch et al., 2009; Bouwens et al., 2012b). In Sections 4 and 4.1 below, we present our method of using the previous and UDF12 data sets to infer constraints on the luminosity density as a function of redshift and limiting magnitude. 2.1. 
Stellar Mass Density as a Constraint on Reionization The UV luminosity density is supplied by short-lived, massive stars and therefore reflects the time rate of change of the stellar mass density $\rho_{\star}(z)$. In the context of our model, there are two routes for estimating the stellar mass density. First, we can integrate the stellar mass density supplied by the star formation rate inferred from the evolving UV luminosity density as $$\rho_{\star}(z)=(1-R)\int_{\infty}^{z}\eta_{\mathrm{sfr}}(z^{\prime})\rho_{\mathrm{UV}}(z^{\prime})\frac{dt}{dz^{\prime}}dz^{\prime},$$ (7) where $\eta_{\mathrm{sfr}}(z)$ provides the stellar population model-dependent conversion between UV luminosity and star formation rate (in units $M_{\sun}~{}\mathrm{yr}^{-1}~{}\mathrm{ergs}^{-1}~{}\mathrm{s}~{}\mathrm{Hz}$), $R$ is the fraction of mass returned from a stellar population to the ISM ($28\%$ for a Salpeter 1955 model after $\sim 10$ Gyr for a $0.1-100M_{\sun}$ IMF), and $dt/dz$ gives the rate of change of universal time per unit redshift (in units of $\mathrm{yr}$). While $\eta_{\mathrm{sfr}}(z)$ is in principle time-dependent, there is no firm evidence yet of its evolution and we adopt a constant value throughout. While the evolving stellar mass density can be calculated in our model, the observational constraints on $\rho_{\star}$ have involved integrating a composite stellar mass function constructed from the UV luminosity function and a stellar mass to UV luminosity relation (González et al., 2011; Stark et al., 2012, see also Labbe et al. 2012). Using near-IR observations with the Spitzer Space Telescope, stellar masses of UV-selected galaxies are measured as a function of luminosity. Stark et al. 
(2012) find that the stellar mass – $L_{\mathrm{UV}}$ relation, corrected for nebular emission contamination, is well described by $$\log m_{\star}=1.433\log L_{\mathrm{UV}}-31.99+f_{\star}(z)$$ (8) where $m_{\star}$ is the stellar mass in $M_{\sun}$, $L_{\mathrm{UV}}$ is the UV luminosity spectral density in $\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1}$, and $f_{\star}(z)=[0,-0.03,-0.18,-0.40]$ for redshift $z\approx[4,5,6,7]$. The stellar mass density will then involve an integral over the product of the UV luminosity function and the stellar mass $m_{\star}(L_{\mathrm{UV}})$. However, the significant scatter in the $m_{\star}(L_{\mathrm{UV}})$ relation ($\sigma\approx 0.5$ in $\log m_{\star}$, see González et al., 2011; Stark et al., 2012) must be taken into account. The Gaussian scatter $p[M^{\prime}-M(\log m_{\star})]$ in luminosity contributing at a given $\log m_{\star}$ can be incorporated into the stellar mass function $dn_{\star}/d\log m_{\star}$ with a convolution over the UV luminosity function. We write the stellar mass function as $$\frac{dn_{\star}}{d\log m_{\star}}=\frac{dM}{d\log m_{\star}}\int_{-\infty}^{\infty}\Phi(M^{\prime})p[M^{\prime}-M(\log m_{\star})]dM^{\prime}.$$ (9) For vanishing scatter Equation 9 would give simply $dn_{\star}/dm_{\star}=\Phi[M(m_{\star})]\times dM/dm_{\star}$. The stellar mass density $\rho_{\star}$ can be computed by integrating this mass function as $$\rho_{\star}(<m_{\star},z)=\int_{-\infty}^{\log m_{\star}}\frac{dn_{\star}}{d\log m_{\star}^{\prime}}m_{\star}^{\prime}d\log m_{\star}^{\prime}.$$ (10) A primary feature of the stellar mass function is that the stellar mass – UV luminosity relation and scatter flatten it relative to the UV luminosity function. Correspondingly, the stellar mass density converges faster with decreasing stellar mass or luminosity than does the UV luminosity density (Equation 5). The stellar mass density then serves as an additional, integral constraint on $\dot{n}_{\mathrm{ion}}$. 2.2. 
Electron Scattering Optical Depth Once the evolving production rate of ionizing photons $\dot{n}_{\mathrm{ion}}$ is determined, the reionization history $Q_{\mathrm{HII}}(z)$ of the universe can be calculated by integrating Equation 1. An important integral constraint on the reionization history is the electron scattering optical depth $\tau$ inferred from observations of the CMB. The optical depth can be calculated from the reionization history as a function of redshift $z$ as $$\tau(z)=\int_{0}^{z}c\langle n_{\mathrm{H}}\rangle\sigma_{\mathrm{T}}f_{\mathrm{e}}Q_{\mathrm{HII}}(z^{\prime})H^{-1}(z^{\prime})(1+z^{\prime})^{2}dz^{\prime},$$ (11) where $c$ is the speed of light, $\sigma_{\mathrm{T}}$ is the Thomson cross section, and $H(z)$ is the redshift-dependent Hubble parameter. The number $f_{\mathrm{e}}$ of free electrons per hydrogen nucleus in the ionized IGM depends on the ionization state of helium. Following Kuhlen & Faucher-Giguère (2012) and other earlier works, we assume that helium is doubly ionized ($f_{\mathrm{e}}=1+Y_{\mathrm{p}}/2X_{\mathrm{p}}$) at $z\leq 4$ and singly ionized ($f_{\mathrm{e}}=1+Y_{\mathrm{p}}/4X_{\mathrm{p}}$) at higher redshifts. To utilize the observational constraints on $\tau$ as a constraint on the reionization history, we employ the posterior probability distribution $p(\tau)$ determined from the Markov Chain Monte Carlo chains used in the 9-year WMAP results (http://lambda.gsfc.nasa.gov) as a marginalized likelihood for our derived $\tau$ values. This method is described in more detail in Section 5. 3. UV Spectral Slopes and the Ionizing Photon Budget A critical ingredient for determining the comoving production rate of hydrogen ionizing photons is the ratio $\xi_{\mathrm{ion}}$ of the Lyman continuum photon emission rate per unit UV (1500$\mathring{A}$) luminosity spectral density of individual sources. 
Since Lyman continuum photons are predominantly produced by hot, massive, UV-bright stars, it is sensible to expect that $\xi_{\mathrm{ion}}$ will be connected with the UV spectral slope of a stellar population. Here, we use observational constraints on the UV slope of high-redshift galaxies determined from the UDF12 campaign by Dunlop et al. (2012b) and stellar population synthesis models by Bruzual & Charlot (2003, BC03) to estimate a physically-motivated $\xi_{\mathrm{ion}}$ consistent with the data. In this paper, we concentrate on placing constraints on $\xi_{\mathrm{ion}}$; for a much more detailed analysis and interpretation of the UV slope results from UDF12, please see Dunlop et al. (2012b). Prior to the UDF12 program, observations of high-redshift galaxies in the HUDF09 WFC3/IR campaign provided a first estimate of the UV spectral slopes ($\beta$, where $f_{\lambda}\propto\lambda^{\beta}$) of $z\gtrsim 7$ galaxies. Early results from the HUDF09 team indicated that these high-redshift galaxies had extraordinarily blue UV slopes of $\beta\approx-3$ (Bouwens et al., 2010), much bluer than well-studied starburst galaxies at lower redshifts (e.g., Meurer et al., 1999). In the intervening period before the UDF12 data were acquired, several workers argued against such extreme values (Finkelstein et al., 2010, 2012; McLure et al., 2011; Dunlop et al., 2012a; Bouwens et al., 2012a; Rogers et al., 2012). The UDF12 campaign provided significantly deeper $H_{\mathrm{160}}$ imaging data used in the spectral slope determination (e.g., $\beta=4.43(J_{\mathrm{125}}-H_{\mathrm{160}})-2$ at redshifts $z\sim 7-8$) and added $J_{\mathrm{140}}$ imaging that reduces potential observational biases and enables a first UV slope determination at $z\sim 9$. These measurements were presented by Dunlop et al. (2012b), whose results are discussed in the context of the present paper in Figure 1 (data points, left panel). Dunlop et al. 
(2012b) measured the spectral slope $\beta$ as a function of galaxy luminosity and redshift in the range $-19.5\leq M_{\mathrm{UV}}\leq-17.5$ at $z\sim 7-8$. Using their reported measurements (see their Table 1), we performed simple fits of a constant to their $\beta$ values at each redshift separately and found maximum likelihood values of $\beta(z\sim 7)=-1.915$ and $\beta(z\sim 8)=-1.970$ (Figure 1, red lines in left panel), consistent with the single $M_{\mathrm{UV}}=-18$ $z\sim 9$ measurement of $\beta(z\sim 9)=-1.80\pm 0.63$. The 68% credibility intervals on a constant $\beta$ at each redshift (Figure 1, grey areas in left panel) suggest that across redshifts $z\sim 7-9$ galaxies are consistent with a non-evolving UV spectral slope in the range $-2.1\leq\beta\leq-1.7$. The apparent constancy of $\beta$ with redshift avoids the need for strong assumptions about the redshift evolution of galaxy properties. To connect these UV spectral slope determinations to a value of $\xi_{\mathrm{ion}}$, we must rely on stellar population synthesis models. We use the standard BC03 models to extract model spectra of stellar populations with a range of star formation histories (bursts and constant star formation rates), metallicities ($Z=0.0001-0.05$), dust absorption ($A_{\mathrm{V}}\approx 0.1-1$; calculated using the Charlot & Fall 2000 dust model), and initial mass functions (IMF; Chabrier 2003 and Salpeter 1955). These models provide both the Lyman continuum produced by hot stars and the full spectral energy distributions (SEDs) of the composite stellar population per unit star formation rate (SFR) or stellar mass. We determine $\xi_{\mathrm{ion}}$ for each model by dividing the Lyman continuum photon production rate per unit SFR or stellar mass by a $1500\mathring{A}$ luminosity spectral density (in $\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1}$) per unit SFR or stellar mass measured with a synthetic filter with a flat response and $100\mathring{A}$ width. 
The model SFR or stellar mass then scales out of the ratio $\xi_{\mathrm{ion}}$ (with units $\mathrm{ergs}^{-1}~{}\mathrm{Hz}$). By measuring the UV slope values of each model using synthetic photometry with the $J_{\mathrm{125}}$ and $H_{\mathrm{160}}$ total throughput response on SEDs redshifted appropriately for $z\sim 7$ observations, the value of $\xi_{\mathrm{ion}}$ for each model $\beta$ can be studied as a function of age, metallicity, IMF, and star formation history. We have checked that our methods for measuring $\beta$ and IR colors from the synthetic model spectra reproduce results from the literature (see Robertson et al. 2007 and Figure 2 of Rogers et al. 2012). The right panel of Figure 1 shows the $\xi_{\mathrm{ion}}$ of a variety of BC03 constant SFR models as a function of UV slope $\beta$, over the range $-2.1\lesssim\beta\lesssim-1.7$ suggested by the Dunlop et al. (2012b) color measurements and further constrained such that the age of the stellar populations is less than the age of the universe at redshift $z\sim 7$ ($t\approx 7.8\times 10^{8}~{}\mathrm{yr}$). We will limit our further discussion of BC03 models to constant SFR histories, as we find that single-burst populations display too wide a range of $\xi_{\mathrm{ion}}$ with $\beta$ to be tightly constrained by $\beta$ alone (although luminosity limits would constrain the available $\xi_{\mathrm{ion}}$ for a given $\beta$ in single-burst models, without individual SED fits to objects we cannot usefully constrain such models over the wide range of luminosities and redshifts we examine). Three broad types of BC03 constant SFR models are consistent with values of $\beta=-2$. Generically, the BC03 constant SFR models evolve from large values of $\xi_{\mathrm{ion}}$ and large negative values of $\beta$ at early times to a roughly horizontal evolution with constant $\xi_{\mathrm{ion}}$ with $\beta$ increasing at late times ($t\gtrsim 10^{8}~{}\mathrm{yr}$). 
Metal poor ($Z<Z_{\sun}$) constant SFR populations without dust produce values of $\beta$ significantly bluer (more negative) than observed at $z\sim 7-9$ (Dunlop et al., 2012b). Mature ($\gtrsim 10^{8}~{}\mathrm{yr}$ old), metal-rich ($Z\sim Z_{\sun}$), dust free stellar populations evolve to a constant value of $\log\xi_{\mathrm{ion}}\approx 24.95-25.2~{}\log~{}\mathrm{ergs}^{-1}~{}\mathrm{Hz}$ over the observed $\beta$ values (as Figure 1 indicates, the results are largely independent of IMF for constant SFR models). Applying a dust absorption of $A_{\mathrm{V}}\sim 0.1$ using the model of Charlot & Fall (2000, with parameters $\tau_{\mathrm{V}}=0.25$ and an ISM attenuation fraction of 0.3) shifts the SED evolution tracks down in $\xi_{\mathrm{ion}}$ (from dust absorption) and to redder $\beta$, such that young, metal-rich stellar populations with dust can also reproduce values of $\beta\approx-2$ while maintaining $\log\xi_{\mathrm{ion}}=24.75-25.35~{}\log~{}\mathrm{ergs}^{-1}~{}\mathrm{Hz}$. Moderately metal-poor models ($Z\sim 0.2-0.4Z_{\sun}$) with as much as $A_{\mathrm{V}}\approx 0.1$ can also reproduce $\beta\approx-2$ for population ages $t>10^{8}~{}\mathrm{yr}$, but the most metal poor models in this range require $t>4\times 10^{8}~{}\mathrm{yr}$ (an initial formation redshift of $z\gtrsim 12$ if observed at $z\sim 7$). We note that our conclusions about the connection between $\beta$ (or $J_{\mathrm{125}}-H_{\mathrm{160}}$ color) and the properties of stellar populations are wholly consistent with previous results in the literature (e.g., Figure 7 of Finkelstein et al., 2010). While the $\beta$ measurements of Dunlop et al. (2012b) have greatly reduced the available BC03 stellar population model parameter space, there is still a broad allowable range of $\log~{}\xi_{\mathrm{ion}}\approx 24.75-25.35~{}\log~{}\mathrm{ergs}^{-1}~{}\mathrm{Hz}$ available for constant SFR models with UV spectral slopes of $\beta\approx-2$. 
We therefore adopt the value $$\log~{}\xi_{\mathrm{ion}}=25.2~{}\log~{}\mathrm{ergs}^{-1}~{}\mathrm{Hz}~{}~{}(\mathrm{adopted})$$ (12) throughout the rest of the paper. This $\xi_{\mathrm{ion}}$ is in the upper range of the available values shown in Figure 1, but is comparable to values adopted elsewhere in the literature (e.g., Kuhlen & Faucher-Giguère, 2012, who assume $\log\xi_{\mathrm{ion}}\approx 25.3$ for $\beta=-2$, see their Equations 5 and 6). We also considered stellar populations reddened by nebular continuum emission (e.g., Schaerer & de Barros, 2009, 2010; Ono et al., 2010; Robertson et al., 2010), which in principle could allow relatively young, metal poor populations with larger $\xi_{\mathrm{ion}}$ to fall into the window of $\beta$ values found by Dunlop et al. (2012b). We find that for $f_{\mathrm{esc}}\sim 0.2$, nebular models applied to young ($<100~{}\mathrm{Myr}$) constant star formation rate BC03 models are still marginally too blue ($\beta\sim-2.3$). Although more detailed modeling is always possible to explore the impact of nebular emission on $\xi_{\mathrm{ion}}$, the uniformity observed by Dunlop et al. (2012b) in the average value of $\beta$ over a range in galaxy luminosities may argue against a diverse mixture of young and mature stellar populations in the current $z\simeq 7-8$ samples. However, as Dunlop et al. (2012b) noted, a larger intrinsic scatter could be present in the UV slope distribution of the observed population but not yet detected. Similarly, top-heavy initial mass function stellar populations with low metallicity, like the $1-100M_{\sun}$ Salpeter IMF models of Schaerer (2003) used by Bouwens et al. (2010) to explain the earlier HUDF09 data, are disfavored owing to their blue spectral slopes. 
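For reference, the two-band slope conversion quoted in Section 3, $\beta=4.43(J_{\mathrm{125}}-H_{\mathrm{160}})-2$ at $z\sim 7-8$, can be wrapped in a small helper; the coefficient is consistent with the WFC3/IR filter pivot wavelengths, and a flat-spectrum AB source ($J=H$) recovers $\beta=-2$:

```python
def beta_from_color(J125, H160):
    """UV spectral slope from a J125 - H160 AB color at z ~ 7-8, using the
    conversion quoted in the text: beta = 4.43*(J125 - H160) - 2.
    A flat-spectrum AB source (J125 = H160) gives beta = -2."""
    return 4.43 * (J125 - H160) - 2.0

def color_from_beta(beta):
    """Inverse conversion: the J125 - H160 color implied by a slope beta."""
    return (beta + 2.0) / 4.43
```

A redder measured color ($J_{\mathrm{125}}-H_{\mathrm{160}}>0$) thus maps to $\beta>-2$, and the extreme $\beta\approx-3$ values discussed above would require $J_{\mathrm{125}}-H_{\mathrm{160}}\approx-0.23$.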
For reference, for conversion from UV luminosity spectral density to SFR we note that for population ages $t>10^{8}~{}\mathrm{yr}$ a constant SFR BC03 model with a Chabrier (2003) IMF and solar metallicity provides a $1500\mathring{A}$ luminosity spectral density of $$L_{\mathrm{UV}}\approx 1.25\times 10^{28}\times\frac{\mathrm{SFR}}{M_{\sun}~{}\mathrm{yr}^{-1}}~{}\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1},$$ (13) while, as noted by Madau et al. (1998), a comparable Salpeter (1955) model provides 64% of this UV luminosity. A very metal-poor population ($Z=Z_{\sun}/200$) would provide 40% more UV luminosity per unit SFR. 4. Ultraviolet Luminosity Density In addition to constraints on the spectral energy distributions of high-redshift galaxies (Dunlop et al., 2012b), the UDF12 observations provide a critical determination of the luminosity function of star-forming galaxies at redshifts $7\lesssim z\lesssim 9$. As described in Section 2, when calculating the comoving production rate $\dot{n}_{\mathrm{ion}}$ of hydrogen ionizing photons per unit volume (Equation 4) the UV luminosity density $\rho_{\mathrm{UV}}$ provided by an integral of the galaxy luminosity function is required (Equation 5). An accurate estimate of the $\rho_{\mathrm{UV}}$ provided by galaxies down to observed limits requires a careful analysis of star-forming galaxy samples at faint magnitudes. Using the UDF12 data, Schenker et al. (2012a) and McLure et al. (2012) have produced separate estimates of the $z\sim 7-8$ galaxy luminosity function for different sample selections (color-selected drop-out and spectral energy distribution-fitted samples, respectively). As we demonstrate, the UV luminosity densities computed from these separate luminosity functions are consistent within $1-\sigma$ at $z\sim 7$ and in even closer agreement at $z\sim 8$. Further, McLure et al. (2012) have provided the first luminosity function estimates at $z\sim 9$. 
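Equation 13 and the quoted IMF scaling translate directly into conversion helpers; the 64% Salpeter factor is the one stated in the text:

```python
def L_UV_from_SFR(sfr, imf="chabrier"):
    """1500 A luminosity spectral density [erg s^-1 Hz^-1] for a constant-SFR
    BC03 population older than 1e8 yr (Equation 13); sfr in M_sun/yr.
    The Salpeter normalization is 64% of the Chabrier value."""
    L_chabrier = 1.25e28 * sfr
    return L_chabrier if imf == "chabrier" else 0.64 * L_chabrier

def SFR_from_rho_UV(rho_UV):
    """Invert Equation 13 to get the SFR density [M_sun yr^-1 Mpc^-3]
    implied by a UV luminosity density [erg s^-1 Hz^-1 Mpc^-3]."""
    return rho_UV / 1.25e28
```

For example, a UV luminosity density of $10^{26}~\mathrm{ergs}~\mathrm{s}^{-1}~\mathrm{Hz}^{-1}~\mathrm{Mpc}^{-3}$ corresponds to a Chabrier SFR density of $8\times 10^{-3}~M_{\sun}~\mathrm{yr}^{-1}~\mathrm{Mpc}^{-3}$ under these assumptions.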
Combined, these star-forming galaxy luminosity function determinations provide the required constraints on $\rho_{\mathrm{UV}}$ in the epoch $z\gtrsim 7$ when, as we show below, the ionization fraction of the IGM is likely changing rapidly. Given the challenge of working at the limits of the observational capabilities of HST and the relatively small volumes probed by the UDF (with expected cosmic variance of $\sim 30\%-40\%$ at redshifts $z\sim 7-9$, see Robertson 2010b, a; Muñoz et al. 2010, and Section 4.2.3 of Schenker et al. 2012a), we anchor our constraints on the evolving UV luminosity density with precision determinations of the galaxy luminosity function at redshifts $4\lesssim z\lesssim 6$ by Bouwens et al. (2007). To utilize as much information as possible about the luminosity function (LF) constraints at $z\sim 4-9$, we perform Bayesian inference to generate full posterior distributions of the galaxy luminosity functions at each redshift. To achieve this, we perform Schechter (1976) function parameter estimation at $z\sim 4-9$ using the stepwise maximum likelihood UV luminosity function constraints reported in Table 5 of Bouwens et al. (2007, for $z\sim 4-6$) and Table 2 of McLure et al. (2012, for $z\sim 7-8$), allowing all parameters (the luminosity function normalization $\phi_{\star}$, the characteristic galaxy magnitude $M_{\star}$, and the faint-end slope $\alpha$) to vary. For these LF determinations, we assume Gaussian errors and a $\chi^{2}$ likelihood. For additional constraints at $z\sim 7-8$, we use the samples from the full posterior distributions of LF parameters determined by Schenker et al. (2012a), using their method for Bayesian inference of LF parameters discussed in their Section 4.2. Lastly, at redshift $z\sim 9$ we perform parameter estimation on $\phi_{\star}$ given the stepwise maximum likelihood LF determination provided in Table 4 of McLure et al. 
(2012) while keeping $M_{\star}$ and $\alpha$ fixed at the best-fit $z\sim 8$ values reported by McLure et al. (2012). Again we assume Gaussian errors and a $\chi^{2}$ likelihood. The limited information available at $z\sim 9$ and our restricted fitting method at this redshift mean that the inferred allowed variation in the $\rho_{\mathrm{UV}}(z\sim 9)$ will be underestimated. However, we have checked that this restriction does not strongly influence our results presented in Section 5. In each case our maximum likelihood luminosity function parameters are consistent within $1-\sigma$ of the values originally reported by Bouwens et al. (2007) and McLure et al. (2012), and are of course identical in the case of the Schenker et al. (2012a) LFs. Figure 2 shows the integrated UV luminosity density $\rho_{\mathrm{UV}}$ as a function of limiting magnitude for these galaxy luminosity function determinations at $z\sim 5-8$ (constraints at $z\sim 4$ and $z\sim 9$ are also used). For each redshift, we show the maximum likelihood $\rho_{\mathrm{UV}}$ (white lines) and the inner 68% variation in the marginalized $\rho_{\mathrm{UV}}$ (blue regions). Since the luminosity functions are steep ($\alpha\lesssim-1.7$), the luminosity densities $\rho_{\mathrm{UV}}$ increase dramatically below the characteristic magnitude $M_{\star}$ at each redshift, but especially at $z\gtrsim 7$. The total $\rho_{\mathrm{UV}}$ supplied by star-forming galaxies therefore strongly depends on the limiting magnitude adopted to truncate the integral in Equation 5. The UDF12 campaign depth of $M_{\mathrm{UV}}<-17$ provides $\rho_{\mathrm{UV}}\approx 10^{26}~{}\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1}~{}\mathrm{Mpc}^{-3}$ at $z\sim 7$, declining to $\rho_{\mathrm{UV}}\sim 3.2\times 10^{25}~{}\mathrm{ergs}~{}\mathrm{s}^{-1}~{}\mathrm{Hz}^{-1}~{}\mathrm{Mpc}^{-3}$ at $z\sim 8$. 
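The integral of Equations 5 and 6 behind numbers like these can be sketched directly; the Schechter parameters in the usage example below are round, illustrative values of the right order for $z\sim 7$, not the fitted ones, and the AB absolute-magnitude zero point of 51.60 is the standard conversion rather than a quantity taken from the text:

```python
import math

def schechter(M, phi_star, M_star, alpha):
    """Schechter function in magnitudes (Equation 6) [Mpc^-3 mag^-1]."""
    x = 10.0**(0.4 * (M_star - M))
    return 0.4 * math.log(10.0) * phi_star * x**(alpha + 1) * math.exp(-x)

def L_nu(M):
    """AB absolute magnitude -> luminosity [erg s^-1 Hz^-1]."""
    return 10.0**(-0.4 * (M - 51.60))

def rho_UV(M_lim, phi_star, M_star, alpha, M_bright=-25.0, dM=0.01):
    """Equation 5 by trapezoidal integration from M_bright (~ -infinity,
    since the exponential cutoff kills the bright end) down to M_lim."""
    n = int(round((M_lim - M_bright) / dM))
    total = 0.0
    for i in range(n):
        M0, M1 = M_bright + i * dM, M_bright + (i + 1) * dM
        f0 = schechter(M0, phi_star, M_star, alpha) * L_nu(M0)
        f1 = schechter(M1, phi_star, M_star, alpha) * L_nu(M1)
        total += 0.5 * (f0 + f1) * dM
    return total

# Illustrative parameters: phi_star = 1e-3 Mpc^-3, M_star = -20, alpha = -1.9
print(rho_UV(-17.0, 1e-3, -20.0, -1.9))   # of order 1e26 erg/s/Hz/Mpc^3
```

With these illustrative parameters the integral lands near the quoted $z\sim 7$ value at the UDF12 depth, and pushing the limit from $M_{\mathrm{UV}}=-17$ to $-13$ increases $\rho_{\mathrm{UV}}$ substantially, which is the faint-end sensitivity discussed above.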
To put these $\rho_{\mathrm{UV}}$ values in context, we also indicate the critical values of $\rho_{\mathrm{UV}}$ required to keep $\dot{Q}_{\mathrm{HII}}=0$ in Equation 1 and maintain reionization (grey areas and dashed line), assuming case A recombination, $f_{\mathrm{esc}}=0.2$, $\log\xi_{\mathrm{ion}}=25.2~{}\log~{}\mathrm{ergs}^{-1}~{}\mathrm{Hz}$, and $C_{\mathrm{HII}}=3$. When calculating whether the production rate of ionizing photons from galaxies can maintain the ionization fraction of the IGM near the end of reionization, an assumption of case A recombination is appropriate, since recombinations to the hydrogen ground state largely do not help sustain the IGM ionization (for a detailed discussion, see Furlanetto & Oh 2005; Faucher-Giguère et al. 2009; Kuhlen & Faucher-Giguère 2012). We therefore adopt case A recombination in Figure 2, but assume case B recombination throughout the rest of the paper. Under these reasonable assumptions, the currently observed galaxy population clearly is not abundant enough to maintain reionization at $z\gtrsim 7$. Understanding the role of galaxies in the reionization process will therefore likely require extrapolations to luminosities beyond even the UDF12 depth, and the constraints on the LF shape achieved by the UDF12 observations will be important for performing these extrapolations reliably. The extent of the extrapolation down the luminosity function required for matching the reionization constraints depends in detail on our assumptions for the escape fraction $f_{\mathrm{esc}}$ or the ionizing photon production rate per UV luminosity $\xi_{\mathrm{ion}}$ of galaxies. We emphasize that the critical ionizing photon production rate $\dot{n}_{\mathrm{ion}}$ depends on the product $\dot{n}_{\mathrm{ion}}=f_{\mathrm{esc}}\xi_{\mathrm{ion}}\rho_{\mathrm{UV}}$, and will shift up or down proportionally to $f_{\mathrm{esc}}\xi_{\mathrm{ion}}$ at fixed $\rho_{\mathrm{UV}}$. 
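The balance condition $\dot{Q}_{\mathrm{HII}}=0$ at $Q_{\mathrm{HII}}=1$ can be evaluated directly. In the sketch below, the case A recombination coefficient at $T=20,000$ K and the comoving $\langle n_{\mathrm{H}}\rangle$ are assumed round values, not numbers taken from the text:

```python
import math

n_H = 2.0e-7                 # comoving hydrogen number density [cm^-3] (approx.)
X_p, Y_p = 0.75, 0.25
CM3_PER_MPC3 = (3.086e24)**3

def rho_UV_crit(z, C_HII=3.0, f_esc=0.2, log_xi_ion=25.2, alpha_A=2.5e-13):
    """Comoving UV luminosity density [erg s^-1 Hz^-1 Mpc^-3] needed for
    dQ/dt = 0 at Q = 1 in Equation 1. alpha_A is an assumed case A
    recombination coefficient at T = 20,000 K [cm^3 s^-1]."""
    ndot_crit = (C_HII * alpha_A * (1.0 + Y_p / (4.0 * X_p))
                 * n_H**2 * (1.0 + z)**3)       # [photons s^-1 cm^-3]
    return ndot_crit * CM3_PER_MPC3 / (f_esc * 10.0**log_xi_ion)
```

For $f_{\mathrm{esc}}=0.2$, $\log\xi_{\mathrm{ion}}=25.2$, and $C_{\mathrm{HII}}=3$, this gives $\log\rho_{\mathrm{UV,crit}}\approx 26.2$ at $z=7$ under these assumptions, consistent with the critical values indicated in Figure 2, and the $(1+z)^{3}$ scaling makes the requirement steeper at higher redshift.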
Similarly, the $\dot{n}_{\mathrm{ion}}$ value that balances recombination is proportional to the clumping factor $C_{\mathrm{HII}}$, and variation in the clumping factor will also shift the required $\rho_{\mathrm{UV}}$ up or down. 4.1. UV Luminosity Density Likelihoods We wish to use the evolving UV luminosity density $\rho_{\mathrm{UV}}$ to constrain the availability of ionizing photons $\dot{n}_{\mathrm{ion}}$ as given in Equation 4. Given how significantly $\rho_{\mathrm{UV}}$ increases with the limiting absolute magnitude $M_{\mathrm{UV}}$, we must evaluate the full model presented in Section 2 for chosen values of $M_{\mathrm{UV}}$. We select $M_{\mathrm{UV}}=-17$, $M_{\mathrm{UV}}=-13$, and $M_{\mathrm{UV}}=-10$ to provide a broad range of galaxy luminosities extending to extremely faint magnitudes. The observations probe to $M_{\mathrm{UV}}=-17$, and this magnitude therefore serves as a natural limit. We consider limits as faint as $M_{\mathrm{UV}}=-10$ as this magnitude may correspond to the minimum mass dark matter halo able to accrete gas from the photoheated intergalactic medium or the minimum mass dark matter halo able to retain its gas supply in the presence of supernova feedback. These scenarios have so far proven impossible to distinguish empirically (Muñoz & Loeb, 2011). To make use of the constraints of the UV luminosity density on $\dot{n}_{\mathrm{ion}}$, we need to adopt a likelihood for use with Bayesian inference on parameterized forms of $\rho_{\mathrm{UV}}(z)$. Given the parameterized form of the Schechter (1976) function, we find that the marginalized posterior distributions of $\rho_{\mathrm{UV}}$ are skewed at $z\gtrsim 6$, with tails extending to larger $\log\rho_{\mathrm{UV}}$ values than a Gaussian approximation would provide. 
To capture this skewness, when constraining the cosmic reionization history in Section 5 below, we therefore use the full marginalized posterior distribution of $\rho_{\mathrm{UV}}$ provided by the integrated LF determinations calculated in Section 4. Figure 3 shows the marginalized posterior distribution of the UV luminosity density $\rho_{\mathrm{UV}}$ for limiting magnitudes of $M_{\mathrm{UV}}=-17$ (red lines), $M_{\mathrm{UV}}=-13$ (orange lines), and $M_{\mathrm{UV}}=-10$ (blue lines) for our Schechter function fits to the $z\sim 5-6$ LFs of Bouwens et al. (2007), the $z\sim 7-8$ LFs of Schenker et al. (2012a), and the $z\sim 7-8$ LFs of McLure et al. (2012). Although Schenker et al. (2012a) and McLure et al. (2012) use the same data sets, their luminosity function determinations are based on different selection techniques. They therefore represent independent determinations of the high-redshift luminosity functions and are treated accordingly. We additionally use constraints from $z\sim 4$ (Bouwens et al., 2007) and $z\sim 9$ (McLure et al., 2012). The posterior distributions for $\rho_{\mathrm{UV}}(z\sim 9)$ we calculate are likely underestimated in their width owing to an assumption of fixed $M_{\star}$ and $\alpha$ values, but this assumption does not strongly influence our results. In what follows, these posterior distributions on $\rho_{\mathrm{UV}}$ are used as likelihood functions when fitting a parameterized model to the evolving UV luminosity density.

5. Constraints on Reionization

The observational constraints on the process of cosmic reionization in the redshift range $z\gtrsim 7$ are the spectral character of high-redshift star-forming galaxies determined from the UDF12 program (Section 3 and Dunlop et al. 2012b), the evolving luminosity density constraints enabled by those same observations (Section 4, Ellis et al. 2013, Schenker et al. 2012a, and McLure et al.
2012), and the electron scattering optical depth inferred from the 9-year WMAP observations of the CMB (Hinshaw et al. 2012 and briefly below). As we demonstrate, these constraints are in tension when taking the current data at face value. The Lyman continuum photon production rates per unit UV luminosity spectral density calculated using the BC03 models that are consistent with the UV spectral slopes of galaxies at $z\sim 7-9$ are $\log\xi_{\mathrm{ion}}\approx 24.8-25.3~\log~\mathrm{ergs}^{-1}~\mathrm{Hz}$ (Section 3). For a reasonable escape fraction of $f_{\mathrm{esc}}\sim 0.2$ and IGM clumping factor of $C_{\mathrm{HII}}=3$, a UV luminosity density of $\log\rho_{\mathrm{UV}}>26~\log~\mathrm{ergs}~\mathrm{s}^{-1}~\mathrm{Hz}^{-1}~\mathrm{Mpc}^{-3}$ is required to induce significant ionization at $z\geq 7$, but as we showed in Ellis et al. (2013) the observed abundance of star-forming galaxies continues a measured decline at high redshift ($z>8$). Further, reproducing the WMAP Thomson optical depth requires an extended reionization process: instantaneous reionization ($Q_{\mathrm{HII}}=1$) would need to occur at $z=10.3\pm 1.1$ to reproduce the measured $\tau=0.084\pm 0.013$ (Hinshaw et al., 2012), so the ionization fraction is very likely $Q_{\mathrm{HII}}<1$ at $z\gtrsim 7$. Based on the UDF12 and WMAP constraints, we therefore anticipate that the UV luminosity density declines to some minimum level beyond $z>7$, and then persists with redshift to sustain a sufficient partial ionization of the IGM to satisfy the Thomson optical depth constraint. We need a methodology to quantify this process and, given its redshift dependence, to determine the required minimum luminosity of abundant star-forming galaxies to reionize the universe by $z\sim 6$.
Our chosen methodology is to use a simple parameterized model of the evolving UV luminosity density to calculate the redshift-dependent ionizing photon production rate density $\dot{n}_{\mathrm{ion}}$, constrained to reproduce the UV luminosity density values reflected by the likelihood functions shown in Figure 3. The evolving $\dot{n}_{\mathrm{ion}}$ with redshift is used to calculate the reionization history $Q_{\mathrm{HII}}(z)$ by integrating Equation 1. The corresponding Thomson optical depth is calculated using Equation 11 and then evaluated against the posterior distribution of $\tau$ provided by the public 9-year WMAP Monte Carlo Markov Chains. Each posterior sample evaluation requires a full reconstruction of the reionization history and integration of the electron scattering optical depth, so we have incorporated the reionization calculation into the MultiNest Bayesian inference software (Feroz & Hobson, 2008; Feroz et al., 2009).

5.1. A Parameterized Model for the Evolving UV Luminosity Density

To infer constraints on the reionization process, we must adopt a flexible parameterized model for the evolving UV luminosity density. The model must account for the decline of $\rho_{\mathrm{UV}}$ apparent in Figures 2 and 3, without artificially extending the trend in $\rho_{\mathrm{UV}}$ at $z\lesssim 6$ to redshifts $z\gtrsim 8$. The rapid decline in $\rho_{\mathrm{UV}}$ between $z\sim 4$ and $z\sim 5$ suggests a trend of $d\log\rho_{\mathrm{UV}}/d\log z\sim-3$, but the higher redshift values of $\rho_{\mathrm{UV}}$ flatten away from this trend, especially for faint limiting magnitudes. Beyond $z\sim 10$, we expect that some low-level UV luminosity density will be required to reproduce the Thomson optical depth, to varying degrees depending on the chosen limiting magnitude. With these features in mind, we have tried a variety of parameterized models for $\rho_{\mathrm{UV}}$.
We will present constraints using a three-parameter model given by $$\rho_{\mathrm{UV}}(z)=\rho_{\mathrm{UV},z=4}\left(\frac{z}{4}\right)^{-3}+\rho_{\mathrm{UV},z=7}\left(\frac{z}{7}\right)^{\gamma}.$$ (14) The low-redshift amplitude of this model is anchored by the UV luminosity density at $z\sim 4$, $\rho_{\mathrm{UV},z=4}$, with units of $\mathrm{ergs}~\mathrm{s}^{-1}~\mathrm{Hz}^{-1}~\mathrm{Mpc}^{-3}$. The high-redshift evolution is determined by the normalization $\rho_{\mathrm{UV},z=7}$ at $z\sim 7$, provided in the same units, and the power-law slope $\gamma$. Using this model, we compute the evolving $\rho_{\mathrm{UV}}(z)$ and evaluate the parameter likelihoods as described immediately above. We have examined other models, including general broken (double) power-laws and low-redshift power-laws with high-redshift constants or fixed-slope power-laws. Single power-law models tend to be dominated by the low-redshift decline in $\rho_{\mathrm{UV}}$ and have difficulty reproducing the Thomson optical depth. Generic broken power-law models have a degeneracy between the low- and high-redshift evolution, even for fixed redshifts about which the power laws are defined, and are disfavored based on their Bayesian information relative to less complicated models that can also reproduce the $\rho_{\mathrm{UV}}$ constraints. The model in Equation 14 is therefore a good compromise between sufficient generality and limited parameter degeneracies. However, when using this model, we impose a prior $\gamma<0$ to prevent $\rho_{\mathrm{UV}}$ from increasing beyond $z\sim 9$. The best current constraints on the abundance of $z>8.5$ galaxies are from Ellis et al. (2013), where we demonstrated that the highest-redshift galaxies (with $M_{\mathrm{UV}}\lesssim-19$) continue the smooth decline in abundance found at slightly later times.
Correspondingly, the constraint on $\gamma$ amounts to a limit on introducing a new population of galaxies arising at $z\gtrsim 9$; the potential usefulness of such a population for the reionization of the universe has been noted elsewhere (e.g., Cen, 2003; Alvarez et al., 2012). Even imposing no constraint besides $\gamma<0$, we typically find that nearly constant, low-level luminosity densities at high redshift are nonetheless favored (see below). Models that instead feature low-redshift power-law declines followed by low-level constant $\rho_{\mathrm{UV}}$ at high redshift produce similar quantitative results. All parameterized models we have examined that feature a declining or constant $\rho_{\mathrm{UV}}$ and can reproduce the Thomson optical depth constraint produce similar results, and we conclude that our choice of the exact form of Equation 14 is not critical.

5.2. Reionization Constraints from Galaxies

Figure 4 shows constraints on the reionization process calculated using the model described in Section 2. We perform our Bayesian inference modeling assuming $M_{\mathrm{UV}}<-17$ (close to the UDF12 limit at $z\sim 8$; the maximum likelihood model is shown as a dashed line in all panels), $M_{\mathrm{UV}}<-10$ (an extremely faint limit, with the maximum likelihood model shown as a dotted line in all panels), and an intermediate limit $M_{\mathrm{UV}}<-13$ (colored regions show 68% credibility regions, while white lines indicate the maximum likelihood model). We will now discuss the UV luminosity density, stellar mass density, ionized filling fraction, and electron scattering optical depth results in turn. The upper left-hand panel shows parameterized models of the UV luminosity density (as given in Equation 14), constrained by observations of the UV luminosity density (inferred from the measured luminosity functions; see Figures 2 and 3) integrated down to each limiting magnitude.
The figure shows error bars to indicate the 68% credibility width of the posterior distributions of $\rho_{\mathrm{UV}}$ at redshifts $z\sim 4-9$ integrated to $M_{\mathrm{UV}}<-13$, but the likelihood of each model is calculated using the full marginalized $\rho_{\mathrm{UV}}$ posterior distributions appropriate for its limiting magnitude. In each case the $\rho_{\mathrm{UV}}(z)$ evolution matches well the constraints provided by the luminosity function extrapolations, which owes to the well-defined progression of declining $\rho_{\mathrm{UV}}$ with redshift inferred from the UDF12 and earlier datasets. For reference, the maximum likelihood values for the parameters of Equation 14 for $M_{\mathrm{UV}}<-13$ are $\log\rho_{\mathrm{UV},z=4}=26.50~\log~\mathrm{ergs}~\mathrm{s}^{-1}~\mathrm{Hz}^{-1}~\mathrm{Mpc}^{-3}$, $\log\rho_{\mathrm{UV},z=7}=25.82~\log~\mathrm{ergs}~\mathrm{s}^{-1}~\mathrm{Hz}^{-1}~\mathrm{Mpc}^{-3}$, and $\gamma=-0.003$. What drives the constraints on $\rho_{\mathrm{UV}}$ for these limiting magnitudes? Some tension exists at high redshift, as the models prefer a relatively flat $\rho_{\mathrm{UV}}(z)$ beyond the current reach of the data. In each case, the maximum likelihood models become flat at high redshift and reflect the need for continued star formation at high redshift to sustain a low level of partial IGM ionization. To answer this question fully, we must calculate the full stellar mass density evolution, reionization history, and Thomson optical depth of each model. The lower left-hand panel shows how the models compare to the stellar mass densities extrapolated from the results of Stark et al. (2012), using the method described in Section 2. The error bars in the figure reflect the stellar mass densities extrapolated for contributions from galaxies with $M_{\mathrm{UV}}<-13$, but the appropriately extrapolated mass densities are used for each model.
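Equation 14 at these maximum likelihood values can be sketched directly; the following is a minimal illustration of the model's shape, not the inference itself.

```python
import math

# Equation 14: rho_UV(z) = rho_{UV,z=4} (z/4)^{-3} + rho_{UV,z=7} (z/7)^{gamma},
# evaluated at the quoted maximum likelihood values for M_UV < -13.
LOG_RHO_Z4 = 26.50    # log10 rho_{UV,z=4} [erg s^-1 Hz^-1 Mpc^-3]
LOG_RHO_Z7 = 25.82    # log10 rho_{UV,z=7} [same units]
GAMMA = -0.003        # high-redshift slope (prior restricts gamma < 0)

def rho_uv(z):
    """Declining low-z power law plus a nearly flat high-z component."""
    return (10.0 ** LOG_RHO_Z4 * (z / 4.0) ** -3.0
            + 10.0 ** LOG_RHO_Z7 * (z / 7.0) ** GAMMA)

for z in (4.0, 7.0, 10.0, 15.0):
    print(z, round(math.log10(rho_uv(z)), 2))
```

With $\gamma$ this close to zero, the second term is essentially a floor at $\log\rho_{\mathrm{UV}}\approx 25.8$, which is the nearly constant low-level high-redshift luminosity density discussed in the text.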
The error bars reflect bootstrap-calculated uncertainties that account for statistically possible variations in the best-fit stellar mass-UV luminosity relation given by Equation 8, and are $\sigma\approx 0.3$ dex at $z\sim 7$. To balance the relative constraint from the stellar mass density evolution with the single optical depth constraint (below), we assign the likelihood contributions of the stellar mass density and Thomson optical depth equal weight. The stellar mass densities calculated from the models evolve near the $1\sigma$ upper limits of the extrapolated constraints, again reflecting the need for continued star formation to sustain a low level of IGM ionization. (We note that if we ignore the electron scattering optical depth constraint, the other datasets favor a UV luminosity density that declines more rapidly at high redshift than does our maximum likelihood model calculated including all the constraints; however, the UV luminosity density parameters in Equation 14 remain similar. Excluding the stellar mass density constraint has almost no effect on the maximum likelihood parameters inferred for the model.) Assuming a ratio of Lyman continuum photon production rate to UV luminosity $\log\xi_{\mathrm{ion}}=25.2~\log~\mathrm{ergs}^{-1}~\mathrm{Hz}$ for individual sources consistent with UV slopes measured from the UDF12 data (Dunlop et al., 2012b), an ionizing photon escape fraction $f_{\mathrm{esc}}=0.2$, and an intergalactic medium clumping factor of $C_{\mathrm{HII}}=3$, the reionization history $Q_{\mathrm{HII}}(z)$ produced by the evolving $\rho_{\mathrm{UV}}$ can be calculated by integrating Equation 1 and is shown in the upper right panel of Figure 4. We find that the currently observed galaxy population at magnitudes brighter than $M_{\mathrm{UV}}<-17$ (dashed line) can only manage to fully reionize the universe at late times, $z\sim 5$, with the IGM 50% ionized by $z\sim 6$ and $<5\%$ ionized at $z\sim 12$.
Contributions from galaxies with $M_{\mathrm{UV}}<-13$ reionize the universe just after $z\sim 6$, in agreement with a host of additional constraints on the evolving ionized fraction (see Section 6 below) and as suggested by some previous analyses (e.g., Salvaterra et al., 2011). These galaxies can sustain a $\sim 50\%$ ionized fraction at $z\sim 7.5$, continuing to $\sim 12-15\%$ at redshift $z\sim 12$. Integrating further down to $M_{\mathrm{UV}}<-10$ produces maximum likelihood models that reionize the universe slightly earlier. The Thomson optical depths resulting from the reionization histories can be determined by evaluating Equation 11. To reproduce the 9-year WMAP $\tau$ values, most models display low levels of UV luminosity density that persist to high redshift. Even maintaining a flat $\rho_{\mathrm{UV}}$ at the maximum level allowed by the luminosity density constraints at redshifts $z\lesssim 9$, as the maximum likelihood models do, the optical depth can only just be reproduced. When considering only the currently observed population $M_{\mathrm{UV}}<-17$, the UV luminosity density would need to increase toward redshifts $z>10$ for the universe to be fully reionized by redshift $z\sim 6$ and the Thomson optical depth to be reproduced. We remind the reader that the exact results for, e.g., the $Q_{\mathrm{HII}}$ evolution of course depend on the choices for the escape fraction $f_{\mathrm{esc}}$ and the ionizing photon production rate $\xi_{\mathrm{ion}}$. If $f_{\mathrm{esc}}$ or $\xi_{\mathrm{ion}}$ is lowered, the evolution of $Q_{\mathrm{HII}}$ is shifted toward lower redshift. For instance, with all other assumptions fixed, we find that complete reionization is shifted to $z\sim 5$ for $f_{\mathrm{esc}}=0.1$ and $z\sim 4.75$ for $\log\xi_{\mathrm{ion}}=25.0~\log~\mathrm{ergs}^{-1}~\mathrm{Hz}$.
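The qualitative behavior described here can be reproduced with a minimal numerical sketch of Equations 1 and 11. Everything in it is an illustrative stand-in (cosmological parameters, hydrogen density, case B coefficient, and a hand-written $\rho_{\mathrm{UV}}(z)$ resembling Equation 14), not the paper's MultiNest calculation.

```python
import math

# Sketch: integrate dQ_HII/dt = n_dot_ion/<n_H> - Q_HII/t_rec (cf. Equation 1)
# and then a crude Thomson optical depth (cf. Equation 11). Illustrative
# constants only; not the paper's actual inputs.
OM, OL = 0.3, 0.7
H0 = 70.0 * 1.0e5 / 3.086e24    # Hubble constant in s^-1
N_H = 1.9e-7                    # comoving <n_H> [cm^-3]
ALPHA_B = 2.6e-13               # case B recombination coefficient [cm^3 s^-1]
SIGMA_T = 6.652e-25             # Thomson cross section [cm^2]
C_LIGHT = 2.998e10              # [cm s^-1]
MPC_CM = 3.086e24

def hubble(z):
    return H0 * math.sqrt(OM * (1.0 + z) ** 3 + OL)

def rho_uv(z):
    # hand-written stand-in resembling Equation 14 (M_UV < -13 values)
    return 10.0 ** 26.50 * (z / 4.0) ** -3.0 + 10.0 ** 25.82

def q_history(f_esc=0.2, log_xi=25.2, C_HII=3.0, z0=20.0, z1=4.0, dz=0.01):
    """Euler-integrate Q_HII from z0 down to z1; returns a list of (z, Q)."""
    z, q, out = z0, 0.0, []
    while z > z1:
        n_dot = f_esc * 10.0 ** log_xi * rho_uv(z) / MPC_CM ** 3  # [cm^-3 s^-1]
        t_rec = 1.0 / (C_HII * ALPHA_B * N_H * (1.0 + z) ** 3)
        dqdt = n_dot / N_H - q / t_rec
        q = min(1.0, q + dqdt * dz / (hubble(z) * (1.0 + z)))     # dt = dz/[H(1+z)]
        z -= dz
        out.append((z, q))
    return out

def thomson_tau(history, z1=4.0, dz=0.01):
    """Crude tau integral, taking Q = 1 below z1 and f_e = 1.08 throughout."""
    pref = C_LIGHT * N_H * SIGMA_T * 1.08
    tau = sum(pref * (1.0 + i * dz) ** 2 / hubble(i * dz) * dz
              for i in range(int(z1 / dz)))
    tau += sum(pref * q * (1.0 + z) ** 2 / hubble(z) * dz for z, q in history)
    return tau

hist = q_history()
q7 = min(hist, key=lambda t: abs(t[0] - 7.0))[1]
print(round(q7, 2), round(thomson_tau(hist), 3))
```

Lowering $f_{\mathrm{esc}}$ or $\xi_{\mathrm{ion}}$ in this sketch delays full ionization, the behavior the text describes; the crude treatment also illustrates why a low-level $\rho_{\mathrm{UV}}$ floor at high redshift helps build up $\tau$.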
Complete reionization can also be made earlier by choosing a smaller $C_{\mathrm{HII}}$ or delayed by choosing a larger $C_{\mathrm{HII}}$. Given the above results, we conclude that under our assumptions (e.g., $f_{\mathrm{esc}}=0.2$ and $C_{\mathrm{HII}}\approx 3$) the galaxies currently observed down to the limiting magnitudes of deep high-redshift surveys (e.g., UDF12) do not reionize the universe alone; simultaneously reproducing the $\rho_{\mathrm{UV}}$ constraints and the $\tau$ constraint while reionizing the universe by $z\sim 6$ requires yet fainter populations. This conclusion is a ramification of the UDF12 UV spectral slope constraints by Dunlop et al. (2012b), which eliminate the possibility that increased Lyman continuum emission by metal-poor populations could have produced reionization at $z>6$ (e.g., Robertson et al., 2010). However, too much additional star formation beyond the $M_{\mathrm{UV}}<-13$ models shown in Figure 4 will begin to exceed the stellar mass density constraints, depending on $f_{\mathrm{esc}}$. It is therefore interesting to know whether the $M_{\mathrm{UV}}<-13$ models that satisfy the $\rho_{\mathrm{UV}}$, $\rho_{\star}$, and $\tau$ constraints also satisfy other external constraints on the reionization process, and we now turn to such an analysis.

6. Comparison to Other Probes of the Ionized Fraction

In this section we collect constraints on the IGM neutral fraction from the literature and compare them to the evolution of this quantity in our models based on the UDF12 data. These constraints come from a wide variety of astrophysical measurements, but all are subject to substantial systematic or modeling uncertainties, about which we comment below. We show the full set in Figure 5, along with a comparison to our model histories. We first briefly discuss these constraints and then how our model histories fare in comparison to them.

6.1. The Lyman-$\alpha$ Forest

The best known Lyman-$\alpha$ forest constraints come from Fan et al. (2006b), who measured the effective optical depth evolution along lines of sight taken from SDSS (including both Lyman-$\alpha$ and higher-order transitions, where available). Those authors then made assumptions about the IGM temperature and the distribution of density inhomogeneities throughout the universe to infer the evolution of the neutral fraction (these assumptions are necessary for comparing the higher-order transitions to Lyman-$\alpha$ as well, because those transitions sample different parts of the IGM). The corresponding limits on the neutral fraction according to their model are shown by the filled triangles in Figure 5. The original data here are the transmission measurements in the different transitions. Transforming those into constraints on the ionized fraction requires a model that: (1) predicts the temperature evolution of the IGM (Hui & Haiman, 2003; Trac et al., 2008; Furlanetto & Oh, 2009); (2) describes the distribution of gas densities in the IGM (Miralda-Escudé et al., 2000; Bolton & Becker, 2009); (3) accounts for spatial structure in the averaged transmission measurements (Lidz et al., 2006); and (4) predicts the topology of ionized and neutral regions at the tail end of reionization (Furlanetto et al., 2004b; Choudhury et al., 2009). These are all difficult tasks, and the Fan et al. (2006b) constraints – based upon a simple semi-analytic model for the IGM structure – can be evaded in a number of ways. In particular, their model explicitly ignores the possibility that $Q_{\mathrm{HII}}<1$, working only with the residual neutral gas inside a mostly ionized medium. Unsurprisingly, our models do not match these Lyman-$\alpha$ forest measurements very well. The crude approach of Equation 1 fails to address effects from the detailed density structure of the IGM on the evolution of the mean ionization fraction near the end of the reionization process.
Our model is therefore not sufficient to model adequately the tail end of reionization, and only more comprehensive models that account for both radiative transfer effects and the details of IGM structure will realistically model these data at $z<6$. More useful to us is a (nearly) model-independent upper limit on the neutral fraction provided by simply counting the dark pixels in the spectra (Mesinger, 2010). McGreer et al. (2011) present several different sets of constraints, depending on how one defines “dark” and whether one uses a small number of very deep spectra (with a clearer meaning to the dark pixels but more cosmic variance) versus a larger set of shallower spectra. Their strongest constraints are roughly $1-Q_{\mathrm{HII}}<0.2$ at $z=5.5$ and $1-Q_{\mathrm{HII}}<0.5$ at $z=6$, shown by the open triangles in Figure 5. Interestingly, this model-independent approach permits rather late reionization.

6.2. Ionizing Background

Another use of the Lyman-$\alpha$ forest is to measure the ionizing background and thereby constrain the emissivity $\epsilon$ of galaxies: the ionization rate $\Gamma\propto\epsilon\lambda$, where $\lambda$ is the Lyman continuum mean free path. Bolton & Haehnelt (2007b) attempted such a measurement at $z>5$. The method is difficult, as it involves interpretation of spectra near the saturation limit of the forest. Nonetheless, the Bolton & Haehnelt (2007b) analysis shows that the ionizing background falls by about a factor of ten from $z\sim 3$ to $z\sim 6$ (though see McQuinn et al., 2011). The resulting comoving emissivity is roughly the same as at $z\sim 2$, corresponding to 1.5–3 photons per hydrogen atom over the age of the Universe (see also Haardt & Madau, 2012). Additional detailed constraints were derived by Faucher-Giguère et al. (2008), who used quasar spectra at $2\lesssim z\lesssim 4$ to infer a photoionization rate for intergalactic hydrogen.
Combining these measurements with the previous work by Bolton & Haehnelt (2007b) and estimates of the intergalactic mean free path of Lyman continuum photons (Prochaska et al., 2009; Songaila & Cowie, 2010), Kuhlen & Faucher-Giguère (2012) calculated constraints on the comoving ionizing photon production rate. These inferred values of $\log~{}\dot{n}_{\mathrm{ion}}<51~{}\log~{}\mathrm{s}^{-1}~{}\mathrm{Mpc}^{-3}$ at $z\sim 2-6$ are lower than those produced by naively extrapolating typical models of the evolving high-redshift UV luminosity density that satisfy reionization constraints. To satisfy both the low-redshift IGM emissivity constraints and the reionization era constraints then available, Kuhlen & Faucher-Giguère (2012) posited an evolving escape fraction $$f_{\mathrm{esc}}(z)=f_{0}\times[(1+z)/5]^{\kappa}.$$ (15) If the power-law slope $\kappa$ is large enough, the IGM emissivity constraints at $z<6$ can be satisfied with a low $f_{0}$ while still reionizing the universe by $z\sim 6$ and matching previous WMAP constraints on the electron scattering optical depth. Other previous analyses have emphasized similar needs for an evolving escape fraction (e.g., Ferrara & Loeb, 2012; Mitra et al., 2013). To study the effects of including the IGM emissivity constraints by permitting an evolving $f_{\mathrm{esc}}$, we repeat our calculations in Section 5 additionally allowing an escape fraction given by Equation 15 with varying $f_{0}$ and $\kappa$ (and maximum escape fraction $f_{\mathrm{max}}=1$). With a limiting magnitude $M_{\mathrm{UV}}<-13$, and utilizing the updated constraints from UDF12 and WMAP, we find maximum likelihood values $f_{0}=0.054$ and $\kappa=2.4$ entirely consistent with the results of Kuhlen & Faucher-Giguère (2012, see their Figure 7). 
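Equation 15 at the quoted maximum likelihood values is simple to sketch; the cap at $f_{\mathrm{max}}$ follows the text, while the evaluation redshifts are our illustrative choices.

```python
# Sketch of the evolving escape fraction of Equation 15,
# f_esc(z) = f_0 [(1+z)/5]^kappa, capped at f_max. The values f_0 = 0.054
# and kappa = 2.4 are the maximum likelihood values quoted in the text.
def f_esc(z, f0=0.054, kappa=2.4, f_max=1.0):
    return min(f_max, f0 * ((1.0 + z) / 5.0) ** kappa)

# Low escape fractions where the z < 6 IGM emissivity constraints apply,
# rising to ~0.2 at z ~ 7-8 where Q_HII is changing rapidly.
for z in (4, 5, 7, 8, 12):
    print(z, round(f_esc(z), 3))
```

With $f_{\mathrm{max}}=0.2$ instead, the cap is already reached by $z\approx 7.6$, which is why that variant behaves much like the constant $f_{\mathrm{esc}}=0.2$ model of Section 5.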
When allowing for this evolving $f_{\mathrm{esc}}$ (with $f_{\mathrm{max}}=1$), we sensibly find that the need for low-level star formation to satisfy reionization constraints is reduced (the maximum likelihood model has a high-redshift luminosity density evolution of $\rho_{\mathrm{UV}}\propto z^{-1.2}$) and the agreement with the stellar mass density constraints is improved. However, the impact of the evolving $f_{\mathrm{esc}}(z)$ depends on the maximum allowed $f_{\mathrm{esc}}$. We allow only $f_{\mathrm{esc}}$ to vary (not the product $f_{\mathrm{esc}}\xi_{\mathrm{ion}}$) owing to our UV spectral slope constraints, and the maximum $f_{\mathrm{esc}}=1$ (and, correspondingly, the maximum ionizing photon production rate per galaxy) is reached by $z\sim 15$. In this model with $f_{\mathrm{max}}=1$, the escape fraction at $z\sim 7-8$ (where the ionized volume filling factor $Q_{\mathrm{HII}}$ is changing rapidly) is $f_{\mathrm{esc}}\sim 0.17-0.22$, very similar to our fiducial constant $f_{\mathrm{esc}}=0.2$ adopted in Section 5. The rate of escaping ionizing photons per galaxy in this evolving $f_{\mathrm{esc}}$ model with $f_{\mathrm{max}}=1$ trades off against the high-redshift UV luminosity density evolution. The need for a low level of star formation out to high redshift is somewhat reduced, as the increasing escape fraction with redshift compensates to maintain a partial IGM ionization fraction and recover the observed WMAP $\tau$. Without placing additional constraints on the end of reionization, this evolving $f_{\mathrm{esc}}$ model with $f_{\mathrm{max}}=1$ produces a broader redshift range of $z\sim 4.5-6.5$ for the completion of the reionization process than the constant $f_{\mathrm{esc}}$ model described in Section 5.
If we instead adopt an evolving $f_{\mathrm{esc}}$ model with $f_{\mathrm{max}}=0.2$ we recover $\rho_{\mathrm{UV}}$, $\rho_{\star}$, and Thomson optical depth evolutions that are similar to the constant $f_{\mathrm{esc}}$ model from Section 5, but can also satisfy the additional low-redshift IGM emissivity constraints. Compared with the comoving ionizing photon production rates $\dot{n}=[3.2,3.5,4.3]\times 10^{50}~{}\mathrm{s}^{-1}~{}\mathrm{Mpc}^{-3}$ at redshifts $z=[4.0,4.2,5.0]$ inferred by Kuhlen & Faucher-Giguère (2012) from measurements by Faucher-Giguère et al. (2008), Prochaska et al. (2009), and Songaila & Cowie (2010), this maximum likelihood model with evolving $f_{\mathrm{esc}}$ recovers similar values ($\dot{n}=[3.3,3.5,4.7]\times 10^{50}~{}\mathrm{s}^{-1}~{}\mathrm{Mpc}^{-3}$ at $z=[4,4.2,5]$). In contrast, the constant $f_{\mathrm{esc}}$ model in Section 5 calculated without consideration of these IGM emissivity constraints would produce $\dot{n}\sim 8-13\times 10^{50}~{}\mathrm{s}^{-1}~{}\mathrm{Mpc}^{-3}$ at $z\sim 4-5$. The reduction of the escape fraction at $z\sim 4-6$ compared with the baseline model with constant $f_{\mathrm{esc}}$ allows full ionization of the IGM to occur slightly later in the evolving $f_{\mathrm{esc}}$ model (at redshifts as low as $z\approx 5.3$, although in the maximum likelihood model reionization completes within $\Delta z\approx 0.1$ of the redshift suggested by the maximum likelihood constant $f_{\mathrm{esc}}$ model of Section 5). With a similar range of assumptions to Kuhlen & Faucher-Giguère (2012) for a time-dependence in the escape fraction, the model presented in Section 5 can therefore also satisfy the low-redshift IGM emissivity constraints without assuming an escape fraction of $f_{\mathrm{esc}}\sim 1$ at high redshifts. 
While the evolving covering fraction in galaxy spectra (Jones et al., 2012) provides additional observational support for a possible evolution in the escape fraction at $z\lesssim 5$, such empirical support for an evolving $f_{\mathrm{esc}}$ does not yet exist at $z\gtrsim 5$.

6.3. The Lyman-$\alpha$ Damping Wing

Another use of the Lyman-$\alpha$ line is to measure precisely the shape of the red damping wing of the line: the IGM absorption is so optically thick that the shape of this red absorption wing depends upon the mean neutral fraction in the IGM (Miralda-Escudé, 1998), though the interpretation depends upon the morphology of the reionization process, leading to large intrinsic scatter and biases (McQuinn et al., 2008; Mesinger & Furlanetto, 2008a). Such an experiment requires a deep spectrum of a bright source in order to identify the damping wing. One possibility is a gamma-ray burst, which has an intrinsic power-law spectrum, making it relatively easy to map the shape of a damping wing. The disadvantage of these sources is that (at lower redshifts) they almost always have damped Lyman-$\alpha$ (DLA) absorption from the host galaxy, which must be disentangled from any IGM signal (Chen et al., 2007). To date, the best example of such a source is GRB 050904 at $z=6.3$, which received rapid followup and produced a high signal-to-noise spectrum (Totani et al., 2006). McQuinn et al. (2008) studied this spectrum in light of patchy reionization models. Because it has intrinsic DLA absorption, the constraints are relatively weak: they disfavor a fully neutral IGM but allow $Q_{\mathrm{HII}}\sim 0.5$. We show this measurement with the open circle in Figure 5. Quasars provide a second possible set of sources. These are much easier to find but suffer from complicated intrinsic spectra near the Lyman-$\alpha$ line. Schroeder et al.
(2012) have modeled the spectra of three SDSS quasars in the range $z=6.24$–6.42 and found that all three are best fit if a damping wing is present (see also Mesinger & Haiman 2004). Although the spectra themselves cannot distinguish an IGM damping wing from a DLA, they argue that the latter should be sufficiently rare that the IGM must have $Q_{\mathrm{HII}}\lesssim 0.1$ at 95% confidence. We show this point as the open square in Figure 5. This conclusion is predicated on accurate modeling of the morphology of reionization around quasars and the distribution of strong IGM absorbers at the end of reionization.

6.4. The Near Zones of Bright Quasars

Any ionizing source that turns on inside a mostly neutral medium will carve out an H II region whose extent depends upon the total fluence of ionizing photons from the source. If one can measure (or guess) this fluence, the extent of the ionized bubble then offers a constraint on the original neutral fraction of the medium. This is, of course, a difficult proposition, as the extremely large Lyman-$\alpha$ IGM optical depth implies that the spectrum may go dark even if the region is still highly ionized: in this case, one finds only a lower limit to the size of the near zone (Bolton & Haehnelt, 2007a). Carilli et al. (2010) examined the trends of near-zone sizes in SDSS quasars from $z=5.8$–$6.4$. The sample shows a clear trend of decreasing size with increasing redshift (after compensating for varying luminosities) by about a factor of two over that redshift range. Under the assumption that near zones correspond to ionized bubbles in a (partially) neutral medium, the volume $V\propto(1-Q_{\mathrm{HII}})^{-1}$, or, in terms of the radius, $R_{\rm NZ}^{-3}\propto(1-Q_{\mathrm{HII}})$. In that case, a two-fold decrease in size corresponds to an order of magnitude increase in the neutral fraction. However, if the Fan et al.
(2006b) Lyman-$\alpha$ forest measurements are correct, the neutral fraction at $z\sim 5.8$ is so small that the zones are very unlikely to be in this regime. In that case, the trend in sizes cannot be directly transformed into a constraint on the filling factor of ionized gas. We therefore do not show this constraint in Figure 5, as its interpretation is unclear. The recently discovered $z=7.1$ quasar has a very small near zone (Mortlock et al., 2011). Bolton et al. (2011) used a numerical simulation to analyze it in detail. They concluded the spectrum was consistent both with a small ionized bubble inside a region with $Q_{\mathrm{HII}}\lesssim 0.1$ and with a highly ionized medium $(1-Q_{\mathrm{HII}})\sim 10^{-4}$–$10^{-3}$ if a DLA is relatively close to the quasar. They argue that the latter is relatively unlikely (occurring $\sim 5\%$ of the time in their simulations). Further support for the IGM hypothesis is lent by recent observations showing no apparent metals in the absorber, which would be unprecedented for a DLA (Simcoe et al., 2012). A final possibility is that the quasar is simply very young ($\lesssim 10^{6}$ yr) and has not had time to carve out a large ionized region. We show this constraint, assuming that the absorption comes from the IGM, with the filled square in Figure 5.

6.5. The Kinetic Sunyaev-Zel’dovich Effect

Recent small-scale temperature measurements have begun to constrain the contribution of patchy reionization to the kinetic Sunyaev-Zel’dovich (kSZ) effect, which is generated by CMB photons scattering off coherent large-scale velocities in the IGM. These scatterings typically cancel (because any redshift gained from scattering off of gas falling into a potential well is canceled by scattering off gas on the other side), but during reionization modulation by the ionization field can prevent such cancellation (Gruzinov & Hu, 1998; Knox et al., 1998). There is also a contribution from nonlinear evolution (Ostriker & Vishniac, 1986).
Both the Atacama Cosmology Telescope and the South Pole Telescope have placed upper limits on this signal (Dunkley et al., 2011; Reichardt et al., 2012). Because the patchiness of the process is what induces this signal, the kSZ signal grows so long as the patchy contribution persists; as a result, it essentially constrains the duration of reionization. Using SPT data, Zahn et al. (2012) claim a limit of $\Delta z<7.2$ (where $\Delta z$ is the redshift difference between $Q_{\mathrm{HII}}=0.2$ and $Q_{\mathrm{HII}}=0.99$) at 95% confidence. The primary limiting factor here is a potential correlation between the thermal Sunyaev-Zel’dovich effect and the cosmic infrared background, which pollutes the kSZ signal. If that possibility is ignored, the limit on the duration tightens to $\Delta z<4.4$. Mesinger et al. (2012) found similar limits from ACT data. They showed that the limit without a correlation is difficult to reconcile with the usual models of reionization. Intensive efforts are now underway to measure the correlation. We show two examples of the slowest possible evolution allowed by these constraints (allowing for a correlation) with the straight dashed lines in Figure 5. Note that the timing of these curves is entirely arbitrary (as is the shape: there is no reason to expect a linear relation between redshift and neutral fraction). They simply offer a rough guide to the maximum duration over which substantial patchiness can persist. 6.6. Lyman-$\alpha$ Lines in Galaxies The final class of probes to be considered here relies on Lyman-$\alpha$ emission lines from galaxies (including some of those cataloged in the HUDF09 and UDF12 campaigns). As these line photons propagate from their source galaxy to the observer, they pass through the IGM and can suffer absorption if the medium has a substantial neutral fraction. 
This absorption is generally due to the red damping wing, as the sources most likely lie inside ionized bubbles, so the line photons have typically redshifted out of resonance by the time they reach neutral gas. One set of constraints comes from direct (narrowband) surveys for Lyman-$\alpha$ emitting galaxies. As the IGM becomes more neutral, these emission lines should suffer more and more extinction, and so we should see a drop in the number density of objects selected in this manner (Santos, 2004; Furlanetto et al., 2004a, 2006; McQuinn et al., 2007; Mesinger & Furlanetto, 2008b). Such surveys have a long history (e.g., Hu et al. 2002; Malhotra & Rhoads 2004; Santos et al. 2004; Kashikawa et al. 2006). To date, the most comprehensive surveys have come from the Subaru telescope. Ouchi et al. (2010) examined $z=6.6$ Lyman-$\alpha$ emitters and found evidence for only a slight decline in the Lyman-$\alpha$ transmission. They estimate that $Q_{\mathrm{HII}}\gtrsim 0.6$ at that time. Ota et al. (2008) examined the $z=7$ window. Very few sources were detected, which may indicate a rapid decline in the population. They estimate that $Q_{\mathrm{HII}}\approx 0.32$–$0.64$. We show these two constraints with the open pentagons in Figure 5. The primary difficulty with measurements of the evolution of the Lyman-$\alpha$ emitter number density is that the overall galaxy population is also evolving, and it can be difficult to determine if evolution in the emitter number counts is due to changes in the IGM properties or in the galaxy population. An alternate approach is therefore to select a galaxy sample independently of the Lyman-$\alpha$ line (or at least as independently as possible) and determine how the fraction of these galaxies with strong Lyman-$\alpha$ lines evolves (Stark et al., 2010). Such a sample is available through broadband Lyman break searches with HST and Subaru. 
Several groups have performed these searches to $z\sim 8$ (Fontana et al., 2010; Pentericci et al., 2011; Schenker et al., 2012b; Ono et al., 2012b) and found that, although the fraction of Lyman-$\alpha$ emitters increases slightly for samples of similar UV luminosity from $z\sim 3$–$6$, there is a marked decline beyond $z\sim 6.5$ (see also Treu et al., 2012). This decline could still be due to evolutionary processes within the population (e.g., dust content, see Dayal & Ferrara, 2012), but both the rapidity of the possible evolution and its reversal from trends now well-established at lower redshift make this unlikely. Assuming that this decline can be attributed to the increasing neutrality of the IGM, it requires $Q_{\mathrm{HII}}\lesssim 0.5$ (McQuinn et al., 2007; Mesinger & Furlanetto, 2008b; Dijkstra et al., 2011). We show this constraint with the filled pentagon in Figure 5. There is one more signature of reionization in the Lyman-$\alpha$ lines of galaxies. A partially neutral IGM does not extinguish these lines uniformly: galaxies inside of very large ionized bubbles suffer little absorption, while isolated galaxies disappear even when $Q_{\rm HII}$ is large. This manifests as a change in the apparent clustering of the galaxies, which is attractive because such a strong change is difficult to mimic with baryonic processes within and around galaxies (Furlanetto et al., 2006; McQuinn et al., 2007; Mesinger & Furlanetto, 2008b). Clustering is a much more difficult measurement than the number density, requiring a large number of sources. It is not yet possible with the Lyman-$\alpha$ line sources at $z\sim 7$, but at $z\sim 6.6$ there is no evidence for an anomalous increase in clustering (McQuinn et al., 2007; Ouchi et al., 2010), indicating $Q_{\mathrm{HII}}\gtrsim 0.5$ at that time. We show this constraint with the filled diamond in Figure 5. 6.7. 
Comparison to Our Models Figure 5 also shows the reionization history in our preferred model, with the associated confidence intervals, that extrapolates the observed luminosity function to $M_{\mathrm{UV}}=-13$. Amazingly, this straightforward model obeys all the constraints we have listed in this section, with the exception of the Fan et al. (2006b) Lyman-$\alpha$ forest measurements. However, we remind the reader that our crude reionization model fails at very small neutral fractions because it does not properly account for the high gas clumping in dense systems near galaxies, so we do not regard this apparent disagreement as worrisome to any degree. It is worth considering the behavior of this model in some detail. A brief examination of the data points reveals that the most interesting limits come from the Lyman-$\alpha$ lines inside of galaxies (the filled pentagon, from spectroscopic followup of Lyman-break galaxies, and the open pentagons, from direct narrowband abundance at $z\sim 6.6$ and $7$). These require relatively high neutral fractions at $z\sim 7$ in order for the IGM to affect the observed abundance significantly; the model shown here just barely satisfies the constraints. In a model in which we integrate the luminosity function to fainter magnitudes (e.g., $M_{\mathrm{UV}}<-10$, dotted line), the ionized fraction is somewhat higher at this time and may prevent the lines from being significantly extinguished, while a model with the minimum luminosity closer to the observed limit (e.g., $M_{\mathrm{UV}}<-17$, dashed line) reionizes the universe so late that the emitter population at $z\sim 6.6$ should have been measurably reduced. Clearly, improvements in the measured abundance of strong line emitters at this epoch (perhaps with new multi-object infrared spectrographs, like LUCI on LBT or MOSFIRE on Keck, or with new widefield survey cameras, like HyperSuprimeCam) will be very important for distinguishing viable reionization histories. 
Equally important will be improvements in the modeling of the decline in the Lyman-$\alpha$ line emitters, as a number of factors both outside (the morphology of reionization, resonant absorption near the source galaxies, and absorption from dense IGM structures) and inside galaxies (dust absorption, emission geometry, and winds) all affect the detailed interpretation of the raw measurements (e.g., Santos 2004; Dijkstra et al. 2011; Bolton & Haehnelt 2012). In order to have a relatively high neutral fraction at $z\sim 7$, a plausible reionization history cannot complete the process by $z\sim 6.4$ (to which the most distant Lyman-$\alpha$ forest spectrum currently extends), unless either: (1) the Lyman-continuum luminosity of galaxies, per unit 1500 Å luminosity, evolves rapidly over that interval, (2) the escape fraction $f_{\mathrm{esc}}$ increases rapidly toward lower redshifts, or (3) the abundance of galaxies below $M_{\mathrm{UV}}\sim-17$ evolves rapidly. Otherwise, the UDF12 luminosity functions have sufficient precision to fix the shape of the $Q_{\rm HII}(z)$ curve over this interval (indeed, to $z\sim 8$), and as shown in Figure 5, the slope is relatively shallow. Indeed, in our best-fit model the universe still has $1-Q_{\rm HII}\sim 0.1$ at $z\sim 6$. This suggests intensifying searches for the last neutral regions in the IGM at this (relatively) accessible epoch. Although reionization ends rather late in this best-fit model, it still satisfies the WMAP optical depth constraint (at $1\sigma$) thanks to a long tail of low-level star formation out to very high redshifts, as suggested by Ellis et al. (2013), although the agreement is much easier if the WMAP value turns out to be somewhat high. This implies that the kinetic Sunyaev-Zel’dovich signal should have a reasonably large amount of power. Formally, our best-fit model has $\Delta z\sim 5$, according to the definition of Zahn et al. 
(2012), which is within the range that can be constrained if the thermal Sunyaev-Zel’dovich contamination can be sorted out. We also note that the WMAP results (Hinshaw et al., 2012) indicate that when combining the broadest array of available data sets, the best fit electron scattering optical depth lowers by $\sim 5\%$ to $\tau\approx 0.08$. In summary, we have shown that the UDF12 measurements allow a model of star formation during the cosmic dawn that satisfies all available constraints on the galaxy populations and IGM ionization history, with reasonable extrapolation to fainter systems and no assumptions about high-redshift evolution in the parameters of star formation or UV photon production and escape. Of course, this is not to say such evolution cannot occur (see Kuhlen & Faucher-Giguère 2012, Alvarez et al. 2012, and our Section 6.2 for examples in which it does), but it does not appear to be essential. 7. Summary The 2012 Hubble Ultra Deep Field (UDF12) campaign, a 128-orbit Hubble Space Telescope (HST) program (GO 12498, PI: R. Ellis, as described in Ellis et al. 2013 and Koekemoer et al. 2013), has acquired the deepest infrared WFC3/IR images ever taken with HST. These observations have enabled the first identification of galaxies at $8.5\leq z\leq 12$ in the Ultra Deep Field (Ellis et al., 2013), newly accurate luminosity function determinations at redshifts $z\sim 7-8$ (Schenker et al., 2012a; McLure et al., 2012), robust determinations of the ultraviolet spectral slopes of galaxies at $z\sim 7-8$ (Dunlop et al., 2012b), and the first estimates of the luminosity function and spectral slopes at $z\sim 9$ (McLure et al., 2012; Dunlop et al., 2012b). 
Synthesizing these constraints on high-redshift galaxy populations with the recent 9-year Wilkinson Microwave Anisotropy Probe (WMAP) constraints on the electron scattering optical depth generated by ionized cosmic hydrogen (Hinshaw et al., 2012) and previous determinations of the galaxy luminosity function at redshifts $4\lesssim z\lesssim 6$ (Bouwens et al., 2007), we infer constraints on the reionization history of the universe. First, we use the UV spectral slope $\beta=-2$ of high-redshift galaxies measured by Dunlop et al. (2012b) to constrain the available Bruzual & Charlot (2003) stellar population models consistent with the spectral character of galaxies at $z\sim 7-9$. These models motivate the adoption of a Lyman continuum photon production rate per unit UV luminosity spectral density $\log\xi_{\mathrm{ion}}=25.2$ (with $\xi_{\mathrm{ion}}$ in units of $\mathrm{erg}^{-1}\,\mathrm{Hz}$); the data do not favor a luminosity-dependent variation in the efficiency of Lyman continuum photon production. With this value of $\xi_{\mathrm{ion}}$ for high-redshift galaxies, and under reasonable assumptions for the Lyman continuum photon escape fraction from galaxies and the clumping factor of intergalactic gas (as motivated by cosmological simulations), we find that the currently observed galaxy population accessible to the limiting depth of UDF12 ($M_{\mathrm{UV}}<-17$ to $z\sim 8$) cannot simultaneously reionize the universe by $z\sim 6$ and reproduce the Thomson optical depth $\tau$ unless the abundance of star-forming galaxies or the ionizing photon escape fraction increases beyond redshift $z\sim 12$ from what is currently observed. 
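The photon budget behind this statement follows the standard relation $\dot{n}_{\mathrm{ion}}=f_{\mathrm{esc}}\,\xi_{\mathrm{ion}}\,\rho_{\mathrm{UV}}$ for the comoving ionizing emissivity. The sketch below uses the $\xi_{\mathrm{ion}}$ value quoted above, but the escape fraction and UV luminosity density are illustrative assumptions, not this paper's fitted values.

```python
# Comoving ionizing emissivity: n_dot_ion = f_esc * xi_ion * rho_UV.
# xi_ion matches the value quoted in the text; f_esc and rho_UV below
# are assumed, illustrative inputs rather than fitted quantities.
xi_ion = 10**25.2   # Lyman-continuum photons per unit UV luminosity [erg^-1 Hz]
f_esc = 0.2         # assumed escape fraction of ionizing photons
rho_UV = 10**26.0   # assumed UV luminosity density [erg s^-1 Hz^-1 Mpc^-3]

n_dot_ion = f_esc * xi_ion * rho_UV  # [photons s^-1 Mpc^-3]
print(f"n_dot_ion = {n_dot_ion:.2e} photons/s/Mpc^3")  # ~3.2e50 for these inputs
```

Raising either $f_{\mathrm{esc}}$ or the faint-end extrapolation of $\rho_{\mathrm{UV}}$ scales the emissivity linearly, which is why the two are degenerate in reionization-budget arguments.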
If we utilize constraints on the evolving galaxy luminosity function at redshifts $4\lesssim z\lesssim 9$ to extrapolate down in luminosity, we find that the tension between the declining abundance of star-forming galaxies and their stellar mass density with redshift, the observed requirement to reionize the universe by $z\sim 6$, and reproducing the large electron scattering optical depth $\tau\approx 0.084$ is largely relieved if the galaxy population continues down to $M_{\mathrm{UV}}<-13$ and the epoch of galaxy formation continues to $z\sim 12-15$. Given the first identification of a $z\sim 12$ candidate galaxy by the UDF12 program, the prospect for high-redshift galaxies to reionize the universe is positive provided that the epoch of galaxy formation extends to $z\gtrsim 12$. Further observations by HST (e.g., the “Frontier Fields”) and, ultimately, the James Webb Space Telescope will be required to answer these questions more definitively. We thank Gary Hinshaw and David Larson for pointing us to the MCMC chains used in inferring the WMAP 9-year cosmological constraints. BER is supported by Steward Observatory and the University of Arizona College of Science. SRF is partially supported by the David and Lucile Packard Foundation. US authors acknowledge financial support from the Space Telescope Science Institute under award HST-GO-12498.01-A. RJM acknowledges the support of the European Research Council via the award of a Consolidator Grant, and the support of the Leverhulme Trust via the award of a Philip Leverhulme research prize. JSD and RAAB acknowledge the support of the European Research Council via the award of an Advanced Grant to JSD. JSD also acknowledges the support of the Royal Society via a Wolfson Research Merit award. ABR and EFCL acknowledge the support of the UK Science & Technology Facilities Council. SC acknowledges the support of the European Commission through the Marie Curie Initial Training Network ELIXIR. 
This work is based in part on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc, under NASA contract NAS5-26555. References Alvarez et al. (2012) Alvarez, M. A., Finlator, K., & Trenti, M. 2012, ApJ, 759, L38 Barkana & Loeb (2001) Barkana, R., & Loeb, A. 2001, Phys. Rep., 349, 125 Bolton & Becker (2009) Bolton, J. S., & Becker, G. D. 2009, MNRAS, 398, L26 Bolton & Haehnelt (2007a) Bolton, J. S., & Haehnelt, M. G. 2007a, MNRAS, 381, L35 Bolton & Haehnelt (2007b) —. 2007b, MNRAS, 382, 325 Bolton & Haehnelt (2012) —. 2012, MNRAS, 412 Bolton et al. (2011) Bolton, J. S., Haehnelt, M. G., Warren, S. J., Hewett, P. C., Mortlock, D. J., Venemans, B. P., McMahon, R. G., & Simpson, C. 2011, MNRAS, 416, L70 Bouwens et al. (2007) Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, ApJ, 670, 928 Bouwens et al. (2012a) Bouwens, R. J., et al. 2012a, ApJ, 754, 83 Bouwens et al. (2012b) —. 2012b, ApJ, 752, L5 Bouwens et al. (2010) —. 2010, ApJ, 708, L69 Bruzual & Charlot (2003) Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000 Carilli et al. (2010) Carilli, C. L., et al. 2010, ApJ, 714, 834 Cen (2003) Cen, R. 2003, ApJ, 591, 12 Chabrier (2003) Chabrier, G. 2003, PASP, 115, 763 Charlot & Fall (2000) Charlot, S., & Fall, S. M. 2000, ApJ, 539, 718 Chen et al. (2007) Chen, H.-W., Prochaska, J. X., & Gnedin, N. Y. 2007, ApJ, 667, L125 Choudhury et al. (2009) Choudhury, T. R., Haehnelt, M. G., & Regan, J. 2009, MNRAS, 394, 960 Ciardi et al. (2012) Ciardi, B., Bolton, J. S., Maselli, A., & Graziani, L. 2012, MNRAS, 423, 558 Ciardi et al. (2003) Ciardi, B., Ferrara, A., & White, S. D. M. 2003, MNRAS, 344, L7 Dayal & Ferrara (2012) Dayal, P., & Ferrara, A. 2012, MNRAS, 421, 2568 Dijkstra et al. (2011) Dijkstra, M., Mesinger, A., & Wyithe, J. S. B. 2011, MNRAS, 414, 2139 Djorgovski et al. (2001) Djorgovski, S. G., Castro, S., Stern, D., & Mahabal, A. A. 2001, ApJ, 560, L5 Dunkley et al. 
(2011) Dunkley, J., et al. 2011, ApJ, 739, 52 Dunlop et al. (2012a) Dunlop, J. S., McLure, R. J., Robertson, B. E., Ellis, R. S., Stark, D. P., Cirasuolo, M., & de Ravel, L. 2012a, MNRAS, 420, 901 Dunlop et al. (2012b) Dunlop, J. S., et al. 2012b, arXiv:1212.0860 Ellis et al. (2013) Ellis, R. S., et al. 2013, ApJ, 763, L7 Fan et al. (2006a) Fan, X., Carilli, C. L., & Keating, B. 2006a, ARA&A, 44, 415 Fan et al. (2001) Fan, X., et al. 2001, AJ, 122, 2833 Fan et al. (2002) Fan, X., Narayanan, V. K., Strauss, M. A., White, R. L., Becker, R. H., Pentericci, L., & Rix, H.-W. 2002, AJ, 123, 1247 Fan et al. (2006b) Fan, X., et al. 2006b, AJ, 132, 117 Fan et al. (2003) —. 2003, AJ, 125, 1649 Faucher-Giguère et al. (2008) Faucher-Giguère, C.-A., Lidz, A., Hernquist, L., & Zaldarriaga, M. 2008, ApJ, 688, 85 Faucher-Giguère et al. (2009) Faucher-Giguère, C.-A., Lidz, A., Zaldarriaga, M., & Hernquist, L. 2009, ApJ, 703, 1416 Feroz & Hobson (2008) Feroz, F., & Hobson, M. P. 2008, MNRAS, 384, 449 Feroz et al. (2009) Feroz, F., Hobson, M. P., & Bridges, M. 2009, MNRAS, 398, 1601 Ferrara & Loeb (2012) Ferrara, A., & Loeb, A. 2012, ArXiv e-prints Finkelstein et al. (2010) Finkelstein, S. L., Papovich, C., Giavalisco, M., Reddy, N. A., Ferguson, H. C., Koekemoer, A. M., & Dickinson, M. 2010, ApJ, 719, 1250 Finkelstein et al. (2012) Finkelstein, S. L., et al. 2012, ApJ, 756, 164 Finlator et al. (2012) Finlator, K., Oh, S. P., Özel, F., & Davé, R. 2012, MNRAS, 427, 2464 Fontana et al. (2010) Fontana, A., et al. 2010, ApJ, 725, L205 Fontanot et al. (2012) Fontanot, F., Cristiani, S., & Vanzella, E. 2012, MNRAS, 425, 1413 Furlanetto et al. (2004a) Furlanetto, S. R., Hernquist, L., & Zaldarriaga, M. 2004a, MNRAS, 354, 695 Furlanetto & Oh (2005) Furlanetto, S. R., & Oh, S. P. 2005, MNRAS, 363, 1031 Furlanetto & Oh (2009) —. 2009, ApJ, 701, 94 Furlanetto et al. (2004b) Furlanetto, S. R., Zaldarriaga, M., & Hernquist, L. 2004b, ApJ, 613, 1 Furlanetto et al. (2006) —. 
2006, MNRAS, 365, 1012 Gnedin (2000) Gnedin, N. Y. 2000, ApJ, 535, 530 Gnedin & Ostriker (1997) Gnedin, N. Y., & Ostriker, J. P. 1997, ApJ, 486, 581 González et al. (2011) González, V., Labbé, I., Bouwens, R. J., Illingworth, G., Franx, M., & Kriek, M. 2011, ApJ, 735, L34 Gruzinov & Hu (1998) Gruzinov, A., & Hu, W. 1998, ApJ, 508, 435 Gunn & Peterson (1965) Gunn, J. E., & Peterson, B. A. 1965, ApJ, 142, 1633 Haardt & Madau (2012) Haardt, F., & Madau, P. 2012, ApJ, 746, 125 Hinshaw et al. (2012) Hinshaw, G., et al. 2012, arXiv:1212.5226 Hou et al. (2011) Hou, Z., Keisler, R., Knox, L., Millea, M., & Reichardt, C. 2011, arXiv:1104.2333 Hu et al. (2002) Hu, E. M., Cowie, L. L., McMahon, R. G., Capak, P., Iwamuro, F., Kneib, J.-P., Maihara, T., & Motohara, K. 2002, ApJ, 568, L75 Hui & Haiman (2003) Hui, L., & Haiman, Z. 2003, ApJ, 596, 9 Iliev et al. (2006) Iliev, I. T., Mellema, G., Pen, U.-L., Merz, H., Shapiro, P. R., & Alvarez, M. A. 2006, MNRAS, 369, 1625 Jensen et al. (2013) Jensen, H., Laursen, P., Mellema, G., Iliev, I. T., Sommer-Larsen, J., & Shapiro, P. R. 2013, MNRAS, 428, 1366 Jones et al. (2012) Jones, T., Stark, D. P., & Ellis, R. S. 2012, ApJ, 751, 51 Kashikawa et al. (2006) Kashikawa, N., et al. 2006, ApJ, 648, 7 Knox et al. (1998) Knox, L., Scoccimarro, R., & Dodelson, S. 1998, Physical Review Letters, 81, 2004 Koekemoer et al. (2013) Koekemoer, A. M., et al. 2013, ApJS, submitted, arXiv:1212.1448 Kuhlen & Faucher-Giguère (2012) Kuhlen, M., & Faucher-Giguère, C.-A. 2012, MNRAS, 423, 862 Labbe et al. (2012) Labbe, I., et al. 2012, arXiv:1209.3037 Lidz et al. (2006) Lidz, A., Oh, S. P., & Furlanetto, S. R. 2006, ApJ, 639, L47 Loeb & Furlanetto (2012) Loeb, A., & Furlanetto, S. 2012, The First Galaxies in the Universe (Princeton University Press) Madau et al. (1999) Madau, P., Haardt, F., & Rees, M. J. 1999, ApJ, 514, 648 Madau et al. (1998) Madau, P., Pozzetti, L., & Dickinson, M. 1998, ApJ, 498, 106 Malhotra & Rhoads (2004) Malhotra, S., & Rhoads, J. 
E. 2004, ApJ, 617, L5 McGreer et al. (2011) McGreer, I. D., Mesinger, A., & Fan, X. 2011, MNRAS, 415, 3237 McLure et al. (2012) McLure, R. J., et al. 2012, arXiv:1212.5222 McLure et al. (2011) —. 2011, MNRAS, 418, 2074 McQuinn et al. (2007) McQuinn, M., Hernquist, L., Zaldarriaga, M., & Dutta, S. 2007, MNRAS, 381, 75 McQuinn et al. (2008) McQuinn, M., Lidz, A., Zaldarriaga, M., Hernquist, L., & Dutta, S. 2008, MNRAS, 388, 1101 McQuinn et al. (2011) McQuinn, M., Oh, S. P., & Faucher-Giguère, C.-A. 2011, ApJ, 743, 82 Mesinger (2010) Mesinger, A. 2010, MNRAS, 407, 1328 Mesinger & Furlanetto (2008a) Mesinger, A., & Furlanetto, S. R. 2008a, MNRAS, 385, 1348 Mesinger & Furlanetto (2008b) —. 2008b, MNRAS, 386, 1990 Mesinger & Haiman (2004) Mesinger, A., & Haiman, Z. 2004, ApJ, 611, L69 Mesinger et al. (2012) Mesinger, A., McQuinn, M., & Spergel, D. N. 2012, MNRAS, 422, 1403 Meurer et al. (1999) Meurer, G. R., Heckman, T. M., & Calzetti, D. 1999, ApJ, 521, 64 Miralda-Escudé (1998) Miralda-Escudé, J. 1998, ApJ, 501, 15 Miralda-Escudé et al. (2000) Miralda-Escudé, J., Haehnelt, M., & Rees, M. J. 2000, ApJ, 530, 1 Mitra et al. (2013) Mitra, S., Ferrara, A., & Choudhury, T. R. 2013, MNRAS, 428, L1 Mortlock et al. (2011) Mortlock, D. J., et al. 2011, Nature, 474, 616 Muñoz & Loeb (2011) Muñoz, J. A., & Loeb, A. 2011, ApJ, 729, 99 Muñoz et al. (2010) Muñoz, J. A., Trac, H., & Loeb, A. 2010, MNRAS, 405, 2001 Oesch et al. (2009) Oesch, P. A., et al. 2009, ApJ, 690, 1350 Oke & Gunn (1983) Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713 Ono et al. (2012a) Ono, Y., et al. 2012a, arXiv:1212.3869 Ono et al. (2012b) —. 2012b, ApJ, 744, 83 Ono et al. (2010) Ono, Y., Ouchi, M., Shimasaku, K., Dunlop, J., Farrah, D., McLure, R., & Okamura, S. 2010, ApJ, 724, 1524 Ostriker & Vishniac (1986) Ostriker, J. P., & Vishniac, E. T. 1986, ApJ, 306, L51 Ota et al. (2008) Ota, K., et al. 
2008, ApJ, 677, 12 Ouchi et al. (2009) Ouchi, M., et al. 2009, ApJ, 706, 1136 Ouchi et al. (2010) —. 2010, ApJ, 723, 869 Pawlik et al. (2009) Pawlik, A. H., Schaye, J., & van Scherpenzeel, E. 2009, MNRAS, 394, 1812 Pentericci et al. (2011) Pentericci, L., et al. 2011, ApJ, 743, 132 Prochaska et al. (2009) Prochaska, J. X., Worseck, G., & O’Meara, J. M. 2009, ApJ, 705, L113 Razoumov et al. (2002) Razoumov, A. O., Norman, M. L., Abel, T., & Scott, D. 2002, ApJ, 572, 695 Reichardt et al. (2012) Reichardt, C. L., et al. 2012, ApJ, 755, 70 Robertson et al. (2007) Robertson, B., Li, Y., Cox, T. J., Hernquist, L., & Hopkins, P. F. 2007, ApJ, 667, 60 Robertson (2010a) Robertson, B. E. 2010a, ApJ, 716, L229 Robertson (2010b) —. 2010b, ApJ, 713, 1266 Robertson et al. (2010) Robertson, B. E., Ellis, R. S., Dunlop, J. S., McLure, R. J., & Stark, D. P. 2010, Nature, 468, 49 Rogers et al. (2012) Rogers, A. B., McLure, R. J., & Dunlop, J. S. 2012, arXiv:1209.4636 Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161 Salvaterra et al. (2011) Salvaterra, R., Ferrara, A., & Dayal, P. 2011, MNRAS, 414, 847 Santos (2004) Santos, M. R. 2004, MNRAS, 349, 1137 Santos et al. (2004) Santos, M. R., & others. 2004, ApJ, 606, 683 Schaerer (2003) Schaerer, D. 2003, A&A, 397, 527 Schaerer & de Barros (2009) Schaerer, D., & de Barros, S. 2009, A&A, 502, 423 Schaerer & de Barros (2010) —. 2010, A&A, 515, A73 Schechter (1976) Schechter, P. 1976, ApJ, 203, 297 Schenker et al. (2012a) Schenker, M. A., et al. 2012a, arXiv:1212.4819 Schenker et al. (2012b) Schenker, M. A., Stark, D. P., Ellis, R. S., Robertson, B. E., Dunlop, J. S., McLure, R. J., Kneib, J.-P., & Richard, J. 2012b, ApJ, 744, 179 Schroeder et al. (2012) Schroeder, J., Mesinger, A., & Haiman, Z. 2012, arXiv:1204.2838 Shull et al. (2012) Shull, J. M., Harness, A., Trenti, M., & Smith, B. D. 2012, ApJ, 747, 100 Simcoe et al. (2012) Simcoe, R. A., Sullivan, P. W., Cooksey, K. L., Kao, M. M., Matejek, M. S., & Burgasser, A. J. 
2012, Nature, 492, 79 Sokasian et al. (2003) Sokasian, A., Abel, T., Hernquist, L., & Springel, V. 2003, MNRAS, 344, 607 Songaila & Cowie (2010) Songaila, A., & Cowie, L. L. 2010, ApJ, 721, 1448 Spergel et al. (2003) Spergel, D. N., et al. 2003, ApJS, 148, 175 Stark et al. (2010) Stark, D. P., Ellis, R. S., Chiu, K., Ouchi, M., & Bunker, A. 2010, MNRAS, 408, 1628 Stark et al. (2012) Stark, D. P., Schenker, M. A., Ellis, R. S., Robertson, B., McLure, R., & Dunlop, J. 2012, arXiv:1208.3529 Totani et al. (2006) Totani, T., Kawai, N., Kosugi, G., Aoki, K., Yamada, T., Iye, M., Ohta, K., & Hattori, T. 2006, PASJ, 58, 485 Trac et al. (2008) Trac, H., Cen, R., & Loeb, A. 2008, ApJ, 689, L81 Treu et al. (2012) Treu, T., Trenti, M., Stiavelli, M., Auger, M. W., & Bradley, L. D. 2012, ApJ, 747, 27 Willott et al. (2010) Willott, C. J., et al. 2010, AJ, 139, 906 Wyithe & Loeb (2003) Wyithe, J. S. B., & Loeb, A. 2003, ApJ, 586, 693 Zahn et al. (2012) Zahn, O., et al. 2012, ApJ, 756, 65
Changes in the structure of the accretion disc of V2051 Ophiuchi through the outburst cycle R. F. Santos${}^{1}$, R. Baptista${}^{2}$, M. Faundez-Abans${}^{3}$ Abstract We present the results of the analysis of light curves of V2051 Oph through an outburst with eclipse mapping techniques. ${}^{1}$${}^{2}$Universidade Federal de Santa Catarina, Brazil ${}^{3}$LNA, Brazil 1. Introduction Dwarf novae show recurrent outbursts (of 2–5 mag on timescales from weeks to months) powered by a sudden increase in mass inflow in the accretion disc around the white dwarf. Currently, there are two competing models to explain the cause of the sudden increase in mass accretion. In the mass transfer instability model (MTIM), the outburst is the time-dependent response of a viscous accretion disc to a burst of matter transferred from the secondary star. In the disc instability model (DIM), matter is transferred at a constant rate to a low-viscosity disc and accumulates in an annulus until a critical configuration switches the disc to a high-viscosity regime and the gas diffuses rapidly inwards and onto the white dwarf. Tracking the evolution of a dwarf nova accretion disc along an outburst cycle with eclipse mapping techniques (EMT) provides a valuable opportunity to test these models against observations. 2. Observations and data analysis V2051 Oph is a short-period dwarf nova ($P_{orb}=90\,min$) with deep eclipses ($B\simeq 2.5\,mag$). It went into outburst between the nights of 2000 July 30 and 31. B-band light curves of V2051 Oph were obtained with the 1.60 m telescope of Laboratorio Nacional de Astrofisica (LNA/Brazil) between 2000 July 28 and August 02, covering the nights before the onset of the outburst, the short ($\approx 1$ day) outburst maximum and 2 days along the decline from maximum. 
The disc radius shrinks from $0.47\,R_{L1}$ in quiescence to $0.40\,R_{L1}$ on the night before maximum, in agreement with the expectation of the MTIM ($R_{L1}$ is the distance from the disc centre to the inner Lagrangian point). 3. Results EMT were used to solve for a map of the disc brightness distribution and for the flux of an additional uneclipsed component (Baptista & Steiner 1993). The sequence of eclipse maps reveals that the outburst starts with a decrease in the brightness of the disc and confirms that the disc shrinks on the night before maximum. The map of this night is dominated by emission along the gas stream with no evidence of an increase in the brightness of the bright spot. The brightness of the inner disc regions remains constant during outburst maximum and along decline, while the brightness of the outer disc regions progressively decreases with the inward propagation of a cooling wave. The disc becomes fainter through the decline phase, leaving the bright spot progressively more perceptible at the outer edge of the disc. The maximum fractional contribution of the uneclipsed component (9% of the total flux in the B band) occurs on the night before outburst maximum. This suggests the development of a vertically-extended disc wind before outburst maximum. We quantify the changes in disc size during outburst by analyzing the radial intensity distribution of the total map. We define the outer disc radius in each map as the radial position at which the intensity distribution falls below a reference level corresponding to the intensity of the bright spot in the eclipse map in quiescence. The disc expands from $0.40\,R_{L1}$ on the night before maximum to $0.76\,R_{L1}$ at outburst maximum. Two days after maximum, the disc radius reduces to $0.67\,R_{L1}$. The radial intensity distribution of the symmetric disc component shows an outward-moving heating wave during the rise to outburst maximum with a speed of $v_{f(hot)}\geq 1.73\,km\,s^{-1}$. 
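The outer-disc-radius definition used here can be sketched as a simple threshold search on the radial intensity profile. The function name, profile, and reference level below are synthetic stand-ins for an actual eclipse-map intensity distribution, intended only to illustrate the measurement.

```python
import numpy as np

def outer_disc_radius(radii, intensities, reference_level):
    """Outer disc radius: the largest radius at which the radial intensity
    distribution still exceeds the reference level (here standing in for
    the quiescent bright-spot intensity). Returns None if the profile
    never reaches the reference level."""
    above = np.nonzero(intensities >= reference_level)[0]
    return radii[above.max()] if above.size else None

# Synthetic radial profile, purely illustrative (radii in units of R_L1)
radii = np.linspace(0.05, 1.0, 20)
intensities = 10.0 * np.exp(-3.0 * radii)  # arbitrary intensity units

r_out = outer_disc_radius(radii, intensities, reference_level=1.0)
print(f"outer disc radius = {r_out:.2f} R_L1")
```

Applied to each map of the outburst sequence with a fixed quiescent reference level, this kind of measurement yields the radius evolution quoted in the text.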
The speeds of the inward-moving cooling wave are $-0.24\,km\,s^{-1}$ and $-0.91\,km\,s^{-1}$, respectively, 1 and 2 days after maximum. We observe an acceleration of the cooling wave as it travels across the disc, in contradiction with the prediction of the DIM. From $v_{f(hot)}$ we derive a viscosity parameter $\alpha_{hot}\simeq 0.14$, comparable to the viscosity derived in quiescence, $\alpha_{cool}\simeq 0.16$ (Baptista & Bortoletto 2004). The radial brightness temperature distribution is flatter than the $T\propto R^{-3/4}$ law expected for steady-state discs both in quiescence and in outburst, leading to larger mass accretion rates in the outer disc ($0.3-0.4\,R_{L1}$) than in the inner disc regions ($0.1\,R_{L1}$). If we assume a distance of 146 pc to the binary (Vrielmann, Stiening & Offutt 2002), the brightness temperatures in quiescence range from 5800 K in the outer disc to 9200 K in the inner disc; at outburst maximum the temperatures range from 9600 K at $0.4\,R_{L1}$ to 12200 K in the inner disc. The inferred temperatures of the outbursting disc are higher than the critical temperature $T_{crit}$ above which the disc gas should remain while in the high-viscosity branch of the thermal-viscous limit cycle of the DIM. However, if the distance is 100 pc (see Baptista & Bortoletto 2004), the inferred disc temperatures are lower and do not exceed $T_{crit}$ even at outburst maximum. Our next step is to derive the distance to the binary from the measured B and V white dwarf fluxes in quiescence. References Baptista & Bortoletto (2004) Baptista, R., & Bortoletto, A. 2004, AJ, 411, 128 Baptista & Steiner (1993) Baptista, R., & Steiner, J. E. 1993, A&A, 277, 331 Vrielmann, Stiening, & Offutt (2002) Vrielmann, S., Stiening, R. F., & Offutt, W. 2002, MNRAS, 334, 608
A symmetry for vanishing cosmological constant: Another realization Recai Erdem recaierdem@iyte.edu.tr Department of Physics, İzmir Institute of Technology Gülbahçe Köyü, Urla, İzmir 35430, Turkey (December 4, 2020) Abstract A more conventional realization of a symmetry which had been proposed towards the solution of the cosmological constant problem is considered. In this study the multiplication of the coordinates by the imaginary number $i$ in the literature is replaced by the multiplication of the metric tensor by minus one. This realization of the symmetry also forbids a bulk cosmological constant and selects out $2(2n+1)$ dimensional spaces. In contrast to its previous realization, the symmetry, without any need for its extension, also forbids a possible cosmological constant term which may arise from the extra dimensional curvature scalar, provided that the space is taken as the union of two $2(2n+1)$ dimensional spaces where the usual 4-dimensional space lies at the intersection of these spaces. It is shown that this symmetry may be realized through spacetime reflections that change the sign of the volume element. A possible relation of this symmetry to the E-parity symmetry of Linde is also pointed out. preprint: IZTECH-P-2006-03 Recently a symmetry Erdem ; Nobbenhuis ; tHooft which may give insight into the origin of the extremely small value PDG of the cosmological constant compared to its theoretical value Wein was proposed. As in the usual symmetry arguments, the symmetry forces the cosmological constant to vanish, and the small value of the cosmological constant is attributed to the breaking of the symmetry by a small amount. In Erdem the symmetry is realized by imposing the invariance of the action functional under a transformation where all coordinates are multiplied by the imaginary number $i$. 
It was found that this symmetry selects out the dimensions $D$ obeying $D=2(2n+1)$, $n=0,1,...$, that is, $D=2,6,10,...$, and that it also constrains the form of the possible Lagrangian terms. Moreover, that symmetry has a better chance of surviving in quantum field theory than the usual scaling symmetry because the n-point functions are invariant under it. In this paper we study a symmetry transformation where the coordinates remain the same while the metric tensor is multiplied by minus one. We show that this symmetry is equivalent to the one given in Erdem . Although its results are mainly the same as those of Erdem , it is more conventional in form, in the sense that the spacetime coordinates remain real. In contrast to Erdem , we use the same symmetry to forbid the 4-dimensional cosmological constant as well as a bulk cosmological constant. Moreover, we show that the multiplication of the metric tensor by minus one may be related to a parity-like symmetry in the extra dimensions. We also discuss the relation of this symmetry to the antipodal symmetry of Linde Linde ; Kaplan ; Moffat , whose relation to the previous realization of the present symmetry is also discussed in tHooft for the 4-dimensional case. The symmetry principle given in Erdem may be summarized as follows: the transformation $$x_{A}\rightarrow i\,x_{A}$$ (1) implies $$R\rightarrow -R\,,\qquad \sqrt{g}\,d^{D}x\rightarrow (i)^{D}\,\sqrt{g}\,d^{D}x$$ (2) $$ds^{2}=g_{AB}\,dx^{A}\,dx^{B}\rightarrow -\,ds^{2}$$ (3) where $A,B=0,1,2,...,D-1$ and $D=1,2,...$ is the dimension of the spacetime.
The requirement of the invariance of the gravitational action functional $$S_{R}=\frac{1}{16\pi\,G}\int\sqrt{g}\,R\,d^{D}x$$ (4) under (1) selects out the dimensions $$D=2(2n+1)\,,\qquad n=0,1,2,3,...$$ (5) and forbids a bulk cosmological constant $\Lambda$ in the action $$S_{C}=\frac{1}{16\pi\,G}\int\sqrt{g}\,\Lambda\,d^{D}x$$ (6) Extension of this symmetry to the full action requires that the Lagrangian transform in the same way as the curvature scalar, that is, $${\cal L}\,\rightarrow\,-{\cal L}\quad\mbox{as}\quad x_{A}\rightarrow i\,x_{A}\,,\qquad\mbox{for}\quad D=2(2n+1)\,,\quad A=0,1,...,D-1$$ (7) (i.e. for the dimensions given by Eq. (5)). The kinetic terms of the scalar and vector fields automatically satisfy Eq. (7), while the potential terms (e.g. the $\phi^{4}$ term) are, in general, allowed in lower-dimensional sub-branes. The fermionic part of the Lagrangian does not satisfy (7) in general, so fermionic fields may live only on a lower-dimensional subspace (brane). For example, the free fermion Lagrangian is allowed on a $4m+1$ dimensional subspace of the $2(2n+1)$ dimensional space, where $m\leq n$, $n,m=0,1,2,...$. Although the transformation rules for the fields are similar to those for scale transformations (with the scale parameter replaced by the imaginary number $i$), this symmetry has a better chance of surviving after quantization because the two-point functions (e.g. $\langle 0|T\phi(x)\phi(y)|0\rangle$ for scalars and $\langle 0|T\psi(x)\bar{\psi}(y)|0\rangle$ for fermions), which are the basic building blocks of connected Feynman diagrams, are invariant under this symmetry transformation.
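The dimension selection in Eq. (5) is simple bookkeeping: under $x\rightarrow ix$ the measure picks up $i^{D}$ while $R$ flips sign, so the action is invariant only if $i^{D}=-1$, i.e. $D\equiv 2\pmod 4$. A few lines of Python (an illustrative check added here, not part of the original text) confirm that exactly the dimensions $D=2(2n+1)$ survive:

```python
# The measure acquires a factor i**D under x -> i x while R flips sign, so the
# gravitational action is invariant only if i**D == -1, i.e. D = 2 (mod 4).
def i_power(D):
    # i**D cycles with period 4: 1, i, -1, -i
    return [1, 1j, -1, -1j][D % 4]

allowed = [D for D in range(1, 31) if i_power(D) == -1]
expected = [2 * (2 * n + 1) for n in range(8)]  # 2, 6, 10, ..., 30
print(allowed)  # [2, 6, 10, 14, 18, 22, 26, 30]
assert allowed == expected
```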
Now we introduce a symmetry transformation which is essentially equivalent to (1) while formulated in a more conventional form, that is, $$g_{AB}\rightarrow-\,g_{AB}$$ (8) Eq. (8) induces $$R\rightarrow-R\,,\qquad\sqrt{g}\,d^{D}x\rightarrow(\pm\,i)^{D}\,\sqrt{g}\,d^{D}x$$ (9) $$ds^{2}\,=\,g_{AB}\,dx^{A}\,dx^{B}\rightarrow\,-\,ds^{2}$$ (10) The requirement of the invariance of the gravitational action (4) under the transformation (8) selects out the dimensions given by $$D=2(2n+1)\,,\qquad n=0,1,2,3,...$$ (11) as in Eq. (5), and for $D=2(2n+1)$, $n=0,1,2,3,...$, Eqs. (9,10) become identical with Eqs. (2,3) Duff . Moreover, one notices that the requirement of the invariance of the action functional under (8) forbids a bulk cosmological constant term (given by (6)) in $2(2n+1)$ dimensions. In other words, the requirements of the invariance of the action functional under (8) and of the non-vanishing of its gravitational piece (4) imply $D=2(2n+1)$ and the vanishing of the bulk cosmological constant, as in Erdem . Although the implications of this symmetry for the Lagrangian are similar to those of Erdem , there are some differences. We find it more suitable to consider this point after discussing the realization of this symmetry through reflections in extra dimensions, two paragraphs below. We have shown that the invariance of the gravitational action under Eq. (8) requires the vanishing of the bulk cosmological constant. The next step is to show that Eq. (8) also results in the vanishing of the possible contributions due to the extra-dimensional curvature scalar, so that the 4-dimensional cosmological constant vanishes altogether. In contrast to Erdem , we use the same symmetry (i.e. (8), the one used to forbid the bulk cosmological constant) to forbid a possible 4-dimensional cosmological constant induced by the extra-dimensional curvature scalar as well.
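That $R$ indeed flips sign under $g_{AB}\rightarrow-g_{AB}$, as stated in Eq. (9), can be checked symbolically: the Christoffel symbols are invariant (the inverse metric and the metric derivatives each contribute a minus sign), hence the Ricci tensor is invariant and $R=g^{AB}R_{AB}$ is odd. The following SymPy sketch (our illustration, not part of the paper; tested on the unit 2-sphere, for which $R=2$) verifies this:

```python
import sympy as sp

def ricci_scalar(g, coords):
    """Ricci scalar of a metric g (sympy Matrix) in the given coordinates."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
    Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                 + sp.diff(g[d, c], coords[b])
                                 - sp.diff(g[b, c], coords[d]))
                   for d in range(n)) / 2
               for c in range(n)] for b in range(n)] for a in range(n)]

    def ricci(b, c):
        # R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ac,b} + Gamma^a_{ad} Gamma^d_{bc}
        #          - Gamma^a_{bd} Gamma^d_{ac}
        return sum(sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][a][c], coords[b])
                   + sum(Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][b][d] * Gamma[d][a][c]
                         for d in range(n))
                   for a in range(n))

    return sp.simplify(sum(ginv[b, c] * ricci(b, c) for b in range(n) for c in range(n)))

theta, phi = sp.symbols('theta phi')
g_sphere = sp.diag(1, sp.sin(theta) ** 2)        # unit 2-sphere, R = 2
R_plus = ricci_scalar(g_sphere, [theta, phi])
R_minus = ricci_scalar(-g_sphere, [theta, phi])  # metric multiplied by minus one
print(R_plus, R_minus)  # 2, -2
assert sp.simplify(R_plus - 2) == 0 and sp.simplify(R_minus + 2) == 0
```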
To this end we take the 4-dimensional spacetime to be the intersection of two spaces, one with dimension $2(2n+1)$ (e.g. 6) and the other with dimension $2(2m+1)$ (e.g. 6), so that the total dimension of the space is $2(2m+1)+2(2n+1)-4=4(n+m)$ (e.g. 8). Then Eq. (8) takes the following form $$g_{AB}\rightarrow-\,g_{AB}\,,\qquad A,B=0,1,2,3,4^{\prime},...,D^{\prime}-1\,,\qquad D^{\prime}=2(2n+1)$$ (12) $$g_{CD}\rightarrow-\,g_{CD}\,,\qquad C,D=0,1,2,3,4^{\prime\prime},...,D^{\prime\prime}-1\,,\qquad D^{\prime\prime}=2(2m+1)$$ (13) which transforms the metric and the curvature scalar as $$ds^{2}\,=\,g_{MN}dx^{M}\,dx^{N}=g_{\mu\nu}dx^{\mu}\,dx^{\nu}+g_{ab}dx^{a}\,dx^{b}\rightarrow\,g_{\mu\nu}dx^{\mu}\,dx^{\nu}-g_{ab}dx^{a}\,dx^{b}$$ $$R_{4}\rightarrow R_{4}\,,\qquad R_{e}\rightarrow-R_{e}\,,\qquad\sqrt{g}\,d^{D}x\rightarrow\,\sqrt{g}\,d^{D}x$$ (14) where $R_{4}=g^{\mu\nu}R_{\mu\nu}$ stands for the 4-dimensional part of the curvature scalar, $R_{e}=g^{ab}R_{ab}$ stands for its extra-dimensional part, and $$\mu,\nu=0,1,2,3\,,\qquad a,b=4^{\prime},5^{\prime},...,D^{\prime}-1,4^{\prime\prime},5^{\prime\prime},...,D^{\prime\prime}-1$$ It is evident that the extra-dimensional part of the gravitational action, that is, $$S_{Re}=\frac{1}{16\pi\,G}\int\sqrt{g}\,R_{e}\,d^{D}x$$ (15) is forbidden by (14). So only $$S_{R4}=\frac{1}{16\pi\,G}\int\sqrt{g}\,R_{4}\,d^{D}x$$ (16) may survive. In other words, the requirement of the invariance of the action under (12) and (13) separately ensures the vanishing of the bulk cosmological constant, while the requirement of invariance under the simultaneous application of (12) and (13) ensures the vanishing of the extra-dimensional curvature contribution.
Now we take the discrete symmetry in (8) (or (12) and (13)) to be a realization of a reflection symmetry in extra dimensions and study its implications. The simplest setup is to realize (12) and (13) by two reflections in two extra dimensions. To be more specific, consider the following metric (where 4-dimensional Poincaré invariance is taken into account Rubakov ) $$ds^{2}=\Omega_{1}(y)\Omega_{2}(z)g_{\mu\nu}(x)\,dx^{\mu}dx^{\nu}+\Omega_{1}(y)g_{AB}(w)\,dx^{A}dx^{B}+\Omega_{2}(z)g_{CD}(w)\,dx^{C}dx^{D}$$ (17) where $$x=x^{\mu}\,,\quad y=x^{A}\,,\quad z=x^{C}\,,\quad w=y,z$$ (19) $$\mu,\nu=0,1,2,3\,;\qquad A,B=4^{\prime},5^{\prime},...,D^{\prime}-1\,;\qquad C,D=4^{\prime\prime},5^{\prime\prime},...,D^{\prime\prime}-1$$ $$D^{\prime}=2(2n+1)\,,\qquad D^{\prime\prime}=2(2m+1)\,,\qquad n,m=1,2,3,...$$ and $\Omega_{1}(y)$, $\Omega_{2}(z)$ are odd functions of $y$, $z$, respectively, under some reflection, while $g_{AB}$, $g_{CD}$ are even functions of $y$, $z$. For simplicity we assume that $\Omega_{1}$ and $\Omega_{2}$ each depend on only one dimension, that is, $$\Omega_{1}(y)=\Omega_{1}(y_{1})\quad\mbox{and}\quad\Omega_{2}(z)=\Omega_{2}(z_{1})$$ (20) where $y_{1}$ is one of the $x^{A}$ and $z_{1}$ is one of the $x^{C}$. For definiteness one may take $y_{1}=x^{A}=x^{4^{\prime}}$ and $z_{1}=x^{C}=x^{4^{\prime\prime}}$. In other words, $y_{1}=x^{4^{\prime}}$ and $z_{1}=x^{4^{\prime\prime}}$ are taken as the directions along which $\sqrt{g}\,d^{D}x$ changes sign under (a set of) spacetime reflections.
The volume element and the curvature scalar corresponding to (17) are $$\sqrt{g}\,d^{D}x\,=\,\Omega_{1}^{2n+1}(y_{1})\,\Omega_{2}^{2m+1}(z_{1})\,\sqrt{\tilde{g}}\,d^{D}x$$ (21) $$R=(\Omega_{1}\Omega_{2})^{-1}\Big[R_{4}+\tilde{R}_{e}-(D-1)\Big(\tilde{g}^{4^{\prime}4^{\prime}}\frac{d^{2}\ln\Omega_{1}}{dy_{1}^{2}}+\tilde{g}^{4^{\prime\prime}4^{\prime\prime}}\frac{d^{2}\ln\Omega_{2}}{dz_{1}^{2}}\Big)-\frac{(D-1)(D-2)}{4}\Big(\tilde{g}^{4^{\prime}4^{\prime}}\Big(\frac{d\ln\Omega_{1}}{dy_{1}}\Big)^{2}+\tilde{g}^{4^{\prime\prime}4^{\prime\prime}}\Big(\frac{d\ln\Omega_{2}}{dz_{1}}\Big)^{2}\Big)\Big]$$ (23) $$\mbox{where}\quad D=D^{\prime}+D^{\prime\prime}-4=2(2n+1)+2(2m+1)-4\,=\,4(n+m)$$ $$\tilde{g}^{MN}=\Omega_{1}(y_{1})\Omega_{2}(z_{1})\,g^{MN}\,,\qquad\tilde{g}^{4^{\prime}4^{\prime}}=\Omega_{2}(z_{1})\,g^{4^{\prime}4^{\prime}}\,,\qquad\tilde{g}^{4^{\prime\prime}4^{\prime\prime}}=\Omega_{1}(y_{1})\,g^{4^{\prime\prime}4^{\prime\prime}}$$ $R_{4}(x)=g^{\mu\nu}R_{\mu\nu}$ and $\tilde{R}_{e}$ are the curvature scalars of the metrics $g_{\mu\nu}(x)\,dx^{\mu}\,dx^{\nu}$ and $\tilde{g}_{AB}(y,z)\,dx^{A}\,dx^{B}+\tilde{g}_{CD}(y,z)\,dx^{C}\,dx^{D}=\Omega_{2}^{-1}(z_{1})g_{AB}(y,z)\,dx^{A}\,dx^{B}+\Omega_{1}^{-1}(y_{1})g_{CD}(y,z)\,dx^{C}\,dx^{D}$, respectively.
The action corresponding to (21) and (23) is $$S_{R}=\frac{1}{16\pi\,G}\int\Omega_{1}^{2n}(y_{1})\,\Omega_{2}^{2m}(z_{1})\,\sqrt{\tilde{\tilde{g}}}\,d^{D}x\,\tilde{R}$$ $$\mbox{where}\quad\tilde{R}=\Big[R_{4}+\tilde{R}_{e}-(D-1)\Big(\tilde{g}^{4^{\prime}4^{\prime}}\frac{d^{2}\ln\Omega_{1}}{dy_{1}^{2}}+\tilde{g}^{4^{\prime\prime}4^{\prime\prime}}\frac{d^{2}\ln\Omega_{2}}{dz_{1}^{2}}\Big)-\frac{(D-1)(D-2)}{4}\Big(\tilde{g}^{4^{\prime}4^{\prime}}\Big(\frac{d\ln\Omega_{1}}{dy_{1}}\Big)^{2}+\tilde{g}^{4^{\prime\prime}4^{\prime\prime}}\Big(\frac{d\ln\Omega_{2}}{dz_{1}}\Big)^{2}\Big)\Big]$$ $$\mbox{and}\quad\tilde{\tilde{g}}=det(g_{\mu\nu})\,det(g_{AB})\,det(g_{CD})$$ One notices that all terms in $\tilde{R}$ except $R_{4}$ are odd either in $y_{1}$ or in $z_{1}$, provided that $\Omega_{1}$ is odd (about some point) in $y_{1}$ and $\Omega_{2}$ is odd (about some point) in $z_{1}$, while all the other factors in the action above are even. So all terms in $S_{R}$ except the $R_{4}$ contribution of $\tilde{R}$ vanish after integration. In other words, the symmetry imposed (which makes $\Omega_{1(2)}$ odd in $y_{1}(z_{1})$) guarantees the absence of a cosmological constant.
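The cancellation mechanism is simply the vanishing of an odd integrand over a symmetric domain: with $\Omega_{1}=\cos y$, the prefactor $\Omega_{1}^{2n}$ is even about $y=\pi/2$, so any term odd about that point integrates to zero while an even ($R_{4}$-like) term survives. A small stand-alone numeric check (our illustration; plain midpoint rule, no external libraries):

```python
import math

def midpoint_integral(f, a, b, steps=20000):
    # Simple midpoint rule; symmetric sampling makes the odd-term cancellation exact.
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

n = 2
# Omega1(y) = cos(y) flips sign under y -> pi - y, while cos(y)**(2n) is even
# about pi/2; a term odd about pi/2 (modelled here by one extra factor of cos)
# integrates to zero, whereas the even, R4-like term survives.
odd_piece = midpoint_integral(lambda y: math.cos(y) ** (2 * n) * math.cos(y), 0.0, math.pi)
even_piece = midpoint_integral(lambda y: math.cos(y) ** (2 * n), 0.0, math.pi)
print(odd_piece, even_piece)  # ~0 and ~3*pi/8
assert abs(odd_piece) < 1e-9
assert abs(even_piece - 3 * math.pi / 8) < 1e-6
```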
For example, consider $$\Omega_{1}\,=\,\cos{k_{1}\,x_{4^{\prime}}}\,,\qquad\Omega_{2}\,=\,\cos{k_{2}\,x_{4^{\prime\prime}}}$$ (26) $\Omega_{1}$ and $\Omega_{2}$ in (26) are odd under the parity operation about the point $k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}\,=\,\frac{\pi}{2}$ defined by $$k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}\,\rightarrow\,\pi-k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}$$ (27) while $\frac{d^{2}\ln\Omega_{1(2)}}{dy_{1}(z_{1})^{2}}$ and $\big(\frac{d\ln\Omega_{1(2)}}{dy_{1}(z_{1})}\big)^{2}$ are even; hence the $\Omega_{1(2)}$-dependent terms in $\tilde{R}$ are odd. For the same reason $\tilde{R}_{e}$ is odd as well, so the only even term in $\tilde{R}$ is $R_{4}$. Therefore there is no contribution to the cosmological constant either from the bulk cosmological constant or from the extra-dimensional part of the curvature scalar. One may consider other types of spaces as well; for example, one may take the parity operation to be $x_{D-1}\rightarrow\,-x_{D-1}$ about the point $x_{D-1}=0$, with either of $\Omega_{1(2)}$ or both changing sign under it (for example, $\Omega=\sin{k\,x_{D-1}}$). In fact one may consider a more restricted parity transformation which effectively corresponds to the interchange of two branes in the $x_{D-1}$-direction. For example, one may take some dimension, say the $x_{D-1}$'th, to be identified with the closed line interval described by $S^{1}/Z_{2}$, so that $\Omega\,=\,\cos{|k\,x_{D-1}|}$ and there are two branes located at $x_{D-1}=0$ and $kx_{D-1}=\pi$. Then the transformation in (8) is effectively induced by the interchange of the two branes.
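These parity statements are easy to verify symbolically: with $\Omega=\cos u$ and the reflection $u\rightarrow\pi-u$, $\Omega$ is odd while $\frac{d^{2}\ln\Omega}{du^{2}}=-\sec^{2}u$ and $\big(\frac{d\ln\Omega}{du}\big)^{2}=\tan^{2}u$ are even. A SymPy check (an illustrative addition, not from the paper):

```python
import sympy as sp

u = sp.symbols('u')
Omega = sp.cos(u)
flip = {u: sp.pi - u}  # the parity operation k*x -> pi - k*x of Eq. (27)

# Omega itself is odd under the reflection...
assert sp.simplify(Omega.subs(flip) + Omega) == 0

# ...while the two warp-factor terms entering the curvature scalar are even:
t1 = sp.diff(sp.log(Omega), u, 2)     # d^2 ln(Omega)/du^2 = -sec(u)**2
t2 = sp.diff(sp.log(Omega), u) ** 2   # (d ln(Omega)/du)**2 = tan(u)**2
assert sp.simplify(t1.subs(flip) - t1) == 0
assert sp.simplify(t2.subs(flip) - t2) == 0
```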
The requirement of the invariance of the action functional (for simplicity we consider the metric (17)) $$S_{L}=\int\sqrt{g}\,d^{D}x\,{\cal L}=\int\Omega_{1}^{2n+1}(y_{1})\,\Omega_{2}^{2m+1}(z_{1})\,\sqrt{\tilde{\tilde{g}}}\,d^{D}x\,{\cal L}$$ $$\mbox{where}\quad\tilde{\tilde{g}}=det(g_{\mu\nu})\,det(g_{AB})\,det(g_{CD})$$ under Eq. (8) (or under (12) and (13)) results in the transformation rule for the Lagrangian $${\cal L}\,\rightarrow\,-{\cal L}\quad\mbox{as}\quad g_{MN}\rightarrow-\,g_{MN}\,,\qquad M,N=0,1,2,3,...,D-1$$ (29) which is similar to the condition obtained in Erdem . To be more specific, we consider metrics of the form of Eq. (17). Then (29) becomes $${\cal L}\,\rightarrow\,-{\cal L}\quad\mbox{as}\quad\Omega_{1}\,\rightarrow\,-\Omega_{1}\quad\mbox{and/or}\quad\Omega_{2}\,\rightarrow\,-\Omega_{2}$$ (30) Considering the kinetic part of the Lagrangian for scalar fields, $$2{\cal L}_{k}=g^{MN}(\partial_{M}\phi)^{\dagger}\partial_{N}\phi=(\Omega_{1}\Omega_{2})^{-1}g^{\mu\nu}(\partial_{\mu}\phi)^{\dagger}\partial_{\nu}\phi+\Omega_{1}^{-1}g^{AB}(\partial_{A}\phi)^{\dagger}\partial_{B}\phi+\Omega_{2}^{-1}g^{CD}(\partial_{C}\phi)^{\dagger}\partial_{D}\phi$$ (31) one notices that only the 4-dimensional part of (31) transforms in the required form (29) under both of $\Omega_{1(2)}\rightarrow-\Omega_{1(2)}$. So the extra-dimensional piece of the kinetic Lagrangian for scalar fields is forbidden by this symmetry; in other words, the extra-dimensional part of the kinetic Lagrangian vanishes after integration. The scalar field is allowed to transform as $$\phi\,\rightarrow\,\pm\phi$$ (32) If we adopt the plus sign in (32), then no potential term is allowed in the bulk (if we impose the symmetry), while terms localized on branes may be allowed.
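The statement that only the 4-dimensional piece of the kinetic term satisfies Eq. (30) is pure sign bookkeeping: each piece depends on $\Omega_{1}$, $\Omega_{2}$ through an odd or even power, and only the parity of that power matters. A tiny illustrative script (ours, not from the paper):

```python
# Parity bookkeeping for the scalar kinetic term: each of its three pieces
# depends on (Omega1, Omega2) through the odd/even powers listed below
# (only the parity of the power matters, so inverse powers behave the same).
pieces = {"4d": (1, 1), "extra_AB": (1, 0), "extra_CD": (0, 1)}

def is_odd_under(powers, flip1, flip2):
    # Sign acquired by a piece when Omega1 and/or Omega2 change sign.
    p1, p2 = powers
    sign = ((-1) ** p1 if flip1 else 1) * ((-1) ** p2 if flip2 else 1)
    return sign == -1

# Eq. (30) demands L -> -L under Omega1 -> -Omega1 and/or Omega2 -> -Omega2;
# only the 4-dimensional piece is odd under each flip separately.
surviving = [name for name, p in pieces.items()
             if is_odd_under(p, True, False) and is_odd_under(p, False, True)]
print(surviving)  # ['4d']
assert surviving == ["4d"]
```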
However, introducing potential terms in the bulk is not problematic once the symmetry is identified with reflections in extra dimensions, because these terms cancel after integration over the directions along which the volume element is odd under the reflections. Therefore such terms are not dangerous and no restriction is put on them in this setup, while some restrictions were obtained for such terms in the case of Erdem . Hence the only term which may survive integration over the extra dimensions is the 4-dimensional piece of the kinetic term, and it does not contribute to the cosmological constant since it depends non-trivially on the 4-dimensional coordinates. So the realization of the symmetry introduced here gives more freedom for model building than its previous realization, which placed constraints on the form of the potential terms and on the dimensions where they may live. Similar conclusions hold for vector fields as well. The case of fermion fields is more involved. The potential term of the fermionic Lagrangian is not allowed in the bulk (i.e. in each of the $2(2n+1)$ dimensional spaces) by Eqs. (12,13). However, if Eqs. (12,13) are identified as the result of reflections in extra dimensions as given in (27), then the potential terms cancel after integration because they are even under (27). So potential terms do not pose a problem. The kinetic term of the fermionic Lagrangian is neither odd nor even under the separate applications of Eqs. (12,13), so it does not seem to cancel after integration.
To see this better, consider the specific case where the metric is of the form (17) and $\Omega_{1(2)}$ are given as in (26), with $$g_{MN}=\Omega\;\eta_{MN}\qquad\mbox{where}\qquad\Omega=\begin{array}[]{ll}\Omega_{1}(u_{1})\Omega_{2}(u_{2})&~\mbox{for}~~M,N=\mu,\nu\\ \Omega_{1}(u_{1})&~\mbox{for}~~M,N=A,B\\ \Omega_{2}(u_{2})&~\mbox{for}~~M,N=C,D\end{array}$$ (33) where $\mu$, $\nu$, $A$, $B$, $C$, $D$ stand for the coordinate indices defined in Eq. (19); $u_{1(2)}$ stands for $k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}$; and $\eta_{MN}$ is the $D$-dimensional flat metric containing the usual 4-dimensional Minkowski metric. The corresponding Lagrangian and action functionals are $${\cal L}_{fk}=i\bar{\psi}\Gamma^{\mu}\partial_{\mu}\psi+i\bar{\psi}\Gamma^{a1}\partial_{a1}\psi+i\bar{\psi}\Gamma^{a2}\partial_{a2}\psi$$ (35) $$\mbox{where}\quad\Gamma^{\mu}=[(\,\cos{\tfrac{u_{1}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{1}}{2}}\,\tau_{1}\,)(\,\cos{\tfrac{u_{2}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{2}}{2}}\,\tau_{1}\,)]^{-1}\otimes\,\gamma^{\tilde{\mu}}$$ $$\Gamma^{a1(2)}=(\,\cos{\tfrac{u_{1(2)}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{1(2)}}{2}}\,\tau_{1}\,)^{-1}\otimes\,\gamma^{\tilde{a}1(2)}$$ $$\{\Gamma^{M},\Gamma^{N}\}=2\,g^{MN}\,,\qquad\{\gamma^{\tilde{M}},\gamma^{\tilde{N}}\}=2\,\eta^{\tilde{M}\tilde{N}}\,,\qquad M,N=\mu,a1,a2$$ $$S_{Lfk}=\int\sqrt{g}\,d^{D}x\,{\cal L}_{fk}=\int\Omega_{1}^{2n}(y_{1})\,\Omega_{2}^{2m}(z_{1})\,\sqrt{\tilde{\tilde{g}}}\,d^{D}x\,{\cal L}^{\prime}_{fk}$$ (38) $$\mbox{where}\quad{\cal L}^{\prime}_{fk}=i\bar{\psi}\Gamma^{\prime\mu}\partial_{\mu}\psi+i\bar{\psi}\Gamma^{\prime a1}\partial_{a1}\psi+i\bar{\psi}\Gamma^{\prime a2}\partial_{a2}\psi$$
$$\Gamma^{\prime\mu}=\Omega_{1}\Omega_{2}\,\Gamma^{\mu}=\cos{u_{1}}\cos{u_{2}}\,[(\,\cos{\tfrac{u_{1}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{1}}{2}}\,\tau_{1}\,)(\,\cos{\tfrac{u_{2}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{2}}{2}}\,\tau_{1}\,)]^{-1}\otimes\,\gamma^{\tilde{\mu}}$$ $$\Gamma^{\prime a1(2)}=\Omega_{1(2)}\,\Gamma^{a1(2)}=\cos{u_{1(2)}}\,(\,\cos{\tfrac{u_{1(2)}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{1(2)}}{2}}\,\tau_{1}\,)^{-1}\otimes\,\gamma^{\tilde{a}1(2)}$$ where the $\gamma^{\tilde{A}}$ are the usual gamma matrices corresponding to $\eta_{MN}$ in (33), $\tau_{1}$, $\tau_{3}$ are the Pauli matrices, and $\otimes$ denotes the tensor product. Notice that the number of spinor components of the fermions, and hence the size of the gamma matrices, is doubled by the introduction of the Pauli matrices in (35) and (38). This choice is more advantageous than gamma matrices containing the standard vielbeins involving $\sqrt{g_{MN}}\propto\sqrt{\Omega}=\sqrt{\cos{u_{1(2)}}}$, since $\sqrt{\cos{u_{1(2)}}}$ is ill-defined under (30) while the gamma matrices introduced above pose no such problem. One notices that (38) is multiplied by $\pm\,i$ under $$u_{1(2)}\rightarrow\,\pi-u_{1(2)}$$ (39) so the integrand of (38) is neither odd nor even under (39), in contrast to the scalar case (31). Hence at first sight it seems that the cancellation mechanism employed here fails for the extra-dimensional contribution of the fermionic kinetic term. However, one notices that $\cos{u_{1(2)}}\,(\,\cos{\tfrac{u_{1(2)}}{2}}\,\tau_{3}\,+\,i\,\sin{\tfrac{u_{1(2)}}{2}}\,\tau_{1}\,)^{-1}$ is odd under the parity operation about the point $u_{1(2)}=\pi$ defined by $$u_{1(2)}\rightarrow\,2\pi-u_{1(2)}$$ (40) while the other factors in (38) are even under (40), so that $S_{Lfk}$ vanishes after integration.
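The algebra behind the doubled gamma matrices is compact: writing $M(u)=\cos\frac{u}{2}\,\tau_{3}+i\sin\frac{u}{2}\,\tau_{1}$, the Pauli identities give $M(u)^{2}=\cos u\cdot\mathbb{1}$, which is exactly the warp factor $\Omega$, so $\Gamma=M^{-1}\otimes\gamma$ reproduces $\{\Gamma^{M},\Gamma^{N}\}=2\,g^{MN}$ without square roots of the metric. A quick NumPy verification (our illustration, not part of the paper):

```python
import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrix tau_1
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli matrix tau_3

def M(u):
    # The 2x2 factor whose inverse enters Gamma = M(u)^{-1} (tensor) gamma.
    return np.cos(u / 2) * tau3 + 1j * np.sin(u / 2) * tau1

for u in (0.3, 1.1, 2.5):
    # M(u)^2 = cos(u) * identity, i.e. exactly the warp factor Omega, so
    # {Gamma^M, Gamma^N} = 2 Omega^{-1} eta^{MN} = 2 g^{MN} follows.
    assert np.allclose(M(u) @ M(u), np.cos(u) * np.eye(2))
```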
Therefore, if a fermion lives in the whole bulk, its contribution to the vacuum energy (and hence to the cosmological constant) is zero if we adopt spaces (of the form of (17) and (26)) whose volume elements are odd under spacetime reflections of some of the extra dimensions. If one wants to avoid this result, the fermions must be confined to a subspace where (38) is invariant under (12) and (13); that is, the fermions must be localized in the directions (e.g. $y_{1}$ and/or $z_{1}$ in (21)) along which the volume element is odd under reflections, so that the fermions live in a $4(n+m)-1$ or $4(n+m)-2$ dimensional subspace of the bulk. Now we want to point out the relation between this scheme and the E-parity model of Linde Linde . In Linde's model the total universe consists of two universes: the usual one and a ghost-particle universe. The corresponding action functional is taken as $$S=N\,\int\,d^{4}x\,d^{4}y\,\sqrt{g(x)}\sqrt{g(y)}\,\Big[\frac{M_{Pl}^{2}}{16\pi}\,R(x)+{\cal L}(\psi(x))-\frac{M_{Pl}^{2}}{16\pi}\,R(y)-{\cal L}(\hat{\psi}(y))\Big]$$ (41) where $\psi$ and $\hat{\psi}$ stand for the usual particles and the ghost particles, and $R(x)$, $R(y)$ are the scalar curvatures of the usual and the ghost parts of the universe, with metric tensors $g_{\mu\nu}$ and $\hat{g}_{\mu\nu}$, respectively. If one imposes the symmetry $$P:~g_{\mu\nu}\,\leftrightarrow\,\hat{g}_{\mu\nu}\,,\qquad P:~\psi\,\leftrightarrow\,\hat{\psi}$$ (42) then any constant which may be induced through the Lagrangians is canceled by the symmetry, so that the vacuum energy, and hence the cosmological constant, is zero. In this scenario the two universes are assumed to be non-interacting (which is a rather strong condition). Other variants and refinements of this model are proposed in Kaplan ; Moffat . However, the main idea of the scheme is preserved in these studies as well, so we do not consider them separately.
We think the symmetry proposed by Linde is related to the symmetry studied in this paper. The parity-odd part of the action functional in this paper is forbidden, or cancels out, depending on whether one simply imposes the symmetry or identifies it with reflection(s) in extra dimension(s). So, as far as the vanishing part of the action functional is concerned, the action functional in this study transforms as in Eq. (41). In the present scheme $$g_{\mu\nu}\rightarrow-\,g_{\mu\nu}\quad\mbox{implies}\quad\begin{array}[]{l}\mbox{if}~~R,{\cal L}\rightarrow\,-R,-{\cal L}~~\mbox{and}~~S\rightarrow\,-S~~\mbox{then}~~S=0\\ \mbox{if}~~R,{\cal L}\rightarrow\,-R,-{\cal L}~~\mbox{and}~~S\rightarrow\,S~~\mbox{then}~~S\neq\,0\\ \mbox{if}~~R,{\cal L}\rightarrow\,R,{\cal L}~~\mbox{and}~~S\rightarrow\,-S~~\mbox{then}~~S=0\\ \mbox{if}~~R,{\cal L}\rightarrow\,R,{\cal L}~~\mbox{and}~~S\rightarrow\,S~~\mbox{then}~~S\neq\,0\end{array}$$ (44) In other words, the vanishing of the cosmological constant is related to $S\rightarrow\,-S$ in the present study as well. As far as the cosmological constant is concerned, the conclusions of the two schemes are similar. The relation between the two schemes can be seen better if one considers two branes in a space respecting this symmetry. Let us consider a space whose metric tensor transforms as in (12), (13), whose volume element is odd under reflections in the $x_{4^{\prime}}$, $x_{4^{\prime\prime}}$ direction(s), each forming a closed line interval described by $S^{1}/Z_{2}$, with a metric of the form (17) where $\Omega_{1(2)}\,=\,\cos{|k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}|}$. Hence there are two branes (for each direction), located at $x_{4^{\prime}(4^{\prime\prime})}=0$ and at $k_{1(2)}\,x_{4^{\prime}(4^{\prime\prime})}=\pi$.
Then, under the transformation given in Eqs. (12) and (13), the two branes are interchanged and, for the terms in $R$ and ${\cal L}$ that are even under the symmetry (e.g. a cosmological constant), $S\rightarrow-S$, so that the contributions of the branes cancel each other after integration, in a way similar to Linde's model. Of course there are essential differences between the two models. The spacetime in Linde's model is 4-dimensional and its volume element is taken to be unaffected by the symmetry, while the spacetime in the present model is higher dimensional and its volume element is odd under the symmetry transformation. So in our model the parts of $R$, ${\cal L}$ which are even under the symmetry cancel out to keep the cosmological constant zero, while in Linde's model $R$ and ${\cal L}$ (or at least ${\cal L}$) are odd under the symmetry to make the cosmological constant zero. In Linde's model the symmetry is ad hoc, while in the present study the symmetry arises from $g_{AB}\rightarrow\,-g_{AB}$, which can be identified with a reflection symmetry in extra dimensions. In this paper we have studied the symmetry induced by reversal of the sign of the metric tensor. We have identified this symmetry with reflections in extra dimensions. In this way we may find higher-dimensional spaces which satisfy the symmetry and forbid both the bulk and the 4-dimensional cosmological constants. We have also discussed the relation between this symmetry and the E-parity symmetry of Linde. Another point worth mentioning is that throughout this study we take gravity to propagate in all the extra dimensions while the standard model particles are localized on a brane (or branes) in the bulk, so that the contributions of the curvature scalar and of the Lagrangian terms in the bulk which depend only on the extra dimensions cancel out, while the standard model effects survive. References [1] R. Erdem, A symmetry for vanishing cosmological constant in an extra dimensional toy model, Phys. Lett.
B 621, 11 (2005), hep-th/0410063
[2] S. Nobbenhuis, Categorizing Different Approaches to the Cosmological Constant Problem, Found. Phys. 36, (2006), gr-qc/0411093
[3] G. 't Hooft and S. Nobbenhuis, Invariance under complex transformations, and its relevance to the cosmological constant problem, arXiv: gr-qc/0602076
[4] S. Eidelman et al. (Particle Data Group), Phys. Lett. B 592, 1 (2004)
[5] S. Weinberg, Rev. Mod. Phys. 61, 1 (1989); S. Nobbenhuis, Categorizing Different Approaches to the Cosmological Constant Problem, Found. Phys. 36, (2006), gr-qc/0411093
[6] A.D. Linde, The Universe Multiplication and the Cosmological Constant Problem, Phys. Lett. B 200, 272 (1988); A.D. Linde, Inflation, Quantum Cosmology and the Anthropic Principle, in Science and Ultimate Reality, Editors J.D. Barrow et al., 426-458 (Cambridge Univ. Press, 2003)
[7] D.E. Kaplan, R. Sundrum, A Symmetry for the cosmological constant, arXiv: hep-th/0505265
[8] J.W. Moffat, Charge Conjugation Invariance of the Vacuum and the Cosmological Constant Problem, Phys. Lett. B 627, 9 (2005), hep-th/0507020
[9] After the appearance of the first version of the e-print of this paper (i.e. gr-qc/060380), the following papers studying additional implications of signature reversal have appeared on the arXiv: M.J. Duff and J. Kalkkinen, Signature reversal invariance, hep-th/0605273; M.J. Duff and J. Kalkkinen, Metric and coupling reversal in string theory, hep-th/0605274
[10] V.A. Rubakov, M.E. Shaposhnikov, Extra Space-Time Dimensions: Towards a Solution of the Cosmological Constant Problem, Phys. Lett. B 125, 139 (1983)
Two-dimensional short-range interacting attractive and repulsive Fermi gases at zero temperature Gianluca Bertaina gianluca.bertaina@zoho.com Institute of Theoretical Physics, Ecole Polytechnique Fédérale de Lausanne EPFL, CH-1015 Lausanne, Switzerland Abstract We study a two-dimensional two-component Fermi gas with attractive or repulsive short-range interactions at zero temperature. We use diffusion Monte Carlo with the fixed-node approximation to calculate the energy per particle and the opposite-spin pair distribution functions. We show the relevance of beyond-mean-field effects and verify the consistency of our approach by using Tan's Contact relations. 1 Introduction Low-dimensional configurations of degenerate Fermi and Bose gases are the object of many experimental and theoretical studies RMP , both as a means of simulating strongly correlated systems and as interesting systems per se, from a fundamental point of view. Many important areas of investigation on Fermi gases, already pursued in the three-dimensional (3D) case, are being extended to two-dimensional (2D) configurations, such as the BCS-BEC crossover in a superfluid gas with resonantly enhanced interactions and the possible onset of itinerant ferromagnetism in a gas with repulsive interactions. Other interesting phenomena that should be prominent in the 2D case are the Fulde-Ferrell-Larkin-Ovchinnikov superfluid state, the ultracold-gas analogue of the quantum Hall effect in the presence of non-abelian gauge fields, and the long-range correlations induced by dipolar interactions. In particular, 2D ultracold Fermi gases have received much attention in recent years, having been realized using highly anisotropic pancake-shaped potentials.
Density profiles of the clouds have been measured using in situ imaging Turlapov ; Orel ; the single-particle spectral function has been measured by means of rf spectroscopy Feld ; and the presence of many-body polaronic states and their transition to molecular states have been characterized koschorreck . On the theoretical side, the solution of the BCS equations for the 2D attractive Fermi gas was first investigated in Miyake83 and later in Randeria . The perturbative analysis of the 2D repulsive Fermi gas was performed in Bloom75 . Recent experiments are in a regime where beyond-mean-field contributions become relevant. In this respect, many results are now available, especially for the highly imbalanced case polaron , both for the ground state and for the so-called Upper Branch (UB). The 2D case is particularly challenging from the point of view of theory, being a marginal case in field theories, in which the leading dependence of the observables on the coupling is non-algebraic. Recently we obtained the first determination, using quantum Monte Carlo methods, of the equation of state at $T=0$ of a balanced homogeneous 2D Fermi gas in the BCS-BEC crossover bertaina . This equation of state has been compared favourably to experiments using the local density approximation Orel , has been used to discuss recent results on collective modes in a 2D Fermi gas collective , and to assess the temperature dependence of the Contact in 2D trapped gases Frohlich2012 . In this article we report new results concerning the equation of state of the repulsive gas and its density-density correlation function. Moreover, we emphasize the need for a consistent choice of the coupling constants and of the coefficients of the beyond-mean-field contributions to the energy, both in the strongly interacting molecular regime and in the weakly interacting Fermi-liquid regime.
In Section 2 we introduce the model potentials and discuss the trial nodal surfaces used in the Quantum Monte Carlo method; in Section 3 we present the equation of state of the weakly attractive or repulsive gases, show results for the Upper Branch of the attractive gas and discuss the equation of state of the composite bosons in the molecular regime; finally, in Section 4 we discuss the extraction of the Contact from the density-density correlation function and show results for the Contact of the repulsive gas. 2 Method We consider a homogeneous two-component Fermi gas in 2D described by the Hamiltonian $$H=-\frac{\hbar^{2}}{2m}\left(\sum_{i=1}^{N_{\uparrow}}\nabla^{2}_{i}+\sum_{i^{\prime}=1}^{N_{\downarrow}}\nabla^{2}_{i^{\prime}}\right)+\sum_{i,i^{\prime}}V(r_{ii^{\prime}})\;,$$ (1) where $m$ denotes the mass of the particles, $i,j,\dots$ and $i^{\prime},j^{\prime},\dots$ label, respectively, spin-up and spin-down particles, and $N_{\uparrow}=N_{\downarrow}=N/2$, $N$ being the total number of atoms. We model the interspecies interatomic interactions using three different types of model potentials: an attractive square-well (SW) potential $V(r)=-V_{0}$ for $r<R_{0}$ ($V_{0}>0$), and $V(r)=0$ otherwise; a repulsive soft-disk (SD) potential $V(r)=V_{1}$ for $r<R_{1}$ ($V_{1}>0$), and $V(r)=0$ otherwise; and a hard-disk (HD) potential $V(r)=\infty$ for $r<R_{2}$ and $V(r)=0$ otherwise.
Due to the logarithmic dependence on energy of the phase shifts in 2D, different definitions of the scattering length have been used Petrov03 ; we define the scattering length $a_{2D}=R_{2}$ in the case of the HD potential, so that for the SD potential one gets $a_{2D}=R_{1}\;e^{-I_{0}(\kappa_{1})/[\kappa_{1}I_{1}(\kappa_{1})]}$ and for the SW potential $a_{2D}=R_{0}\;e^{J_{0}(\kappa_{0})/[\kappa_{0}J_{1}(\kappa_{0})]}$, where $J_{0(1)}$ and $I_{0(1)}$ are the Bessel and modified Bessel functions of the first kind and $\kappa_{0}=\sqrt{V_{0}mR_{0}^{2}/\hbar^{2}}$, $\kappa_{1}=\sqrt{V_{1}mR_{1}^{2}/\hbar^{2}}$. In order to ensure the diluteness of the attractive gas we use $nR_{0}^{2}=10^{-6}$, where $n$ is the gas number density. The Fermi wave vector is defined as $k_{F}=\sqrt{2\pi n}$, and provides the energy scale $\varepsilon_{F}=\hbar^{2}k_{F}^{2}/2m$. In order to ensure universality in the repulsive case we check that the results for the HD potential and for the SD potential with different values of $R_{1}$ are compatible. For the attractive SW potential in 2D the scattering length is always non-negative and diverges at vanishing depth and at the zeros of $J_{1}$, corresponding to the appearance of new two-body bound states in the well. Therefore, for a SW potential a bound state is always present, no matter how small the attraction is. The shallow dimers have size of order $a_{2D}$ and their binding energy is given by $\varepsilon_{b}=-4\hbar^{2}/(ma_{2D}^{2}e^{2\gamma})$, where $\gamma\simeq 0.577$ is the Euler-Mascheroni constant. A different and widely used definition of the 2D scattering length is $b=a_{2D}e^{\gamma}/2$, such that $\varepsilon_{b}=-\hbar^{2}/mb^{2}$, analogously to the 3D case. The dependence of $a_{2D}$ on the depth $V_{0}$ in the region where the well supports only one bound state is shown in Fig. 1; the dashed line indicates the value of the scattering length at which the bare molecules have a size comparable to the mean interparticle distance $1/k_{F}$.
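For concreteness, the scattering-length expressions above can be evaluated numerically. The following sketch (Python with NumPy/SciPy, in units where $\hbar^{2}/m=1$; the function names are ours, for illustration only) uses exponentially scaled Bessel functions to avoid overflow at large potential strength:

```python
import numpy as np
from scipy.special import j0, j1, i0e, i1e

def a2d_square_well(R0, V0):
    """2D scattering length of an attractive square well of depth V0 > 0:
    a_2D = R0 * exp[ J0(k0) / (k0 J1(k0)) ], with k0 = sqrt(V0) * R0."""
    k0 = np.sqrt(V0) * R0
    return R0 * np.exp(j0(k0) / (k0 * j1(k0)))

def a2d_soft_disk(R1, V1):
    """2D scattering length of a repulsive soft disk of height V1 > 0:
    a_2D = R1 * exp[ -I0(k1) / (k1 I1(k1)) ]. The ratio I0/I1 is computed
    with the exponentially scaled functions i0e/i1e so large k1 is safe."""
    k1 = np.sqrt(V1) * R1
    return R1 * np.exp(-i0e(k1) / (k1 * i1e(k1)))

def binding_energy(a2d):
    """Dimer binding energy eps_b = -4 / (a_2D^2 e^{2 gamma})."""
    return -4.0 / (a2d**2 * np.exp(2.0 * np.euler_gamma))
```

Consistently with the text, a very tall soft disk recovers the hard-disk limit $a_{2D}\to R_{1}$, while a shallow square well yields an exponentially large $a_{2D}$ and hence an exponentially small $|\varepsilon_{b}|$.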
The region $k_{F}a_{2D}\gg 1$ corresponds to the BCS regime, where interactions are weak and dimers are large and weakly bound, while $k_{F}a_{2D}\ll 1$ corresponds to the BEC regime of tightly bound composite bosons. The regime in which the scattering length diverges and a bound state appears (unitarity limit) is trivial in 2D, because it corresponds to the non-interacting case; instead, the resonant regime corresponds to the region $k_{F}a_{2D}\sim 1$. This can be seen at the two-body level by considering the low-energy scattering amplitude $f(k)=2\pi/[\log(2/(ka_{2D}e^{\gamma}))+i\pi/2]$ Petrov03 , which is enhanced for $k\sim 1/a_{2D}$ (with logarithmic accuracy). For the repulsive SD potential the scattering length is always positive and smaller than $R_{1}$, like in 3D (see Fig. 1). Although real physical potentials always have an attractive part, the purely repulsive models that we use can be useful in describing the cases in which the scattering length is positive and small compared to the mean interparticle distance, when it is possible to prepare metastable repulsive gas-like states without a significant production of molecules koschorreck . 2.1 Monte Carlo simulations We use the fixed-node diffusion Monte Carlo (FN-DMC) method. This numerical technique solves the many-body Schrödinger equation by an imaginary-time projection of an initial guess of the wavefunction. Provided that the initial guess has a finite overlap with the true ground state, this method provides the exact energy of a system of bosons, with a well-controlled statistical error. For fermions, FN-DMC yields an upper bound for the ground-state energy of the gas, resulting from an ansatz for the nodal surface of the many-body wave function that is kept fixed during the calculation (see Refs. QMC ).
The fixed-node condition is enforced using an initial and guiding trial function that we choose of the standard form $\psi_{T}({\bf R})=\Phi_{S}({\bf R})\Phi_{A}({\bf R})$, namely the product of a purely symmetric and a purely antisymmetric term. $\Phi_{A}$ satisfies the fermionic antisymmetry condition and determines the nodal surface of $\psi_{T}$, while $\Phi_{S}$ is a positive function of the particle coordinates and is symmetric in the exchange of particles with equal spin (Jastrow function). Two opposite regimes are described by the $\Phi_{A}$ component. The deep attractive BEC regime, where the opposite-spin fermions are expected to pair into a condensate of dimers, can be described by an antisymmetrized product $\Phi_{A}({\bf R})={\cal A}\left(\phi(r_{11^{\prime}})\phi(r_{22^{\prime}})\cdots\phi(r_{N_{\uparrow}N_{\downarrow}})\right)$ of pairwise orbitals $\phi$ corresponding to the two-body bound state of the potential $V(r)$. This wavefunction was proposed by Leggett as a projection of the grand-canonical BCS function onto a state with a finite number of particles, later extended to the polarized case Bouchaud88 and extensively used in 3D Quantum Monte Carlo simulations QMC . The weakly interacting regime, where a Fermi liquid description is expected to be valid, can instead be described by a typical Jastrow-Slater (JS) function with $\Phi_{A}({\bf R})=D_{\uparrow}(N_{\uparrow})D_{\downarrow}(N_{\downarrow})$, namely the product of the plane-wave Slater determinants for spin-up and spin-down particles. This description is expected to hold both in the weakly repulsive branch and in the attractive BCS regime of a weakly interacting gas, where the effect of pairing on the ground-state energy is negligible. The symmetric part is chosen of the Jastrow form $\Phi_{S}({\bf R})=J_{\uparrow\downarrow}({\bf R})J_{\uparrow\uparrow}({\bf R})J_{\downarrow\downarrow}({\bf R})$.
The diluteness of the gas allows us to consider just two-body Jastrow functions, with $J_{\uparrow\downarrow}({\bf R})=\prod_{i,i^{\prime}}f_{\uparrow\downarrow}(r_{ii^{\prime}})$, $J_{\uparrow\uparrow}({\bf R})=\prod_{i<j}f_{\uparrow\uparrow}(r_{ij})$, $J_{\downarrow\downarrow}({\bf R})=\prod_{i^{\prime}<j^{\prime}}f_{\downarrow\downarrow}(r_{i^{\prime}j^{\prime}})$, where two-body correlation functions of the interparticle distance have been introduced, which aim at reducing the statistical noise by fulfilling the cusp conditions, namely the exact behavior that is expected when two particles come close together. In particular, in the weakly interacting attractive or repulsive regimes we set $f_{\downarrow\downarrow}=f_{\uparrow\uparrow}=1$, while $f_{\uparrow\downarrow}$ is set equal to the analytical ground-state solution of the two-body Schrödinger equation in the center-of-mass frame, with the bare two-body potentials. In the strongly attractive regime, instead, $f_{\uparrow\downarrow}=1$, since the opposite-spin short-range correlation is already accounted for in the BCS orbitals; for the parallel-spin correlations we use the ground-state solution of an effective two-body problem, consisting of a SD interaction with scattering length $a_{eff}=0.6a_{2D}$, motivated by the expected interaction between molecules Petrov03 . This choice greatly reduces the variance of the sampled energy. In this paper we also study the Upper Branch in the strongly attractive regime, which corresponds to a (metastable) gas of repulsive fermions; being a many-body excited state, we only perform a Variational Monte Carlo (VMC) simulation, without imaginary-time projection. In this case the wavefunction is taken of the JS type, with the $f_{\uparrow\downarrow}$ functions equal to the first excited state of the two-body SW problem, aiming at enforcing orthogonality to the molecular ground state. The nodal surface in this case arises also from the “Jastrow” function.
This unusual non-positive-definite factor has already been used in the context of 3D Fermi gases for the investigation of itinerant ferromagnetism Pilati2010 . Simulations are carried out in a square box of area $L^{2}=N/n$ with periodic boundary conditions (PBC). In order to fulfill these conditions, all radially symmetric two-body functions have zero derivative at $r=L/2$, both in the Jastrow factors and in the molecular orbitals of the BCS wavefunction; this is obtained by smoothly matching each of the previously described functions to a sum of exponentials $f_{e}(r)=c_{1}+c_{2}[\exp(-\mu(L-r))+\exp(-\mu r)]$ at some healing length $\bar{R}$, with $\mu$ and $\bar{R}$ parameters to be optimized. The PBC also select a specific set of compatible plane waves to be used in the JS wavefunction. It is known that finite-size effects of the JS wavefunction can be strongly suppressed by considering numbers of fermions which provide a maximally symmetric Fermi surface (closed shells): we choose $N/2=13$ and $N/2=49$. Moreover, we add a correction which can be justified with the theory of Fermi liquids (see Ceperley1987 ), namely the energy difference between the finite and infinite system is assumed to be the same as for the noninteracting case. Since this cannot ensure the elimination of all finite-size effects (and in particular we do not consider the role of the effective mass), we use this correction also to assess the error bars, on top of the statistical error. No significant finite-size effect is seen when using the BCS trial function. 3 Energy 3.1 Weakly interacting regime The Fermi liquid theory (FL) prediction for the zero-temperature equation of state of a weakly short-range interacting 2D gas Bloom75 is the following: $$E_{FL}/N=E_{FG}\left[1+2g+(3-4\log 2)g^{2}\right]\;,$$ (2) where $E_{FG}=\hbar^{2}k_{F}^{2}/4m=\varepsilon_{F}/2$ is the energy per particle of the noninteracting gas and $g$ is the coupling constant.
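As an aside, the closed-shell fillings $N/2=13$ and $N/2=49$ quoted above can be checked by enumerating the plane waves compatible with the PBC of a square box; this short sketch (illustrative Python, not part of the original text) accumulates the degeneracies of the $|\mathbf{k}|^{2}$ shells:

```python
import itertools
from collections import Counter

# PBC momenta are k = (2*pi/L) * (nx, ny); shells are labeled by nx^2 + ny^2
nmax = 8
degeneracy = Counter(nx * nx + ny * ny
                     for nx, ny in itertools.product(range(-nmax, nmax + 1), repeat=2))

closed_shells = []
filled = 0
for shell in sorted(degeneracy):
    filled += degeneracy[shell]          # fill every state of this shell
    closed_shells.append(filled)
# first closed-shell fillings: 1, 5, 9, 13, 21, 25, ... (13 and 49 both occur)
```

Any of these cumulative counts corresponds to a maximally symmetric Fermi sea; 13 and 49 are simply two convenient system sizes among them.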
The peculiar logarithmic dependence of the 2D scattering amplitude on the available kinetic energy leads to an arbitrariness in the dependence of the 2D coupling constant $g=1/\log(E_{A}/E_{K})$ on a reference energy $E_{K}$, the parameter $E_{A}$ being specific to the chosen potential. When finite-range effects are negligible, $E_{A}=4\hbar^{2}/(ma_{2D}^{2}e^{2\gamma})$, which is equal to $|\varepsilon_{b}|$ in the case of an attractive potential. Since we consider a weakly interacting Fermi liquid, we make the appropriate choice $E_{K}=2\varepsilon_{F}$. Therefore we use the coupling constant $g=-1/\log(na_{2D}^{2}c_{0})=-1/[2\log(k_{F}b)]$ with $c_{0}=\pi e^{2\gamma}/2$. An important remark is in order: while the coefficient of the first-order term is independent of $c_{0}$, the reported value of the coefficient of $g^{2}$ depends on the chosen $c_{0}$, since a modification of $c_{0}$ gives rise to higher-order contributions to the equation of state. In Fig. 2 we show the FN-DMC results with the JS nodal surface. Negative couplings $g<0$ correspond to the ground state in the weakly attractive regime, while positive couplings correspond either to the ground state of the HD potential or to the Upper Branch of the attractive system. In the inset we present the energy, while in the main figure we show the energy with the mean-field result $E_{FG}(1+2g)$ subtracted, in order to emphasize beyond mean-field contributions. Although the three series of data clearly confirm the mean-field prediction, interestingly different behaviors are found when checking the residual contributions to the energy. In the $g<0$ regime we fit the coefficient $A_{2}$ of the second-order term, obtaining $A_{2}=0.69(4)$, to be compared to $3-4\log 2=0.227$. We find $A_{2}\approx 3-4\log 2$ only if we set $c_{0}=2\pi$ (as was done in bertaina ).
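The equivalence of the two expressions for the coupling, and the Fermi-liquid equation of state (2), can be collected in a short numerical sketch (illustrative Python; energies in units of $E_{FG}$, with $n=k_{F}^{2}/2\pi$):

```python
import numpy as np

GAMMA = np.euler_gamma
C0 = np.pi * np.exp(2.0 * GAMMA) / 2.0   # c_0 = pi e^{2 gamma} / 2

def coupling(kF_a2d):
    """g = -1/log(n a_2D^2 c_0) = -1/[2 log(kF b)], with b = a_2D e^gamma / 2."""
    n_a2 = kF_a2d**2 / (2.0 * np.pi)     # n a_2D^2 from kF = sqrt(2 pi n)
    return -1.0 / np.log(n_a2 * C0)

def e_fermi_liquid(g, A2=3.0 - 4.0 * np.log(2.0)):
    """Energy per particle in units of E_FG, eq. (2). A2 is the second-order
    coefficient, whose value depends on the convention chosen for c_0
    (3 - 4 log 2 is the perturbative value quoted in the text)."""
    return 1.0 + 2.0 * g + A2 * g**2
```

With the default `A2` this is the perturbative prediction; refitting `A2` to data, as done in the text, probes the beyond mean-field contribution for a given convention.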
On the opposite side, $g>0$, the Upper Branch (UB-VMC) results lie close to the second-order fit obtained from the $g<0$ data, at least for small $g$. The residual discrepancy is due to the lack of optimization of the UB wavefunction, which is problematic since there is no variational principle for this state. The JS results for the SD potential with $R_{1}=2a_{2D}$ are compatible with those for the HD potential. We also perform simulations with shallower SD potentials ($R_{1}=4a_{2D}$), which result in departures from universality at $g\approx 0.25$. Agreement between eq. (2) and the HD JS-DMC results is found up to $g\approx 0.1$, which corresponds to $na^{2}\approx 10^{-3}$. Beyond this coupling, the beyond mean-field contributions in this paramagnetic phase start to decrease. In a related system a ferromagnetic transition has been excluded Drummond2011 ; further investigation of the strongly repulsive regime is however beyond the scope of this paper. The intriguing difference in the beyond mean-field terms between the attractive case and the purely repulsive case would also deserve further investigation, hopefully with an analytical treatment. 3.2 BCS-BEC crossover We now turn to the main result published in bertaina , namely the characterization of the composite-boson equation of state in the 2D BCS-BEC crossover. The 2D mean-field BCS equations can be solved analytically Miyake83 ; Randeria along the BCS-BEC crossover, in terms of the interaction coefficient $x=|\varepsilon_{b}|/2\varepsilon_{F}$. For the BCS order parameter one obtains $\Delta=2\varepsilon_{F}\sqrt{x}$, while for the energy per particle one obtains $E/N=E_{FG}(1-2x)$ and for the chemical potential $\mu=\varepsilon_{F}(1-x)$.
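These closed-form mean-field relations are easy to tabulate; the following sketch (illustrative Python) returns the gap, the energy per particle and the chemical potential in the natural units used above:

```python
import numpy as np

def bcs_mean_field_2d(x):
    """2D mean-field BCS solution versus x = |eps_b| / (2 eps_F).
    Returns (Delta / eps_F, (E/N) / E_FG, mu / eps_F)."""
    x = np.asarray(x, dtype=float)
    delta = 2.0 * np.sqrt(x)    # order parameter Delta = 2 eps_F sqrt(x)
    e_per_n = 1.0 - 2.0 * x     # energy per particle E/N = E_FG (1 - 2x)
    mu = 1.0 - x                # chemical potential mu = eps_F (1 - x)
    return delta, e_per_n, mu
```

At $x=0$ the non-interacting values are recovered, and the chemical potential changes sign at $x=1$, consistently with the crossover picture described next.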
For small binding energy one recovers the non-interacting limit; when $x\sim 1$, the chemical potential becomes zero and then negative, so that the role of the dimers becomes more important; for very strong binding the chemical potential of the fermions equals half the binding energy of a molecule, so the system behaves as a gas of non-interacting bosons. Although very useful for providing a global self-consistent picture and for setting a stringent variational upper bound to the energy per particle, the BCS solution fails in several respects. In the BCS regime it neglects the Hartree-Fock contributions to the energy (discussed in the previous Section), which are dominant, since the gap is small. In the BEC regime it misses the correct interaction energy between the bosons. In general, it is not able to reproduce the logarithmic dependence of the energy on the density, which is typical of 2D. In Fig. 3 we show the FN-DMC results (from bertaina ) for the equation of state of a short-range attractively interacting 2D gas as a function of the interaction parameter $\eta=\log(k_{F}a_{2D})$, in units of $E_{FG}$ and with $\varepsilon_{b}/2$ subtracted. The BCS wavefunction provides a lower energy for values of the interaction parameter $\eta\lesssim 1$, while the JS function is more favorable for larger values of $\eta$. In the deep BEC regime, besides the molecular contribution, the remaining fraction of the energy corresponds to the interaction energy of the bosonic dimers. In the BEC regime the FN-DMC results are fitted with the equation of state of a 2D gas of composite bosons. The functional form is derived from the analysis of Beane Bose2D , in the framework of quantum field theory; it is also consistent with previous analytical and DMC studies (see Bose2D and references therein).
Following Beane, we introduce the running coupling $g_{d}(\lambda)$ in terms of the scattering length of the bosons $a_{d}$ and the particle density of the bosons $n_{d}$: $g_{d}(\lambda)=-1/\log{(n_{d}\lambda\pi^{2}e^{2\gamma}a_{d}^{2})}$, where $\lambda$ is an arbitrary dimensionless cutoff parameter, which is present due to the truncation of the perturbative expansion; it is the analogue of the $c_{0}$ parameter discussed in Subsection 3.1. In the following, $m_{d}$ is the mass of the bosons. Up to second order in the running coupling, one can express the energy density in the following way: $${\cal E}=\frac{2\pi\hbar^{2}n_{d}^{2}}{m_{d}}g_{d}(\lambda)\Bigg[1+g_{d}(\lambda)\Bigl(\log{(\pi g_{d}(\lambda))}-\log(\lambda\pi^{2})+\frac{1}{2}\Bigr)\Bigg].$$ (3) It is evident from the above expression that fixing the scattering length $a_{d}$ which appears in the definition of $g_{d}(\lambda)$ is not sufficient for determining the energy density if one does not also declare the choice of $\lambda$, that is, the form of the coupling constant. Notice again that the coefficient of the second-order term does depend on the choice of $\lambda$. A convenient choice that simplifies the expression is $\lambda=e^{-2\gamma}/\pi^{2}$, so that we can introduce the coupling $g_{d}=-1/\log{(n_{d}a_{d}^{2})}$ and we obtain $${\cal E}=\frac{2\pi\hbar^{2}n_{d}^{2}}{m_{d}}g_{d}\Bigg[1+g_{d}\Bigl(\log{(\pi g_{d})}+2\gamma+\frac{1}{2}\Bigr)\Bigg].$$ (4) Now let us consider the case when the bosons are dimers, consisting of two paired fermions with mass $m=m_{d}/2$ and particle density $n=2n_{d}$. There must exist a regime where the binding is so tight that the equation of state of such composite bosons is the same as in the case of point-like bosons, with the simple replacement $a_{d}\to\alpha a_{2D}$.
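Before specializing to dimers, the statement that the choice of $\lambda$ only reshuffles terms between orders can be checked numerically; in this sketch (illustrative Python, units with $\hbar^{2}/m_{d}=1$) eq. (3) with $\lambda=e^{-2\gamma}/\pi^{2}$ reproduces eq. (4) exactly, while other choices of $\lambda$ differ only at higher order in the coupling:

```python
import numpy as np

GAMMA = np.euler_gamma

def energy_density(n_d, a_d, lam):
    """Second-order bosonic energy density, eq. (3), with hbar^2/m_d = 1."""
    g = -1.0 / np.log(n_d * lam * np.pi**2 * np.exp(2.0 * GAMMA) * a_d**2)
    return 2.0 * np.pi * n_d**2 * g * (
        1.0 + g * (np.log(np.pi * g) - np.log(lam * np.pi**2) + 0.5))

def energy_density_fixed(n_d, a_d):
    """Same with lambda = e^{-2 gamma} / pi^2, i.e. eq. (4)."""
    g = -1.0 / np.log(n_d * a_d**2)
    return 2.0 * np.pi * n_d**2 * g * (
        1.0 + g * (np.log(np.pi * g) + 2.0 * GAMMA + 0.5))
```

For a dilute gas (small $g_{d}$) the residual $\lambda$-dependence of the truncated series is a third-order effect.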
Let us therefore introduce $\eta=\log{(k_{F}a_{2D})}$ in the previous expression, so that the composite-boson coupling turns out to be $g_{d}=-1/\log{(n\alpha^{2}a_{2D}^{2}/2)}=1/(\log{4\pi}-2\eta-2\log\alpha)$. In such a situation the energy per fermion can be written in the following way: $$\frac{E}{N_{F}}=-\frac{|\varepsilon_{b}|}{2}+\frac{\varepsilon_{F}}{2}\frac{1}{2}g_{d}\Bigg[1+g_{d}\Bigl(\log{(\pi g_{d})}+2\gamma+\frac{1}{2}\Bigr)\Bigg],$$ (5) where $-|\varepsilon_{b}|/2$ is the contribution from the binding energy of the dimers. From a fit of this expression we obtain $a_{d}=0.55(4)a_{2D}$, in agreement with the four-body calculation in Ref. Petrov03 . In order to exemplify the importance of the second-order expansion (5), in Fig. 3 we also show the leading linear contribution alone, with the same choice of the coupling constant. Without the second-order term it would be impossible to determine $a_{d}$ accurately. To our knowledge, a theoretical analysis of the running coupling of a 2D fermionic non-relativistic fluid, analogous to the one by Beane Bose2D for the bosonic counterpart, is still lacking. It would be highly useful for interpreting the results of Subsection 3.1, putting the comparison between the Quantum Monte Carlo data and the Fermi liquid equation of state on firmer grounds. The solid curve in Fig. 3 is a global fit of the data; the function to be fitted is a piecewise-defined function of $\eta$, the matching point being a fitting parameter and the two matched functions being ratios of second-order polynomials of $\eta$, such that the global function is continuous at the matching point up to the second derivative and the extreme regimes coincide with the known perturbative results. 4 Contact parameter The Contact parameter $C$ is a property of short-range interacting gases, measuring the amount of pairing between the particles and relating a large number of observables to each other Tan08 .
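Collecting the definitions above, the composite-boson equation of state (5) can be evaluated as follows (illustrative Python; energies in units of $\varepsilon_{F}$, using $|\varepsilon_{b}|/\varepsilon_{F}=8e^{-2\gamma-2\eta}$, which follows from the expressions of Section 2):

```python
import numpy as np

GAMMA = np.euler_gamma

def g_dimer(eta, alpha=0.55):
    """Composite-boson coupling g_d = 1/(log(4 pi) - 2 eta - 2 log alpha);
    alpha = a_d / a_2D = 0.55 is the fitted value quoted in the text."""
    return 1.0 / (np.log(4.0 * np.pi) - 2.0 * eta - 2.0 * np.log(alpha))

def energy_per_fermion(eta, alpha=0.55):
    """Eq. (5) in units of eps_F: -|eps_b|/(2 eps_F) + (1/4) g_d [1 + ...]."""
    eb_over_eF = 8.0 * np.exp(-2.0 * GAMMA - 2.0 * eta)   # |eps_b| / eps_F
    gd = g_dimer(eta, alpha)
    bracket = 1.0 + gd * (np.log(np.pi * gd) + 2.0 * GAMMA + 0.5)
    return -eb_over_eF / 2.0 + 0.25 * gd * bracket
```

In the deep BEC regime ($\eta$ large and negative) the residual energy above $-|\varepsilon_{b}|/2$ is the small, positive interaction energy of the dimers.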
For example, it can be obtained from the derivative of the equation of state with respect to the coupling parameter, $C=(2\pi m/\hbar^{2})\,d(nE/N)/d(\log k_{F}a_{2D})$, or from the short-range behavior of the antiparallel pair distribution function, $g_{\uparrow\downarrow}(r)\underset{r\to 0}{\to}4C/k_{F}^{4}\log^{2}(r/a_{2D})$ Werner (where $r\to 0$ means $R<r\ll 1/k_{F}$, $R$ being the range of the potential). We calculate the $g_{\uparrow\downarrow}$ correlation function along the crossover using FN-DMC and extract the Contact by means of the following more refined formula, which better describes the behavior at small $r$ (see Fig. 4): $$g_{\uparrow\downarrow}(r)\underset{r\to 0}{\to}\frac{4C}{k_{F}^{4}}\left[-\log\left(\frac{r}{a_{2D}}\right)\left(1+\left(\frac{r}{a_{2D}e^{\gamma}}\right)^{2}\right)+\left(\frac{r}{a_{2D}e^{\gamma}}\right)^{2}\right]^{2}\;.$$ (6) The additional terms permit a more precise determination of the Contact in the deep BEC regime, where the peak in $g_{\uparrow\downarrow}$ at $r\to 0$ is very narrow, since $a_{2D}$ is quite small with respect to the mean interparticle distance. The results for the BCS-BEC crossover have already been presented in Fig. 4 of Ref. bertaina , where they were also compared to the Contact extracted from the derivative of the global fit to the energy. The overall agreement between the two determinations of $C$ is a useful check of the accuracy of the trial wavefunctions used in the FN-DMC approach. Small deviations in the region $\log(k_{F}a_{\text{2D}})\sim 1$ point out the need for a better optimization of the trial wavefunctions, indicating that in the resonance region the JS wavefunction possesses too little pairing and the BCS wavefunction too much. In Fig. 4 we also demonstrate the determination of $C$ from the $g_{\uparrow\downarrow}$ of the repulsive HD gas. The short-range details are model-dependent; nevertheless, it is still possible to find an intermediate region which is universal.
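The fitting model of eq. (6), and its reduction to the leading logarithmic behavior at short distance, can be sketched as follows (illustrative Python; lengths in units of $1/k_{F}$, with $C/k_{F}^{4}$ as the single fit parameter):

```python
import numpy as np

GAMMA = np.euler_gamma

def g_updown_leading(r, a2d, C_over_kF4):
    """Leading short-range behavior: 4 C / kF^4 * log^2(r / a_2D)."""
    return 4.0 * C_over_kF4 * np.log(r / a2d) ** 2

def g_updown_refined(r, a2d, C_over_kF4):
    """Refined short-range model of eq. (6); the extra (r / a_2D e^gamma)^2
    terms matter when the peak of g_updown is narrow (deep BEC regime)."""
    x2 = (r / (a2d * np.exp(GAMMA))) ** 2
    return 4.0 * C_over_kF4 * (-np.log(r / a2d) * (1.0 + x2) + x2) ** 2
```

For $r\ll a_{2D}$ the two expressions coincide, which is why the leading-order form suffices for the repulsive gas discussed next.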
In this case the leading-order formula for $g_{\uparrow\downarrow}$ is used, since eq. (6) is valid only for $r\ll a_{2D}$, which corresponds to the non-universal region for the repulsive gas. In Fig. 5 we compare $C$ extracted from $g_{\uparrow\downarrow}$ to its analytical expression obtained from eq. (2): $C/k_{F}^{4}=[1+(3+4\log 2)g]g^{2}$. Similarly to Fig. 2, deviations appear around $g=0.2$. 5 Conclusions To conclude, we have detailed the functional forms needed to accurately extract useful information from the FN-DMC simulations of 2D Fermi gases. Both for the energy, in the weakly interacting case and in the BEC regime, and for the $g_{\uparrow\downarrow}$ correlation function, knowledge of the next-to-leading-order correction is crucial in order to avoid ambiguity in the measured properties. We have also shown new results on the equation of state and the Contact of the repulsive gas in the weakly interacting regime. Acknowledgements. I acknowledge the support of the University of Trento and the INO-CNR BEC Center in Trento, where a large part of the results have been obtained, and the valuable discussions with S. Giorgini. References (1) S. Giorgini, L.P. Pitaevskii and S. Stringari, Rev. Mod. Phys. 80, (2008) p. 1215; I. Bloch, J. Dalibard and W. Zwerger, Rev. Mod. Phys. 80, (2008) p. 885. (2) K. Martiyanov, V. Makhalov and A. Turlapov, Phys. Rev. Lett. 105, (2010) p. 030404. (3) A. Orel et al., New J. Phys. 13, (2011) p. 113032. (4) B. Fröhlich et al., Phys. Rev. Lett. 106, (2011) p. 105301; M. Feld et al., Nature 480, (2011) p. 75; A. Sommer et al., Phys. Rev. Lett. 108, (2012) p. 045302. (5) M. Koschorreck et al., Nature 485, (2012) p. 619. (6) K. Miyake, Prog. Theor. Phys. 69, (1983) p. 1794. (7) M. Randeria, J.-M. Duan and L.-Y. Shieh, Phys. Rev. Lett. 62, (1989) p. 981; Phys. Rev. B 41, (1990) p. 327. (8) G. Bertaina and S. Giorgini, Phys. Rev. Lett. 106, (2011) p. 110403. (9) M. Parish, Phys. Rev. A 83, (2011) p. 051603; S. Zöllner, G.
Bruun, and C. Pethick, Phys. Rev. A 83, (2011) p. 021603; M. Klawunn and A. Recati, Phys. Rev. A 84, (2011) p. 033607; R. Schmidt et al., Phys. Rev. A 85, (2012) p. 021602. (10) E. Vogt et al., Phys. Rev. Lett. 108, (2012) p. 070404; J. Hofmann, Phys. Rev. Lett. 108, (2012) p. 185303; E. Taylor and M. Randeria, Phys. Rev. Lett. 109, (2012) p. 135301. (11) B. Fröhlich et al., Phys. Rev. Lett. 109, (2012) p. 130403. (12) D.S. Petrov, M.A. Baranov and G.V. Shlyapnikov, Phys. Rev. A 67, (2003) p. 031601(R); D.S. Petrov and G.V. Shlyapnikov, Phys. Rev. A 64, (2001) p. 012706. (13) A.J. Leggett, Modern Trends In The Theory Of Condensed Matter, Lecture Notes In Physics Vol. 115, (Springer-Verlag, Berlin 1980) p. 13; J.P. Bouchaud, A. Georges and C. Lhuillier, J. Phys. (Paris) 49, (1988) p. 553. (14) P.J. Reynolds et al., J. Chem. Phys. 77, (1982) p. 5593; G.E. Astrakharchik et al., Phys. Rev. Lett. 93, (2004) p. 200404; S.-Y. Chang et al., Phys. Rev. A 70, (2004) p. 043602; G.E. Astrakharchik et al., Phys. Rev. Lett. 95, (2005) p. 230405. (15) S. Pilati et al., Phys. Rev. Lett. 105, (2010) p. 030405; S.-Y. Chang et al., Proc. Natl. Acad. Sci. 108, (2011) p. 51. (16) D.M. Ceperley, Phys. Rev. B 18, (1978) p. 3126; D.M. Ceperley and B.J. Alder, Phys. Rev. B 36, (1987) p. 2092. (17) P. Bloom, Phys. Rev. B 12, (1975) p. 125; J.R. Engelbrecht, M. Randeria and L. Zhang, Phys. Rev. B 45, (1992) p. 10135; J.R. Engelbrecht and M. Randeria, Phys. Rev. B 45, (1992) p. 12419. (18) N. Drummond et al., Phys. Rev. B 83, (2011) p. 195429. (19) G.E. Astrakharchik et al., Phys. Rev. A 79, (2009) p. 051602(R); C. Mora and Y. Castin, Phys. Rev. Lett. 102, (2009) p. 180404; S.R. Beane, Phys. Rev. A 82, (2010) p. 063610. (20) S. Tan, Ann. Phys. 323, (2008) p. 2952; ibid. p. 2971. (21) F. Werner and Y. Castin, Phys. Rev. A 86, (2012) p. 013626.
Global sensitivity analysis of natural convection in porous enclosure: Effect of thermal dispersion, anisotropic permeability and heterogeneity N. Fajraoui Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland M. Fahs LHyGeS, UMR-CNRS 7517, Université de Strasbourg/EOST, 1 rue Blessig, 67084 Strasbourg, France A. Younes B. Sudret Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, Stefano-Franscini-Platz 5, 8093 Zurich, Switzerland Abstract In this paper, global sensitivity analysis (GSA) and uncertainty quantification (UQ) have been applied to the problem of natural convection (NC) in a porous square cavity. This problem is widely used to provide physical insights into the processes of fluid flow and heat transfer in porous media. However, it introduces several parameters whose values are usually uncertain. We herein explore the effect of the imperfect knowledge of the system parameters and of their variability on the model quantities of interest (QoIs) characterizing the NC mechanisms. To this end, we use GSA in conjunction with the polynomial chaos expansion (PCE) methodology. In particular, GSA is performed using Sobol’ sensitivity indices. Moreover, the probability distributions of the QoIs assessing the flow and heat transfer are obtained by performing UQ using PCE as a surrogate of the original computational model. The results demonstrate that the temperature distribution is mainly controlled by the longitudinal thermal dispersion coefficient. The variability of the average Nusselt number is controlled by the Rayleigh number and the transverse dispersion coefficient. The velocity field is mainly sensitive to the Rayleigh number and the permeability anisotropy ratio. Heterogeneity only slightly affects the heat transfer in the cavity but has a major effect on the flow patterns.
The methodology presented in this work allows performing in-depth analyses that provide relevant information for the interpretation of a NC problem in porous media at low computational cost. Keywords: Global sensitivity analysis – Natural convection problem – Porous media – Polynomial chaos expansions 1 Introduction Natural convection (NC) in porous media can take place over a large range of scales, from fractions of a centimeter in fuel cells to several kilometers in geological strata Nield and Bejan (2012). This phenomenon is related to the dependence of the saturating fluid density on temperature and/or compositional variations. A comprehensive bibliography about natural convection due to thermal causes can be found in the textbooks and handbooks by Nield and Bejan Nield and Bejan (2012), Ingham and Pop Ingham and Pop (2005), Vafai Vafai (2005) and Vadasz Vadász (2008). Comprehensive reviews on NC due to compositional effects have been provided by Diersch and Kolditz Diersch and Kolditz (2002), Simmons et al. Simmons et al. (2001), Simmons Simmons (2005) and Simmons et al. Simmons et al. (2010). NC in porous media is encountered in a multitude of technological and industrial applications such as building thermal insulation, heating and cooling processes in solid oxide fuel cells, fibrous insulation, grain storage, nuclear energy systems, catalytic reactors, solar power collectors, regenerative heat exchangers and thermal energy storage, among others Nield and Bejan (2012); Ingham and Pop (2005); Ingham (2004). Important applications can also be found in hydrogeology and environmental fields, such as geothermal energy Al-Khoury (2011); Carotenuto et al. (2012), enhanced recovery of petroleum reservoirs Almeida and Cotta (1995); Riley and Firoozabadi (1998); Chen (2007), geologic carbon sequestration Farajzadeh et al. (2007); Class et al. (2009); Islam et al.
(2013, 2014); Vilarrasa and Carrera (2015), saltwater intrusion in coastal aquifers Werner et al. (2013) and infiltration of dense leachate from underground waste disposal Zhang and Schwartz (1995). Over the last two decades, numerical simulation has emerged as a key approach to tackle the aforementioned applications. It is today a powerful and irreplaceable tool for understanding and predicting the behavior of complex physical systems. The literature concerning the numerical modeling and simulation of convective flow in porous media is abundant Holzbecher (1998); Pop and Ingham (2001); Viera et al. (2012); Miller et al. (2013); Su and Davidson (2015); Kolditz et al. (2015). NC in porous media is usually described by the conservation equations of fluid mass, linear momentum and energy. Either the Darcy or the Brinkman model is used as the linear momentum conservation law. The Darcy model is a simplification of the Brinkman model obtained by neglecting the viscous shear term. This simplification is valid for low-permeability porous media. For highly permeable porous media the Brinkman model is more suitable, because the effective viscosity is about 10 times the fluid viscosity Givler and Altobelli (1994); Falsaperla et al. (2012); Shao et al. (2016). In the traditional modeling analysis of NC in porous media, the governing equations are solved under the assumption that all the parameters are known. However, in real applications, the determination of the input parameters may be difficult or inaccurate. For instance, in the simulation of geothermal reservoirs, the physical parameters (i.e. hydraulic conductivity and porosity) are subject to significant uncertainty because they are usually obtained by model calibration procedures that are often carried out with relatively insufficient historical data O’Sullivan et al. (2001). The uncertainties affecting the model inputs may have major effects on the model outputs.
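How much each uncertain input contributes to the variance of a model output can be quantified with the Sobol' sensitivity indices used throughout this paper. A minimal Monte Carlo sketch is given below (illustrative Python; the toy `model` merely stands in for the NC solver, and the interpretation of its three inputs is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Toy stand-in for the NC solver; x[:, 0:3] could mimic, e.g., a Rayleigh
    number, a dispersion coefficient and an anisotropy ratio (hypothetical)."""
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

N, d = 100_000, 3
A = rng.uniform(-1.0, 1.0, size=(N, d))   # two independent input samples
B = rng.uniform(-1.0, 1.0, size=(N, d))
yA, yB = model(A), model(B)
variance = np.var(yA)

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # resample only input i
    # first-order Sobol' index, Saltelli-type pick-freeze estimator
    S.append(np.mean(yB * (model(ABi) - yA)) / variance)
```

In practice the expensive solver is replaced by a PCE surrogate, from which the same indices can be obtained analytically from the expansion coefficients; the brute-force estimator above only illustrates what the indices measure.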
Typical (non-exhaustive) examples of the significance of these effects can be found in the design of clinical devices and biomedical applications, where a small overheating can lead to serious consequences Davies et al. (1997); Ooi and Ng (2011); Wessapan and Rattanadecho (2014). Hence, evaluating how the uncertainty in the model inputs propagates and leads to uncertainties in the model outputs is an essential issue in numerical modeling. In this context, uncertainty quantification (UQ) has become a must in all branches of science and engineering Brown and Heuvelink (2005); Sudret (2007); De Rocquigny (2012). It provides a rigorous framework for dealing with parametric uncertainties. In addition, one wants to quantify how much of the uncertainty in the model outputs is due to the variance of each model input. This kind of study is usually known as sensitivity analysis (SA) Saltelli (2002). UQ aims at quantifying the variability of a given response of interest as a function of the uncertain input parameters, whereas GSA allows one to determine the key parameters responsible for this variability. UQ and GSA are usually conducted through a multi-step analysis. The first step consists in identifying the uncertain model inputs and modeling them in a probabilistic context by means of statistical methods, using data from experiments, legacy data or expert judgment. The second step consists in propagating the input uncertainty through the model. Finally, sensitivity analysis is carried out by ranking the input parameters according to their impact on the prediction uncertainty. UQ and GSA have proven to be powerful approaches for assessing the applicability of a model, fully understanding complex processes, design, risk assessment and decision making. They have been extensively investigated in the literature (e.g. Saltelli et al. (1999); Sudret (2008); Xiu and Karniadakis (2003); Fesanghary et al.
(2009); Blackwell and Beck (2010); Ghommem et al. (2011); Sarangi et al. (2014); Zhao et al. (2015); Mamourian et al. (2016); Shirvan et al. (2017); Rajabi and Ataie-Ashtiani (2014)). In the frame of flow and mass transfer in porous media, UQ and GSA have been applied to problems dealing with saturated/unsaturated flow Younes et al. (2013), solute transport Fajraoui et al. (2011); Ciriello et al. (2013) and density-driven flow Rajabi and Ataie-Ashtiani (2014); Riva et al. (2015). A careful literature review shows that sensitivity analysis for NC in porous media has been limited to a few special applications Shirvan et al. (2017). To the best of our knowledge, such analyses have never been performed for a problem involving NC within a porous enclosure. Yet, NC in porous enclosures has been widely investigated for several purposes Oztop et al. (2009); Das et al. (2017) and several authors have contributed important results for such a configuration Bejan (1979); Prasad and Kulacki (1984); Beckermann et al. (1986); Gross et al. (1986); Moya et al. (1987); Lai and Kulacki (1988); Baytaş (2000); Saeid and Pop (2004); Saeid (2007); Oztop et al. (2009); Sojoudi et al. (2014); Chou et al. (2015); Mansour and Ahmed (2015). Hence, keeping in view the various applications of NC in porous enclosures and the importance of uncertainty analysis in numerical modeling, a complete GSA and UQ study is developed in this work to address this gap. The considered problem deals with a square porous cavity. Such a problem is widely used as a benchmark for numerical code validation due to the simplicity of its boundary conditions Walker and Homsy (1978); Manole and Lage (1993); Misra and Sarkar (1995); Baytaş (2000); Alves and Cotta (2000); Fahs et al. (2015a); Shao et al. (2016); Zhu et al. (2017). It is also widely used to provide physical insights and a better understanding of NC processes in porous media Getachew et al.
(1996); Baytaş (2000); Saeid and Pop (2004); Leong and Lai (2004); Mahmud and Pop (2006); Choukairy and Bennacer (2012); Malashetty and Biradar (2012). As model inputs, we consider the physical parameters characterizing the porous medium and the saturating fluid, such as the permeability, porosity, thermal diffusivity and thermal expansion coefficient. All these parameters can be lumped into the Rayleigh number, which represents the ratio between buoyancy and diffusion effects. A common simplification for NC in porous media is to consider the saturated porous medium with an equivalent thermal diffusivity (based on the porosity) and to neglect the key process of heat mixing related to velocity-dependent dispersion. Yet several studies have found that thermal dispersion plays an important role in NC systems Howle and Georgiadis (1994); Metzger et al. (2004); Pedras and de Lemos (2008); Jeong and Choi (2011); Yang and Vafai (2011); Kumar and Bera (2009); Sheremet et al. (2016); Plumb and Huenefeld (1981); Cheng (1981); Hong and Tien (1987); Cheng and Vortmeyer (1988); Amiri and Vafai (1994); Shih-Wen et al. (1992) and in applications related to transport in natural porous media Abarca et al. (2007); Jamshidzadeh et al. (2013); Fahs et al. (2016). Hence, particular attention is given here to the impact of anisotropic thermal dispersion, by including the longitudinal and transverse dispersion coefficients among the model inputs. Furthermore, anisotropy of the hydraulic conductivity is accounted for, as it is a characteristic property of porous media resulting from the asymmetric geometry and preferential orientation of the solid grains Shao et al. (2016). Finally, heterogeneity of the porous medium is considered as a source of uncertainty, as it has a significant impact on NC in porous media Simmons et al. (2001); Nield and Simmons (2007); Kuznetsov and Nield (2010); Zhu et al. (2017).
As model outputs, we consider different quantities that are often used to assess the flow and heat transfer processes in a porous cavity, such as the spatial temperature distribution, the Nusselt number and the maximum velocity components. In this work, we perform a global sensitivity analysis using a variance-based technique. In this context, the Sobol’ sensitivity indices Sobol’ (1993); Homma and Saltelli (1996); Sobol’ (2001) are widely used as sensitivity metrics, because they do not rely on any assumption regarding the linearity or monotonicity of the physical model. Various techniques have been proposed in the literature for computing the Sobol’ indices, see e.g. Archer et al. (1997); Sobol’ (2001); Saltelli (2002); Sobol’ and Kucherenko (2005); Saltelli et al. (2010). Monte Carlo (MC) is one of the most commonly used methods. However, it might become impractical because of the large number of repeated simulations required to attain statistical convergence, especially for complex problems (e.g., Sudret (2008); Ballio and Guadagnini (2004) and references therein). In this context, new approaches based on advanced sampling strategies have been introduced to reduce the computational burden associated with Monte Carlo simulations. Among the different alternatives, Polynomial Chaos Expansions (PCE) have been shown to be an efficient method for UQ and GSA Blatman and Sudret (2010b, a, 2011). The key idea of PCE is to expand the model response onto a set of multivariate polynomials that are orthonormal with respect to a suitable probability measure Ghanem and Spanos (1991). The expansion uncovers the relationship between the input parameters and the model outputs. Once a PCE representation is available, the Sobol’ sensitivity indices are obtained via a straightforward post-processing analysis without any additional computational cost Sudret (2008).
The PCE can also be used to perform uncertainty quantification through Monte Carlo analysis at a significantly reduced computational cost (see, e.g., Fajraoui et al. (2011) and references therein). The structure of the present study is as follows. Section 2 is devoted to the description of the benchmark problem and the governing equations. Section 3 describes the numerical model. Section 4 describes the sensitivity analysis procedure using sparse PCE. Section 5 discusses the GSA and UQ results for homogeneous and heterogeneous porous media. Finally, a summary and conclusions are given in Section 6. 2 Problem statement and mathematical model The system under consideration is a square porous enclosure of length $H$ filled by a saturated heterogeneous porous medium. The properties of the fluid and the porous medium are assumed to be independent of the temperature. The porous medium and the saturating fluid are locally in thermal equilibrium. We assume that the Darcy and Boussinesq approximations are valid and that the inertia and viscous drag effects are negligible.
Under these conditions, the fluid flow in anisotropic porous media can be described using the continuity equation and Darcy’s law written in Cartesian coordinates as follows Mahmud and Pop (2006): $$\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0,$$ (1) $$u=-\frac{k_{x}}{\mu}\frac{\partial p}{\partial x},$$ (2) $$v=-\frac{k_{y}}{\mu}\left(\frac{\partial p}{\partial y}+\left(\rho-\rho_{c}\right)g\right),$$ (3) where $u$ and $v\left[LT^{-1}\right]$ are the fluid velocity components in the $x$ and $y$ directions; $p\left[ML^{-1}T^{-2}\right]$ is the total pressure (fluid pressure and gravitational head); $k_{x}$ and $k_{y}\left[L^{2}\right]$ are the permeability components in the $x$ and $y$ directions; $\mu\left[ML^{-1}T^{-1}\right]$ is the dynamic viscosity; $\rho$ and $\rho_{c}\left[ML^{-3}\right]$ are respectively the density of the mixed fluid and the density of the cold fluid; and $g\left[LT^{-2}\right]$ is the gravitational acceleration. The heat transfer inside the cavity is modeled using the energy equation written as: $$\frac{\partial T}{\partial t}+u\frac{\partial T}{\partial x}+v\frac{\partial T}{\partial y}=\alpha_{m}\left(\frac{\partial^{2}T}{\partial x^{2}}+\frac{\partial^{2}T}{\partial y^{2}}\right)+\frac{\partial}{\partial x}\left(\alpha^{xx}_{disp}\frac{\partial T}{\partial x}+\alpha^{xy}_{disp}\frac{\partial T}{\partial y}\right)+\frac{\partial}{\partial y}\left(\alpha^{xy}_{disp}\frac{\partial T}{\partial x}+\alpha^{yy}_{disp}\frac{\partial T}{\partial y}\right).$$ (4) Here $T\left[\Theta\right]$ is the temperature; $\alpha_{m}\left[L^{2}T^{-1}\right]$ is the effective thermal diffusivity and $\boldsymbol{\alpha_{disp}}\left[L^{2}T^{-1}\right]$ is the thermal dispersion tensor.
In this work, we use the nonlinear model with anisotropic tensor as in Howle and Georgiadis (1994), defined as follows: $$\alpha^{xx}_{disp}=\left(\alpha_{L}-\alpha_{T}\right)\frac{u^{2}}{\sqrt{u^{2}+v^{2}}}+\alpha_{T}\sqrt{u^{2}+v^{2}},$$ (5) $$\alpha^{xy}_{disp}=\left(\alpha_{L}-\alpha_{T}\right)\frac{u\cdot v}{\sqrt{u^{2}+v^{2}}},$$ (6) $$\alpha^{yy}_{disp}=\left(\alpha_{L}-\alpha_{T}\right)\frac{v^{2}}{\sqrt{u^{2}+v^{2}}}+\alpha_{T}\sqrt{u^{2}+v^{2}},$$ (7) where $\alpha_{L}$ and $\alpha_{T}\left[L\right]$ are respectively the longitudinal and transverse dispersivity coefficients, which are considered uniform over the system. No-flow boundary conditions are imposed on all boundaries. The left and right vertical walls are maintained at constant temperatures $T_{h}$ and $T_{c}$, with $T_{h}>T_{c}$, respectively. The horizontal surfaces are assumed to be adiabatic (Fig. 1). The system (1)-(7) is completed by a constitutive relationship between the fluid density $\rho$ and the temperature $T$. The density of the mixed fluid is assumed to vary with temperature as a first-order polynomial, that is: $$\rho=\rho_{c}[1-\beta(T-T_{c})],$$ (8) where $\beta$ is the thermal expansion coefficient. Special attention is given in the literature to NC in heterogeneous porous media, because the nonuniformity of the permeability and/or the thermal diffusivity significantly affects the overall heat transfer rate. The effect of heterogeneity is especially challenging in geothermal applications, since hydraulic properties such as permeability can vary by several orders of magnitude over small spatial scales Fahs et al. (2015a). In this context, the impact of heterogeneity has been studied for both external and internal natural convection using different heterogeneity configurations (stratified, horizontal, vertical, random and periodic); see Kuznetsov and Nield (2010) and references therein.
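For reference, the dispersion tensor of Eqs. (5)-(7) can be evaluated pointwise; the sketch below is ours (names illustrative, not taken from the authors' code), with the zero-velocity limit handled explicitly:

```python
import math

def dispersion_tensor(u, v, alpha_L, alpha_T):
    """Anisotropic thermal dispersion tensor, Eqs. (5)-(7).

    Returns (a_xx, a_xy, a_yy); the tensor vanishes when the
    velocity is zero (purely diffusive limit).
    """
    q = math.hypot(u, v)          # velocity magnitude
    if q == 0.0:
        return 0.0, 0.0, 0.0
    a_xx = (alpha_L - alpha_T) * u * u / q + alpha_T * q
    a_xy = (alpha_L - alpha_T) * u * v / q
    a_yy = (alpha_L - alpha_T) * v * v / q + alpha_T * q
    return a_xx, a_xy, a_yy

# For flow aligned with x (v = 0), a_xx reduces to alpha_L*|u|
# and a_yy to alpha_T*|u|, as expected for a principal-axis flow.
a_xx, a_xy, a_yy = dispersion_tensor(2.0, 0.0, 0.5, 0.05)
```

In the aligned-flow limit the off-diagonal term vanishes, which is a quick sanity check on any implementation of Eqs. (5)-(7).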
In this work, the heterogeneity of the porous medium is described via the exponential model as in Fahs et al. (2015b); Shao et al. (2016); Zhu et al. (2017). Based on this model, the permeabilities in the $x-$ and $y-$directions are given by: $$k_{x}=k_{x0}e^{\sigma y},$$ (9) $$k_{y}=k_{y0}e^{\sigma y},$$ (10) where $k_{x0}$ and $k_{y0}$ are respectively the permeabilities in the $x$ and $y$ directions at $y=0$; $\sigma$ is the rate of change of $\ln(k)$ in the $y$ direction. The system (1)-(3) can be reformulated in dimensionless form as: $$\frac{\partial u^{*}}{\partial x^{*}}+\frac{\partial v^{*}}{\partial y^{*}}=0,$$ (11) $$u^{*}=-e^{\sigma^{*}y^{*}}\frac{\partial p^{*}}{\partial x^{*}},$$ (12) $$v^{*}=-r_{k}e^{\sigma^{*}y^{*}}\frac{\partial p^{*}}{\partial y^{*}}+Ra\cdot T^{*},$$ (13) where $p^{\ast}=\dfrac{k_{x0}}{\mu\alpha_{m}}p$; $u^{\ast}=\dfrac{H}{\alpha_{m}}u$; $v^{\ast}=\dfrac{H}{\alpha_{m}}v$; $T^{\ast}=\dfrac{T-T_{c}}{\Delta T}$; $\Delta T=T_{h}-T_{c}$ is the temperature difference between the hot and cold walls; $x^{\ast}=\dfrac{x}{H}$; $y^{\ast}=\dfrac{y}{H}$; $\sigma^{\ast}=\sigma\cdot H$; $r_{k}=\dfrac{k_{y0}}{k_{x0}}$.
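A quick numerical check of the exponential model, Eqs. (9)-(10), can be useful (the permeability values and rate below are illustrative, not the study's): since both components carry the same exponential factor, the anisotropy ratio $k_y/k_x$ is independent of depth and equals $k_{y0}/k_{x0}$.

```python
import math

def permeability(y, k_x0, k_y0, sigma):
    """Exponential heterogeneity model, Eqs. (9)-(10)."""
    f = math.exp(sigma * y)
    return k_x0 * f, k_y0 * f

# Illustrative values: k_x0 = 2e-12 m^2, k_y0 = 1e-12 m^2, sigma = 1.5.
kx_bot, ky_bot = permeability(0.0, 2.0e-12, 1.0e-12, 1.5)
kx_top, ky_top = permeability(1.0, 2.0e-12, 1.0e-12, 1.5)
ratio_bot = ky_bot / kx_bot   # = k_y0 / k_x0
ratio_top = ky_top / kx_top   # same value: ratio is depth-independent
```

This depth-independence is what allows the single parameter $r_k$ to describe the hydraulic anisotropy in the dimensionless system (11)-(13).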
$Ra$ represents the Rayleigh number, given by: $$Ra=\dfrac{k_{y}\cdot\rho_{c}\cdot\beta\cdot g\cdot\Delta T\cdot H}{\mu\alpha_{m}}.$$ (14) The steady-state energy equation can be written in dimensionless form as: $$u^{*}\frac{\partial T^{*}}{\partial x^{*}}+v^{*}\frac{\partial T^{*}}{\partial y^{*}}=\left(\frac{\partial^{2}T^{*}}{\partial x^{*}{{}^{2}}}+\frac{\partial^{2}T^{*}}{\partial y^{*}{{}^{2}}}\right)+\frac{\partial}{\partial x^{*}}\left(\alpha^{xx,*}_{disp}\frac{\partial T^{*}}{\partial x^{*}}+\alpha^{xy,*}_{disp}\frac{\partial T^{*}}{\partial y^{*}}\right)+\frac{\partial}{\partial y^{*}}\left(\alpha^{xy,*}_{disp}\frac{\partial T^{*}}{\partial x^{*}}+\alpha^{yy,*}_{disp}\frac{\partial T^{*}}{\partial y^{*}}\right),$$ (15) $$\alpha^{xx,*}_{disp}=\left(\alpha_{L}^{*}-\alpha_{T}^{*}\right)\frac{\left(u^{*}\right)^{2}}{\sqrt{\left(u^{*}\right)^{2}+\left(v^{*}\right)^{2}}}+\alpha_{T}^{*}\sqrt{\left(u^{*}\right)^{2}+\left(v^{*}\right)^{2}},$$ (16) $$\alpha^{xy,*}_{disp}=\left(\alpha_{L}^{*}-\alpha_{T}^{*}\right)\frac{u^{*}\cdot v^{*}}{\sqrt{\left(u^{*}\right)^{2}+\left(v^{*}\right)^{2}}},$$ (17) $$\alpha^{yy,*}_{disp}=\left(\alpha_{L}^{*}-\alpha_{T}^{*}\right)\frac{\left(v^{*}\right)^{2}}{\sqrt{\left(u^{*}\right)^{2}+\left(v^{*}\right)^{2}}}+\alpha_{T}^{*}\sqrt{\left(u^{*}\right)^{2}+\left(v^{*}\right)^{2}},$$ (18) where $\alpha^{\ast}_{L}=\dfrac{\alpha_{L}}{H}$; $\alpha^{\ast}_{T}=\dfrac{\alpha_{T}}{H}$. 3 The numerical model Numerical simulation of thermally driven transfer problems is highly sensitive to discretization errors. Furthermore, hydraulic anisotropy, heterogeneity and anisotropic thermal dispersion render the numerical solution more challenging, as they require specific numerical techniques. Therefore, it is important to select appropriate numerical methods for solving the governing equations. In this work, we use the advanced numerical model developed by Younes et al.
(2009) and Younes and Ackerer (2008). In this model, appropriate techniques for both time integration and spatial discretization are used to simulate coupled flow and heat transfer. For the spatial discretization, a specific method is used to achieve high accuracy for each type of equation. Thus, the Mixed Hybrid Finite Element method is used for the discretization of the flow equation. This method produces accurate and consistent velocity fields even for highly heterogeneous domains Farthing et al. (2002); Durlofsky (1994). The heat transfer equation is discretized through a combination of the discontinuous Galerkin (DG) and multipoint flux approximation (MPFA) methods. For the convective part, the DG method is used because it provides robust and accurate numerical solutions for problems involving steep fronts Younes and Ackerer (2008); Tu et al. (2005). For the diffusive part, the MPFA method is used because it can handle anisotropic heterogeneous domains and can be easily combined with the DG method Younes and Ackerer (2008). The method of lines (MOL) is used for the time integration. This method improves the accuracy of the solution through the use of adaptive higher-order time integration schemes with formal error control. The numerical model has been validated against laboratory experimental data for variable-density flow Konz et al. (2009). It has also been validated by comparison against semi-analytical solutions for NC in a porous square cavity Fahs et al. (2014), seawater intrusion in a heterogeneous coastal aquifer Younes and Fahs (2015) and seawater intrusion in anisotropic dispersive porous media Fahs et al. (2016). We should note that the numerical model performs transient simulations, while the GSA study considers steady-state solutions.
Hence, transient simulations are run up to a sufficiently long nondimensional time to ensure steady conditions. 4 Polynomial Chaos Expansion for Sensitivity Analysis GSA is a useful tool that aims at quantifying which input parameters, or combinations thereof, contribute the most to the variability of the model response, quantified in terms of its total variance. Variance-based sensitivity methods have gained interest in this context since the mid-1990s. Here, we base our analysis on the Sobol’ indices, which are widely used as sensitivity metrics Sobol’ (1993) and do not rely on any assumption regarding the linearity or monotonicity of the physical model. In the sequel, we consider $Y=\mathcal{M}\left(\boldsymbol{X}\right)$, a mathematical model that describes a scalar output of the considered physical system and depends on $M$ uncertain input parameters. In the case of a vector-valued response, i.e. $\{Y\in\mathbb{R}^{N},N>1\}$, the following approach may be applied component-wise. We consider the uncertain parameters as independent random variables gathered into a random vector $\boldsymbol{X}=\{X_{1},...,X_{M}\}$ with joint probability density function (PDF) $f_{\boldsymbol{X}}$ and marginal PDFs $\{f_{X_{i}}(x_{i}),i=1,...,M\}$. Within this context, we introduce next the variance-based Sobol’ indices. The interested reader is referred to Sudret (2008); Le Gratiet et al. (2016) for a deeper insight into the details.
4.1 ANOVA-based sensitivity indices Provided that the function $\mathcal{M}$ is square-integrable with respect to the probability measure associated with $f_{\boldsymbol{X}}$, it can be expanded into summands of increasing dimension as: $${\mathcal{M}}(\boldsymbol{X})={\mathcal{M}}_{0}+\sum_{i=1}^{M}{\mathcal{M}}_{i}(X_{i})+\sum_{1\leq i<j\leq M}{\mathcal{M}}_{ij}(X_{i},X_{j})+...+{\mathcal{M}}_{12...M}(\boldsymbol{X}),$$ (19) where ${\mathcal{M}}_{0}$ is the expected value of ${\mathcal{M}}(\boldsymbol{X})$, and the integrals of the summands ${\mathcal{M}}_{i_{1},i_{2},...,i_{s}}$ with respect to their own variables are zero, that is: $$\int_{\mathcal{D}_{X_{i_{k}}}}{\mathcal{M}}_{i_{1},i_{2},...,i_{s}}(\boldsymbol{X}_{i_{1},i_{2},...,i_{s}})f_{X_{i_{k}}}(x_{i_{k}})\,dx_{i_{k}}=0\quad\text{ for }1\leq k\leq s,$$ (20) where $\mathcal{D}_{X_{i_{k}}}$ and $f_{X_{i_{k}}}(x_{i_{k}})$ respectively denote the support and marginal PDF of $X_{i_{k}}$. Eq. (19) can be written equivalently as: $${\mathcal{M}}(\boldsymbol{X})={\mathcal{M}}_{0}+\sum_{\boldsymbol{u}\neq 0}{\mathcal{M}}_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}}).$$ (21) Here $\boldsymbol{u}=\{i_{1},i_{2},...,i_{s}\}\subseteq\{1,2,...,M\}$ are index sets and $\boldsymbol{X}_{\boldsymbol{u}}$ are subvectors containing only those components whose indices belong to ${\boldsymbol{u}}$. This representation is called the Sobol’ decomposition.
It is unique under the orthogonality conditions between summands, namely: $$\mathbb{E}[{\mathcal{M}}_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}}){\mathcal{M}}_{\boldsymbol{v}}(\boldsymbol{X}_{\boldsymbol{v}})]=0\quad\text{ for }\boldsymbol{u}\neq\boldsymbol{v}.$$ (22) Thanks to the uniqueness and orthogonality properties, it is straightforward to decompose the total variance of $Y$, denoted $D$, into a sum of partial variances $D_{\boldsymbol{u}}$: $$D={\rm Var}\left[{\mathcal{M}}(\boldsymbol{X})\right]=\sum_{\boldsymbol{u}\neq 0}D_{\boldsymbol{u}}=\sum_{\boldsymbol{u}\neq 0}{\rm Var}\left[{\mathcal{M}}_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}})\right],$$ (23) where: $$D_{\boldsymbol{u}}={\rm Var}\left[{\mathcal{M}}_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}})\right]=\mathbb{E}[{\mathcal{M}}_{\boldsymbol{u}}^{2}(\boldsymbol{X}_{\boldsymbol{u}})].$$ (24) This leads to a natural definition of the Sobol’ indices $S_{\boldsymbol{u}}$: $$S_{\boldsymbol{u}}=D_{\boldsymbol{u}}/D,$$ (25) which measure the share of the total variance due to the contribution of the subset $\boldsymbol{X}_{\boldsymbol{u}}$. In particular, the first-order sensitivity index is defined by: $$S_{i}=D_{i}/D.$$ (26) The first-order sensitivity index $S_{i}$ measures the amount of variance of $Y$ that is due to the parameter $X_{i}$ considered alone. The overall contribution of a parameter $X_{i}$ to the response variance, including its interactions with the other parameters, is then given by the total sensitivity index. It includes the main effect $S_{i}$ and all the joint terms involving parameter $X_{i}$, i.e. $$S_{i}^{T}=\sum\limits_{\mathcal{I}_{i}}D_{\boldsymbol{u}}/D,\quad\mathcal{I}_{i}=\{\boldsymbol{u}\ni i\}.$$ (27) In principle, one should rely upon the total sensitivity index to infer the relevance of the parameters Saltelli and Tarantola (2002). The higher $S_{i}^{T}$, the more important the parameter $X_{i}$ is for the model response.
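The indices of Eqs. (25)-(27) can be estimated numerically with a pick-freeze Monte Carlo scheme of the Saltelli/Jansen type. The sketch below uses a toy two-parameter model of our own choosing (with one interaction term), not the cavity model, and the sample size and seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy model with an interaction term: f = x1 + x1*x2, x_i ~ U(0,1).
    return x[:, 0] + x[:, 0] * x[:, 1]

N, M = 100_000, 2
A = rng.random((N, M))
B = rng.random((N, M))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))   # total variance estimate

S, ST = np.empty(M), np.empty(M)
for i in range(M):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # "pick-freeze": swap one column
    fABi = model(ABi)
    S[i] = np.mean(fB * (fABi - fA)) / V          # first-order index
    ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / V   # total index (Jansen)
```

For this toy model the exact values are $S_1=27/31$, $S_2=3/31$, $S_1^T=28/31$, $S_2^T=4/31$, so the gap $S_i^T-S_i$ exposes the interaction term; the estimator recovers these values to within Monte Carlo noise.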
In contrast, $X_{i}$ is deemed unimportant (in terms of probabilistic modelling) if $S_{i}^{T}=0$. The evaluation of the Sobol’ indices requires the computation of $2^{M}$ Monte Carlo integrals of the model response $\mathcal{M}\left(\boldsymbol{X}\right)$. This can be costly, especially when dealing with time-consuming computational models. Fortunately, the Sobol’ indices can easily be computed using the Polynomial Chaos Expansion (PCE) technique Sudret (2008). They are obtained analytically via a straightforward post-processing of the expansion. The PCE is described in the next section. 4.2 Polynomial chaos expansion The model response can be cast onto a set of orthonormal multivariate polynomials as: $$Y={\mathcal{M}}(\boldsymbol{X})=\sum\limits_{\boldsymbol{\alpha}\in\mathcal{A}}y_{\boldsymbol{\alpha}}{\Psi}_{\boldsymbol{\alpha}}(\boldsymbol{X}),$$ (28) where $\mathcal{A}$ is a set of multi-indices $\boldsymbol{\alpha}=\{\alpha_{1},...,\alpha_{M}\}$, $\{y_{\boldsymbol{\alpha}},\boldsymbol{\alpha}\in\mathcal{A}\}$ are the expansion coefficients to be determined, and $\{\Psi_{\boldsymbol{\alpha}}(\boldsymbol{X}),\boldsymbol{\alpha}\in\mathcal{A}\}$ are multivariate polynomials which are orthonormal with respect to the joint PDF $f_{\boldsymbol{X}}$ of $\boldsymbol{X}$, i.e. $\mathbb{E}[\Psi_{\boldsymbol{\alpha}}(\boldsymbol{X})\Psi_{\boldsymbol{\beta}}(\boldsymbol{X})]=1$ if $\boldsymbol{\alpha}=\boldsymbol{\beta}$ and 0 otherwise. The multivariate polynomials $\Psi_{\boldsymbol{\alpha}}$ are assembled as tensor products of appropriate univariate polynomials, i.e. $$\Psi_{\boldsymbol{\alpha}}(\boldsymbol{x})=\prod_{i=1}^{M}\phi^{(i)}_{\alpha_{i}}(x_{i}),$$ (29) where $\phi^{(i)}_{\alpha_{i}}$ is a polynomial of degree ${\alpha_{i}}$ in the $i$-th variable. These bases are chosen according to the distributions of the input variables.
For instance, if the input random variables are standard normal, a possible basis is the family of multivariate Hermite polynomials, which are orthogonal with respect to the Gaussian measure. Other common distributions can be used together with basis functions from the Askey scheme Xiu and Karniadakis (2002). A more general case can be treated through an isoprobabilistic transformation of the input random vector $\boldsymbol{X}$ into a standard random vector. The set of multi-indices $\mathcal{A}$ in Eq. (28) is determined by an appropriate truncation scheme. In the present study, a hyperbolic truncation scheme Blatman and Sudret (2011) is employed, which consists in selecting all polynomials satisfying the following criterion: $$\lVert\boldsymbol{\alpha}\rVert_{q}=\left(\sum_{i=1}^{M}\alpha_{i}^{q}\right)^{1/q}\leq p,$$ (30) with $p$ being the maximum total polynomial degree and $0<q\leq 1$ the parameter determining the hyperbolic truncation surface. This truncation scheme retains all univariate polynomials of degree up to $p$, while limiting the number of interaction terms. The next step is the computation of the polynomial chaos coefficients $\{y_{\boldsymbol{\alpha}},\boldsymbol{\alpha}\in\mathcal{A}\}$. Several intrusive (e.g. Galerkin scheme) or non-intrusive (e.g. stochastic collocation, projection, regression) approaches Sudret (2008); Xiu (2010) have been proposed in the literature. We herein focus our analysis on the regression methods, also known as least-square approaches. A set of $N$ realizations of the input vector, $\mathcal{X}=\{\boldsymbol{x}^{(1)},...,\boldsymbol{x}^{(N)}\}$, called the experimental design (ED), is then needed.
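The truncation rule of Eq. (30) can be made concrete by enumerating the retained multi-index set directly (a brute-force sketch; the function name is ours):

```python
from itertools import product

def hyperbolic_basis(M, p, q):
    """Multi-indices alpha with ||alpha||_q = (sum alpha_i^q)^(1/q) <= p."""
    return [a for a in product(range(p + 1), repeat=M)
            if sum(ai ** q for ai in a) ** (1.0 / q) <= p + 1e-12]

# q = 1 recovers the standard total-degree basis of size C(M+p, p);
# q < 1 prunes high-order interaction terms while keeping all
# univariate polynomials of degree up to p.
full = hyperbolic_basis(M=3, p=4, q=1.0)
sparse = hyperbolic_basis(M=3, p=4, q=0.5)
```

For $M=3$, $p=4$, the full total-degree basis has $\binom{7}{3}=35$ terms, while $q=0.5$ keeps only 16: the pure univariate terms up to degree 4 survive (e.g. $(4,0,0)$), but most mixed terms such as $(1,1,1)$ are pruned.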
The set of coefficients is then computed by means of least-square minimization, that is: $$\hat{\boldsymbol{y}}_{\boldsymbol{\alpha}}=\underset{\boldsymbol{y}_{\boldsymbol{\alpha}}\in\mathbb{R}^{\text{card}\mathcal{A}}}{\mathrm{argmin}}\frac{1}{N}\sum_{i=1}^{N}\left(\mathcal{M}\left(\boldsymbol{x}^{(i)}\right)-\sum\limits_{\boldsymbol{\alpha}\in\mathcal{A}}y_{\boldsymbol{\alpha}}{\Psi}_{\boldsymbol{\alpha}}(\boldsymbol{x}^{(i)})\right)^{2}.$$ (31) The number of terms in Eq. (28) may be unnecessarily large; a sparse PCE can thus capture the behavior of the model more efficiently by discarding insignificant terms from the set of regressors. We herein adopt the least angle regression (LAR) method proposed in Blatman and Sudret (2011), which yields a sparse representation containing only a small number of regressors compared with the classical full representation. The reader is referred to Efron et al. (2004) for more details on the LARS technique and to Blatman and Sudret (2011) for its implementation in the context of adaptive sparse PCE. It is worth noting that the constructed PCE can also be employed as a surrogate model of the target output when evaluating a large number of model responses is not affordable. It is thus important to assess its quality. A good measure of the accuracy is the leave-one-out (LOO) error, which provides a fair error estimate at an affordable computational cost Blatman and Sudret (2010a).
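Eq. (31) amounts to an ordinary least-squares problem on the design matrix $\Psi_{ij}=\Psi_j(\boldsymbol{x}^{(i)})$. A minimal one-dimensional sketch with Legendre polynomials (orthonormal with respect to a uniform input on $[-1,1]$) follows; the toy "model" and design size are our own illustrative choices:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(1)

def psi(n, x):
    # Legendre polynomial of degree n, orthonormal w.r.t. U(-1, 1).
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt(2 * n + 1) * legval(x, c)

# Hypothetical 1-D "model" to be surrogated (not the cavity solver):
# f = 1*L0 + 2*L1 + 0.5*L2 in the (non-normalized) Legendre basis.
f = lambda x: 1.0 + 2.0 * x + 0.5 * (3 * x**2 - 1) / 2

N, p = 50, 3
x = rng.uniform(-1, 1, N)                              # experimental design
Psi = np.column_stack([psi(n, x) for n in range(p + 1)])
y_alpha, *_ = np.linalg.lstsq(Psi, f(x), rcond=None)   # Eq. (31)
```

Because the toy model lies in the span of the basis, the regression recovers the exact coefficients $(1,\,2/\sqrt{3},\,0.5/\sqrt{5},\,0)$ in the orthonormal basis, up to round-off.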
The relative LOO error is defined as: $$\epsilon_{LOO}={\sum\limits_{i=1}^{N}\left(\frac{{\mathcal{M}}(\boldsymbol{x}^{(i)})-{\mathcal{M}}^{PC}(\boldsymbol{x}^{(i)})}{1-h_{i}}\right)^{2}}\bigg{/}{\sum\limits_{i=1}^{N}\left({\mathcal{M}}(\boldsymbol{x}^{(i)})-\hat{\mu}_{Y}\right)^{2}},$$ (32) where $h_{i}$ is the $i$-th diagonal term of the matrix $\boldsymbol{\Psi}(\boldsymbol{\Psi}^{T}\boldsymbol{\Psi})^{-1}\boldsymbol{\Psi}^{T}$, with $\boldsymbol{\Psi}=\{\boldsymbol{\Psi}_{ij}=\Psi_{j}\left(\boldsymbol{x}^{(i)}\right)\}$ and $\hat{\mu}_{Y}=\frac{1}{N}\sum_{i=1}^{N}{\mathcal{M}}(\boldsymbol{x}^{(i)})$. 4.3 Polynomial chaos expansions for sensitivity analysis Once the PCE is built, the mean $\mu$ and the total variance $D$ can be obtained using the properties of the orthonormal polynomials Sudret (2008), such that: $$\mu=y_{0},$$ (33) $$D=\sum\limits_{\boldsymbol{\alpha}\in\mathcal{A}\setminus{0}}y_{\boldsymbol{\alpha}}^{2}.$$ (34) As mentioned above, the Sobol’ indices of any order can be computed in a straightforward manner. The first-order and total Sobol’ indices are then given by Sudret (2008): $$S_{i}=\sum\limits_{\boldsymbol{\alpha}\in\mathcal{A}_{i}}y_{\boldsymbol{\alpha}}^{2}/D,\quad\mathcal{A}_{i}=\{\boldsymbol{\alpha}\in\mathcal{A}:\alpha_{i}>0,\alpha_{j\neq i}=0\},$$ (35) and $$S_{i}^{T}=\sum\limits_{\boldsymbol{\alpha}\in\mathcal{A}_{i}^{T}}y_{\boldsymbol{\alpha}}^{2}/D,\quad\mathcal{A}_{i}^{T}=\{\boldsymbol{\alpha}\in\mathcal{A}:\alpha_{i}>0\}.$$ (36) Of particular interest is the marginal effect (also called univariate effect, see Deman et al. (2016)) of the parameter $X_{i}$, which enables investigation of the range of variation across which the model response is most sensitive to $X_{i}$.
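Eqs. (33)-(36) reduce to sums of squared PCE coefficients. A small post-processing sketch on hand-picked toy coefficients (not values from the cavity study; the function name is ours):

```python
import numpy as np

def pce_sobol(alphas, y):
    """First-order and total Sobol' indices from PCE coefficients,
    following Eqs. (33)-(36). `alphas` lists the multi-indices and
    `y` the matching coefficients."""
    alphas = np.asarray(alphas); y = np.asarray(y)
    nonzero = np.any(alphas > 0, axis=1)
    D = np.sum(y[nonzero] ** 2)                       # Eq. (34)
    M = alphas.shape[1]
    S, ST = np.empty(M), np.empty(M)
    for i in range(M):
        only_i = (alphas[:, i] > 0) & np.all(np.delete(alphas, i, axis=1) == 0, axis=1)
        S[i] = np.sum(y[only_i] ** 2) / D             # Eq. (35)
        ST[i] = np.sum(y[alphas[:, i] > 0] ** 2) / D  # Eq. (36)
    return S, ST

# Two inputs; the interaction term (1,1) contributes to both totals.
alphas = [(0, 0), (1, 0), (0, 1), (1, 1)]
S, ST = pce_sobol(alphas, [0.75, 3.0, 1.0, 0.5])
```

With $D = 9 + 1 + 0.25 = 10.25$, the sketch gives $S_1 = 9/10.25$ and $S_1^T = 9.25/10.25$; the shared interaction term $0.5^2/D$ appears in both total indices, illustrating why $\sum_i S_i^T$ can exceed 1.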
This marginal effect corresponds to the sum of the mean value and the first-order summands comprising univariate polynomials only, i.e.: $$\mathbb{E}[\mathcal{M}(\boldsymbol{X})\mid X_{i}=x_{i}]=\mathcal{M}_{0}+\sum_{\boldsymbol{\alpha}\in\mathcal{A}_{i}}y_{\boldsymbol{\alpha}}\Psi_{\boldsymbol{\alpha}}(x_{i}).$$ (37) 5 Results and discussions The PCEs presented in the previous section are used to perform UQ and GSA for the problem of natural convection in a porous square cavity. The dimensionless form of the governing equations leads to the following model input parameters: • The average Rayleigh number ($\overline{Ra}$): the Rayleigh number represents the ratio between the buoyancy and diffusion effects. It depends on the porous medium properties (porosity, thermal diffusivity and permeability), the fluid properties (thermal diffusivity, viscosity, density and thermal expansion), the characteristic domain length and the temperature gradient. For an isotropic porous medium, the Rayleigh number is defined based on the scalar permeability of the porous medium. For the general case of an anisotropic porous medium, $Ra$ is defined based on the permeability in the vertical direction ($k_{y}$) Bennacer et al. (2001). In this work, we are concerned with anisotropic heterogeneous porous media. Thus, we distinguish between the local Rayleigh number, based on the local permeability (see Eq. (14)), and the average Rayleigh number, based on the overall average permeability Fahs et al. (2015a). The local $Ra$ number can be formulated as follows: $${Ra}=Ra_{0}e^{\sigma^{\ast}y^{\ast}},$$ (38) where $Ra_{0}$ is the local Rayleigh number at the bottom of the domain.
The average Rayleigh number $\overline{Ra}$ is then obtained by integrating the local $Ra$ over the whole domain: $$\overline{Ra}=Ra_{0}\int_{0}^{1}e^{\sigma^{\ast}y^{\ast}}\,dy^{\ast}=\dfrac{e^{\sigma^{\ast}}-1}{\sigma^{\ast}}Ra_{0}.$$ (39) The range of variability of the average Rayleigh number $\overline{Ra}$ is from 0 to 1000, which is physically plausible. • The permeability anisotropy ratio ($r_{k}$): this ratio is commonly used to describe the hydraulic anisotropy of the porous medium Abarca et al. (2007); Bennacer et al. (2001). In this work, the model used to describe the heterogeneity of the porous medium leads to a constant anisotropy ratio, calculated from the permeabilities in the $x$ and $y$ directions at the bottom of the domain ($k_{x_{0}}$ and $k_{y_{0}}$). As is common practice in porous media, the range of variability of $r_{k}$ is taken between 0 and 1 Abarca et al. (2007). • The non-dimensional dispersion coefficients ($\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$): these parameters correspond to the longitudinal and transverse thermal dispersion coefficients. They account for the enhancement of heat transfer due to hydrodynamic dispersion. The longitudinal dispersion $\alpha_{L}^{\ast}$ corresponds to heat transfer along the local (Darcy) velocity vector, while the transverse dispersion $\alpha_{T}^{\ast}$ acts normally to the local velocity. A detailed review of the physical meaning of longitudinal and transverse thermal dispersion is given in Howle and Georgiadis (1994). Following Howle and Georgiadis (1994), Abarca et al. (2007) and Fahs et al. (2016), $\alpha_{L}^{\ast}$ is varied between $0.1$ and $1$ and $\alpha_{T}^{\ast}$ between $0.01$ and $0.1$. • The rate of heterogeneity variation ($\sigma^{\ast}$): this parameter is used to quantify the effect of the heterogeneity distribution on the model outputs.
In fact, the geometrical distribution of the heterogeneity in the computational domain is often not well defined. For instance, in hydrogeology, the heterogeneity distribution cannot be clearly described because hydraulic parameters do not correlate well with lithology. As in Fahs et al. (2015a) and Shao et al. (2016), the range of variability of $\sigma^{\ast}$ is assumed to be from $0$ to $4$. Uncertainty in these parameters is related to our imperfect knowledge of the porous medium properties (porosity, permeability tensor, heterogeneity distribution) and of the thermo-physical parameters of both the porous medium grains and the saturating fluid (thermal conductivity and dispersion). Without further information, and in view of drawing general conclusions, uniform distributions are selected for all parameters. Moreover, the parameters are assumed to be statistically independent. The results of the numerical model are analyzed using several quantities of interest (QoI) which are controlled by the model inputs. To describe the flow process we use the maximum dimensionless velocity components ($u^{\ast}_{max}$ and $v^{\ast}_{max}$). For the heat transfer process, the assessment is based on the spatial distribution of the dimensionless temperature ($T^{\ast}$). In addition, and as is customary for the cavity problem, the heat transfer processes are assessed using the wall-average Nusselt number $\overline{Nu}$ given by: $$\overline{Nu}=\int_{0}^{1}Nu(y^{\ast})\,dy^{\ast},$$ (40) where $Nu$ is the local Nusselt number. The local Nusselt number represents the net dimensionless heat transfer at a local point on the hot wall. It is defined as the ratio of the total convective heat flux to its value in the absence of convection. When thermal dispersion is considered, the local Nusselt number is defined as follows Howle and Georgiadis (1994); Sheremet et al.
(2016): $$Nu=\left(1+\alpha_{T}^{\ast}\sqrt{\left(u^{\ast}\right)^{2}+\left(v^{\ast}\right)^{2}}\right)\left.\frac{\partial T^{\ast}}{\partial x^{\ast}}\right|_{x^{\ast}=0}.$$ (41) 5.1 Homogeneous case 5.1.1 Numerical details Preliminary simulations were performed for different grid sizes in order to test the influence of the grid discretization on the QoIs. They were performed on regular triangular meshes obtained by subdividing square elements into four equal triangles (by connecting the center of each square to its four nodes). Regular grids are used here to avoid the instabilities and inaccuracies that can be caused by changes in mesh size within irregular grids. The most challenging configuration of the uncertain parameters is considered. This corresponds to the case with the highest Rayleigh number ($\overline{Ra}=1,000$) and anisotropy ratio ($r_{k}=1$) and the lowest values of the longitudinal and transverse thermal dispersion coefficients. For such a case, the heat transfer process is mainly dominated by the buoyancy effects, which are at the origin of the rotating flow within the cavity. As a consequence, the steady-state isotherms are sharply distributed and have a spiral shape, as they follow the flow structure owing to the small thermal diffusivity. A relatively fine mesh is therefore needed to obtain a mesh-independent solution. Several simulations are performed by progressively increasing the mesh refinement and comparing the solutions for two consecutive grid levels. The tests revealed that a uniform grid of $40,000$ elements is adequate to render accurate results and to capture the flow and heat transfer processes. All simulations were run for $8$ minutes because this is the time required for the homogeneous problem to reach the steady-state solution. These discretization parameters are kept fixed in subsequent simulations.
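The closed form in Eq. (39) is easy to sanity-check numerically. The sketch below (plain NumPy; illustrative, not the authors' code) compares a trapezoidal quadrature of the local exponential Rayleigh profile with the analytical average:

```python
import numpy as np

def avg_rayleigh_closed_form(ra0, sigma):
    # Eq. (39): Ra0 * (exp(sigma) - 1) / sigma
    return ra0 * np.expm1(sigma) / sigma

def avg_rayleigh_quadrature(ra0, sigma, n=20001):
    # Trapezoidal integration of the local profile Ra0 * exp(sigma * y*)
    y = np.linspace(0.0, 1.0, n)
    f = ra0 * np.exp(sigma * y)
    return (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * (y[1] - y[0])

for sigma in (0.5, 2.0, 4.0):
    exact = avg_rayleigh_closed_form(1000.0, sigma)
    numeric = avg_rayleigh_quadrature(1000.0, sigma)
    assert abs(exact - numeric) < 1e-4 * exact
```

As $\sigma^{\ast}\to 0$ the closed form tends to $Ra_{0}$, consistent with the homogeneous limit.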
In view of computing the PCE of the model outputs in terms of the $4$ input random variables $\boldsymbol{X}=\{\overline{Ra},r_{k},\alpha_{L}^{\ast},\alpha_{T}^{\ast}\}$, several sets of parameter values sampled according to their respective pdf’s are needed. For this purpose, we use an experimental design of size $N=150$ drawn with quasi-Monte Carlo (QMC) sampling. It is a well-known technique for obtaining deterministic experimental designs that best cover the input space, ensuring uniformity of the samples in each marginal input variable. In particular, Sobol’ sequences are used. PCE meta-models are constructed by applying the procedure described in Section $3$ for the considered QoIs. In the case of a multivariate output (temperature), a PCE is constructed component-wise (i.e., for each point of the grid). The candidate basis is determined using a standard truncation scheme (see Eq. (27)) with $q=1$. The maximum degree $p$ is varied from $1$ to $20$ and the optimal sparse PCE is selected by means of the corrected relative $LOO$ error (see Eq. (29)). The corresponding results of the PCE (polynomial degree giving the best accuracy, relative $LOO$ error and number of retained polynomials) are given in Table 1 for the three scalar outputs $\overline{Nu}$, $u^{\ast}_{max}$ and $v^{\ast}_{max}$. For instance, when the average Nusselt number is considered, the optimal PCE is obtained for $p=4$ and the corresponding $LOO$ error is $err_{LOO}=8.6\times 10^{-4}$. The sparse meta-model includes $63$ basis elements, whereas the size of the full basis is $70$. In Fig. 2, we compare the values of the PCE with the respective values of the physical model at a validation set consisting of $1,000$ MC simulations. We note that these simulations do not coincide with the ED used for the construction of the PCE. An excellent match is observed for both $\overline{Nu}$ and $v^{\ast}_{max}$, which is also reflected by a small LOO error (less than $0.001$).
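The construction just described (Sobol'-sequence design, Legendre basis with total-degree truncation, least-squares fit, leave-one-out error from the hat matrix) can be sketched in a few lines. The model below is a hypothetical smooth stand-in for the cavity solver, and the basis handling is simplified (no sparsity selection):

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval
from scipy.stats import qmc

def model(x):
    # Hypothetical smooth stand-in for the cavity solver, x in [0, 1]^2
    return np.exp(0.5 * x[:, 0]) * (1.0 - 0.3 * x[:, 1]) + 0.1 * x[:, 0] * x[:, 1]

# Quasi-Monte Carlo experimental design based on a Sobol' sequence
X = qmc.Sobol(d=2, scramble=False).random_base2(m=7)   # 128 points in [0, 1)^2
y = model(X)
U = 2.0 * X - 1.0                                      # map to [-1, 1] for Legendre

# Total-degree truncation (q = 1 norm) up to degree p
p = 4
alphas = [a for a in product(range(p + 1), repeat=2) if sum(a) <= p]

def leg1d(u, deg):
    c = np.zeros(deg + 1)
    c[deg] = 1.0
    return legval(u, c)

A = np.column_stack([leg1d(U[:, 0], a[0]) * leg1d(U[:, 1], a[1]) for a in alphas])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)           # least-squares PCE fit

# Relative leave-one-out error from the hat matrix (no re-fitting needed)
H = A @ np.linalg.solve(A.T @ A, A.T)
resid = y - A @ coef
err_loo = np.mean((resid / (1.0 - np.diag(H))) ** 2) / np.var(y)
print(f"{len(alphas)} basis terms, relative LOO error = {err_loo:.2e}")
```

The corrected LOO estimator and the sparse term-selection loop used in the paper sit on top of this skeleton; the hat-matrix identity is what makes the LOO error cheap to evaluate.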
Discrepancies between the PCE and the true model are observed for $u^{\ast}_{max}$, especially for larger values of $u^{\ast}_{max}$. The related LOO error is equal to $5.81\times 10^{-2}$. 5.1.2 Global sensitivity analysis This section is devoted to GSA, in order to identify the most influential parameters and to understand their marginal effects on the model outputs. Depending on the output QoI, a different behavior of the parameters is observed. The first-order and total Sobol’ indices are computed, as well as second-order ones, based on the obtained PCEs of the various QoIs. Referring to the results in Table 1, the relative LOO error varies from $0.04$% to $5.8$%. It is important to emphasize that excellent GSA results are obtained by PCE as soon as $err_{LOO}<10^{-3}$. Moreover, the results obtained for $u^{\ast}_{max}$ are also deemed acceptable. • GSA of the temperature distribution Fig. 3a illustrates the spatial distribution of the mean temperature based on the PCE. In this case, the presented approach is applied component-wise. Indeed, the PCE of a numerical model with many outputs is carried out by metamodelling each model output independently. Fig. 3a shows that the distribution of the mean temperature reflects the general behavior of heat transfer for NC in a square porous cavity. The isotherms are not vertical, as they are affected by the circulation of the fluid saturating the porous media. In order to evaluate how far the temperatures are spread out from their mean, we plot in Fig. 3b the distribution of the temperature variance. As a general comment, a symmetrical behavior of the variance around the center point is observed. The temperature variance is negligible in the thermal boundary layers of the hot and cold walls (deterministic boundary conditions) and in the relatively slow-motion rotating region at the core of the square. It becomes significant at the horizontal top and bottom surfaces of the porous cavity.
The largest variance values are located toward the cold wall at the top surface and the hot wall at the bottom surface. In these zones the flow is nearly horizontal. The fluid is cooled down (resp. heated) at the top (resp. bottom) by the effect of the cold (resp. hot) wall. The sensitivity of the temperature field to the variability of the random parameters can be assessed by means of spatial maps of the Sobol’ indices. Fig. 4 shows the spatial distribution of the total Sobol’ indices due to uncertainty in $\overline{Ra}$, $r_{k}$, $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$. We recall that the total Sobol’ indices capture the total effect of a parameter, including nonlinearities as well as interactions. Thus, they allow us to rank the parameters according to their importance. Focusing on Fig. 4, we can see that the most influential parameters are $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$. A complementary effect between these parameters is observed. The effect of $\alpha_{L}^{\ast}$ is more pronounced than that of $\alpha_{T}^{\ast}$, as its zone of influence is located in the region where the temperature variance is maximum. It is worth noting that the complementary effect between the influences of $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$ can be explained by reformulating the dispersive heat flux in terms of the dot product of the velocity and temperature gradient vectors. Considering Eqs.
(5)-(7), the thermal dispersive flux $\boldsymbol{q_{disp}}$ can be rearranged as follows: $$\left[\begin{array}{l}q_{disp}^{x}\\ q_{disp}^{y}\end{array}\right]=\left[\begin{array}{l}\alpha_{T}\left(\left|\boldsymbol{V}\right|\dfrac{\partial T}{\partial x}-\dfrac{u}{\left|\boldsymbol{V}\right|}\left(\boldsymbol{V}\cdot\nabla T\right)\right)+\dfrac{u\,\alpha_{L}}{\left|\boldsymbol{V}\right|}\left(\boldsymbol{V}\cdot\nabla T\right)\\ \alpha_{T}\left(\left|\boldsymbol{V}\right|\dfrac{\partial T}{\partial y}-\dfrac{v}{\left|\boldsymbol{V}\right|}\left(\boldsymbol{V}\cdot\nabla T\right)\right)+\dfrac{v\,\alpha_{L}}{\left|\boldsymbol{V}\right|}\left(\boldsymbol{V}\cdot\nabla T\right)\end{array}\right],$$ (42) where $\boldsymbol{V}$ is the velocity vector. This equation reveals that the longitudinal (resp. transverse) dispersive flux is an increasing (resp. decreasing) function of $\boldsymbol{V}\cdot\nabla T$. Around the top and bottom surfaces of the cavity, the velocity is almost horizontal and parallel to the thermal gradient. Hence, $\boldsymbol{V}\cdot\nabla T$ attains its maximum value and, consequently, the temperature distribution in these zones is mainly controlled by $\alpha_{L}^{\ast}$ (see Fig. 4). The velocity near the vertical walls is vertical and roughly perpendicular to the thermal gradient, so $\boldsymbol{V}\cdot\nabla T$ tends towards zero (its minimum value). Hence, the temperature distribution in these regions is mainly controlled by $\alpha_{T}^{\ast}$ (see Fig. 4d). The sensitivity of the temperature field to $\overline{Ra}$ is less important than that to $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$ (Fig. 4a). Its zone of influence expands along the cavity diagonal bisector, with increasing magnitude toward the cavity center and near the corners.
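Eq. (42) is the component-wise expansion of the usual velocity-dependent dispersion tensor $D_{ij}=\alpha_{T}\left|\boldsymbol{V}\right|\delta_{ij}+(\alpha_{L}-\alpha_{T})V_{i}V_{j}/\left|\boldsymbol{V}\right|$ applied to $\nabla T$. A quick numerical identity check with illustrative values (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
aL, aT = 0.6, 0.05                 # illustrative dispersion coefficients
V = rng.normal(size=2)             # local Darcy velocity (u, v)
gT = rng.normal(size=2)            # local temperature gradient
u, v = V
nV = np.linalg.norm(V)

# Tensor form: D = aT*|V|*I + (aL - aT) * V V^T / |V|
D = aT * nV * np.eye(2) + (aL - aT) * np.outer(V, V) / nV
q_tensor = D @ gT

# Component form, as written in Eq. (42)
VdotG = V @ gT
q_x = aT * (nV * gT[0] - u / nV * VdotG) + aL * u / nV * VdotG
q_y = aT * (nV * gT[1] - v / nV * VdotG) + aL * v / nV * VdotG

assert np.allclose(q_tensor, [q_x, q_y])
```

When the velocity is parallel to $\nabla T$ the flux reduces to $\alpha_{L}\left|\boldsymbol{V}\right|\nabla T$, and when it is perpendicular to $\alpha_{T}\left|\boldsymbol{V}\right|\nabla T$, which is exactly the complementary behavior discussed above.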
The effect of $\overline{Ra}$ on the temperature distribution is related to heat mixing by thermal diffusion and/or convection due to the buoyancy effects. Around the top right and bottom left corners the fluid velocity is very small, as it must vanish exactly at the corners to satisfy the boundary conditions. Hence, heat mixing between hot and cold fluids by thermal dispersion is almost negligible there. In addition, around these corners the thermal boundary layers are very thin. The temperature gradient is therefore very large, so mixing by thermal diffusion dominates and, consequently, the temperature distribution is sensitive to $\overline{Ra}$. In the center of the cavity the thermal dispersion tensor is also negligible because the rotating flow is relatively slow. Hence, mixing is mainly related to diffusion and, consequently, the sensitivity to $\overline{Ra}$ is relatively important. Fig. 4b indicates that the temperature distribution is slightly sensitive to the permeability ratio. The zone of influence of $r_{k}$ matches well the region in which the flow is strongly bidirectional. This is physically understandable, since $r_{k}$ expresses the ability of a porous media to transmit fluid in a direction perpendicular to the main flow. Hence, $r_{k}$ is a non-influential parameter in the zones where the flow is almost unidirectional. • GSA of the scalar QoIs Fig. 5 shows bar-plots of the first-order and total Sobol’ indices of the average Nusselt number $\overline{Nu}$. Inspection of the sensitivity indices shows that the variability of $\overline{Nu}$ is mainly due to the principal effects of $\overline{Ra}$ and $\alpha_{T}^{\ast}$. The most influential parameter is $\overline{Ra}$, with $S^{T}_{\overline{Ra}}=0.8$. A small influence of $r_{k}$ and $\alpha_{L}^{\ast}$ is also observed. Interactions between the random parameters are not significant (not shown); the maximum value obtained is $S_{\overline{Ra},\alpha_{T}^{\ast}}=0.035$.
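Once a PCE with an orthonormal basis is available, both the first-order and total Sobol' indices follow directly from the coefficients: the output variance is the sum of the squared non-constant coefficients, and each index sums the terms whose multi-index activates the corresponding variable. A minimal sketch with hypothetical coefficients (chosen only so that $S^{T}_{\overline{Ra}}$ lands near the reported $0.8$):

```python
import numpy as np

# Hypothetical multi-indices (degree in Ra, degree in a_T*) and coefficients
# of a 2-input orthonormal PCE; the (0, 0) term is the mean and carries no variance.
alphas = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1)]
coefs = np.array([3.1, 0.9, 0.15, 0.4, 0.19])

var_total = sum(c ** 2 for a, c in zip(alphas, coefs) if any(a))

def first_order(i):
    # S_i: variance carried by terms involving variable i alone
    return sum(c ** 2 for a, c in zip(alphas, coefs)
               if a[i] > 0 and sum(a) == a[i]) / var_total

def total_order(i):
    # S_i^T: variance of every term in which variable i appears
    return sum(c ** 2 for a, c in zip(alphas, coefs) if a[i] > 0) / var_total

for i, name in enumerate(("Ra", "a_T*")):
    print(f"S_{name} = {first_order(i):.3f},  S^T_{name} = {total_order(i):.3f}")
```

The gap $S^{T}_{i}-S_{i}$ measures the interaction contribution of variable $i$ (here, the single mixed $(1,1)$ term).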
The results are summarized in Table 2. To elaborate further, we examine the marginal effect of the uncertain parameters on the model response. This effect corresponds to the evolution of the model output with respect to a single parameter, averaged over the other parameters (Eq. (37)). Indeed, if a parameter is sensitive, significant variations (positive or negative slopes) are expected, whereas a weakly sensitive parameter results in small variations of the model responses. Results for the marginal effect of the parameters on $\overline{Nu}$ are shown in Fig. 6. Different scales are observed, indicating their level of influence. In general, the marginal effects are in agreement with the global sensitivity analysis. Fig. 6a demonstrates that $\overline{Nu}$ increases with $\overline{Ra}$. Indeed, the increase of $\overline{Ra}$ enhances the buoyancy effects and reduces the thickness of the thermal boundary layer in the lower part of the hot wall. This leads to higher values of $\overline{Nu}$, as the temperature distribution becomes steep near the hot wall, especially around the bottom corner. The same behavior of $\overline{Nu}$ is observed with respect to $\alpha_{T}^{\ast}$ (Fig. 6d). This parameter slightly affects the velocity field (as will be shown later in this paper) and the temperature distribution at the hot wall, as can be seen in Figs. 3 and 4d. Hence, considering the expression of $\overline{Nu}$ (Eq. (41)), it is logical that $\overline{Nu}$ increases with $\alpha_{T}^{\ast}$. Similar results have been reported in Hong and Tien (1987), where the authors showed that when the transverse dispersion effect dominates, the heat transfer is greatly increased. This is also in agreement with the results reported by Sheremet et al., Sheremet et al. (2016) for natural convection in a porous cavity filled with a nanofluid. Fig. 6b shows that $\overline{Nu}$ decreases as the permeability anisotropy ratio ($r_{k}$) increases.
This is consistent with the results obtained in Bennacer et al., Bennacer et al. (2001) and Ni and Beckermann Ni and Beckermann (1991) for natural convection (without thermal dispersion) and for equivalent ranges of parameters. This behavior can be explained by the fact that, at a constant value of the Rayleigh number (i.e. constant $k_{y}$), the increase of $r_{k}$ can be interpreted as a decrease of the permeability in the horizontal direction $k_{x}$. This entails a weaker convective flow with more expanded thermal boundary layers (in the lower part of the hot wall) and, consequently, a smaller $\overline{Nu}$. The marginal effect of $\alpha_{L}^{\ast}$ (Fig. 6c) indicates the existence of two regimes for the evolution of the Nusselt number: $\overline{Nu}$ decreases for $\alpha_{L}^{\ast}<0.25$ and increases for $\alpha_{L}^{\ast}>0.25$. Indeed, when $\alpha_{L}^{\ast}$ is increased, the mixing zone between the hot and cold fluids expands in the direction of the flow. This pushes the highest and lowest isotherms towards the vertical walls and consequently increases the thermal gradient in the vertical boundary layers. On the other hand, this redistribution of the thermal gradient leads to an attenuation of the rotating flow within the cavity. Thus, referring to the expression of the Nusselt number (Eq. (41)), we can deduce that, for small values of $\alpha_{L}^{\ast}$ $(<0.25)$, $\overline{Nu}$ decreases because the velocity variation is predominant, while for large values of $\alpha_{L}^{\ast}$ $(>0.25)$, $\overline{Nu}$ increases because the effect of the thermal gradient becomes more important than that of the velocity. Fig. 7 shows bar-plots of the first-order and total Sobol’ indices of the maximum velocity $u_{max}^{\ast}$. Results indicate that the variability of $u_{max}^{\ast}$ is mainly controlled by $\overline{Ra}$ and $r_{k}$. Interactions between $\overline{Ra}$ and $r_{k}$ are also observed.
They explain $14.5\%$ of the total variance of $u_{max}^{\ast}$. The total effect of $\alpha_{T}^{\ast}$ accounts for approximately $1.0\%$. In Fig. 8, we display the marginal effect of the uncertain parameters on $u_{max}^{\ast}$. One can observe that $u_{max}^{\ast}$ increases with the Rayleigh number $\overline{Ra}$, indicating that the buoyancy-induced flow along the horizontal surfaces becomes much stronger as $\overline{Ra}$ is increased. In contrast, $u_{max}^{\ast}$ decreases with increasing $r_{k}$, as the latter corresponds to a decrease of the permeability in the horizontal direction. We can also note that only small variations of $u_{max}^{\ast}$ are observed when $r_{k}>0.2$. Figs. 8c-d confirm that $u_{max}^{\ast}$ is only slightly sensitive to $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$. One can observe that an increase of $\alpha_{L}^{\ast}$ is associated with a decrease of $u_{max}^{\ast}$. The effect of $\alpha_{L}^{\ast}$ on the velocity can be understood with the help of the stream-function form of the flow equation. This form is obtained by applying the curl operator to Darcy’s law. It is a Poisson equation with the horizontal component of the temperature gradient as source term, subject to a zero stream function boundary condition. The corresponding solution consists of concentric streamlines, whose shape (center, orientation, spacing and density) depends on the source function. For the problem of natural convection in a square cavity, the maximum horizontal temperature gradient is located at the bottom right and top left corners. The resulting streamlines have a concentric ellipsoidal shape, with the focal axis oriented along the line connecting the maximum-gradient points (the cavity's first bisector).
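The stream-function argument above can be reproduced with a minimal finite-difference sketch: solve the Poisson problem $\nabla^{2}\psi=-S$ with $\psi=0$ on the boundary, where the source $S$ stands in for the (scaled) horizontal temperature gradient term. The constant source below corresponds to the toy conduction profile $T=1-x$; signs and scalings are illustrative, not the paper's formulation:

```python
import numpy as np

n = 41                       # grid points per side on the unit square
h = 1.0 / (n - 1)
S = 100.0 * np.ones((n, n))  # constant source from the toy profile T = 1 - x
psi = np.zeros((n, n))       # psi = 0 on the boundary (no-flow condition)

# Plain Jacobi iteration for lap(psi) = -S; slow but transparent.
# (The right-hand side is evaluated before assignment, so old values are used.)
for _ in range(5000):
    psi[1:-1, 1:-1] = 0.25 * (psi[:-2, 1:-1] + psi[2:, 1:-1] +
                              psi[1:-1, :-2] + psi[1:-1, 2:] +
                              h * h * S[1:-1, 1:-1])

# The level sets of psi are concentric around the cavity centre,
# i.e. the rotating-cell structure described in the text.
print("max psi =", float(psi.max()))
```

With a genuine, non-uniform $\partial T/\partial x$ field, the centre and focal axis of the concentric streamlines shift exactly as discussed next for $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$.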
The increase of $\alpha_{L}^{\ast}$ enhances the heat mixing by longitudinal dispersion in the zones where the velocity is parallel to the temperature gradient (outside the boundary layers of the hot and cold walls). Consequently, the horizontal temperature gradient decreases in these zones and increases outside. This implies, on the one hand, a reorientation of the focal axis of the ellipsoidal streamlines toward the horizontal direction and, on the other hand, an attenuation of the maximum value of the stream function. Hence, the rotating flow decelerates, $u_{max}^{\ast}$ decreases and the point of maximum velocity moves toward the center of the cavity surfaces. The reverse behavior is observed when $\alpha_{T}^{\ast}$ is increased. In this case the transverse heat mixing is enhanced in the zones where the velocity is orthogonal to the temperature gradient (within the boundary layers), and the transverse heat flux is horizontal. Hence, the horizontal temperature gradient decreases within the boundary layers and increases outside. The direction of the focal axis moves toward the first bisector and the maximum value of the stream function increases. The streamlines become more widely spaced at the vertical walls and more closely spaced at the top surface. The value of $u_{max}^{\ast}$ increases and its location at the top (resp. bottom) surface moves toward the cold (resp. hot) wall. Fig. 9 shows the bar-plots of the first-order and total Sobol’ indices of the maximum velocity $v_{max}^{\ast}$. Results show that the variability of $v_{max}^{\ast}$ is mainly due to the principal effect of $\overline{Ra}$. Unlike $u_{max}^{\ast}$, $v_{max}^{\ast}$ is only slightly sensitive to $r_{k}$. The differences between the first-order and total indices are negligible, indicating insignificant interactions between the parameters. Fig. 10 shows the marginal effect of the different parameters on $v_{max}^{\ast}$.
As expected, different magnitudes of variation are obtained, indicating the level of influence of the parameters. The largest variation is obtained with the Rayleigh number $\overline{Ra}$. Fig. 10a shows that $v_{max}^{\ast}$ increases with $\overline{Ra}$, as the latter intensifies the rotating flow within the cavity. The marginal effect of $v_{max}^{\ast}$ with respect to the permeability ratio $r_{k}$ is slightly flatter (see Fig. 10b) and confirms the weak sensitivity of $v_{max}^{\ast}$ to $r_{k}$. The negative slope obtained is a consequence of the reduction of the horizontal velocity caused by the decrease of $k_{x}$. This finding is consistent with the results obtained in Bennacer et al., Bennacer et al. (2001). As expected, the marginal effects of $v_{max}^{\ast}$ with respect to $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$ are nearly flat (Fig. 10(c-d)). A slightly negative slope of $v_{max}^{\ast}$ versus $\alpha_{L}^{\ast}$ is observed, indicating that the enhancement of the longitudinal dispersive mixing between the hot and cold fluids leads to an attenuation of the convective flow. Fig. 10d shows that $v_{max}^{\ast}$ decreases with increasing $\alpha_{T}^{\ast}$, due to the redistribution of the temperature gradient. 5.1.3 Uncertainty quantification The constructed PCE can be employed as a surrogate model of the target output. Statistical analysis then becomes affordable by performing a large number of Monte Carlo simulations on the PCE approximations, at a very low additional computational cost, upon sampling the random input parameter space. We depict in Fig. 11 the probability density functions (PDFs) of the scalar QoIs. The PDFs computed by relying on the PCE, using $10^{5}$ Monte Carlo simulations, are compared with the PDFs computed with $1,000$ Monte Carlo (MC) simulations of the physical model. Note that we limit the comparison to $1,000$ MC runs of the complete model because of the computational burden. A number of conclusions can be drawn from Fig. 11.
The marginal PDFs resulting from the PCEs compare very well with the PDFs obtained from the numerical MC simulations, though at a total cost of only $150$ simulations. Positively skewed distributions, with longer tails toward larger values, are observed for both $\overline{Nu}$ and $u_{max}^{\ast}$. The long tail of the PDF of $\overline{Nu}$ is associated with settings characterized by low values of $\alpha_{T}^{\ast}$, whereas the long tail of $u_{max}^{\ast}$ is associated with settings characterized by low values of $r_{k}$. A flat distribution with short tails is obtained for $v_{max}^{\ast}$. This is due to the fact that $v_{max}^{\ast}$ is mainly sensitive to the Rayleigh number $\overline{Ra}$ and that the two are linearly related. In the following, we investigate the level of correlation between pairs of QoIs by determining the correlation coefficient, defined as the covariance of the two output variables divided by the product of their standard deviations. Table 3(a) lists the correlation coefficients evaluated using $1,000$ MC simulations. To further assess the accuracy of the PCE, the correlation coefficients obtained by relying on the PCE are also shown (see Table 3(b)). Again, the agreement between the full model and the PCE is quite remarkable. A strong positive correlation between $u_{max}^{\ast}$ and $v_{max}^{\ast}$ is observed. This is related to the fact that $u_{max}^{\ast}$ and $v_{max}^{\ast}$ are both affected by the Rayleigh number $\overline{Ra}$ and both increase with its increase. 5.2 Effect of heterogeneity In this section, a heterogeneous porous medium is considered. The heterogeneity is assumed to follow the exponential model given in Eqs. (9) and (10). The effect of heterogeneity is expressed in terms of the rate of change of the permeability $\sigma^{\ast}$.
Consequently, five independent input random variables $\boldsymbol{X}=\{\overline{Ra},r_{k},\alpha_{L}^{\ast},\alpha_{T}^{\ast},\sigma^{\ast}\}$ are now considered uncertain, with uniform marginal distributions. The spatial discretization required to reach a converged numerical solution is highly dependent on the degree of heterogeneity. Indeed, an increase in the heterogeneity degree results in an increase of the local Rayleigh number, leading to a locally steeper and rougher solution than in the homogeneous case Shao et al. (2016). Consequently, finer meshes are required to obtain a mesh-independent solution. As for the homogeneous case, the most challenging configurations of parameters are thoroughly tested. A special irregular mesh is used to obtain the converged finite element solution. This mesh involves local refinement in the high-permeability zones, where the buoyancy effects are more significant. A non-uniform grid of $64,000$ nodes is used. All simulations were run long enough to ensure that the steady-state solution is reached, as in the homogeneous case. These discretization parameters are kept fixed in subsequent simulations. In view of computing the PCE of the model outputs, an experimental design drawn with QMC of size $N=150$ is considered. As in the previous case, the candidate basis is determined using a standard truncation scheme with $q=1$ for all the outputs. The corresponding results of the PCE (polynomial degree giving the best accuracy, relative $LOO$ error and number of retained polynomials) are given in Table 4 for the three scalar outputs $\overline{Nu}$, $u_{max}^{\ast}$ and $v_{max}^{\ast}$. An accurate PCE is obtained for both $\overline{Nu}$ and $v_{max}^{\ast}$, for which the $LOO$ error is about $1$%. A less accurate PCE is obtained for $u_{max}^{\ast}$, for which the $LOO$ error is larger than $0.1$. • GSA of the temperature field Fig. 12a illustrates the spatial distribution of the mean temperature obtained by relying on the PCE.
It shows that the isotherms are more affected by the circulating flow than in the homogeneous case, especially in the highly permeable zones (near the top surface of the cavity). Indeed, the local Rayleigh number there exceeds the average value $\overline{Ra}$, leading to a more intense convective flow. The corresponding depiction of the temperature variance is shown in Fig. 12b. First, we observe that the spatial distribution of the variance is no longer symmetric, as it was in the homogeneous case. The heterogeneity results in a different distribution of the variance while maintaining the same level of variability. The smallest-variation zone is located at the boundary layers of the vertical walls and in the slow-motion region, as in the homogeneous case. It should be noted here that, due to the effect of increased permeability, the slow-motion region expands horizontally and moves up toward the right corner Fahs et al. (2015a). The zone of low temperature variance exhibits similar behavior. The largest-variance zone moves to the low-permeability layers and is shifted toward the hot wall. The high permeability at the top layers leads to a reduction of the temperature variance there. The spatial maps of the total Sobol’ indices are depicted in Fig. 13. They demonstrate that, due to the heterogeneity, the symmetry of the spatial distribution of the Sobol’ indices around the center of the cavity is completely destroyed. Fig. 13 indicates that $\alpha_{L}^{\ast}$, $\alpha_{T}^{\ast}$ and $\sigma^{\ast}$ have a significant influence on the temperature distribution. A closer look at the temperature variance confirms that, as in the homogeneous case, $\alpha_{L}^{\ast}$ is the most sensitive parameter, since its zone of influence intersects the zone of maximum temperature variance.
Compared to the homogeneous case, the zone of influence of $\alpha_{L}^{\ast}$ expands in the zones of low permeability (toward the bottom surface of the cavity) and contracts in the highly permeable zones (at the top surface). Indeed, when the slow-motion region moves up toward the right corner, the zones in which the velocity vector is horizontal (parallel to the gradient) shrink near the top surface and grow in the lower part of the cavity. The reverse is true for the zone of influence of $\alpha_{T}^{\ast}$: this zone expands vertically near the hot wall and contracts near the cold wall. This behavior is also attributable to the shifting of the slow-motion region toward the top right corner due to heterogeneity. The sensitivity of the temperature distribution to the rate of change of heterogeneity $\sigma^{\ast}$ is most pronounced around the zone of slow rotating motion, as can be seen in Fig. 13e. The zone of influence of $\overline{Ra}$ expands around the bottom right corner (Fig. 13a). This behavior is related to the expansion of the zone in which the velocity is relatively weak, as a consequence of the low permeability near the bottom surface of the cavity. Conversely, around the top left corner the high permeability induces a faster convective flow associated with a larger dispersion tensor. Hence, mixing by thermal dispersion dominates and the temperature distribution becomes less sensitive to $\overline{Ra}$. Fig. 13b shows that, in the case of heterogeneous porous media, $r_{k}$ becomes more influential on the temperature distribution than in the homogeneous case. Its zone of influence expands around the top left corner. • GSA of the scalar QoIs Fig. 14 shows bar-plots of the first-order and total Sobol’ indices of the three model outputs ($\overline{Nu}$, $u_{max}^{\ast}$ and $v_{max}^{\ast}$).
This figure allows drawing the following conclusions: • The heterogeneity of the porous media does not affect the ranking of the parameters regarding their influence on the model outputs. $\overline{Ra}$ and $\alpha_{T}^{\ast}$ remain the most influential parameters for $\overline{Nu}$, $\overline{Ra}$ and $r_{k}$ for $u_{max}^{\ast}$, and $\overline{Ra}$ for $v_{max}^{\ast}$. • The uncertainty associated with the rate of change of the heterogeneity $\sigma^{\ast}$ has no effect on $\overline{Nu}$. This is consistent with the results obtained for the temperature distribution, which show that the effect of $\sigma^{\ast}$ is located relatively far from the hot wall, in the slow-motion region. • The influence of $\sigma^{\ast}$ is more pronounced for $u_{max}^{\ast}$ and $v_{max}^{\ast}$, which is reasonable because the velocity is directly related to the local level of permeability. • One can also observe that the heterogeneity renders $\overline{Nu}$ and $v_{max}^{\ast}$ more sensitive to $r_{k}$. In fact, in vertically stratified heterogeneous domains (as is the case here), the anisotropy can be at the origin of a vertical flow that transmits fluid from a given layer to lower or upper layers of different permeability. Hence, any change of $r_{k}$ may have a strong effect on the flow, as different layers of heterogeneity can be involved. The investigation of the marginal effects of the parameters on the model outputs shows that the conclusions drawn for the homogeneous case still hold for the heterogeneous porous media. We therefore discuss only the marginal effect of $\sigma^{\ast}$. The results show a slight variation of $\overline{Nu}$ with $\sigma^{\ast}$. Indeed, the increase of $\sigma^{\ast}$ leads to an attenuation of the local velocity in the lower part of the hot wall and an intensification in the upper part.
The attenuation of the horizontal velocity around the bottom surface is associated with a reduction of the thermal dispersion and, consequently, a decrease of the temperature gradient. The reverse process occurs at the upper surface and leads to an increase of the thermal gradient. As a consequence, the local Nusselt number decreases in the lower part of the vertical wall and increases in the upper part. The upper and lower local Nusselt numbers tend to balance each other out, leading to a small variation of $\overline{Nu}$. The marginal effects of $\sigma^{\ast}$ on $u_{max}^{\ast}$ and $v_{max}^{\ast}$ are given in Fig. 15. This figure confirms the high sensitivity of $u_{max}^{\ast}$ and $v_{max}^{\ast}$ to $\sigma^{\ast}$. It also indicates that $u_{max}^{\ast}$ and $v_{max}^{\ast}$ increase with $\sigma^{\ast}$, as a result of the increased permeability at the top surface of the cavity. 6 Summary and conclusions This study addressed the problem of natural convection in heterogeneous porous media, including velocity-dependent dispersion. The considered benchmark is the popular square cavity filled with a saturated porous medium, subject to differentially heated vertical walls and adiabatic horizontal surfaces. The simplicity of the geometry and of the boundary conditions renders this problem especially suitable for testing numerical models, but also useful for providing physical insight into the processes involved. The problem involves, however, several uncertain parameters, namely: the average Rayleigh number ($\overline{Ra}$), the permeability anisotropy ratio ($r_{k}$), and the non-dimensional dispersion coefficients ($\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$). The imperfect knowledge of the system parameters and their variability significantly affect the flow and heat transfer patterns. It is thus of utmost importance to properly account for these uncertainties within the framework of uncertainty and sensitivity analysis.
In this work, we analyze the impact of the uncertain parameters on the output quantities of interest (QoIs) that allow assessment of the flow and heat transfer. To describe the flow process, we use the maximum dimensionless velocity components ($u^{\ast}_{max}$ and $v^{\ast}_{max}$). For the heat transfer process, the assessment is based on the spatial distribution of the dimensionless temperature ($T^{\ast}$) and the average Nusselt number $\overline{Nu}$ on the hot wall. The effect of heterogeneity was also investigated by considering a stratified heterogeneous porous medium with an exponential distribution of the permeability as a function of depth. The rate of change of the permeability, $\sigma^{\ast}$, is then considered uncertain. We performed a comprehensive global sensitivity analysis and uncertainty quantification, estimating the probability distributions of the QoIs by means of surrogate modeling; sparse PCE were used for this purpose. Our results lead to the following major conclusions.
• Sparse PCE proved particularly efficient in providing reliable results at a considerably low computational cost. All results derived for the homogeneous (resp. heterogeneous) case were obtained at the cost of only $150$ simulations of the computational model. Note that those runs could have been carried out in parallel on any distributed computing architecture. Owing to the inexpensive-to-evaluate formulation of the sparse PCE meta-model, the probability density functions (PDFs) of the QoIs have been accurately estimated, with excellent agreement between the sparse PCE and MC results.
• The Sobol’ indices of the temperature distribution allow specifying the spatial zone of influence of each parameter. The results showed that the variability of the temperature distribution is largely influenced by the effect of $\alpha_{L}^{\ast}$ and $\alpha_{T}^{\ast}$.
Nevertheless, the effect of $\alpha_{L}^{\ast}$ on the temperature distribution is more pronounced than that of $\alpha_{T}^{\ast}$, as its zone of influence is located in the region where the variance is maximum.
• The variability of $\overline{Nu}$ is mainly due to the main effects of $\overline{Ra}$ and $\alpha_{T}^{\ast}$. Indeed, the Rayleigh number strongly influences the flow profile and heat transfer within the cavity, as well as the thermal boundary layer thickness. The variability of $u_{max}^{\ast}$ is mainly due to the main effects of the anisotropy ratio $r_{k}$ and $\overline{Ra}$, while the variability of $v_{max}^{\ast}$ is mainly controlled by $\overline{Ra}$.
• The effect of the heterogeneity results in a different distribution of the variance of the temperature while maintaining the same level of variability. The zone of largest temperature variance becomes located in the low-permeability layer near the bottom surface of the cavity. In this case, the isotherms are more affected by the circulating flow than in the homogeneous case, especially in the high-permeability zones, which are associated with a more intense convective flow. The results show that the average Nusselt number is not sensitive to the heterogeneity rate, while an increase of $\sigma^{\ast}$ is associated with an increase of the maximum velocities $u_{max}^{\ast}$ and $v_{max}^{\ast}$.
• Marginal effects of each parameter are also readily obtained from the PCE. As opposed to classical “one-at-a-time” sensitivity analyses, where all parameters but one are frozen so as to study the effect of the remaining one, the univariate effect curves account for the uncertainties in the other parameters. Quantitative conclusions have been drawn, which confirm the qualitative interpretations.
This study represents a prototype to point out the benefit of GSA and UQ in understanding how the complex system of natural convection in porous media behaves.
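The PCE workflow behind these conclusions, namely a least-squares fit of a polynomial chaos surrogate on a small experimental design, Sobol' indices obtained by post-processing the coefficients, and a densely sampled output PDF, can be sketched on a toy two-parameter model. The model and the basis below are illustrative stand-ins, not the paper's solver or its sparse basis selection:

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)

# Toy stand-in for the convection solver (NOT the paper's model):
# an output depending on two uniform inputs in [-1, 1].
def model(x):
    return 0.5 + x[:, 0] + 0.3 * x[:, 1] ** 2

# Legendre chaos basis up to total degree 2; alphas are the multi-indices.
alphas = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]
P = lambda n, t: leg.legval(t, [0.0] * n + [1.0])
def basis(x):
    return np.column_stack([P(i, x[:, 0]) * P(j, x[:, 1]) for i, j in alphas])

# Experimental design: a small number of model runs (the paper used 150),
# followed by a least-squares fit of the PCE coefficients.
X = rng.uniform(-1.0, 1.0, size=(150, 2))
coef, *_ = np.linalg.lstsq(basis(X), model(X), rcond=None)

# Sobol' indices by post-processing: for U ~ Uniform(-1, 1) one has
# E[P_n(U)^2] = 1/(2n+1), so each squared coefficient is a partial variance.
norm2 = np.array([1.0 / ((2 * i + 1) * (2 * j + 1)) for i, j in alphas])
pvar = coef ** 2 * norm2
total_var = pvar[1:].sum()                      # exclude the mean term
S1 = sum(pvar[k] for k, (i, j) in enumerate(alphas) if i > 0 and j == 0) / total_var
S2 = sum(pvar[k] for k, (i, j) in enumerate(alphas) if i == 0 and j > 0) / total_var

# The fitted surrogate is cheap, so the output PDF can be sampled densely.
Xbig = rng.uniform(-1.0, 1.0, size=(100000, 2))
pdf_samples = basis(Xbig) @ coef
```

For this toy model the first input dominates ($S_{1}\approx 0.98$), and the PDF samples come at the cost of matrix products only, which is the mechanism that makes the 150-run budget of the study sufficient.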
This kind of study should prove useful for the safe design and risk assessment of systems involving natural convection in porous enclosures.
Diffractive photoproduction of $\Upsilon$ states at HERA M. F. McDermott In collaboration with M. Strikman and L. Frankfurt Dept. of Physics and Astronomy, Schuster Lab., Brunswick St., University of Manchester, Manchester, England Abstract Cross sections for the diffractive photoproduction of the $\Upsilon$-family at HERA energies, within the framework of the analysis by Frankfurt, Köpf and Strikman [1, 2], are presented. They compare well with the recent preliminary data from ZEUS and H1. Two novel effects lead to a significant enhancement of the original calculation: the non-diagonal (or skewed) kinematics, calculated to leading-log($Q^{2}$) accuracy, and the large magnitude of the real part of the amplitude. A considerably stronger rise with energy is predicted than that found in $J/\psi$-production.
1 Basic Formulae
Diffractive heavy vector meson photo- and electroproduction is governed by the exchange of two gluons, in a colour singlet, in the $t$-channel (see figure 1). The amplitude for this hard diffractive process involves the overlap of the scattering cross section for small dipoles (specified by momentum sharing, $z$, and transverse size, $b^{2}$) with the light-cone wavefunctions for the photon and vector meson: $${\cal A}\propto\int dz\int d^{2}b\,\psi_{\gamma}(z,b)\hat{\sigma}(b^{2})\psi_{V}(z,b).$$ (1) The universal cross section for the scattering of a small dipole off the proton is directly proportional to its transverse size and to the gluon density: $$\hat{\sigma}(b^{2})=\frac{\pi^{2}}{3}b^{2}\alpha_{s}(b^{2})xg(x,b^{2}).$$ (2) We exploit this universality of $\hat{\sigma}$ to set the relation between transverse dipole size and four-momentum scales by expressing $\sigma_{L}$ as a similar integral involving $\hat{\sigma}$ convoluted with the square of the (known) longitudinally polarized photon wavefunction, $\psi^{L}_{\gamma}(z,b)$.
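As a rough numerical illustration of equation 2: the values of $\alpha_{s}$ and $xg$ below are hypothetical placeholders, chosen only to give an order-of-magnitude feel.

```python
import math

def sigma_hat(b2, alpha_s, xg):
    # Eq. (2): (pi^2 / 3) * b^2 * alpha_s * x*g.
    # b2 in GeV^-2; result in GeV^-2 (1 GeV^-2 corresponds to ~0.389 mb).
    return (math.pi ** 2 / 3.0) * b2 * alpha_s * xg

# Hypothetical inputs: a small dipole (b^2 = 0.025 GeV^-2) with
# alpha_s ~ 0.2 and x*g ~ 10 at the corresponding hard scale.
sig = sigma_hat(0.025, 0.2, 10.0)
sig_mb = sig * 0.389  # convert GeV^-2 to mb
```

The linearity in $b^{2}$ is the key feature exploited in the text: halving the dipole area halves $\hat{\sigma}$, up to the slow variation of $\alpha_{s}xg$ with scale.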
For a given $x,Q^{2}$ we calculate the average $b^{2}$ of this integral and, employing the ansatz $b^{2}=\lambda/Q^{2}$, determine the relationship by an iterative procedure. It turns out that $\lambda$ has only a weak dependence on $x,Q^{2}$ [3], which gives us faith in our ansatz. In the integral in equation 1, $\alpha_{s}xg$ is a slowly varying function of $b^{2}$, so it may be taken out of the integral at an average point. The average $\langle b^{2}\rangle$ is taken to be the median of this integral, as explained in [3]; the effective four-momentum scale of a particular production process is then $Q^{2}_{eff}=\lambda/\langle b^{2}\rangle$. Different wavefunctions weight the integrand differently, yielding a $Q^{2}_{eff}$ which reflects this. The cross section for the photoproduction of heavy vector-meson states is then $$\sigma(\gamma P\rightarrow VP)=\frac{3\pi^{3}\Gamma M_{V}^{2}(1+\beta^{2})}{64\alpha_{em}(m_{q}^{2})^{4}B_{D,V}}\,C(Q^{2}=0)\left[\alpha_{s}(Q^{2}_{eff})g(x_{1},\delta,Q^{2}_{eff})\right]^{2}$$ (3) where $M_{V},\Gamma,B_{D,V}$ are the mass, leptonic decay width and diffractive slope of the vector meson concerned. The factor $C(Q^{2}=0)<1$ contains the remaining integral (squared), suitably normalised (see [1, 2, 3] for more details). Information about the real part of the amplitude is contained in $\beta={\cal R}e{\cal A}/{\cal I}m{\cal A}$. The fact that one must create a large time-like mass from a real photon ensures that the process of figure 1 is off-diagonal ($x_{1}\neq x_{2}$), as specified by the “skewedness” parameter $\delta=x_{1}-x_{2}=M_{V}^{2}/W^{2}$. This leads to the replacement of the ordinary gluon distribution by its skewed generalization in equation 2.
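The iterative determination of $\lambda$ can be sketched as a fixed-point loop. The $b$-profile below is a toy stand-in for the true $|\psi^{L}_{\gamma}|^{2}\hat{\sigma}$ integrand (a $b^{3}e^{-2\epsilon b}$ shape with a slowly varying logarithmic gluon factor), not the actual wavefunctions of the text, so only the iteration logic, and not the numbers, should be taken seriously:

```python
import numpy as np

def median_b2(Q2, lam):
    # Toy b-distribution of the sigma_L integral (hypothetical stand-in).
    b = np.linspace(1e-4, 5.0, 20000)          # b in GeV^-1
    eps = 0.5 * np.sqrt(Q2)                    # toy: z(1-z) ~ 1/4, massless quarks
    mu2 = np.maximum(lam / b ** 2, 2.0)        # scale probed by a dipole of size b
    w = b ** 3 * np.exp(-2.0 * eps * b) * np.log(mu2)   # toy xg ~ log(mu^2)
    cdf = np.cumsum(w) / np.sum(w)
    return b[np.searchsorted(cdf, 0.5)] ** 2   # median of the b^2 distribution

def solve_lambda(Q2, lam=5.0, n_iter=20):
    # Fixed-point iteration of the ansatz b^2 = lambda / Q^2.
    for _ in range(n_iter):
        lam = Q2 * median_b2(Q2, lam)
    return lam

lam40, lam80 = solve_lambda(40.0), solve_lambda(80.0)
```

In this toy the converged $\lambda$ is almost independent of $Q^{2}$, mirroring the weak dependence quoted in the text.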
2 Key Issues The naive QCD cross section in the double-leading-log approximation (DLLA) [4] (static wavefunctions, no skewedness or rescaling, etc.) disagrees with the first ZEUS [5] and H1 [6] data by a factor of 5-10, and it is necessary to take corrections to this asymptotic formula into account. The experimental signal is exclusive diffractive: only the muons, to which the $\Upsilon$ states decay, are observed in the main detector. The resolution of the muon chambers is such that the three $S$-states are not resolved, so the actual measurement corresponds to the sum of the products of the production cross sections for each state times their respective branching ratios to muons. In calculating the cross section we use the hard (small average $b^{2}$) hybrid wavefunctions of [2] for the vector mesons. These are given by boosted Schrödinger quarkonium wavefunctions, modified at small $b^{2}$ to impose QCD-type behaviour ($z(1-z)$) and normalised to the decay width to leptons. These hard wavefunctions weight the $b$-integral in equation 1 towards smaller $b^{2}$, where it is suppressed by the $b^{2}$ factor in equation 2. This leads to a $k_{T}^{2}$-suppression (encoded in $C$) which is tempered by the related larger value of $Q^{2}_{eff}$ at which the skewed gluon density is sampled. Typical values for the effective scale at $x\approx 0.01$ are $Q^{2}_{eff}=40,62,76$ GeV${}^{2}$ for $\Upsilon,\Upsilon^{\prime},\Upsilon^{\prime\prime}$ respectively, and the effective scale has a fairly weak $x$-dependence [3]. The skewed kinematics of the amplitude cause us to replace the ordinary gluon density in equation 3 by its skewed, or off-diagonal, equivalent $g(x_{1},\delta,Q^{2}_{eff})$. In practice one must make an ansatz for the skewed distribution at the starting scale and evolve using skewed splitting functions (known only to leading order at present). Concretely, we follow [7] and assume that the dependence on $\delta$ may be neglected at the starting scale.
However, for a sufficiently long evolution (as in this case, with large $Q^{2}_{eff}$) one becomes increasingly insensitive to the details of this starting assumption, and the difference between the skewed and conventional distributions is driven by the evolution. For the leading-order partons that one must use for consistency at present, this leads to an overall enhancement factor of about 2-2.6 in the cross sections for $\Upsilon$-states. The fact that in $\Upsilon$-photoproduction at HERA one is sampling the skewed gluon at rather large scales and relatively high $x$ led us to re-examine the calculation of $\beta$. Usually it is sufficient to relate it to the logarithmic derivative of the gluon distribution, which is valid if this quantity may be fitted by a simple (small) power in $x$ (this works for the $J/\psi$ case). We found that for $\Upsilon$-states, certainly for the lower values of $W^{2}$, it was more appropriate to use a two-power fit to the skewed distribution and use dispersion relations to obtain $\beta$. Overall the two-power fit leads to an additional factor of about 2. A numerical comparison between the two methods may be found in table 5 of [3]. In practice all of these effects are calculated precisely and systematically, at a given value of $W^{2}$, within the computer codes to produce the final result. 3 Results and discussion Taking all the above corrections into account leads to QCD cross sections that are in fair agreement with the first data: see figure 2. The following values have been calculated for the $C$-factors of equation 3: $C(Q^{2}=0)=0.29,0.14,0.08$ for $\Upsilon,\Upsilon^{\prime},\Upsilon^{\prime\prime}$, respectively. This implies that within our model the $\Upsilon(1s)$ is responsible for about $85\%$ of the signal, in contrast to the $70\%$ assumed by the experiments in extracting their $\Upsilon(1s)$ cross sections. As can be clearly seen from the figure, a very strong rise with energy ($W^{1.6-1.7}$) is expected.
This rise is driven by the square of the skewed gluon distribution, sampled at large effective scales, $Q^{2}_{eff}$. Ryskin et al. [8] recently released a preprint on this topic which is in qualitative agreement with our results for the cross sections. However the details of the contributing correction factors of the previous section are different, with the main disagreement coming from the treatment of the wavefunction for the vector meson (see [9] for more details). It is hoped that a global analysis of all hard diffractive vector meson data will help to pin down these uncertainties. Of particular importance is the explicit calculation of the contribution of the higher-order Fock states in the vector meson wavefunctions. Ratios of cross sections within families as a function of $Q^{2}$ and $W^{2}$, e.g. $\psi^{\prime}/(J/\psi)$, are expected to be particularly sensitive to this issue (they should also reveal the effective scales involved). While the presence of skewedness $\delta=(Q^{2}+M_{V}^{2})/(W^{2}+Q^{2})$ is a purely kinematical effect, the enhancement it provides at small $x$ and large momentum scales is due solely to QCD evolution. As such, the enhancement is expected to be a common feature of other exclusive processes which share the same kinematics, for example the electroproduction of $\rho,J/\psi$ at HERA. All of these issues are under active investigation at present [10]. Given the quantum-mechanical nature of the effect of skewedness, it would be interesting to see if one can find a process which is sensitive to destructive interference effects as well as the constructive interference effects found here. With the increasingly precise data being released from H1 [11] and ZEUS [12], heavy vector meson production promises to remain a fascinating area in the next few years. References [1] L. L. Frankfurt, W. Köpf, M. Strikman, Phys. Rev. D54 (1996) 3194. [2] L. L. Frankfurt, W. Köpf, M. Strikman, Phys. Rev. D57 (1998) 512. [3] L. L. Frankfurt, M. F.
McDermott and M. Strikman, JHEP 02 (1999) 002. [4] M. Ryskin, Z. Phys. C57 (1993) 89. [5] ZEUS Collab., Phys. Lett. B437 (1998) 432. [6] H1 Collab., “Observation of $\Upsilon$-Production at HERA”, paper 574, ICHEP98, Vancouver. [7] L. L. Frankfurt et al., Phys. Lett. B418 (1998) 345; Erratum-ibid. Phys. Lett. B429 (1998) 1181. [8] A. Martin, M. Ryskin, T. Teubner, hep-ph/9901420. [9] T. Teubner, these proceedings. [10] L. L. Frankfurt, M. F. McDermott and M. Strikman, in preparation. [11] H1 Collab., DESY-99-026, hep-ex/9903008. [12] ZEUS Collab., hep-ex/9808020.
11institutetext: Fraunhofer IKS, Munich, Germany 11email: {chih-hong.cheng,emmanouil.seferis}@iks.frauhofer.de 22institutetext: Univ. Grenoble Alpes, Verimag, Grenoble, France 22email: {changshun.wu,saddek.bensalem}@univ-grenoble-alpes.fr Prioritizing Corners in OoD Detectors via Symbolic String Manipulation Chih-Hong Cheng 11    Changshun Wu 22    Emmanouil Seferis 11    Saddek Bensalem 22 (The first two authors contributed equally to this work.) Abstract For safety assurance of deep neural networks (DNNs), out-of-distribution (OoD) monitoring techniques are essential, as they filter spurious inputs that are distant from the training dataset. This paper studies the problem of systematically testing OoD monitors to avoid cases where an input data point is classified as in-distribution by the monitor, but the DNN produces spurious output predictions. We consider the definition of “in-distribution” characterized in the feature space by a union of hyperrectangles learned from the training dataset. Thus the testing is reduced to finding corners of hyperrectangles distant from the available training data in the feature space. Concretely, we encode the abstract location of every data point as a finite-length binary string, and the union of all binary strings is stored compactly using binary decision diagrams (BDDs). We demonstrate how to use BDDs to symbolically extract corners distant from all data points within the training set. Apart from test case generation, we explain how to use the proposed corners to fine-tune the DNN to ensure that it does not predict overly confidently. The result is evaluated over examples such as number and traffic sign recognition. Keywords: OoD monitoring · test case prioritization · neural network training.
1 Introduction To cope with practical concerns in autonomous driving, where deep neural networks (DNNs) [7] are operated in an open environment, out-of-distribution (OoD) monitoring is a commonly used technique that raises a warning if a DNN receives an input distant from the training dataset. One weakness of OoD detection concerns inputs that fall inside the OoD detector’s decision boundary while being distant from the training dataset. These inputs are considered “in-distribution” by the OoD detector but can pose safety issues due to extensive extrapolation. In this paper, we address this issue by developing a disciplined method to identify the weaknesses of OoD detectors and improve the system accordingly. Precisely, we consider OoD detectors constructed using boxed abstraction-based approaches [10, 3, 25], where DNN-generated feature vectors from the training dataset are clustered and enclosed using hyperrectangles. The OoD detector raises a warning over an input, provided that its corresponding feature vector falls outside the boxed abstraction. We focus on analyzing the corners of the monitor’s hyperrectangles and differentiate whether a corner is supported or unsupported, depending on whether some input in the training dataset generates a feature vector located in the corner. However, the exponential number of corners in the abstraction raises two challenges, namely (1) how to enumerate the unsupported corners and (2) how to prioritize the unsupported corners to be analyzed. • For (1), we present an encoding technique that, for each feature-vector dimension, decides if an input falls within the border region subject to a closeness threshold $\delta$. This allows encoding each input sample as a binary string and storing the complete set compactly via binary decision diagrams (BDDs) [2]. With an encoding via BDDs, one can compute all unsupported corners using set difference operations.
• For (2), we further present an algorithm operating on the BDDs that allows filtering out all corners that are far from all training data subject to a minimum constant Hamming distance (which may be further translated into Euclidean distance). This forms the basis of our corner prioritization technique for abstractions characterized by a single hyperrectangle. For abstractions with multiple boxes, we use a lazy approach that omits a corner when the corner proposed from one box falls inside another box. Given a corner proposal, we further encounter the practical problem of producing input images that resemble “natural” images. We thus consider an alternative approach: it is feasible to make the DNN generate a prediction with low confidence for any input whose feature vector resembles an unsupported corner. This requirement leads to a DNN fine-tuning scheme as the final contribution of this paper: the fine-tuning freezes the parameters of all network layers before the monitored layer, thereby keeping the OoD monitor valid. However, it allows all layers after the monitored features to be adjusted. Thus the algorithm feeds the unsupported corners to the fine-tunable sub-network to ensure that the modified DNN reports every class with low confidence, while keeping the same predictions for the existing training data. We have evaluated our proposed techniques in applications ranging from standard digit recognition to traffic sign detection. For corners inside the monitor yet distant from the training data, our experiments indicate that the DNN indeed acts over-confidently in the corresponding predictions, which is later adjusted with our local training method. Altogether the positive evaluation of the technique offers a rigorous paradigm to align DNN testing, OoD detection, and DNN repair for safety-critical systems.
The rest of the paper is structured as follows: After reviewing related work in Section 2, we present in Section 3 the basic notation as well as a concise definition of abstraction-based monitors. Subsequently, in Section 4 we present our key results for prioritized corner case proposal in a single-box configuration and its extension to a multi-box setting. In Section 5 we present how to use the discovered corners to improve the DNN via local training. Finally, we present our preliminary evaluation in Section 6 and conclude in Section 7. 2 Related Work Systematic testing of DNNs has been an active research area; readers may refer to Section 5.1 of a recent survey [12] for an overview of existing results. Overall, the line of attack is to first define a coverage criterion, followed by concrete test case generation utilizing techniques such as adversarial perturbation [24], constraint solving [13], or model-based exploration [22]. For white-box coverage criteria, neuron coverage [21] and its extensions (e.g., SS-coverage [23] or neuron combinatorial testing [19]) essentially consider the activation pattern of neurons and demand that the set of test inputs satisfy a pre-defined relative completeness criterion; the idea is essentially motivated by classical software testing coverage (e.g., branch coverage) as used in safety standards. For black-box coverage criteria, multiple results utilize combinatorial testing [4, 1], where, by first defining human-specified features in the input space, it is also possible to argue the relative completeness of the test data. For the above metrics, one can apply coverage-driven testing, i.e., generate test cases that maximally increase coverage. Note that the above test metrics and the associated test case prioritization techniques are not property-oriented, i.e., prioritizing the test cases does not have a direct relation to dependability attributes.
This is in contrast to our work on testing the decision boundary of a DNN monitor, where our test prioritization scheme prefers corners (of the monitor) that have no input data close by. These corners refer to regions where DNN decisions are largely extrapolated, and it is important to ensure that inputs that may lead to these corners are properly tested. The second differentiation is that we also consider the subsequent DNN repair scheme (via local training) to incorporate the distant-yet-uncovered corners. In this paper, we are interested in testing the monitors built from an abstraction of feature vectors from the training data, where the shape of the abstraction is a union of hyperrectangles [10, 3, 25]. There exist also other types of monitors. The most typical runtime monitoring approach for DNNs is to build a logic on top of the DNN, where the logic inspects some of the DNN features and tries to assess the decision quality. Popular approaches in this direction are the baseline of Hendrycks et al. [9], which looks at the output softmax value and flags an input as problematic if the value is lower than a threshold, or the ODIN approach that improves on it using temperature scaling [18]. Further, [17] looks at intermediate layers of a DNN and assumes that their features are approximately Gaussian-distributed. With that, they use the Mahalanobis distance as a confidence score for adversarial or OoD detection. The work of [17] is considered the practical state-of-the-art in the domain. In another direction, researchers have attempted to measure the uncertainty of a DNN in its decisions, using Bayesian approaches such as dropout at runtime [6] and ensemble learning. Deep Ensembles [15] achieve state-of-the-art uncertainty estimation but at a large computational overhead (since one needs to train many models); recent work thus attempts to mitigate this with various ideas [5, 8].
Although the above results surely have their benefits, for complex monitoring techniques the decision boundary is never a single value but rather a complex geometric shape. For this, we observe a strong need for systematic testing over the decision boundaries (for rejecting an input or not), which is reflected in this work by testing or training against unsupported corners of a monitor. 3 Preliminaries Let $\mathbb{N}$ and $\mathbb{R}$ be the sets of natural and real numbers. To refer to integer intervals, we use $[{a}\cdots{b}]$ with $a,b\in\mathbb{N}$ and $a\leq b$. To refer to real intervals, we use $[{a},{b}]$ with $a,b\in\mathbb{R}\cup\{{-\infty},\infty\}$ and, if $a,b\in\mathbb{R}$, then $a\leq b$. We use square brackets when both endpoints are included, and round brackets to exclude endpoints (e.g., $[a,b)$ for excluding $b$). For $n\in\mathbb{N}\setminus\{0\}$, $\mathbb{R}^{n}\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}\underbrace{\mathbb{R}\times\cdots\times\mathbb{R}}_{n\ \text{times}}$ is the space of real coordinates of dimension $n$ and its elements are called $n$-dimensional vectors. We use $\mathbf{x}=(x_{1},\ldots,x_{n})$ to denote an $n$-dimensional vector. Feedforward Neural Networks. A neuron is an elementary mathematical function. A (feedforward) neural network $f\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}(g^{L},\ldots,g^{1})$ is a sequential structure of $L\in\mathbb{N}\setminus\{0\}$ layers, where, for $i\in[{1}\cdots{L}]$, the $i$-th layer comprises $d_{i}$ neurons and implements a function $g^{i}:\mathbb{R}^{d_{i-1}}\rightarrow\mathbb{R}^{d_{i}}$. The inputs of neurons at layer $i$ comprise (1) the outputs of neurons at layer $(i-1)$ and (2) a bias. The outputs of neurons at layer $i$ are inputs for neurons at layer $i+1$. Given a network input $\mathbf{x}\in\mathbb{R}^{d_{0}}$, the output at the $i$-th layer is computed by the function composition $f^{i}(\mathbf{x})\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}g^{i}(\cdots g^{2}(g^{1}(\mathbf{x})))$.
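The layer composition $f^{i}(\mathbf{x})=g^{i}(\cdots g^{2}(g^{1}(\mathbf{x})))$ can be sketched as follows; the weights and activation below are illustrative toy choices, not the networks evaluated in the paper:

```python
def layer(W, b, act):
    """One layer g^i: x -> act(W x + b); W is a list of weight rows."""
    def g(x):
        return [act(sum(w_jk * x_k for w_jk, x_k in zip(row, x)) + b_j)
                for row, b_j in zip(W, b)]
    return g

def f_i(layers, x, i):
    """f^i(x) = g^i(... g^2(g^1(x))): output at the i-th layer (1-indexed
    as in the text); f_i(layers, x, i)[j-1] plays the role of f^i_j(x)."""
    for g in layers[:i]:
        x = g(x)
    return x

# Toy 2-layer network with identity weights and ReLU activation.
relu = lambda v: max(v, 0.0)
net = [layer([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0], relu)] * 2
out = f_i(net, [1.0, -2.0], 2)  # ReLU zeroes the negative entry
```

The intermediate outputs $f^{l}(\mathbf{x})$ produced this way are exactly the feature vectors the monitor of the next paragraphs observes.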
Therefore, $f^{L}(\mathbf{x})$ is the output of the neural network. We use $f^{i}_{j}(\mathbf{x})$ to extract the $j$-th value from the vector $f^{i}(\mathbf{x})$. Abstraction-based Monitors using Boxes [10, 3, 25]. In the following, we present a simplified definition of abstraction-based monitors using multiple boxes. The definition is simplified in that we assume the monitor operates on all neurons within a given layer, but the technique is generic and can be used to monitor a subset of neurons across multiple layers. For a neural network $f$ whose weights and biases are fixed, let $\mathcal{D}_{train}\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}\{(\mathbf{x},\mathbf{y})\;|\;\mathbf{x}\in\mathbb{R}^{d_{0}},\mathbf{y}\in\mathbb{R}^{d_{L}}\}$ be the corresponding training dataset. We call $B\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}\big{[}[{a_{1}},{b_{1}}],\cdots,[{a_{n}},{b_{n}}]\big{]}$ an $n$-dimensional box, where $B$ is the set of points $\{(x_{1},\ldots,x_{n})\}\subseteq\mathbb{R}^{n}$ with $\forall i\in[{1}\cdots{n}]:x_{i}\in[{a_{i}},{b_{i}}]$. Given a neural network $f$ and the corresponding training dataset, let $k$ be a positive integer constant and $l\in[{1}\cdots{L}]$. Then $\mathcal{B}_{k,l,\delta}\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}\{B_{1},\ldots,B_{k}\}$ is a $k$-boxed abstraction monitor over layer $l$ with buffer vector $\delta\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}(\delta_{1},\ldots,\delta_{d_{l}})$, provided that $\mathcal{B}_{k,l,\delta}$ satisfies the following properties. 1. $\forall i\in[{1}\cdots{k}]$, $B_{i}$ is a $d_{l}$-dimensional box. 2. $\forall(\mathbf{x},\mathbf{y})\in\mathcal{D}_{train}$, there exists $i\in[{1}\cdots{k}]$ such that $f^{l}(\mathbf{x})\in B_{i}$. 3. $\forall i\in[{1}\cdots{k}]$, let $B_{i}$ be $\big{[}[{a_{1}},{b_{1}}],\cdots,[{a_{d_{l}}},{b_{d_{l}}}]\big{]}$.
Then • for every $j\in[{1}\cdots{d_{l}}]$, there exists $(\mathbf{x},\mathbf{y})\in\mathcal{D}_{train}$ such that $a_{j}\leq f^{l}_{j}(\mathbf{x})\leq a_{j}+\delta_{j}$, and • for every $j\in[{1}\cdots{d_{l}}]$, there exists $(\mathbf{x^{\prime}},\mathbf{y^{\prime}})\in\mathcal{D}_{train}$ such that $b_{j}-\delta_{j}\leq f^{l}_{j}(\mathbf{x^{\prime}})\leq b_{j}$. The three conditions stated above can be intuitively explained as follows: Condition (1) ensures that every box is well formed; condition (2) ensures that for any training data point, its feature vector at the $l$-th layer falls into one of the boxes; and condition (3) states that the construction of boxes is relatively tight in that, for any dimension $j$, there exists one training data point whose $j$-th feature-vector component is close to (subject to $\delta_{j}$) the $j$-th lower bound of the box; the same holds for the $j$-th upper bound. Monitoring. Given a neural network $f$ and the boxed abstraction monitor $\mathcal{B}_{k,l,\delta}$, at runtime the monitor rejects an input $\mathbf{x^{\prime}}$ if $\not\exists i\in[{1}\cdots{k}]:f^{l}(\mathbf{x^{\prime}})\in B_{i}$. That is, the feature vector of $\mathbf{x^{\prime}}$ at the $l$-th layer is not contained in any box. As the containment check $f^{l}(\mathbf{x^{\prime}})\in B_{i}$ simply compares $f^{l}(\mathbf{x^{\prime}})$ against the box’s lower and upper bounds in each dimension, it can be done in time linear in the number of neurons being monitored. Example 1 Consider the set $\{f^{l}(\mathbf{x})\ |\ (\mathbf{x},\mathbf{y})\in\mathcal{D}_{train}\}=\{(0.1,2.9),(0.3,2.6),(0.6,2.3),(0.8,2.8),(0.9,2.1),(2.1,0.1),(2.2,0.7),(2.3,0.3),(2.6,0.6),(2.9,0.2),(2.7,0.9)\}$ of feature vectors obtained at a layer $l$ that has only two neurons: Fig. 1 shows $\mathcal{B}_{2,l,\delta}=\{\big{[}[{0},{1}],[{2},{3}]\big{]},\big{[}[{2},{3}],[{0},{1}]\big{]}\}$, a $2$-boxed abstraction monitor with $\delta=(0.15,0.15)$. The area influenced by $\delta$ is visualized in yellow.
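The runtime containment check, linear in the number of monitored neurons, can be sketched in Python using the two boxes of Example 1:

```python
def in_box(v, box):
    """box: list of (low, high) pairs, one per monitored dimension.
    The check is a per-dimension bound comparison, hence linear time."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(v, box))

def monitor_accepts(v, boxes):
    """The monitor rejects a feature vector v iff no box contains it."""
    return any(in_box(v, box) for box in boxes)

# The 2-boxed abstraction of Example 1.
boxes = [[(0.0, 1.0), (2.0, 3.0)], [(2.0, 3.0), (0.0, 1.0)]]
```

For instance, the feature vector $(0.5, 2.5)$ lies in the first box and is accepted, while $(1.5, 1.5)$ lies in neither box and is rejected.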
Corners within monitors. As a monitor built from a boxed abstraction only rejects an input if the feature vector falls outside the boxes, the borders of the boxes actually serve as a proxy for the boundary of the operational design domain (ODD): anything inside a box is considered acceptable. With this concept in mind, we are interested in finding test inputs that lead to corners of these boxes. As shown in Fig. 1, for the box $\big{[}[{0},{1}],[{2},{3}]\big{]}$, the bottom-left corner is not occupied by a feature vector produced from any training data point. We now make the definition of corners precise. Given a box $B_{i}=\big{[}[{a_{1}},{b_{1}}],\cdots,[{a_{d_{l}}},{b_{d_{l}}}]\big{]}\in\mathcal{B}_{k,l,\delta}$, the set of corners associated with $B_{i}$ is $C_{B_{i}}\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}\{\big{[}[{\alpha_{1}},{\beta_{1}}],\cdots,[{\alpha_{d_{l}}},{\beta_{d_{l}}}]\big{]}\}$ where $\forall j\in[{1}\cdots{d_{l}}]$, either • $[{\alpha_{j}},{\beta_{j}}]=[{a_{j}},{a_{j}+\delta_{j}}]$, or • $[{\alpha_{j}},{\beta_{j}}]=[{b_{j}-\delta_{j}},{b_{j}}]$. Unsurprisingly, the lemma below recalls the well-known problem of combinatorial explosion: the number of corners, although linear in the number of boxes, is exponential in the number of dimensions. Lemma 1 Given $\mathcal{B}_{k,l,\delta}$, $\sum^{k}_{i=1}|C_{B_{i}}|$, i.e., the total number of corners associated with the monitor, equals $k\cdot 2^{d_{l}}$. Given the set $C_{B_{i}}$ of corners associated with $B_{i}$, define $C^{s}_{B_{i}}\subseteq C_{B_{i}}$ to be the (training-data) supported corners where for each $\big{[}[{\alpha_{1}},{\beta_{1}}],\cdots,[{\alpha_{d_{l}}},{\beta_{d_{l}}}]\big{]}$ in $C^{s}_{B_{i}}$, there exists $(\mathbf{x},\mathbf{y})\in\mathcal{D}_{train}$ such that $\forall j\in[{1}\cdots{d_{l}}]:f^{l}_{j}(\mathbf{x})\in[{\alpha_{j}},{\beta_{j}}]$.
The set of (training-data) unsupported corners $C^{u}_{B_{i}}$ is the set complement, i.e., $C^{u}_{B_{i}}\stackrel{{\scriptstyle\mathrm{\scriptscriptstyle{def}}}}{{=}}C_{B_{i}}\setminus C^{s}_{B_{i}}$. As an example, consider the box $B_{i}$ in Fig. 2(a). The set of unsupported corners $C^{u}_{B_{i}}$ is $\{\big{[}[a_{1},a_{1}+\delta_{1}],[b_{2}-\delta_{2},b_{2}]\big{]},\big{[}[b_{1}-\delta_{1},b_{1}],[a_{2},a_{2}+\delta_{2}]\big{]}\}$, i.e., the top-left corner and the bottom-right corner. An unsupported corner reflects the possibility of having an input $\mathbf{x}_{op}$ at operation time whose DNN-computed $l$-th-layer feature vector $f^{l}(\mathbf{x}_{op})$ falls into the corner of the monitor. Such a corner reflects additional risk: we do not know the prediction result, yet the monitor will not reject the input. Lemma 1 implies that when we only have a finite budget for testing unsupported corners, we need to develop methods to prioritize them, as detailed in the following sections. 4 Unsupported Corner Prioritization under Single-Boxed Abstraction We first consider the special case where only one box is used in the monitoring. That is, we consider $\mathcal{B}_{1,l,\delta}=\{B\}$ where $B=\{(x_{1},\ldots,x_{d_{l}})\,|\,x_{1}\in[{a_{1}},{b_{1}}],\ldots,x_{d_{l}}\in[{a_{d_{l}}},{b_{d_{l}}}]\}$. The workflow is to first encode feature vectors at the $l$-th layer into fixed-length binary strings, in order to derive the set of unsupported corners, and then to prioritize the unsupported corners via Hamming distance-based filtering. The algorithm stated in this section serves as the foundation for the general multi-boxed monitor setting detailed in later sections. 4.1 Encoding feature vectors using binary strings Given a finite-length Boolean string $\mathbf{b}\in\{0,1\}^{*}$, we use $\mathbf{b}_{[{i}\cdots{j}]}$ to denote the substring indexed from $i$ to $j$.
For a single-boxed monitor $\mathcal{B}_{1,l,\delta}=\{B\}$ constructed from $\mathcal{D}_{train}$, let the $\phi$-bit encoding ($\phi\geq 2$) be a function ${\textsf{{enc}}}^{\phi}:\mathbb{R}^{d_{l}}\rightarrow\{0,1\}^{\phi\cdot{d_{l}}}$ that, for any $\mathbf{x}\in\mathcal{D}_{train}$, translates the feature vector $f^{l}(\mathbf{x})$ to a Boolean string $\mathbf{b}$ (of length $\phi\cdot{d_{l}}$) using the following operation: $\forall j\in[{1}\cdots{d_{l}}]$, • if $f^{l}_{j}(\mathbf{x})\in[a_{j},a_{j}+\delta_{j}]$, then $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}=\underbrace{0\cdots 0}_{\phi\ \text{times}}$; • else if $f^{l}_{j}(\mathbf{x})\in[b_{j}-\delta_{j},b_{j}]$, then $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}=\underbrace{1\cdots 1}_{\phi\ \text{times}}$; • otherwise, $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}=\underbrace{0\cdots 0}_{\phi-\tau\ \text{times}}\underbrace{1\cdots 1}_{\tau\ \text{times}}$ when $f^{l}_{j}(\mathbf{x})\in[a_{j}+\delta_{j}+\frac{(\tau-1)(b_{j}-a_{j}-2\delta_{j})}{\phi-1},a_{j}+\delta_{j}+\frac{\tau(b_{j}-a_{j}-2\delta_{j})}{\phi-1})$. The $\phi$-bit encoding essentially considers $f^{l}(\mathbf{x})$ in dimension $j$, assigns the substring all $0$s when $f^{l}_{j}(\mathbf{x})$ falls in the corner region at the lower bound, assigns all $1$s when $f^{l}_{j}(\mathbf{x})$ falls in the corner region at the upper bound, and finally splits the remaining interval of length $b_{j}-a_{j}-2\delta_{j}$ into $\phi-1$ equally sized intervals and assigns each interval an encoding. Fig. 2 illustrates the result of $2$-bit and $3$-bit partitioning under a 2-dimensional boxed monitor. For the point $\mathbf{x}$ in Fig. 2(b), ${\textsf{{enc}}}^{3}(f^{l}(\mathbf{x}))=011111$. The first part “$011$” arises because, for $\tau=2$, $f^{l}_{1}(\mathbf{x})\in[a_{1}+\delta_{1}+\frac{(2-1)(b_{1}-a_{1}-2\delta_{1})}{3-1},a_{1}+\delta_{1}+\frac{2(b_{1}-a_{1}-2\delta_{1})}{3-1})$.
The second part “$111$” arises because $f^{l}_{2}(\mathbf{x})\in[b_{2}-\delta_{2},b_{2}]$. Given an input $\mathbf{x}$ and its computed feature vector $f^{l}(\mathbf{x})$, the time required for performing the $\phi$-bit encoding is low-degree polynomial with respect to $d_{l}$ and $\phi$. 4.2 BDD encoding and prioritizing the unsupported corners This section presents Algorithm 1, a BDD-based algorithm for identifying unsupported corners. To ease understanding, we separate the algorithm into three parts. A: Encode the complete training dataset. Given the training dataset $\mathcal{D}_{train}$ and the DNN function $f$, one can easily compute $\{\mathbf{b}\ |\ \mathbf{b}={\textsf{{enc}}}^{\phi}(f^{l}(\mathbf{x}))\ \text{where}\ \mathbf{x}\in\mathcal{D}_{train}\}$ as the set of all binary strings characterizing the complete training dataset. As each element in the set is a fixed-length binary string, the set can be compactly stored using binary decision diagrams. Precisely, as the length of a binary string $\mathbf{b}={\textsf{{enc}}}^{\phi}(f^{l}(\mathbf{x}))$ equals $\phi\cdot d_{l}$, in our encoding we use $\phi\cdot d_{l}$ BDD variables, denoted as ${\textsf{{bv}}}_{1},\ldots,{\textsf{{bv}}}_{\phi d_{l}}$, such that ${\textsf{{bv}}}_{i}={\textsf{{true}}}\ \text{iff}\ \mathbf{b}_{[{i}\cdots{i}]}=1$. Line $1$ of Algorithm 1 performs this declaration. Lines 2 to 9 perform the BDD encoding and the creation of the set $S_{train}$ containing all binary strings created from the training set. Initially (line 2), $S_{train}$ is set to be the empty set. Subsequently, we generate the binary string (line 4) and encode a set $S_{\mathbf{b}}$ which contains only that binary string (lines 5-8). Finally, $S_{\mathbf{b}}$ is added to $S_{train}$ (line 9). B: Derive the set of unsupported corners. Lines 10 to 17 of Algorithm 1 compute $S_{unsup}$, where each binary string in $S_{unsup}$ corresponds to an unsupported corner.
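Before turning to the set operations, the $\phi$-bit encoding of Section 4.1 can be sketched as follows. This is an illustrative implementation, not the authors' code; the box bounds and $\delta$ in the usage line are assumed values chosen only to mimic Fig. 2(b):

```python
def enc(v, box, delta, phi):
    """phi-bit encoding (phi >= 2) of a feature vector v w.r.t. one box.
    box: list of (a_j, b_j) bounds; delta: per-dimension buffer widths.
    Implements the three cases of Section 4.1."""
    assert phi >= 2
    bits = []
    for x, (a, b), d in zip(v, box, delta):
        if x <= a + d:                 # lower-bound corner region -> all 0s
            bits.append("0" * phi)
        elif x >= b - d:               # upper-bound corner region -> all 1s
            bits.append("1" * phi)
        else:                          # interior split into phi-1 slices
            width = (b - a - 2 * d) / (phi - 1)
            tau = min(int((x - (a + d)) // width) + 1, phi - 1)
            bits.append("0" * (phi - tau) + "1" * tau)
    return "".join(bits)

# Assumed box [0,3] x [0,3] with delta = (0.15, 0.15), phi = 3.
code = enc((2.0, 2.9), [(0.0, 3.0), (0.0, 3.0)], (0.15, 0.15), 3)
```

With these assumed bounds, a point near the upper-right region encodes to "011111", matching the worked example in the text.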
The set is computed by a set difference operation (line 17) between the set of all corners $S_{all.corners}$ and $S_{train}$. Following the encoding in Section 4.1, we know that the set of all corners corresponds to $\{\underbrace{0\cdots 0}_{\phi\ \text{times}},\underbrace{1\cdots 1}_{\phi\ \text{times}}\}^{d_{l}}$. As an example, in Fig. 2(b), the set of all corners equals $\{000000,000111,111000,111111\}$. Lines 10 to 16 of Algorithm 1 describe how such a construction can be done symbolically using BDDs, where the number of BDD operations triggered is linear in $\phi\cdot d_{l}$. The set $S_{j0s}$, after the inner loop (lines 13-15), contains the set of all possible Boolean words with the restriction that $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}$ equals $\underbrace{0\cdots 0}_{\phi\ \text{times}}$ (similarly $S_{j1s}$ for all $1$s). The “BDD.or” operation at line 16 performs a set union between $S_{j0s}$ and $S_{j1s}$, to explicitly allow both possibilities within $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}$. C: Filter unsupported corners that are close to training data. Although at line 17 of Algorithm 1 all unsupported corners are stored compactly inside the BDD, Lemma 1 suggests that the number of unsupported corners can still be exponential. Therefore, we are interested in further filtering out some unsupported corners and keeping only those that are distant from the training data. Consider again the example in Fig. 2(b), where $S_{unsup}$ is the symbolic representation of two strings, namely • $000111$ reflecting the top-left corner, and • $111000$ reflecting the bottom-right corner. The algorithm should thus keep $000111$ and filter out $111000$, as the bottom-right corner has a training data point $\mathbf{x}^{\prime}$ close by.
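To make parts A and B concrete, explicit Python sets can stand in for the BDDs (the BDD representation buys compactness, not different semantics). The training-set encodings below are assumed values consistent with the running example of Fig. 2(b), in which corners $000000$ and $111111$ are supported:

```python
from itertools import product

def all_corner_strings(phi, d):
    """Encodings of all 2^d corners of one box: {0^phi, 1^phi}^d."""
    return {"".join(blocks)
            for blocks in product(("0" * phi, "1" * phi), repeat=d)}

def unsupported_corners(phi, d, s_train):
    """Part B as a set difference: corners carrying no training encoding."""
    return all_corner_strings(phi, d) - s_train

# Assumed training encodings for the running example (phi = 3, d_l = 2).
s_train = {"000000", "111111", "011111", "011000"}
s_unsup = unsupported_corners(3, 2, s_train)
```

On this toy data the difference yields exactly the two strings discussed in the text, $000111$ and $111000$.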
The final part of Algorithm 1 (starting at line 18) describes how to perform such an operation symbolically by utilizing the Hamming distance at the binary-string level. Consider again the example in Fig. 2(b), where for the training data point $\mathbf{x}^{\prime}$, ${\textsf{{enc}}}^{3}(f^{l}(\mathbf{x}^{\prime}))=011000$. The Hamming distance between “$011000$” and the bottom-right corner encoding “$111000$” equals $1$. For the top-left corner, whose encoding is $000111$, the closest training data points have encodings (e.g., $011111$ for $\mathbf{x}$) at a Hamming distance of $2$. Therefore, by filtering out the elements at Hamming distance $1$, only the top-left corner is kept. Within Algorithm 1, line 18 maintains $S^{\leq\Delta}_{train}$ as a BDD storing every binary string for which some binary string in $S_{train}$ is at Hamming distance at most $\Delta$. Initially, $S^{\leq\Delta}_{train}$ is set to $S_{train}$, reflecting the case of Hamming distance $0$. The loop at line 19 is executed $\Delta$ times to gradually enlarge $S^{\leq\Delta}_{train}$ to cover strings with Hamming distance from $1$ up to $\Delta$. Within the loop, first a local copy $S_{local}$ is created (line 20). Subsequently, enlarging the set by Hamming distance $1$ is done by the inner loop at lines 21-22: for each variable index $m$, perform existential quantification over the local copy to get the set of binary strings that is insensitive to variable ${\textsf{{bv}}}_{m}$. As an example, if $S_{local}=\{011000\}$, then performing existential quantification on the first variable generates the set $\{\theta 11000\;|\;\theta\in\{0,1\}\}$, and performing existential quantification on the second variable generates another set $\{0\theta 1000\;|\;\theta\in\{0,1\}\}$. A union over all these newly generated sets returns the set of strings whose Hamming distance to the original “$011000$” is at most $1$.
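The enlargement step has the following set-level analogue (a sketch; the actual algorithm performs the same enlargement symbolically on the BDD via existential quantification):

```python
def expand_by_one(strings):
    """Enlarge a set of equal-length binary strings by Hamming distance 1:
    flipping bit position m in every string mirrors existentially
    quantifying variable bv_m and taking the union over all positions."""
    out = set(strings)
    for s in strings:
        for m in range(len(s)):
            flipped = s[:m] + ("1" if s[m] == "0" else "0") + s[m + 1:]
            out.add(flipped)
    return out

within_1 = expand_by_one({"011000"})  # the original string plus 6 neighbours
```

Applying the filtering of line 23 to the running example, the set difference $\{000111, 111000\} \setminus$ `within_1` keeps only $000111$, as in the text.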
Finally, line 23 performs another set difference to remove elements in $S_{unsup}$ that are present in $S^{\leq\Delta}_{train}$, and the resulting set is returned as the output of the algorithm. 4.3 Corner prioritization with multi-boxed abstraction monitors In the previous section, we focused on finding corners within a box that are distant (in terms of Hamming distance) from the DNN-computed feature vectors of the training dataset. Nevertheless, when the monitor uses multiple boxes, is it possible that the corner being prioritized in one box has already been covered by another box? An example can be found in Fig. 3, where the monitor contains two boxes $B_{1}$ and $B_{2}$. If the algorithm applied on $B_{1}$ proposes corner $c_{1}$ to be tested, it would be a waste, as $c_{1}$ lies inside $B_{2}$. We propose a lazy approach to mitigate this problem: whenever a corner proposal is created from one box, use a strengthened condition and check whether some part of the corner is deep inside another box (subject to $\delta$). Precisely, given $\mathcal{B}_{k,l,\mathbf{\delta}}$, provided that Algorithm 1 applied on $B_{i}=\big{[}[{a_{1}},{b_{1}}],\cdots,[{a_{d_{l}}},{b_{d_{l}}}]\big{]}\in\mathcal{B}_{k,l,\mathbf{\delta}}$ suggests an unsupported corner $c\in C^{u}_{B_{i}}$ whose corresponding binary string equals $\mathbf{b}$, conduct the following: 1. Given $\mathbf{b}$, find a vertex $\mathbf{v}=(v_{1},\ldots,v_{d_{l}})$ in box $B_{i}$ that is also in the proposed corner $c$. Precisely, for all $j\in[{1}\cdots{d_{l}}]$, • if $\mathbf{b}_{[{\phi(j-1)+1}\cdots{\phi j}]}=\underbrace{0\cdots 0}_{\phi\ \text{times}}$, set $v_{j}$ to be $a_{j}$. • Otherwise, set $v_{j}$ to be $b_{j}$. 2.
Discard the corner proposal on $c$ whenever there exists $B_{i^{\prime}}=\big{[}[{a^{\prime}_{1}},{b^{\prime}_{1}}],\cdots,\\ [{a^{\prime}_{d_{l}}},{b^{\prime}_{d_{l}}}]\big{]}\in\mathcal{B}_{k,l,\mathbf{\delta}}$, $i^{\prime}\neq i$, such that the following holds: $\forall j\in[{1}\cdots{d_{l}}]:a^{\prime}_{j}+\delta_{j}<v_{j}<b^{\prime}_{j}-\delta_{j}$. The time complexity for rejecting a corner proposal is a low-degree polynomial: • For step (1), assigning all $v_{j}$ takes time $\mathcal{O}(d_{l})$. • For step (2), the containment check is done on every other box (the number of boxes equals $k$) over all dimensions (size $d_{l}$), leading to the time complexity $\mathcal{O}(k\cdot d_{l})$. 5 Improving the DNN against the Unsupported Corners As unsupported corners represent regions in the monitor where no training data is close by, any input whose feature vector falls in such a corner will not be rejected by the monitor, leading to safety concerns if the prediction is incorrect. For classification tasks, one possible mitigation is to explicitly ensure that any input whose feature vector falls in an unsupported corner does not cause the DNN to generate a strong prediction for a particular class. As an example, if the DNN $f$ is used for digit recognition and $d_{L}$ equals $10$, with each $f^{(L)}_{i}$ indicating the probability of the digit being $i-1$, it is desirable for an input $\mathbf{x}$ whose feature vector falls inside an unsupported corner to produce $f^{(L)}_{1}(\mathbf{x})\cong f^{(L)}_{2}(\mathbf{x})\cong\ldots\cong f^{(L)}_{10}(\mathbf{x})\cong 0.1$, i.e., the DNN is not certain which class this input belongs to. One can naively retrain the complete DNN against such an input $\mathbf{x}$. Nevertheless, if the DNN is completely retrained, the created monitor $\mathcal{B}_{k,l,\delta}$ is no longer valid, as the parameters before layer $l$ have been changed by the re-training.
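The two-step corner-discarding check of Section 4.3 can be sketched in plain Python (a sketch, not the paper's implementation; representing a box as a list of $(a_j,b_j)$ interval pairs is an assumption of this sketch):

```python
def corner_vertex(b, phi, box):
    """Step (1): map a corner's binary string b to the box vertex it
    contains. `box` is a list of (a_j, b_j) interval pairs."""
    return [lo if b[phi * j:phi * (j + 1)] == "0" * phi else hi
            for j, (lo, hi) in enumerate(box)]

def discard_proposal(v, boxes, deltas, skip_index):
    """Step (2): discard the proposal if vertex v lies strictly inside
    some other box shrunk by the buffer deltas (O(k * d_l) overall)."""
    return any(
        all(lo + d < vj < hi - d for vj, (lo, hi), d in zip(v, box, deltas))
        for i, box in enumerate(boxes) if i != skip_index)
```

For two overlapping boxes $B_1=[0,4]\times[0,4]$ and $B_2=[3,10]\times[3,10]$ with buffer $\delta=(0.5,0.5)$, the all-1s corner of $B_1$ maps to the vertex $(4,4)$, which is deep inside $B_2$, so its proposal is discarded, while the all-0s corner at $(0,0)$ is kept.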
To address this issue, Algorithm 2 presents a local DNN modification scheme (for simplicity, we only show the algorithm for the 1-boxed abstraction monitor; extensions for multi-boxed abstraction monitors can follow the same paradigm stated in Section 4.3) where the re-training is only done between layers $l+1$ and $L$. As the new DNN shares the same function with the existing one from layer $1$ to layer $l$, the previously constructed 1-boxed monitor remains applicable to the new DNN. As re-training is only done over a sub-network between layers $l+1$ and $L$, the input for training the sub-network is the output of layer $l$. Therefore, as reflected at line 1, one prepares a new training dataset where the input is $f^{l}(\mathbf{x})$. The input for Algorithm 2 also contains $S$, a subset of the unsupported corners derived from Algorithm 1. Lines 2 to 6 translate each binary string in $S$ into an unsupported corner (line 3) and sample $\rho$ points (line 4) to be added to the new training dataset. As stated in the previous paragraph, we wish the outputs for these points to be unbiased toward any class. Therefore, as stated at line 6, the corresponding label, under the assumption that $\mathcal{D}_{train}$ uses one-hot encoding, should be $(\frac{1}{d_{L}},\ldots,\frac{1}{d_{L}})$. 6 Evaluation This section aims to experimentally answer two questions about the unsupported corners generated by the method in Section 4. The first question regards the behavior of feature vectors in the unsupported corners as reflected in the output (Section 6.1). The second question regards generating inputs that can lead to these unsupported corners (Section 6.2). Specifically, we consider monitors built on the penultimate layer of two neural networks, trained on the benchmarks MNIST [16] and GTSRB [11], respectively, to classify handwritten digits (0-9) and traffic signs. Following Algorithm 1, we first encode the monitors’ supported corners using a BDD representation.
Subsequently, we compute the unsupported corners using symbolic set difference operations. We use PyTorch (https://pytorch.org/) to train the DNNs and the Python-based BDD library dd (https://github.com/tulip-control/dd) for encoding the binary strings into the BDD. 6.1 Understanding unsupported corners This subsection focuses on understanding the output softmax (probability) values for the feature vectors from unsupported corners. We take $m$ unsupported corners and uniformly pick $\rho$ samples from each corresponding corner. The hyper-parameters used in the experiments are shown in Table 1. We first examine whether the DNN can output overconfident softmax values for these samples. From the statistical results, as shown in the left part of Fig. 4, one can find that samples from many unsupported corners (with Hamming distance larger than $3$ from the training dataset) are assigned a high softmax value. This confirms our conjecture that additional local training is needed to suppress high-confidence outputs on unsupported corners. After applying Algorithm 2 for fine-tuning the after-monitored-layer sub-network, these unsupported cases are all assigned an averaged softmax value of $\frac{1}{10}$, as shown in the right part of Fig. 4. Interestingly, the fine-tuning does not deteriorate the accuracy of the neural network on the original training and test sets: we observe a shift from the original accuracy of $99.34\%$ ($98.8\%$) on the training (test) dataset to a new one of $99.24\%$ ($98.84\%$). Remark 1 The repair of the sub-network essentially equips the original network with an additional ability to identify out-of-distribution samples (around the area of unsupported corners) by observing whether the softmax value of the prediction is close to $\frac{1}{d_{L}}$ or not.
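The dataset augmentation performed by Algorithm 2 (lines 2-6), which drives the fine-tuning described in this subsection, can be sketched as follows: sample $\rho$ feature vectors uniformly inside each unsupported corner and pair them with the maximum-entropy label $(\frac{1}{d_L},\ldots,\frac{1}{d_L})$. Representing a corner as a list of per-dimension intervals is an assumption of this sketch.

```python
import random

def augment_with_corners(corners, rho, d_out, rng=random.Random(0)):
    """For each unsupported corner (a list of (lo, hi) intervals in feature
    space), sample rho points and label each uniformly over d_out classes."""
    uniform_label = [1.0 / d_out] * d_out
    dataset = []
    for corner in corners:
        for _ in range(rho):
            point = [rng.uniform(lo, hi) for lo, hi in corner]
            dataset.append((point, uniform_label))
    return dataset
```

The sub-network between layers $l+1$ and $L$ is then trained on the union of this augmented set and the re-encoded original data $\{(f^l(\mathbf{x}), y)\}$.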
6.2 From test case proposal to test case generation This subsection explores two possibilities for generating inputs that yield features in specific unsupported corners of the monitored layer. • The first method is to check whether the maximum or minimum activation value of each monitored neuron can be attributed to a particular segment or local area of the input, hereafter referred to as a Neuron-Wise-Excited-Input-Feature (NWEIF). If such a connection exists, then, since a corner is a combination of the maximum/minimum activation values of each neuron, a new input can be formed by combining the NWEIFs of each neuron. • The second is to apply optimization techniques. Given an image in the training dataset, perform gradient descent to find a modification of the image such that the modified image generates a feature within a given unsupported corner. 6.2.1 Neuron-wise excited input-feature combination We applied the layer-wise relevance propagation (LRP) [20] technique to interpret the images that attain the five maximum and five minimum activation values of a neuron. LRP is one of the back-propagation techniques used for redistributing neuron activation values at one layer to its preceding layers (possibly up to the input layer). In a nutshell, it explains which parts of the input contribute to the neuron’s activation and to what extent. Discussion The results in Fig. 5 show that it is difficult for humans to compose new inputs based on NWEIFs. The first and second rows in each bold-black block are the original images and the corresponding heat maps interpreted by LRP. Although LRP can help us identify regions or features, it is very difficult to precisely associate one neuron with one specific input feature. We can observe in Fig. 5 that for the 20 km/h speed sign, the area that leads to maximum activation has considerable overlap with the area that leads to minimum activation.
This makes a precise association between neurons and features difficult, justifying the need for other methods such as optimization-based image generation for testing and Algorithm 2 for local training over unsupported corners. 6.2.2 Optimization-based test case generation Finally, we create images corresponding to corners by using an optimization method similar to the ones used for adversarial example generation [24]. Overall, the generated test case should cause the DNN to (1) produce a feature inside the box of the unsupported corner and (2) be confident in predicting a wrong class. In our implementation, these two objectives are integrated into a loss function, which is optimized (by minimizing the loss) with respect to the input image. We refer readers to the appendix for details on how such a method is implemented. Figure 6 illustrates examples of original and perturbed images, where for the bottom-right example, the perturbed image not only falls into a particular corner, but the resulting prediction also changes from the initially correct “$1$” to the incorrect “$4$”. We observe that when the buffer $\delta$ around the box is small, it can be difficult for the adversarial testing method to generate images that fall into a specific corner. However, we are unable to state that it is impossible to generate such an input; the problem can only be answered using formal verification. This further justifies the need for local DNN training. 7 Concluding Remarks In this paper, we address the issue of testing OoD monitors built from boxed abstractions of feature vectors from the training data, and we show how this testing problem can be reduced to finding corners of hyperrectangles distant from the available training data in the feature space.
The key novelty lies in a rigorous method for analyzing the corners of the monitors and detecting whether a corner is supported, i.e., whether some input in the training dataset generates a feature vector located in the corner. To the best of our knowledge, this is the first approach for testing the decision boundary of a DNN monitor where the test prioritization scheme is based on corners (of the monitor) that have no input data close by. The other important result is the DNN repair scheme (via local training) to incorporate the distant-yet-uncovered corners. To this end, we have developed a tool that provides technical solutions for our OoD detectors based on boxed abstractions. Our experiments show the effectiveness of our method in different applications. This work raises a new research direction on the rigorous engineering of DNN monitors to be used in safety-critical applications. An important future direction is the refinement of boxed abstractions: by considering the unrealistic corners, we can refine the abstraction by adding more boxes to remove them. Another direction is to use a probability estimation method to prioritize corners rather than using the Hamming distance. Acknowledgements This work is funded by the Bavarian Ministry for Economic Affairs, Regional Development and Energy as part of a project to support the thematic development of the Fraunhofer Institute for Cognitive Systems. This work is also supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 956123. References [1] S. Abrecht, L. Gauerhof, C. Gladisch, K. Groh, C. Heinzemann, and M. Woehrle. Testing deep learning-based visual perception for automated driving. ACM TCPS, 5(4):1–28, 2021. [2] R. E. Bryant. Symbolic Boolean manipulation with ordered binary-decision diagrams. CSUR, 24(3):293–318, 1992. [3] C.-H. Cheng, C.-H. Huang, T. Brunner, and V. Hashemi. Towards safety verification of direct perception neural networks.
In DATE, pages 1640–1643. IEEE, 2020. [4] C.-H. Cheng, C.-H. Huang, and H. Yasuoka. Quantitative projection coverage for testing ml-enabled autonomous systems. In ATVA, pages 126–142. Springer, 2018. [5] M. Dusenberry, G. Jerfel, Y. Wen, Y. Ma, J. Snoek, K. Heller, B. Lakshminarayanan, and D. Tran. Efficient and scalable bayesian neural nets with rank-1 factors. In ICML, pages 2782–2792. PMLR, 2020. [6] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, pages 1050–1059. PMLR, 2016. [7] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT press, 2016. [8] M. Havasi, R. Jenatton, S. Fort, J. Z. Liu, J. Snoek, B. Lakshminarayanan, A. M. Dai, and D. Tran. Training independent subnetworks for robust prediction. arXiv preprint arXiv:2010.06610, 2020. [9] D. Hendrycks and K. Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136, 2016. [10] T. A. Henzinger, A. Lukina, and C. Schilling. Outside the box: Abstraction-based monitoring of neural networks. arXiv preprint arXiv:1911.09032, 2019. [11] S. Houben, J. Stallkamp, J. Salmen, M. Schlipsing, and C. Igel. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In IJCNN, pages 1–8. IEEE, 2013. [12] X. Huang, D. Kroening, W. Ruan, J. Sharp, Y. Sun, E. Thamo, M. Wu, and X. Yi. A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Computer Science Review, 37:100270, 2020. [13] G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In CAV, pages 97–117. Springer, 2017. [14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [15] B. Lakshminarayanan, A. Pritzel, and C. Blundell. 
Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016. [16] Y. LeCun, C. Cortes, and C. Burges. Mnist handwritten digit database, 2010. [17] K. Lee, K. Lee, H. Lee, and J. Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS, volume 31, 2018. [18] S. Liang, Y. Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690, 2017. [19] L. Ma, F. Zhang, M. Xue, B. Li, Y. Liu, J. Zhao, and Y. Wang. Combinatorial testing for deep learning systems. arXiv preprint arXiv:1806.07723, 2018. [20] G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K.-R. Müller. Layer-wise relevance propagation: an overview. Explainable AI: interpreting, explaining and visualizing deep learning, pages 193–209, 2019. [21] K. Pei, Y. Cao, J. Yang, and S. Jana. Deepxplore: Automated whitebox testing of deep learning systems. In SOSP, pages 1–18. ACM, 2017. [22] V. Riccio and P. Tonella. Model-based exploration of the frontier of behaviours for deep learning system testing. In FSE, pages 876–888. ACM, 2020. [23] Y. Sun, X. Huang, D. Kroening, J. Sharp, M. Hill, and R. Ashmore. Structural test coverage criteria for deep neural networks. ACM TECS, 18(5s):1–23, 2019. [24] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. [25] C. Wu, Y. Falcone, and S. Bensalem. Customizable reference runtime monitoring of neural networks using resolution boxes. arXiv preprint arXiv:2104.14435, 2021. Appendix A. Optimization-based test case generation In this section, we describe our optimization-based method for creating test cases. This method is similar to techniques used for generating adversarial examples in DNNs. 
An adversarial attack tries to perturb an input $\mathbf{x}$ so that its classification changes, while simultaneously staying close to the original input. This is achieved by an optimization method minimizing a loss function that tries to enforce a misclassification. Apart from that, adversarial attacks also try to ensure that the perturbed input remains close enough to the original input (since this is the definition of adversarial examples). For our case, consider an input $\mathbf{x}$ with class $y$, a DNN $f$, and let $f^{(l)}(\mathbf{x})$ be the output of the $l$-th layer we want to monitor and take features from. Let $\mathbf{p}_{c}$ be a point in the specified corner $\mathbf{c}$ that we want to approximate. We consider the following loss function: $loss(\mathbf{x},y,\mathbf{p}_{c})=-\lambda\cdot crossentropy(f^{(L)}(\mathbf{x}),y)+||f^{(l)}(\mathbf{x})-\mathbf{p}_{c}||_{2}$. In this loss, the first term is the standard cross-entropy loss for classification, while the second term is the distance of the $l$-th layer’s features from the corner point $\mathbf{p}_{c}$ with respect to the $L_{2}$ norm. The number $\lambda\geq 0$ is a parameter balancing the two objectives. Hence, we can minimize this loss to generate test cases using the Adam optimizer [14]. To understand this loss function, note that in a standard adversarial attack only the cross-entropy term is present. The attack tries to minimize the term $-\lambda\cdot crossentropy(f^{(L)}(\mathbf{x}),y)$ (equivalently, to maximize the cross-entropy), thus resulting in a misclassification, i.e., a successful adversarial example. With our loss, when it is minimized, we achieve the misclassification objective, while, on the other hand, minimization is also achieved by keeping the distance term $||f^{(l)}(\mathbf{x})-\mathbf{p}_{c}||_{2}$ small. Thus, we achieve a misclassification while $f^{(l)}(\mathbf{x})$ is close to the corner point $\mathbf{p}_{c}$, which is what we aim to achieve.
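The combined loss can be sketched in plain Python (a minimal numerical sketch; `logits_fn` and `features_fn` are hypothetical stand-ins for the DNN's output layer $f^{(L)}$ and monitored layer $f^{(l)}$, and a real implementation would use differentiable tensors, e.g. in PyTorch, so that Adam can optimize the input):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def corner_attack_loss(x, y, p_c, logits_fn, features_fn, lam=1.0):
    """-lam * crossentropy(f^(L)(x), y) + ||f^(l)(x) - p_c||_2."""
    probs = softmax(logits_fn(x))
    cross_entropy = -math.log(probs[y] + 1e-12)
    features = features_fn(x)
    distance = math.sqrt(sum((fi - pi) ** 2 for fi, pi in zip(features, p_c)))
    return -lam * cross_entropy + distance
```

Minimizing this value simultaneously pushes the features toward $\mathbf{p}_{c}$ (small distance term) and pushes the cross-entropy up (misclassification), with $\lambda$ trading off the two objectives.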
Boson stars with generic self-interactions Franz E. Schunck${{}^{1}}$ and Diego F. Torres${{}^{2}}$ ${{}^{1}}$Institut für Theoretische Physik, Universität zu Köln, 50923 Köln, Germany ${{}^{2}}$Departamento de Física, Universidad Nacional de La Plata, C.C. 67, 1900 La Plata, Buenos Aires, Argentina (December 2, 2020) Abstract We study boson star configurations with generic, but not non-topological, self-interaction terms, i.e. we do not restrict ourselves to the standard $\lambda|\psi|^{4}$ interaction but consider more general U(1)-symmetry-preserving profiles. We find that, when compared with the usual potential, similar results for masses and numbers of particles appear. However, the changes are of the order of a few percent of the star masses. We explore the stability properties of the configurations, which we analyze using catastrophe theory. We also study possible observational outputs: gravitational redshifts, rotation curves of accreted particles, and lensing phenomena, and compare with the usual case. I Introduction In the past two decades, the study of the universe in its earlier stages has assigned an important role to scalar fields: conventional models for inflation rely on their potential energy, and topological defects may form when a scalar field breaks some fundamental symmetry. Both of these processes may be responsible for the formation of the large scale structure we now see. In addition, scalar fields connected with time or space-time variation of fundamental constants have produced the most powerful alternative theories of gravitation, known as scalar-tensor theories, and have entered all unification schemes proposed so far. It must be said, however, that no fundamental cosmological scalar field has been directly observed yet.
If one accepts the need for scalar fields in the cosmological scenario, one interesting question naturally arises: may these scalars be the seed of astrophysical structures or of observable phenomena that could signal their existence? Objects made up of massive scalar particles were introduced as early as 1968 by Kaup [1] and Ruffini and Bonazzola [2]. These configurations are now known as boson stars. These stars, contrary to the more common neutron or fermion stars, are supported not by Pauli’s exclusion principle but by Heisenberg’s uncertainty principle, which effectively keeps the scalars from being localized to within their Compton wavelength and prevents their collapse to a black hole. The first investigations with a nonlinear $\psi^{6}$ potential were carried out by Mielke and Scherzer [3]. They derived this potential from a nonlinear Heisenberg-Pauli-Weyl spinor equation and found the first scalar field solutions with nodes. More recently, Colpi et al. [4] proved that the existence of a self-interaction in the boson Lagrangian could yield higher values for the masses of the configurations. This means that for some values of the free parameters existing in the theory, stellar structures of appreciable masses and extremely high density may arise. This could have profound consequences for our understanding of the non-baryonic content of the universe and produce different observational effects: affecting usual objects such as galaxies [5] or even light trajectories in powerful, degree-ranged, microlensing phenomena [6]. These kinds of features, as well as the important point of star rotation (see [7] among others), were thoroughly studied in the past few years, including gauge charged stars, boson-fermion models (see [8] for reviews) and models of stars in alternative theories of gravity [9].
In all these works, the self-interaction was always chosen to be of the form $\lambda|\psi|^{4}$, thus giving a matter sector provided by $${\cal L}_{{\rm m}}=-\frac{1}{2}g^{\mu\nu}\,\partial_{\mu}\psi^{*}\partial_{\nu}\psi-\frac{1}{2}m^{2}|\psi|^{2}-\frac{1}{4}\lambda|\psi|^{4}\,,$$ (1) where $\lambda$ is a constant. However, a priori, there is no reason to maintain just the form adopted for the self-interaction and not a more generic potential term $V(\psi)$. Although, in order to retain the $U(1)$ symmetry of the whole Lagrangian, this potential must be a function of $|\psi|^{2}$, in principle it may have any other particular form. In this work we would like to explore whether substantial variations in the form of the potential may yield appreciable changes in the final configurations, the binding energies, or the masses of boson stars. We also analyze the changes that a different potential introduces in gravitational redshifts and microlensing phenomena. This could be important to discover whether different scenarios for dark matter candidates, explanations of galactic rotation curves [5], or new observational gravitational effects may appear [10]. Additionally, new potentials may strongly affect gravitational memory and evolution scenarios [11] within theories with time variation of Newton’s constant (for the most recent account of this see Ref. [12]). The knowledge that modifications of the boson potential may lead to profound changes in the structure of the stellar objects comes from the works of Friedberg, Lee, and Pang [13, 14, 15] on non-topological soliton stars. However, we point out that a systematic study of their possible observational signatures is still absent. We shall briefly mention their features and compare with our potential choices below. However, our main aim here is to study whether potentials which are not non-topological ones may still introduce significant changes in the configurations. This work is organized as follows.
In the next section we introduce the models for boson stars and search, with different self-interaction terms, for their configurations and stability properties. Section III compares the results of these generic boson star models, concerning observational outputs, with those obtained for the usual $\lambda|\psi|^{4}$ term. We give our conclusions and provide a brief discussion in the final section. II The models We shall take into account the following boson Lagrangian $${\cal L}_{{\rm m}}=-\frac{1}{2}g^{\mu\nu}\,\partial_{\mu}\psi^{*}\partial_{\nu}\psi-\frac{1}{2}U(|\psi|^{2})\,,$$ (2) with the potential $$U(|\psi|^{2})=m^{2}|\psi|^{2}+\frac{1}{2}V(|\psi|^{2})\,.$$ (3) This action possesses an invariance under the global $U(1)$ transformation $\psi\rightarrow e^{i\theta}\psi$, which gives rise to a conserved current $$J^{\mu}=ig^{\mu\nu}\left(\psi^{*}\partial_{\nu}\psi-\psi\partial_{\nu}\psi^{*}\right),$$ (4) and a corresponding conserved charge $$N=\int d^{3}x\sqrt{-g}J^{0}.$$ (5) This conserved quantity can be identified with the number of bosons present in the stellar structure. We shall consider the usual spherically symmetric metric $$ds^{2}=-B(r)dt^{2}+A(r)dr^{2}+r^{2}d\Omega^{2},$$ (6) and shall also demand a spherically symmetric form for the field which describes the boson, i.e. we adopt the ansatz $$\psi(r,t)=\chi(r)\exp{[-i\varpi t]}.$$ (7) This ensures that a static solution is still possible. To establish that this time dependence is the lowest energy solution at a fixed number of particles, it is necessary to make a first order variation $\delta(E-\varpi N)$, where in general $\varpi$ is a Lagrange multiplier associated with the conservation of $N$ and $E$ is the energy (mass) of the system. See for instance the appendix of Ref. [14] for a detailed derivation. This form for $\psi$ is independent of the potential $U$. For a more detailed derivation of the basic model we refer the reader to Refs. [1, 2, 4, 8].
Using the Einstein gravitational action we obtain the field equations, $$\frac{2M^{\prime}}{x^{2}}=\sigma^{2}\left(\frac{\Omega^{2}}{B}+1\right)+\frac{\sigma^{\prime 2}}{A}+\frac{1}{2}\left(\frac{4\pi}{m^{2}M_{{\rm Pl}}^{2}}\right)V(\sigma M_{{\rm Pl}}/\sqrt{4\pi}),$$ (8) $$\frac{B^{\prime}}{ABx}-\frac{1}{x^{2}}\left(1-\frac{1}{A}\right)=\sigma^{2}\left(\frac{\Omega^{2}}{B}-1\right)+\frac{\sigma^{\prime 2}}{A}-\frac{1}{2}\left(\frac{4\pi}{m^{2}M_{{\rm Pl}}^{2}}\right)V(\sigma M_{{\rm Pl}}/\sqrt{4\pi}),$$ (9) and the Klein-Gordon equation for $\psi$, $$\sigma^{\prime\prime}+\sigma^{\prime}\left(\frac{2}{x}-\frac{A^{\prime}}{2A}+\frac{B^{\prime}}{2B}\right)+A\sigma\left(\frac{\Omega^{2}}{B}-1\right)-\frac{1}{4}\left(\frac{4\pi}{m^{2}M_{{\rm Pl}}^{2}}\right)\frac{dV(\sigma M_{{\rm Pl}}/\sqrt{4\pi})}{d\sigma}=0.$$ (10) In these equations, we have used dimensionless units, which coincide with those introduced in the paper by Colpi et al., $$x=mr\,,$$ (11) for the radial coordinate ($\,{}^{\prime}$ stands for derivatives with respect to $x$) and $$\Omega=\frac{\varpi}{m},\;\;\sigma=\sqrt{4\pi}\frac{\chi(r)}{M_{{\rm Pl}}},$$ (12) where $M_{{\rm Pl}}\equiv G^{-1/2}$ is the Planck mass. In order to consider the total amount of mass contained within a radius $x$, we have changed the function $A$ in the metric to its Schwarzschild form, $$A(x)=\left(1-\frac{2M(x)}{x}\right)^{-1}.$$ (13) Note that the terms corresponding to the potential are correctly normalized. From the Lagrangian (2) one may see that $V$ has dimensions of [Energy]${}^{4}$, and all such terms are divided by two squares of masses. However, in order to avoid the explicit appearance of the boson mass $m$ and the Planck mass $M_{{\rm Pl}}$, we need to define the form of $V$.
We shall look at several choices. The first extension to nonlinear potentials can be found in the 1981 paper by Mielke & Scherzer [3]. They constructed a potential for the Klein-Gordon equation from the Heisenberg-Pauli-Weyl nonlinear spinor equation. It has the general form $U=m^{2}|\psi|^{2}-\alpha_{1}|\psi|^{4}+\alpha_{2}|\psi|^{6}$, where $\alpha_{1}$ and $\alpha_{2}$ are two positive constants. They presented solutions with nodes for the first time. The standard (CSW [4]) choice is $V(\psi)=\lambda|\psi|^{4}$, with $\lambda$ a constant. Here, the usual adimensionalization appears, $$\frac{4\pi}{m^{2}M_{{\rm Pl}}^{2}}V(\sigma M_{{\rm Pl}}/\sqrt{4\pi})=\frac{4\pi}{m^{2}M_{{\rm Pl}}^{2}}\sigma^{4}\lambda\frac{M_{{\rm Pl}}^{4}}{(4\pi)^{2}}=\lambda\frac{M_{{\rm Pl}}^{2}}{4\pi m^{2}}\sigma^{4}=\Lambda\sigma^{4},$$ (14) where $\Lambda=\lambda M_{{\rm Pl}}^{2}/4\pi m^{2}$. As we stated in the introduction, with this choice the order of magnitude of boson star masses is greatly enhanced. It grows from $M\sim M_{{\rm Pl}}^{2}/m$ when $\Lambda=0$ to $M\sim M_{{\rm Pl}}^{3}/m^{2}$ when $\Lambda\neq 0$. Recall that the mass of a neutron star is roughly given by the Chandrasekhar mass $M_{Ch}\sim M_{{\rm Pl}}^{3}/m_{n}^{2}$, which is close to a solar mass ($m_{n}$ is the neutron mass). Three other options we would like to explore are (note that these are options for $U(|\psi|^{2})$, given in Eq.
(3), not only for $V(|\psi|^{2})$, as will become clear below): • Cosh-Gordon potential: $$U_{\cosh}=\alpha m^{2}\bigl{[}\cosh(\beta\sqrt{|\psi|^{2}})-1\bigr{]}$$ • Sine-Gordon potential: $$U_{\sin}=\alpha m^{2}\bigl{[}\sin(\pi/2[\beta\sqrt{|\psi|^{2}}-1])+1\bigr{]}=\alpha m^{2}\bigl{[}1-\cos(\pi/2\,\beta\sqrt{|\psi|^{2}})\bigr{]}$$ • $U(1)$-Liouville potential: $$U_{\exp}=\alpha m^{2}\bigl{[}\exp(\beta^{2}|\psi|^{2})-1\bigr{]}$$ The usual Liouville potential $\exp(\beta\psi)$ has to be changed so that the $U(1)$ symmetry is ensured. Let us first consider a series expansion of these potentials. In order to do so, we shall consider a value of $\beta$ such that, when going from $\psi$ to the dimensionless $\sigma$, the arguments of the functions are not affected. The parameter $\beta$ is arbitrary; it enlarges the parameter space of the solutions, as was the case with $\Lambda$ in CSW’s solutions. The factor $\beta$ appears on purely dimensional grounds, while $\alpha$ can be used to obtain a simple mass term in the boson Lagrangian, as we shall see below.
Taking this into account, the series expansions are $$U_{\cosh}=\alpha m^{2}\Bigl{[}\cosh(\beta\sigma)-1\Bigr{]}=\alpha m^{2}\left[\frac{\beta^{2}\sigma^{2}}{2}+\frac{\beta^{4}\sigma^{4}}{24}+\frac{\beta^{6}\sigma^{6}}{720}+\frac{\beta^{8}\sigma^{8}}{40320}+\ldots\right]\;,$$ (15) $$U_{\sin}=\alpha m^{2}\Bigl{[}\sin(\pi/2\;(\beta\sigma-1))+1\Bigr{]}=\alpha m^{2}\left[\frac{\pi^{2}\beta^{2}\sigma^{2}}{8}-\frac{\pi^{4}\beta^{4}\sigma^{4}}{384}+\frac{\pi^{6}\beta^{6}\sigma^{6}}{46080}-\frac{\pi^{8}\beta^{8}\sigma^{8}}{10321920}+\ldots\right]\;,$$ (16) $$U_{\exp}=\alpha m^{2}\Bigl{[}\exp(\beta^{2}\sigma^{2})-1\Bigr{]}=\alpha m^{2}\left[\beta^{2}\sigma^{2}+\frac{1}{2}\beta^{4}\sigma^{4}+\frac{1}{6}\beta^{6}\sigma^{6}+\frac{1}{24}\beta^{8}\sigma^{8}+\ldots\right]\;.$$ (17) Note that from each expansion we recognize a usual mass term (proportional just to $m^{2}$). This term is very important: without it, it is impossible to find solutions with an exponential decrease of the scalar field, something relevant for the definition of the star radius. Then, in order to be consistent with equations (3) and (8-10), the field equations and the definition of $U$, and to avoid a useless double counting of the mass term, we must take particular choices for $\alpha$; the parameter $\beta$ remains free to choose. • Cosh-Gordon potential: $$\alpha=2(M_{{\rm Pl}}^{2}/4\pi)/\beta^{2}$$ • Sine-Gordon potential: $$\alpha=(8/\pi^{2})(M_{{\rm Pl}}^{2}/4\pi)/\beta^{2}$$ • $U(1)$-Liouville potential: $$\alpha=(M_{{\rm Pl}}^{2}/4\pi)/\beta^{2}$$ In this way, the potential $V$ is everything but the first term in each of the previous series.
Hence, it is a series of alternating attractive-repulsive self-interactions in the case of the Sine-Gordon potential, and a series of repulsive ones in the Cosh-Gordon case. The $U(1)$-Liouville potential is reminiscent of the Cosh-Gordon one, in the sense of being a series of repulsive power-law self-interactions; only the coefficients differ. The form of these potentials and of others mentioned above is shown in Fig. 1. For the numerical procedure, it is best to redefine the scalar field mass, $$\widetilde{m}^{2}=\alpha m^{2}\;,$$ (18) and with this also redefine the coordinate $x$ and the frequency $\Omega$. Notice that $\beta$ still appears within the differential equations, while $\alpha$ does not. II.1 Soliton stars A potential with symmetry breaking was investigated by Lee et al. [13, 14, 15]. They called the solutions non-topological soliton stars, and found that the mass has units of $M_{\rm Pl}^{4}/(m\sigma_{0}^{2})$, which is huge in comparison with a boson or neutron star (for comparable boson and fermion masses). The potential investigated was $U=m^{2}|\psi|^{2}(1-|\psi|^{2}/\sigma_{0}^{2})^{2}$, where $\sigma_{0}$ is a constant; this belongs to the more general forms of potentials derived in [3]. Compared with the usual boson star case, non-topological soliton stars have to fulfill two conditions: 1. The Lagrangian must be invariant under a global $U(1)$ transformation. 2. In the absence of gravity, the theory must have non-topological solutions, i.e. solutions with a finite mass, confined to a finite region of space, and non-dispersive. In general, boson stars satisfy requirement 1 but not 2. Invariance under $U(1)$ only requires that the potential be a function of $\psi^{*}\psi$, while to fulfill condition 2, $U$ must contain attractive terms. This is why the coefficient of $(\psi^{*}\psi)^{2}$ in Lee’s potential has a negative sign. 
Finally, when $|\psi|\rightarrow\infty$, $U$ must be positive, which leads, minimally, to a sixth-order function of $\psi$ for the self-interaction. It is then clear that the CSW, $U_{{\cosh}}$, and $U_{{\rm exp}}$ choices are not non-topological potentials: none of them has attractive terms. This is why, a priori, we may say that the order of magnitude of the boson star masses remains the same as in CSW’s case. The Sine-Gordon potential $U_{{\rm sin}}$ has, on the contrary, a series expansion similar, up to sixth order, to that of Lee’s potential. But $U_{{\rm sin}}$, in the absence of gravity and for a real scalar field, does not have a non-topological soliton solution; instead, it has a topological one. It has a degenerate vacuum: an infinite set of $\sigma$ values for which $U_{{\rm sin}}=0$. For a detailed account of this, we refer the reader to Lee’s book, especially Chapter 7 and exercise 7.1 [16]. Hence, $U_{{\rm sin}}$ is not a non-topological soliton potential either. II.2 Numerical solutions Fig. 2 shows the usual plot of boson star configurations for the case of the Cosh-Gordon potential. The maximal mass is slightly higher than in the standard case due to the additional higher-order repulsive terms in the potential. For $\beta=1$, we find $M_{\rm max}=0.638$ $M_{\rm Pl}^{2}/m$ and $N_{\rm max}=0.658$ $M_{\rm Pl}^{2}/m^{2}$; both $M$ and $N$ are higher by about 0.5%. Fig. 3 shows the stability analysis, which can be done using catastrophe theory [17, 18, 19]. One necessary condition for the configurations to be stable is a negative binding energy; however, this is not sufficient. Fig. 3 shows the appearance of two cusps, signaling a change in the stability of the star. The first branch is the only stable one, while the second and third are both unstable. Fig. 4 shows similar profiles, but for the Sine-Gordon potential. 
In this case, the maximal values of the mass and particle number are below those of the pure mass potential case; the influence of the higher-order attractive terms is noticeable. The maximal values of $M$ and $N$ are lower by about 2%: $M_{\rm max}=0.620$ $M_{\rm Pl}^{2}/m$ and $N_{\rm max}=0.639$ $M_{\rm Pl}^{2}/m^{2}$. Fig. 5 shows the bifurcation plot for this case. The diagram in Fig. 5 again shows cusps, where the stability changes. A remark on the calculation of the mass is in order. We check that the calculation is correct by applying two different mass definitions. The first is the Schwarzschild mass, which is defined by the energy density $\rho$ and which also appears in the asymptotic spherically symmetric space-time, $B(r)\rightarrow 1-2M/r$, where $M$ is the mass of the boson star. The formula for the Schwarzschild mass is $$\displaystyle M_{S}$$ $$\displaystyle=$$ $$\displaystyle 4\pi\int\limits_{0}^{\infty}\rho r^{2}dr$$ (19) $$\displaystyle=$$ $$\displaystyle 4\pi\int\limits_{0}^{\infty}\left(\varpi^{2}\frac{\chi(r)^{2}}{B}+\left(\frac{d\chi(r)}{dr}\right)^{2}\frac{1}{A}+U\right)\;r^{2}\,dr\,.$$ (20) A second mass formula can be derived for a general quasi-static isolated mass distribution in which a time-like Killing vector $\xi^{\alpha}=2n^{\alpha}$ exists [7]. Tolman’s expression [20] is: $$\displaystyle M_{T}=\int(2T_{0}^{\;0}-T_{\mu}^{\;\mu})\sqrt{\mid g\mid}\;d^{3}x.$$ (21) For an asymptotically flat spherically symmetric space-time, both masses agree with each other. Finally, Figs. 6 and 7 show the behavior of the $U(1)$-Liouville potential. It is both qualitatively and quantitatively similar to the usual CSW case: we have stable and unstable branches. The maximal mass and particle number are higher by about 5%, the largest deviations from the standard case. The repulsive potential terms yield larger contributions in comparison with the Cosh-Gordon potential. 
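The quadrature in Eq. (19) can be illustrated with a simple sketch; the exponential density profile below is hypothetical (it is not a solution of the field equations), chosen only because the integral has a closed form against which the numerics can be checked:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical density profile rho(r) = rho0 * exp(-r/r0); in a real run
# rho would come from the integrated Einstein-scalar field equations.
rho0, r0 = 1.0, 2.0

def rho(r):
    return rho0 * np.exp(-r / r0)

# Eq. (19): M_S = 4*pi * integral of rho(r) r^2 dr from 0 to infinity
M_S, _ = quad(lambda r: rho(r) * r**2, 0.0, np.inf)
M_S *= 4.0 * np.pi

# analytic value for this profile: 8*pi*rho0*r0^3
print(M_S, 8.0 * np.pi * rho0 * r0**3)
```

For the boson star itself one would integrate Eq. (20) with the numerical profiles of $\chi$, $A$, and $B$, and cross-check against Tolman's expression (21).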
III Observational outputs At this stage, we have shown that a change in the form of the self-interaction among the bosons yields appreciable (but small) changes in the star configurations. We discuss below the feasibility of detecting observational consequences of these results in gravitational phenomena. In particular, we are interested in whether appreciable differences appear in the computation of gravitational redshifts, rotation curves, and gravitational lensing features. For the Sine-Gordon potential, and because of the smaller masses it produces, gravitational phenomena are diminished; so we shall mainly be interested in the Cosh-Gordon and $U(1)$-Liouville cases. III.1 Gravitational redshifts In this section, we follow Ref. [10] and make use of the assumption that the scalar particles have no interaction, other than gravitational, with baryonic matter. Thus, normal matter can penetrate the boson star up to the center, and if it emits radiation there, well within the gravitational potential, we expect the spectral features to be redshifted. The gravitational redshift $z$ of our static line element is given by $$1+z=\sqrt{\frac{B(\infty)}{B(int)}},$$ (22) where $int$ stands for the position of the emitting particle with respect to the star center. As the receiver is practically at infinity, $B(\infty)\sim 1$. The maximum possible redshift is obtained when the particle emits exactly at the center of the boson star, where the metric deviates maximally from the exterior vacuum space-time. Since we are only interested in stable configurations, the maximum redshift is provided by the maximum value of $\sigma(0)$, which gives the biggest mass. As shown in Ref. [10], the simple mass term produces a maximum redshift of 0.45, while for CSW’s choice, with $\Lambda$ tending to infinity, one gets 0.69. Here we find $z_{max}=0.46$ for the Cosh-Gordon potential and $z_{max}=0.49$ for the $U(1)$-Liouville potential. 
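Eq. (22) is straightforward to evaluate once the central metric function is known; a minimal sketch (the central value $B(0)$ used below is illustrative, back-solved from the quoted $z_{max}=0.45$ of the simple mass term, not a computed configuration):

```python
import math

def gravitational_redshift(B_int, B_inf=1.0):
    """Eq. (22): 1 + z = sqrt(B(inf) / B(int))."""
    return math.sqrt(B_inf / B_int) - 1.0

# Illustrative central metric value: z_max = 0.45 corresponds to
# B(0) = 1/1.45^2 ~ 0.476.
print(gravitational_redshift(1.0 / 1.45**2))  # approximately 0.45
```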
For comparison, we quote the result $z_{max}=0.47$ for neutron stars. III.2 Rotation curves Another gravitational effect, considered for CSW’s choice in Ref. [10] and which we would like to compare for the more generic potentials studied here, is the rotation curve of test particles moving around boson stars. For our metric, circular geodesics have a rotation velocity (as measured by an observer at infinity) given by $$v_{\varphi}^{2}=\frac{xB^{\prime}}{2}.$$ (23) In Fig. 8 we compare the rotation curves for the $\Lambda=0$ and $\Lambda=100$ cases of CSW’s choice and for our potentials, Cosh-Gordon and $U(1)$-Liouville. Already for the usual case, it was shown that the velocities the particles can reach are a sizeable fraction of the speed of light, so that matter can acquire an impressive kinetic energy. If such kinetic energy were transferred to radiation, we could expect very luminous boson stars, orders of magnitude more luminous than the Sun. This provides speculative alternatives to accretion disks around black holes, and by its final effects would make a boson star almost indistinguishable from one. This is currently being analyzed, especially concerning the possibility of having boson stars at the center of some galaxies, as was proposed with neutrino balls. For the new potentials analyzed here, we obtain that $U_{{\cosh}}$ and $U_{{\rm exp}}$ produce angular velocities similar to the $\Lambda=0$ case. Their maximum velocity occurs at $x\sim 5$ and has typical magnitudes of 100 000 km s${}^{-1}$. III.3 Gravitational lensing Boson star microlensing effects were first investigated by Da̧browski and Schunck [6] for CSW’s $\Lambda=0$ case, also known as the mini-boson star. Their procedure, which we closely follow, consists in studying the photon trajectories along the curved (boson star generated) space-time. For particular details of the derivation of the quoted formulae see Refs. [6, 21]. In Ref. 
[21], a related study of microlensing features was made, taking a scalar-field-generated naked singularity as the lens. It has the property of producing both positive and negative bending angles; in the latter case, in a way similar to the recently studied wormhole microlensing scenario [22]. The light traveling from a distant source is deflected, because of the presence of the boson star, with a deflection angle given by (see Fig. 9 for a schematic drawing of the geometry) $$\hat{\alpha}=\Delta\varphi-\pi,$$ (24) where $$\Delta\varphi=2\int_{r_{0}}^{\infty}\frac{dr}{r}\frac{\sqrt{AB}}{\sqrt{r^{2}/b^{2}-B}},$$ (25) with impact parameter $b=r_{0}\sqrt{1/B(r_{0})}$ ($r_{0}$ is the closest distance between the light ray and the center of the boson star: the point where the expression under the square root in the denominator vanishes). The lens equation can be expressed as the difference between the true angular position, $\beta$, and the image positions, $\vartheta$, as [23, 24] $$\sin(\vartheta-\beta)=\frac{D_{ps}}{D_{os}}\sin\hat{\alpha}\;,$$ (26) where $D_{ps}$ ($D_{os}$) stands for the angular distance between the point P close to the lens and the source (between the observer and the source). Also, from the geometry of the lens we have $\sin\vartheta=b/D_{ol}$. Hence, choosing $\vartheta$ and the distance fixes $b$, and $\Delta\varphi$ may be computed afterwards. In the numerical program, we again use the dimensionless quantities of Eqs. (11-12) and, instead of the impact parameter $b$, we follow [6] and take $\vartheta$. The term $(r/b)^{2}$ in (25) is then $x^{2}/(\varpi^{2}D_{ol}^{2}\sin^{2}\vartheta)$, or just $x^{2}/(\varpi^{2}D_{ol}^{2}\vartheta^{2})$ for small $\vartheta$. Our numerical program always uses the exact $\sin\vartheta$, without small-angle approximations, so that angles in the degree regime can also be calculated. The examples in Figs. 10 and 11 express $\vartheta$ in arc-seconds, using the additional unit factor 1/206265 for one arc-second in radians. 
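The deflection integral (25) can be sketched numerically with the substitution $u=r_{0}/r$, which maps $[r_{0},\infty)$ onto $(0,1]$ and gives $\Delta\varphi=2\int_{0}^{1}du/\sqrt{B(r_{0})-u^{2}B(r_{0}/u)}$ when $\sqrt{AB}=1$. As a sanity check (not the boson star case), we use the exterior Schwarzschild metric, where $A=1/B$ so this condition holds, and compare against the weak-field result $4M/b$:

```python
from math import pi, sqrt
from scipy.integrate import quad

# Sanity check of Eqs. (24)-(25) on the exterior Schwarzschild metric;
# in the actual analysis A and B come from the boson star integration.
M = 1.0
B = lambda r: 1.0 - 2.0 * M / r

r0 = 100.0              # closest approach, chosen large so M/b is small
b = r0 / sqrt(B(r0))    # impact parameter, b = r0 / sqrt(B(r0))

# after the substitution u = r0/r the endpoint singularity is integrable
integrand = lambda u: 1.0 / sqrt(B(r0) - u**2 * B(r0 / u))
delta_phi, _ = quad(integrand, 0.0, 1.0)
alpha_hat = 2.0 * delta_phi - pi   # Eq. (24)

# for large b this approaches the weak-field value 4M/b ~ 0.0396 rad
print(alpha_hat, 4.0 * M / b)
```

The computed $\hat{\alpha}$ slightly exceeds $4M/b$, as expected from the positive higher-order correction.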
The change to other units can then be accommodated by an additional distance factor $n$. For instance, if $\vartheta$ has to be measured in milli-arc-seconds, $n$ equals $10^{-3}$. In order to get rid of the distance factor within the numerical program, we choose $D_{ol}=206265/(\varpi n)$. Furthermore, the reduced deflection angle is defined to be $$\alpha=\vartheta-\beta=\arcsin\left(\frac{D_{ps}}{D_{os}}\sin\hat{\alpha}\right)\;.$$ (27) A second lens equation can be derived for large deflection angles, where $D_{ls}$ cannot be considered as being similar to $D_{ps}$. Of course, by construction the source always lies within a plane at constant distance from the observer, and studying the diagrammatic view depicted in Fig. 9, one obtains (for a detailed derivation see the Appendix of Ref. [6]) $$\displaystyle\sin{\alpha}$$ $$\displaystyle=$$ $$\displaystyle\frac{D_{ls}}{D_{os}}\cos{\vartheta}\cos\left[\mbox{arcsin}\left(\frac{D_{os}}{D_{ls}}\sin(\vartheta-\alpha)\right)\right]\times$$ (28) $$\displaystyle\times\left[\tan{\vartheta}+\tan(\hat{\alpha}-\vartheta)\right]\,,$$ where $D_{ls}$ stands for the angular distance between the lens and the source. Lens equation (26) requires only the ratio between $D_{ps}$ and $D_{os}$, so that, in general, the position of the source S describes a more complicated surface. For the physical situation considered in our paper, the differences in $\alpha$ amount to a few thousandths of a degree at most, so our Figs. 10 and 11 describe both cases. Assuming that the boson star lens is half-way between the observer and the source, such that $D_{ls}/D_{os}=1/2$ and $D_{ps}/D_{os}=1/2$, we performed numerical computations of the reduced deflection angle for our new potentials, which we show in Figs. 10 and 11. The difference between these cases and the simple mass term (corresponding to the mini-boson star) is clearly observable. 
We have taken for the plots the maximum central density (which produces the maximum deflection angle). In the case of the mini-boson star, the biggest possible value of $\alpha$ is 23.03 degrees, with an image at about $\vartheta=n\times 2.88$ arc-sec, where the distance factor $n=n(D_{ol},\Omega)$ is a function of the observer-to-lens distance and of the scalar field frequency, whose inverse can be associated with the star radius. In our examples we assume $n=1$, which fixes $\vartheta$ to be measured in arc-sec. The characteristics of the boson stars produce deflection angles that depend on the observer-to-lens distance $D_{ol}$ and on the mass. As mentioned above, if $\vartheta$ is chosen to be of the order of arc-secs (distance factor $n=1$), then the distance $D_{ol}$ is measured in units of 206265$/\varpi$. Under the assumption that the mass of the boson star is 10${}^{10}$M${}_{\odot}$, one has $\varpi\sim 10^{-15}$cm${}^{-1}$; then the distance $D_{ol}$ is about 100 pc. If the distance factor is $n=10^{-3}$, so that $\vartheta$ is measured in milli-arc-secs, the boson star lens is at about 100 kpc. We assumed that the boson star is transparent and calculated the deflection angles. All qualitative features of a non-singular, spherically symmetric, transparent lens can be read off from Figs. 10 and 11, the lens curves for the maximal boson star. Three images exist, two of them inside the Einstein radius and one outside. An Einstein ring with infinite tangential magnification (tangential critical curve) is found, and also a radial critical curve at which two internal images merge. The appearance of the radial critical curve distinguishes boson stars from other extended and non-transparent lenses. For a black hole or a neutron star, the radial critical curve does not exist, because it would lie inside the event horizon or inside the star, respectively. 
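The distance bookkeeping above is simple arithmetic; a short sketch (the value of $\varpi$ is the illustrative one quoted in the text for an assumed $10^{10}\,M_{\odot}$ boson star) reproduces the order-of-magnitude distances:

```python
# D_ol = 206265 / (varpi * n), with varpi in cm^-1, gives D_ol in cm.
ARCSEC_PER_RAD = 206265.0
CM_PER_PC = 3.086e18

varpi = 1.0e-15  # cm^-1, illustrative value for a 1e10 solar-mass star
for n, unit in [(1.0, "arc-sec"), (1.0e-3, "milli-arc-sec")]:
    D_ol_pc = ARCSEC_PER_RAD / (varpi * n) / CM_PER_PC
    print(f"theta in {unit}: D_ol ~ {D_ol_pc:.0f} pc")
# of order 100 pc for n = 1, and of order 100 kpc for n = 1e-3
```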
Two bright images near the center of the boson star and a third image at some very large distance from the center are found. For non-relativistic cases, smaller angles will be found. An interesting point arises if one considers an extended source. In such a case one finds the two radially and tangentially elongated images very close to each other. Then, looking along the line defined by these two images, the third one can be detected at a very large distance. For the Cosh-Gordon potential, we obtain a maximal reduced deflection angle of 23.229 degrees, and for $U_{\exp}$ an even larger deviation, the biggest possible value being 24.391 degrees, with an image at about the same place. The differences between these cases and the usual one are between 0.2 and 0.4 degrees. IV Discussion In a recent communication, Ho et al. [25] studied the maximum masses of boson stars formed with different self-interaction terms (all of them, however, of power-law form and of positive sign). By comparing the order of magnitude of the terms involved in the self-interaction with the mass term, and asking for them to be of the same order, they were able to see that the contribution of higher-order terms goes as powers of $(m/M_{\rm Pl})^{2}$. The fact that the contribution of these terms to the maximum masses decreases (if $m<M_{\rm Pl}$) does not automatically lead to unobservable effects, as we have discussed in the previous sections. The star masses maintain their order of magnitude, for equal single boson masses, when compared to the cases studied by CSW [4], in agreement with the results of Ho et al. [25]. This also stems from the fact that none of the Lagrangians analyzed is a non-topological one. Changes are of the order of several percent of the star mass. For the first time, we have investigated the systems of differential equations of Einstein-Cosh-Gordon, Einstein-Sine-Gordon, and Einstein-$U(1)$-Liouville type. 
The different potentials studied show similar gravitational redshifts and rotation curves, with high angular velocities, when compared among themselves and with the CSW case. However, large deviations appear in the maximum angular deflection, with differences amounting to appreciable fractions of a degree. Some observational effects thus distinguish boson stars from other non-transparent compact objects. However, it is fair to say that if an observational determination proves the existence of a boson star, the effective form of the Lagrangian may be hidden within the possible errors. In that case, faced with the problem of degeneracy (i.e. different physical theories giving the same observational effects), Occam’s razor would probably lead us to consider just the CSW choice. Only a detailed knowledge of the boson involved, and of the average form of the interactions, all of them encompassed in the self-interaction term, may shed light on the explicit model for the Lagrangian. The identity of the actual boson which makes up the boson star may also affect the assumption of transparency used in the gravitational lensing analysis. For instance, if the star is made up of Higgs particles, we may expect some kind of interaction apart from the gravitational one that may render the star non-transparent. Furthermore, we are aware of the possibility that some potentials may give rise to tunneling of parts of the scalar field. Of course, this effect can only occur in the quantum regime, hence if boson stars are of the order of magnitude of atoms or even atomic nuclei in size. Our first preliminary results for the Newtonian case show that especially the potentials of Lee et al. and the Sine-Gordon form can lead to instability due to tunneling. The effect could mean two things: (i) The boson (soliton) star is destroyed: it disperses or forms a black hole. 
(ii) The boson (soliton) star experiences an internal rearrangement. We expect to report on these issues in a forthcoming article. Acknowledgments We would like to thank Salvatore Capozziello, Mariusz Da̧browski, Gaetano Lambiase, Eckehard Mielke, and Andrew Whinnett for discussions and comments. D.F.T. was supported by CONICET as well as by funds granted by Fundación Antorchas. He thanks the hospitality provided by the International Centre for Theoretical Physics at Trieste and the Università degli Studi at Salerno during the latest stages of this research. References [1] D. J. Kaup, Phys. Rev. 172, 1331 (1968). [2] R. Ruffini and S. Bonazzola, Phys. Rev. 187, 1767 (1969). [3] E. W. Mielke and R. Scherzer, Phys. Rev. D24, 2111 (1981). [4] M. Colpi, S. L. Shapiro and I. Wasserman, Phys. Rev. Lett. 57, 2485 (1986). [5] F. E. Schunck, “A scalar field matter model for dark halos of galaxies and gravitational redshift”, astro-ph/9802258. [6] M. P. Da̧browski and F. E. Schunck, “Gravitational lensing of boson stars”, astro-ph/9807207, accepted by Astrophys. J. [7] F. E. Schunck and E. W. Mielke, Phys. Lett. A249, 389 (1998). [8] P. Jetzer, Phys. Rep. 220, 163 (1992); A. R. Liddle and M. S. Madsen, Int. J. Mod. Phys. D1, 101 (1992); E. W. Mielke and F. E. Schunck, “Boson stars: Early history and recent prospects”, Proceedings of the 8th Marcel Grossmann meeting in Jerusalem (World Scientific Publ., Singapore 1999), gr-qc/9801063. [9] D. F. Torres, Phys. Rev. D56, 3478 (1997). [10] F. E. Schunck and A. R. Liddle, Phys. Lett. B404, 25 (1997). [11] D. F. Torres, A. R. Liddle, and F. E. Schunck, Phys. Rev. D57, 4821 (1998); D. F. Torres, F. E. Schunck, and A. R. Liddle, Class. Quantum Grav. 15, 3701 (1998). [12] A. W. Whinnett and D. F. Torres, Phys. Rev. D60, 104050 (1999). [13] T. D. Lee, Phys. Rev. D35, 3637 (1987). [14] R. Friedberg, T. D. Lee, and Y. Pang, Phys. Rev. D35, 3640, 3658, 3678 (1987). [15] T. D. Lee and Y. Pang, Phys. Rep. 221, 251 (1992). [16] T. D. 
Lee, Particle Physics and Introduction to Field Theory, Chapter 7 (Harwood Academic Publishers, Chur, Switzerland 1981). [17] F. V. Kusmartsev, E. W. Mielke, and F. E. Schunck, Phys. Rev. D43, 3895 (1991). [18] F. V. Kusmartsev, E. W. Mielke, and F. E. Schunck, Phys. Lett. A157, 465 (1991). [19] F. V. Kusmartsev and F. E. Schunck, Physica B178, 24 (1992). [20] R. C. Tolman, Phys. Rev. 35, 875 (1930); R. C. Tolman, Relativity, Thermodynamics and Cosmology (Clarendon, Oxford, 1934). [21] K. S. Virbhadra, D. Narashima, and S. M. Chitre, A & A 337, 1 (1998). [22] J. G. Cramer, R. L. Forward, M. S. Morris, M. Visser, G. Benford, and G. A. Landis, Phys. Rev. D51, 3117 (1995); D. F. Torres, G. E. Romero, and L. A. Anchordoqui, Phys. Rev. D58, 123001 (1998); ibid. Mod. Phys. Lett. A13, 1575 (1998). [23] P. Schneider, J. Ehlers, and E. E. Falco, “Gravitational Lenses” (Springer, Berlin 1992). [24] R. Narayan and M. Bartelmann, “Lectures on gravitational lensing”, in: Formation of Structure in the Universe, Proceedings of the 1995 Jerusalem Winter School, edited by A. Dekel and J. P. Ostriker (Cambridge University Press, Cambridge); astro-ph/9606001. [25] J. Ho, S. Kim, and B.-H. Lee, “Maximum mass of boson star formed by self-interacting scalar fields”, gr-qc/9902040.